krishnaram101
Fluorite | Level 6

Hi,

Is there an efficient way to use a SAS pass-through connection to Hadoop when importing huge datasets? The data has nearly 2M rows and 8K columns. Thanks!

1 ACCEPTED SOLUTION
JBailey
Barite | Level 11

Hi @krishnaram101

 

I notice that port 8443 is listed, which means Knox is probably involved. This could affect how the data is brought back to SAS.

 

A couple of things to look at:

 

1) Make sure that you aren't being hit with the 32k String Thing (Hive STRING types being brought back as 32k-character strings). This significantly increases the amount of network traffic and makes writing the SAS data set to disk slower. The GitHub link includes an example of how to tell if this is happening. You can also look at the metadata for the created table to see if it contains any 32k columns. The fact that you are returning 8K columns means that you may be returning a lot of data; if the 32k String Thing were happening, it would likely fill up your disk.

 

2) See how long this query takes to run and compare it with your original query. If the times are similar, it means most of the time is being spent by Hadoop.

 

create table sastest.test as select * from connection to hadoop

   ( select count(*) from test limit 100000);

 

For more information about the 32k String Thing, see the slides and exercises in this workshop - https://github.com/Jeff-Bailey/SGF2016_SAS3880_Insiders_Guide_Hadoop_HOW
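If the 32k String Thing turns out to be the problem, a common mitigation is to cap the length of returned strings with the DBMAX_TEXT= option in SAS/ACCESS Interface to Hadoop. A minimal sketch, reusing the placeholder server/schema/password values from the original post (pick a length that fits your actual data):

```sas
/* Cap Hive STRING columns at 255 characters instead of 32,767,
   reducing network traffic and the size of the SAS data set.
   Server, schema, and password values are placeholders. */
libname hdp hadoop server='YYYYYY' schema=ZZZZZ
        user=%lowcase(&sysuserid.) password="XXXXX"
        dbmax_text=255;
```

DBMAX_TEXT= can also be specified on the CONNECT TO HADOOP statement in explicit pass-through; the tradeoff is that any string longer than the cap is truncated, so verify your widest columns first.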

 

Also check out this SAS Global Forum paper: Ten Tips to Unlock the Power of Hadoop with SAS®

 

If this doesn't help, you may want to consider opening a tech support track.

 

Best wishes,
Jeff


4 REPLIES
LinusH
Tourmaline | Level 20

Please define "import".

From what format, to which format?

If the data shouldn't "touch" SAS during import, you could use EXECUTE blocks in PROC SQL to Hive, or PROC HADOOP for operations outside Hive.
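The EXECUTE approach above can be sketched as follows; this runs the work entirely inside Hive so no detail rows travel back to SAS (the table and column names are hypothetical, and the connection options are the placeholders from the original post):

```sas
/* Explicit pass-through: the statement inside EXECUTE runs in Hive.
   Only the final summary table exists on the Hadoop side; nothing
   is pulled back to SAS. Names and credentials are placeholders. */
proc sql;
   connect to hadoop (server='YYYYYY' schema=ZZZZZ
                      user=%lowcase(&sysuserid.) password="XXXXX");
   execute (
      create table test_summary as
      select id, count(*) as n
      from test
      group by id
   ) by hadoop;
   disconnect from hadoop;
quit;
```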

Data never sleeps
krishnaram101
Fluorite | Level 6

I used the following code to access data from Hadoop. It took me 6 hrs to get 100,000 records with 8K columns, which seems very slow. Without the options, it took 8 hrs. Can you please check and give suggestions?

 

 

options sgio=yes;
options bufno=2000 bufsize=48K;

libname sastest 'E:\SASMA\SASUserData\User\krishnaramasamy\Hadoop data';

proc sql;
   connect to hadoop (user=%lowcase(&sysuserid.) password="XXXXX"
      server='YYYYYY'
      uri='jdbc:hive2://YYYYYYY.com:8443/default?hive.server2.transport.mode=http;hive.execution.engine=tez;hive.server2.thrift.http.path=gateway/hdpprod/hive'
      schema=ZZZZZ);
   create table sastest.test as select * from connection to hadoop
      ( select * from test limit 100000 );
   disconnect from hadoop;
quit;

LinusH
Tourmaline | Level 20

This seems like a Hadoop/Hive admin issue, not a SAS one, since it's the query inside Hive that takes the time (unless you have extremely small bandwidth to the Hadoop cluster).

Data never sleeps


Discussion stats
  • 4 replies
  • 7505 views
  • 0 likes
  • 3 in conversation