
SAS Pass through connection to hadoop

Occasional Contributor
Posts: 6

SAS Pass through connection to hadoop

Hi,

 

Is there an efficient way to set up a SAS pass-through connection to Hadoop when importing very large datasets? The data has nearly 2M rows and 8K columns. Thanks!


Accepted Solutions
Solution
02-13-2018 11:15 AM
SAS Employee
Posts: 273

Re: SAS Pass through connection to hadoop

Posted in reply to krishnaram101

Hi @krishnaram101

 

I notice that port 8443 is listed, which means Knox is probably involved. This could affect how the data is brought back to SAS.

 

A couple of things to look at:

 

1) Make sure that you aren't being hit with the 32k String Thing (Hive STRING types being brought back to SAS as 32,767-byte character columns). This significantly increases the amount of network traffic and makes writing the SAS data set to disk slower. The GitHub link below includes an example of how to tell whether this is happening. You can also look at the metadata for the created table to see whether it contains any 32K-wide columns. With 8K columns you may be returning a great deal of data, and if the 32k String Thing were happening it would likely fill up your disk.
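
If the wide strings do turn out to be the problem, one common mitigation is to cap the length of returned string columns with the DBMAX_TEXT= option of SAS/ACCESS Interface to Hadoop. This is a sketch, not the poster's code; the server, credentials, and schema values are the placeholders from the post, and 256 is an arbitrary cap you would size to your actual data:

   /* Sketch: cap Hive STRING columns at 256 bytes instead of 32,767. */
   /* Connection values are placeholders taken from the post.         */
   proc sql;
      connect to hadoop (server='YYYYYY' user=&sysuserid. password="XXXXX"
                         schema=ZZZZZ DBMAX_TEXT=256);
      create table sastest.test as
         select * from connection to hadoop
         ( select * from test limit 100000 );
      disconnect from hadoop;
   quit;

Note that DBMAX_TEXT= truncates anything longer than the cap, so verify the real maximum lengths first.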

 

2) See how long the query below takes to run. If the times are similar, it means most of the time is being spent by Hadoop rather than in transferring data back to SAS.

 

create table sastest.test as select * from connection to hadoop

   ( select count(*) from test limit 100000);
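
To quantify where the time actually goes, one option (a sketch; these are standard SAS system options, not something from the original post) is to turn on detailed timing and SAS/ACCESS tracing before running the comparison:

   /* Write detailed real/CPU timings and SAS/ACCESS engine trace */
   /* information to the SAS log for each step.                   */
   options fullstimer sastrace=',,,d' sastraceloc=saslog nostsuffix;

With these set, the log shows how long the Hive query itself ran versus how long SAS spent fetching and writing rows, which tells you which side to tune.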

 

For more information about the 32k String Thing see the slides and exercises in this workshop - https://github.com/Jeff-Bailey/SGF2016_SAS3880_Insiders_Guide_Hadoop_HOW

 

Also check out this SAS Global Forum paper: Ten Tips to Unlock the Power of Hadoop with SAS®

 

If this doesn't help, you may want to consider opening a tech support track.

 

Best wishes,
Jeff



All Replies
Super User
Posts: 5,852

Re: SAS Pass through connection to hadoop

Posted in reply to krishnaram101

Please define "import".

From what format, to which format?

If the data shouldn't "touch" SAS during import, you could use EXECUTE blocks in PROC SQL to Hive, or PROC HADOOP for operations outside Hive.
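
As a sketch of the EXECUTE approach (the table names and connection values here are hypothetical), the transformation runs entirely inside Hive and no rows flow back to SAS:

   proc sql;
      connect to hadoop (server='YYYYYY' schema=ZZZZZ);
      /* Runs inside Hive; nothing is returned to SAS. */
      execute ( create table test_subset as
                select * from test limit 100000 ) by hadoop;
      disconnect from hadoop;
   quit;

This is useful when the end result should stay in Hadoop; only use an implicit or pass-through SELECT when the rows genuinely need to land in a SAS data set.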

Data never sleeps
Occasional Contributor
Posts: 6

Re: SAS Pass through connection to hadoop

I used the following code to access data from Hadoop. It took 6 hrs to get 100,000 records with 8K columns, which seems very slow; without the options it took 8 hrs. Can you please check and give suggestions?

 

 

options sgio=yes;
options bufno=2000 bufsize=48K;
libname sastest 'E:\SASMA\SASUserData\User\krishnaramasamy\Hadoop data';

proc sql;
   connect to hadoop (user=%lowcase(&sysuserid.) password="XXXXX"
      server='YYYYYY'
      uri='jdbc:hive2://YYYYYYY.com:8443/default?hive.server2.transport.mode=http;hive.execution.engine=tez;hive.server2.thrift.http.path=gateway/hdpprod/hive'
      schema=ZZZZZ);
   create table sastest.test as
      select * from connection to hadoop
      ( select * from test limit 100000 );
   disconnect from hadoop;
quit;
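
With 8K columns, one way to cut the transfer cost is to pull only the variables you need and compress the local SAS data set. This is a hedged sketch, not a tested fix: col1-col3 stand in for whichever columns are actually required, and the connection values are the placeholders from the post:

   /* Select only the needed columns in Hive and compress the output. */
   /* col1-col3 are placeholders for the variables actually required. */
   proc sql;
      connect to hadoop (server='YYYYYY' schema=ZZZZZ);
      create table sastest.test_subset (compress=yes) as
         select * from connection to hadoop
         ( select col1, col2, col3 from test limit 100000 );
      disconnect from hadoop;
   quit;

Projecting columns inside the pass-through query means Hive never ships the unneeded ones across the network at all.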

Super User
Posts: 5,852

Re: SAS Pass through connection to hadoop

Posted in reply to krishnaram101

This seems like a Hadoop/Hive admin issue rather than a SAS one, since it's the query inside Hive that takes the time - unless you have extremely small bandwidth to the Hadoop cluster.

Data never sleeps
