<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: BULKLOAD Libname option for Hadoop? in SAS Programming</title>
    <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680058#M205446</link>
    <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Well, I'll be danged.&amp;nbsp; Not only did INSERTBUFF make a difference, it totally changed SAS's behavior.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, the time stamps.&amp;nbsp; This was for a run of 250 records (infinitesimal by normal SAS standards).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Without INSERTBUFF:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;NOTE: DATA statement used (Total process time):
      real time           1:25:03.04
      cpu time            0:00:01.32
&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With INSERTBUFF:&lt;/P&gt;
&lt;PRE&gt;NOTE: DATA statement used (Total process time):
      real time           0:00:21.38
      cpu time            0:00:00.15
&lt;/PRE&gt;
&lt;P&gt;Basically, it took an hour and a half without INSERTBUFF versus only 21&amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;seconds&lt;/EM&gt;&lt;/STRONG&gt; with INSERTBUFF, a nearly unbelievable difference!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The SASTRACE tells the full story.&amp;nbsp; Take a close look at the following.&lt;/P&gt;
&lt;P&gt;Without INSERTBUFF:&lt;/P&gt;
&lt;PRE&gt;ODBC_5: Executed: on connection 2
CREATE TABLE Informatics_Prd.HIC_NUMBERS_BULK_JB (hic_num VARCHAR(20))
 
ODBC_6: Prepared: on connection 2
INSERT INTO Informatics_Prd.HIC_NUMBERS_BULK_JB (hic_num)  VALUES ( ? )
 
ODBC_7: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_8: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_9: Executed: on connection 2
Prepared statement ODBC_6

...ODBC messages 10 - 253 redacted for brevity...

ODBC_254: Executed: on connection 2
Prepared statement ODBC_6
 
 ODBC_255: Executed: on connection 2
Prepared statement ODBC_6
 
 ODBC_256: Executed: on connection 2
Prepared statement ODBC_6
 
NOTE: There were 250 observations read from the data set WORK.MEMBERS_JB.
NOTE: The data set OPSI_RSC.HIC_NUMBERS_BULK_JB has 250 observations and 1 variables.
 
Summary Statistics for ODBC are:
Total SQL execution seconds were:                 5100.644141
Total SQL prepare seconds were:                     0.489638
Total seconds used by the ODBC ACCESS engine were   5101.169617
&lt;/PRE&gt;
&lt;P&gt;With INSERTBUFF:&lt;/P&gt;
&lt;PRE&gt;ODBC_3: Executed: on connection 2
CREATE TABLE Informatics_Prd.HIC_NUMBERS_BULK_BUFF_JB (hic_num VARCHAR(20))
 
ODBC_4: Prepared: on connection 2
INSERT INTO Informatics_Prd.HIC_NUMBERS_BULK_BUFF_JB (hic_num)  VALUES ( ? )
 
NOTE: There were 250 observations read from the data set WORK.MEMBERS_JB.
 
ODBC_5: Executed: on connection 2
Prepared statement ODBC_4
 
NOTE: The data set OPSI_RSC.HIC_NUMBERS_BULK_BUFF_JB has 250 observations and 1 variables.
 
Summary Statistics for ODBC are:
Total SQL execution seconds were:                  20.292003
Total SQL prepare seconds were:                     0.507490
Total seconds used by the ODBC ACCESS engine were    21.037232&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Do you see the difference?&amp;nbsp; Without INSERTBUFF, 250 separate communications are made via ODBC with Hadoop, and 250 separate Map/Reduce jobs are spawned in Hadoop.&amp;nbsp; It takes a very long time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With INSERTBUFF, message "ODBC_5" states that the statement prepared in ODBC_4 was executed, and, from the rest of the trace, we can see that it was executed just&amp;nbsp;&lt;EM&gt;one&lt;/EM&gt;&amp;nbsp;time.&amp;nbsp; Only &lt;EM&gt;&lt;STRONG&gt;one&lt;/STRONG&gt;&lt;/EM&gt; Map/Reduce job was spawned in Hadoop, and it finished in seconds.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hallelujah!&amp;nbsp; This is quite good.&amp;nbsp; Now, I need to try INSERTBUFF with larger sample sizes.&amp;nbsp; As I recall, the max for INSERTBUFF is 32767.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;And,&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;, I apologize that I ever doubted you.&amp;nbsp; I thought INSERTBUFF might make a difference, but I didn't think that INSERTBUFF would fundamentally change the way that SAS interacts with Hadoop.&amp;nbsp;&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;, thank you for your encouragement.&amp;nbsp; I sometimes need a bit of a nudge to get over my preconceived notions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 28 Aug 2020 18:45:55 GMT</pubDate>
    <dc:creator>jimbarbour</dc:creator>
    <dc:date>2020-08-28T18:45:55Z</dc:date>
    <item>
      <title>BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679049#M205043</link>
      <description>&lt;P&gt;I have some data in a SAS data set that I want to upload to Hadoop (the MapR implementation not Couldera/Hortonworks).&amp;nbsp; I am connecting to Hadoop using the &lt;STRONG&gt;ODBC&lt;/STRONG&gt; engine.&amp;nbsp; I don't have the Hadoop engine available to me; just ODBC.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As is, INSERT after INSERT after INSERT is being performed, one row at a time.&amp;nbsp; While that might work for a few hundred or even a few thousand rows, I have about 30,000,000 rows (just one column, an ID number) that I want to upload.&amp;nbsp; Doing an INSERT one row at a time will probably complete about the same time that our sun flames out and becomes a cold, dark cinder.&amp;nbsp; Call me impatient.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I want to use the BULKLOAD option. I code BULKLOAD=YES on my Libname, and the Libname is assigned, no problem.&amp;nbsp; However, when I actually run my code to load the data to Hadoop from SAS, I get an "ERROR: Unable to initialize bulk loader" error message in my SAS log.&amp;nbsp;&lt;/P&gt;
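&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For reference, the Libname is along these lines (the DSN shown here is a placeholder, not my actual value):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;LIBNAME OPSI_RSC ODBC
	DSN=MyHadoopDSN         /* placeholder ODBC data source for Hive */
	SCHEMA=Informatics_Prd  /* target Hive schema                    */
	BULKLOAD=YES            /* the option that fails to initialize   */
	;
&lt;/CODE&gt;&lt;/PRE&gt;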
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I don't suppose anyone can offer me a bit of a hint here, can they?&amp;nbsp; I'd really like to use the BULKLOAD option, but I can't seem to find any information on how to trace down why the bulk loader won't initialize.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 25 Aug 2020 04:00:59 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679049#M205043</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-25T04:00:59Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679058#M205047</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/37107"&gt;@jimbarbour&lt;/a&gt;&amp;nbsp;- I don't know about Hadoop and bulk loading but have you tried the INSERTBUFF = option? This definitely speeds up loading traditional RDBMSs in my experience. Try values like 1000, 10000, 20000 etc and see if it helps.&lt;/P&gt;</description>
      <pubDate>Tue, 25 Aug 2020 04:59:10 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679058#M205047</guid>
      <dc:creator>SASKiwi</dc:creator>
      <dc:date>2020-08-25T04:59:10Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679060#M205048</link>
      <description>Interesting.  I will try that.  I'm not sure what effect it will have one way or the other.  I'm uploading a series of 20-character ID numbers (several million of them) to Hadoop, and SAS is feeding them to Hadoop one at a time in the form of an SQL INSERT.  I wonder if buffering will help given that each ID is sent as a separate SQL query, but it certainly can't hurt to try.&lt;BR /&gt;&lt;BR /&gt;Thank you,&lt;BR /&gt;&lt;BR /&gt;Jim</description>
      <pubDate>Tue, 25 Aug 2020 05:09:19 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679060#M205048</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-25T05:09:19Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679234#M205098</link>
      <description>I created a support track with SAS.  It may be that the ODBC engine just doesn't support bulk loading for Hadoop.&lt;BR /&gt;&lt;BR /&gt;Jim</description>
      <pubDate>Tue, 25 Aug 2020 18:09:58 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679234#M205098</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-25T18:09:58Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679334#M205125</link>
      <description>&lt;P&gt;I spoke with SAS tech support, and, as I feared, the ODBC engine does not support BULKLOAD.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One possible workaround may be to use Proc Hadoop to copy the contents of a SAS table to a file stored on HDFS and then run a Hive query such that the file can be read as a Hive table.&amp;nbsp; If I get that working, I'll report back.&lt;/P&gt;
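&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Roughly what I have in mind, as a sketch only (the config file, credentials, and paths below are hypothetical):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;/* Sketch -- cfg file, user, and paths are hypothetical */
PROC HADOOP cfg="/path/to/hadoop-site.xml" username="me" password="xxx";
	HDFS copyfromlocal="/local/hic_numbers.csv"
	     out="/user/me/hic_numbers/hic_numbers.csv";
RUN;
&lt;/CODE&gt;&lt;/PRE&gt;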
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;</description>
      <pubDate>Wed, 26 Aug 2020 04:19:46 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679334#M205125</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-26T04:19:46Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679375#M205133</link>
      <description>&lt;P&gt;&lt;EM&gt;&amp;gt;&amp;nbsp;I get an "ERROR: Unable to initialize bulk loader" error message in my SAS log.&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;This type of error message probably calls for Tech Support to have a look.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;gt;&lt;/EM&gt;&lt;SPAN&gt;&lt;EM&gt;As is, INSERT after INSERT after INSERT is being performed,&lt;/EM&gt; &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Do you use proc append to add the data?&lt;/P&gt;</description>
      <pubDate>Wed, 26 Aug 2020 08:15:26 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679375#M205133</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2020-08-26T08:15:26Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679623#M205232</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Yes, you're quite right; this calls for SAS tech support.&amp;nbsp; I did open a track with them, and they replied that the BULKLOAD option is not supported when using the ODBC engine.&amp;nbsp; One must license the Hadoop engine in order to use the BULKLOAD option.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I was using a very simple DATA step to transfer the data from SAS to Hadoop.&amp;nbsp; To wit:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;DATA	OPSI_RSC.Hic_Numbers_Bulk_JB;
	SET	WORK.MEMBERS_JB	(KEEP=hic_num	OBS=&amp;amp;Obs)
	;
RUN;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;The Libname for OPSI_RSC has the BULKLOAD=YES option coded on it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Aug 2020 00:38:06 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679623#M205232</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-27T00:38:06Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679644#M205242</link>
      <description>&lt;P&gt;It seems to me that the error message should be different ("unsupported option") and be delivered when the library is defined.&lt;/P&gt;
&lt;P&gt;As it is, the message is extremely misleading, to the point of being a defect, imho.&lt;/P&gt;
&lt;P&gt;proc append might be faster as it does not process anything (no PDV loading, etc.) and does not need to read one record at a time.&lt;/P&gt;
      <pubDate>Thu, 27 Aug 2020 05:18:46 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679644#M205242</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2020-08-27T05:18:46Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679750#M205286</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;It seems to me that the error message should be different ("unsupported option") and be delivered when the library is defined.&lt;/P&gt;
&lt;P&gt;As it is, it is extremely misleading, to the point of a defect imho.&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;, agreed.&amp;nbsp; Such an error message should be a) clear and b) issued at the time of allocation.&amp;nbsp; SAS seems a bit lax about such things these days.&amp;nbsp; I recently got a "read access violation" error that advised me to contact SAS Tech Support.&amp;nbsp; They basically just shrugged it off.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In my opinion, good software does not crash or get memory/access violations.&amp;nbsp; Well written software handles internal errors just a bit more gracefully.&amp;nbsp; Based on the last three or so years of experience, I am not at all impressed with how SAS is handling Hadoop.&amp;nbsp; I hope SAS makes it in the brave new world of the Cloud and "Big Data."&amp;nbsp; They seem a bit, well, sloppy thus far.&lt;/P&gt;
&lt;PRE&gt;ERROR:  An exception has been encountered.
Please contact technical support and provide them with the following traceback information:
 
The SAS task name is [SQL]
ERROR:  Read Access Violation SQL
Exception occurred at (802742F8)
Task Traceback
Address   Frame     (DBGHELP API Version 4.0 rev 5)
00000001802742F8  0000000007F2DFE0  MapRHiveODBC64:ConfigDSN+0x258868
... [redacted for brevity]&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;proc append might be faster as it does not process anything (no PDV loading etc) and does not need to read one record at a time.&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;I'll try it, but really this isn't an issue of how SAS does things (as it most definitely would be if I were using SAS data sets).&amp;nbsp; What really matters here is what SAS passes to Hadoop/Hive.&amp;nbsp; I suspect it will be a series of INSERTs, but it's worth a try.&amp;nbsp; I think the better workaround will be to upload a file to HDFS and then "define" it as a table via Hive.&amp;nbsp; I just need to figure out the specifics of how to do that, but, in theory, that would be the best way.&lt;/P&gt;
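&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Hive terms, the "define" step would presumably be a CREATE EXTERNAL TABLE over the HDFS directory, issued through explicit pass-through since ODBC is all I have.&amp;nbsp; A sketch (the DSN, path, and table names are hypothetical):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;PROC SQL;
	CONNECT TO ODBC (DSN=MyHadoopDSN);  /* hypothetical DSN */
	EXECUTE (
		CREATE EXTERNAL TABLE Informatics_Prd.hic_numbers (hic_num STRING)
		ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
		LOCATION '/user/me/hic_numbers'
	) BY ODBC;
	DISCONNECT FROM ODBC;
QUIT;
&lt;/CODE&gt;&lt;/PRE&gt;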
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;, do you have occasion to work with Hadoop?&amp;nbsp; We've been experimenting with an alternative to Hive, something called Presto.&amp;nbsp; As Hive sits on top of Hadoop, so also does Presto, but Presto is &lt;EM&gt;several&lt;/EM&gt; orders of magnitude faster based on the testing we've done.&amp;nbsp; I could send you my write up of my tests if you were interested.&amp;nbsp; I had not heretofore heard of Presto and was not even aware that there was an alternative to Hive.&amp;nbsp; I am quite impressed with Presto so far.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Aug 2020 15:02:33 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679750#M205286</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-27T15:02:33Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679803#M205314</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;No luck.&amp;nbsp; As I suspected (see previous reply), SAS is translating Proc Append into a series of INSERT statements (see below screen print from the Hadoop Resource Manager) when it communicates with Hive.&amp;nbsp; In effect, there is no difference between a Data step and a Proc Append in terms of how SAS interacts with Hive.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ODBC_Append_Inserts_2020-08-27_09-42-51.jpg" style="width: 661px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/48724i132EF09EF08C4871/image-size/large?v=v2&amp;amp;px=999" role="button" title="ODBC_Append_Inserts_2020-08-27_09-42-51.jpg" alt="ODBC_Append_Inserts_2020-08-27_09-42-51.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Aug 2020 16:49:12 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679803#M205314</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-27T16:49:12Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679899#M205366</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/37107"&gt;@jimbarbour&lt;/a&gt;&amp;nbsp; - Did you try INSERTBUFF? It is definitely an ODBC allowable option.&lt;/P&gt;</description>
      <pubDate>Thu, 27 Aug 2020 20:25:39 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679899#M205366</guid>
      <dc:creator>SASKiwi</dc:creator>
      <dc:date>2020-08-27T20:25:39Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679930#M205377</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The way that SAS works is that it breaks each record in the SAS data set to be uploaded into a separate Map/Reduce job.&amp;nbsp; That's one Hadoop Map/Reduce job per row.&amp;nbsp; Say I have 10,000,000 ID numbers I want to upload, one per row.&amp;nbsp; That means that there will be 10,000,000 separate jobs.&amp;nbsp; If each job runs in 15 to 20 seconds (which is about what they've been running), it would take about 5 years to do all the inserts.&amp;nbsp; Now, let's say we can use INSERTBUFF and reduce it to 5 seconds per insert.&amp;nbsp; The inserts would still take about 1.5 years.&amp;nbsp; Even if we reduced it to 1 second per insert, the inserts would take more than 3 months.&amp;nbsp;&amp;nbsp;I will try INSERTBUFF, but I suspect that while it may help, it won't make the upload run in a practical amount of time.&lt;/P&gt;
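&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The back-of-the-envelope arithmetic behind those figures, as a DATA step:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;DATA _NULL_;
	jobs  = 10000000;  /* one Map/Reduce job per row   */
	secs  = 17.5;      /* midpoint of 15 to 20 seconds */
	years = jobs * secs / (60*60*24*365.25);
	PUT years=;        /* roughly 5.5 years            */
RUN;
&lt;/CODE&gt;&lt;/PRE&gt;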
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 02:58:41 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679930#M205377</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-28T02:58:41Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679942#M205385</link>
      <description>&lt;P&gt;Yes I used to use SAS and Hadoop.&lt;/P&gt;
&lt;P&gt;We ended up using Hive to copy data to Hadoop (using&lt;FONT face="courier new,courier"&gt; proc hadoop&lt;/FONT&gt;), and Impala to query the data, as these were fastest for their respective tasks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;gt; They basically just shrugged it off.&amp;nbsp;&lt;BR /&gt;&lt;/EM&gt;Shame.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Not surprised that proc append issues insert statements, but we had to try.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;&amp;nbsp;What does using option INSERTBUFF actually do? Does it not issue INSERT statements? How can it insert several rows at once?&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/37107"&gt;@jimbarbour&lt;/a&gt;&amp;nbsp;Option INSERTBUFF seems to require option&amp;nbsp;&lt;SPAN&gt;DBCOMMIT&amp;nbsp;to be set too. Have you done this?&lt;/SPAN&gt;&lt;/P&gt;
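&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I.e. something along these lines on the Libname, presumably (the DSN and values are just for illustration):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;LIBNAME OPSI_RSC ODBC
	DSN=MyHadoopDSN   /* illustrative DSN         */
	INSERTBUFF=10000  /* rows per buffered insert */
	DBCOMMIT=100000   /* rows between commits     */
	;
&lt;/CODE&gt;&lt;/PRE&gt;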
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 04:52:11 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679942#M205385</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2020-08-28T04:52:11Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679948#M205391</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From the SAS documentation, the following databases support INSERTBUFF:&lt;/P&gt;
&lt;TABLE class="xisDoc-summary"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TH class="xisDoc-dataSource"&gt;Data source:&lt;/TH&gt;
&lt;TD class="xisDoc-summaryValue"&gt;Amazon Redshift, Aster, DB2 under UNIX and PC Hosts, Google BigQuery, Greenplum, HAWQ, Impala, JDBC, Microsoft SQL Server, MySQL, Netezza, ODBC, OLE DB, Oracle, PostgreSQL, SAP HANA, SAP IQ, Snowflake, Vertica, Yellowbrick&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I see Impala which we don't have, but I don't see Hive.&amp;nbsp; I suspect that INSERTBUFF may not have any effect on a Hive query.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;P.S.&amp;nbsp; Perhaps this is a consequence of our not licensing the Hadoop specific engine (we only have ODBC), but when I tried to run Proc Hadoop, I got a message that the procedure was not found.&amp;nbsp; I think I'm just out of luck in terms of SAS.&amp;nbsp; If I'm going to upload lists of variables like ID numbers for joins, it looks like I'm going to have to do it outside of SAS.&lt;/P&gt;
&lt;PRE&gt;ERROR: Procedure HADOOP not found.&lt;/PRE&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 06:04:29 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679948#M205391</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-28T06:04:29Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679966#M205406</link>
      <description>&lt;P&gt;&amp;gt;&amp;nbsp;&lt;SPAN&gt;&amp;nbsp;Perhaps this is a consequence of our not licensing the Hadoop specific engine&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Yes it is.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I think your best bet is to 1) transfer flat files to HDFS, and then 2) import the files.&lt;/P&gt;
&lt;P&gt;Probably via 1) a DATA step with a FILE statement (or the FCOPY function), and then 2) PROC SQL with EXECUTE ... BY pass-through.&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 07:13:50 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679966#M205406</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2020-08-28T07:13:50Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679970#M205408</link>
      <description>&lt;P&gt;&lt;EM&gt;&amp;gt;&amp;nbsp;I see Impala which we don't have, but I don't see Hive.&amp;nbsp; I suspect that INSERTBUFF may not have any effect on a Hive query.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;ODBC is there, which you are using, so we could have expected an effect.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 07:37:43 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/679970#M205408</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2020-08-28T07:37:43Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680058#M205446</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Well, I'll be danged.&amp;nbsp; Not only did INSERTBUFF make a difference, it totally changed SAS's behavior.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, the time stamps.&amp;nbsp; This was for a run of 250 records (infinitesimal by normal SAS standards).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Without INSERTBUFF:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;NOTE: DATA statement used (Total process time):
      real time           1:25:03.04
      cpu time            0:00:01.32
&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With INSERTBUFF:&lt;/P&gt;
&lt;PRE&gt;NOTE: DATA statement used (Total process time):
      real time           0:00:21.38
      cpu time            0:00:00.15
&lt;/PRE&gt;
&lt;P&gt;Basically, it took an hour and a half without INSERTBUFF versus only 21&amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;seconds&lt;/EM&gt;&lt;/STRONG&gt; with INSERTBUFF, a nearly unbelievable difference!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The SASTRACE tells the full story.&amp;nbsp; Take a close look at the following.&lt;/P&gt;
&lt;P&gt;Without INSERTBUFF:&lt;/P&gt;
&lt;PRE&gt;ODBC_5: Executed: on connection 2
CREATE TABLE Informatics_Prd.HIC_NUMBERS_BULK_JB (hic_num VARCHAR(20))
 
ODBC_6: Prepared: on connection 2
INSERT INTO Informatics_Prd.HIC_NUMBERS_BULK_JB (hic_num)  VALUES ( ? )
 
ODBC_7: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_8: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_9: Executed: on connection 2
Prepared statement ODBC_6

...ODBC messages 10 - 253 redacted for brevity...

ODBC_254: Executed: on connection 2
Prepared statement ODBC_6
 
 ODBC_255: Executed: on connection 2
Prepared statement ODBC_6
 
 ODBC_256: Executed: on connection 2
Prepared statement ODBC_6
 
NOTE: There were 250 observations read from the data set WORK.MEMBERS_JB.
NOTE: The data set OPSI_RSC.HIC_NUMBERS_BULK_JB has 250 observations and 1 variables.
 
Summary Statistics for ODBC are:
Total SQL execution seconds were:                 5100.644141
Total SQL prepare seconds were:                     0.489638
Total seconds used by the ODBC ACCESS engine were   5101.169617
&lt;/PRE&gt;
&lt;P&gt;With INSERTBUFF:&lt;/P&gt;
&lt;PRE&gt;ODBC_3: Executed: on connection 2
CREATE TABLE Informatics_Prd.HIC_NUMBERS_BULK_BUFF_JB (hic_num VARCHAR(20))
 
ODBC_4: Prepared: on connection 2
INSERT INTO Informatics_Prd.HIC_NUMBERS_BULK_BUFF_JB (hic_num)  VALUES ( ? )
 
NOTE: There were 250 observations read from the data set WORK.MEMBERS_JB.
 
ODBC_5: Executed: on connection 2
Prepared statement ODBC_4
 
NOTE: The data set OPSI_RSC.HIC_NUMBERS_BULK_BUFF_JB has 250 observations and 1 variables.
 
Summary Statistics for ODBC are:
Total SQL execution seconds were:                  20.292003
Total SQL prepare seconds were:                     0.507490
Total seconds used by the ODBC ACCESS engine were    21.037232&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Do you see the difference?&amp;nbsp; Without INSERTBUFF, 250 separate communications are made via ODBC with Hadoop, and 250 separate Map/Reduce jobs are spawned in Hadoop.&amp;nbsp; It takes a very long time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With INSERTBUFF, message "ODBC_5" states that the statement prepared in ODBC_4 was executed, and, from the rest of the trace, we can see that it was executed just&amp;nbsp;&lt;EM&gt;one&lt;/EM&gt;&amp;nbsp;time.&amp;nbsp; Only &lt;EM&gt;&lt;STRONG&gt;one&lt;/STRONG&gt;&lt;/EM&gt; Map/Reduce job was spawned in Hadoop, and it finished in seconds.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hallelujah!&amp;nbsp; This is quite good.&amp;nbsp; Now, I need to try INSERTBUFF with larger sample sizes.&amp;nbsp; As I recall, the max for INSERTBUFF is 32767.&lt;/P&gt;
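&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the larger runs I'll parameterize the buffer size, something like the following (the DSN is a placeholder):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;%LET Buff_Size = 10000;  /* to be varied, up to the 32767 max */
LIBNAME OPSI_RSC ODBC
	DSN=MyHadoopDSN      /* placeholder DSN */
	INSERTBUFF=&amp;amp;Buff_Size
	;
&lt;/CODE&gt;&lt;/PRE&gt;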
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;And,&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;, I apologize that I ever doubted you.&amp;nbsp; I thought INSERTBUFF might make a difference, but I didn't think that INSERTBUFF would fundamentally change the way that SAS interacts with Hadoop.&amp;nbsp;&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;, thank you for your encouragement.&amp;nbsp; I sometimes need a bit of a nudge to get over my preconceived notions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 18:45:55 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680058#M205446</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-28T18:45:55Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680068#M205448</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;(and hopefully of interest to&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;&amp;nbsp;too)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You brought up the point of DBCOMMIT.&amp;nbsp; This is a very valid point.&amp;nbsp; Thank you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I've now run a series of tests.&amp;nbsp; First, I parameterized the buffer and DBcommit settings on my Libname and put in a log statement to keep track of them for testing.&amp;nbsp; Note the DBcommit parameter below:&lt;/P&gt;
&lt;PRE&gt;       +-----------------------------------------------------+
NOTE:  | Lib=OPSI_RSC allocated with buffers (Buffs=YES)     |
       | ReadBuff=10,000                                     |
       | InsertBuff=10,000                                   |
       | DBcommit=100,000                                    |
       +-----------------------------------------------------+&lt;/PRE&gt;
&lt;P&gt;Next, I set my Obs parameter to 100,000 and ran the job.&amp;nbsp; Below is the SASTRACE.&lt;/P&gt;
&lt;PRE&gt;ODBC_7: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_8: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_9: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_10: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_11: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_12: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_13: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_14: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_15: Executed: on connection 2
Prepared statement ODBC_6
 
ODBC_16: Executed: on connection 2
Prepared statement ODBC_6
&lt;BR /&gt;NOTE: There were 100000 observations read from the data set WORK.MEMBERS_JB.
NOTE: The data set OPSI_RSC.HIC_NUMBERS_BULK_BUFF_JB has 100000 observations and 1 variables.
 
Summary Statistics for ODBC are:
Total SQL execution seconds were:                 212.398212
Total SQL prepare seconds were:                     0.456196
Total seconds used by the ODBC ACCESS engine were   213.266036
&lt;/PRE&gt;
&lt;P&gt;Note in the above SASTRACE that the "prepared statement" is executed&amp;nbsp;&lt;EM&gt;&lt;STRONG&gt;ten&lt;/STRONG&gt;&lt;/EM&gt; times.&amp;nbsp; If my DBCOMMIT is 100,000 and my Obs parameter is 100,000, shouldn't the prepared statement be executed only once?&amp;nbsp; Actually, no.&amp;nbsp; SAS takes the&amp;nbsp;&lt;EM&gt;lesser&lt;/EM&gt; of&amp;nbsp;the INSERTBUFF and DBCOMMIT values.&amp;nbsp; Thus an INSERTBUFF of 10,000 and a DBCOMMIT of 100,000 result in Map/Reduce jobs in Hadoop running with 10,000 rows each.&amp;nbsp; Note that if you do not code the DBCOMMIT parameter at all, DBCOMMIT defaults to 1,000, even if INSERTBUFF = 32,767.&amp;nbsp; To get the fastest upload speed, DBCOMMIT and INSERTBUFF must be used in conjunction with one another.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When using INSERTBUFF in conjunction with DBCOMMIT, uploading to Hadoop is roughly &lt;EM&gt;&lt;STRONG&gt;32,000 times faster&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;than when coding no buffer or commit parameters.&amp;nbsp; Not too shabby for a parameter change!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, for my putative 10,000,000 row upload, at 20 seconds per commit with INSERTBUFF and DBCOMMIT set to the max, 32767, it would take about 2 hours to run, which isn't half bad.&amp;nbsp; Using&amp;nbsp;INSERTBUFF in conjunction with DBCOMMIT is a very workable method of uploading to Hadoop.&lt;/P&gt;
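&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Putting it all together, the combined settings would look something like this.&amp;nbsp; Again, the DSN and schema names are placeholders; only the two option settings are the point here:&lt;/P&gt;
&lt;PRE&gt;/* Sketch only -- DSN and schema names are hypothetical placeholders.   */
/* SAS batches by the lesser of the two values, so raise them together. */
LIBNAME opsi_rsc ODBC DSN=my_hadoop_dsn SCHEMA=Informatics_Prd
                 INSERTBUFF=32767   /* rows sent per INSERT execution */
                 DBCOMMIT=32767;    /* rows per commit                */&lt;/PRE&gt;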
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;, you suggested INSERTBUFF, so perforce I award the solution to you, but to&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;, I thank you for your DBCOMMIT suggestion.&amp;nbsp; By changing DBCOMMIT from its default of 1,000 to its maximum of 32,767, I should be able to gain a roughly thirtyfold increase in speed over the default.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 19:04:43 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680068#M205448</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2020-08-28T19:04:43Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680112#M205469</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/37107"&gt;@jimbarbour&lt;/a&gt;&amp;nbsp;- Good to hear you have a solution. I did think of DBCOMMIT too but didn't put it in my response.&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 21:06:33 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680112#M205469</guid>
      <dc:creator>SASKiwi</dc:creator>
      <dc:date>2020-08-28T21:06:33Z</dc:date>
    </item>
    <item>
      <title>Re: BULKLOAD Libname option for Hadoop?</title>
      <link>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680152#M205483</link>
      <description>You set insertbuff to 10,000. Have you tried higher values?&lt;BR /&gt;</description>
      <pubDate>Fri, 28 Aug 2020 23:48:51 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/BULKLOAD-Libname-option-for-Hadoop/m-p/680152#M205483</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2020-08-28T23:48:51Z</dc:date>
    </item>
  </channel>
</rss>

