<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: SAS-Hive (Hadoop) Efficiency with type String in SAS Programming</title>
    <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746914#M234381</link>
    <description>&lt;P&gt;Update to&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13674"&gt;@LinusH&lt;/a&gt;:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I've done a couple more runs of my SUBSTR and CAST to character method of limiting columns of type String.&amp;nbsp; See chart, below.&amp;nbsp; I think something big must have been running yesterday which skewed the results.&amp;nbsp; The following two runs are much closer to my 3 hour benchmark -- and each of them uses about 10% less disk space than a run without any optimization of column lengths.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jimbarbour_0-1623280230539.png" style="width: 999px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/60202i7573E74E87D14F3D/image-size/large?v=v2&amp;amp;px=999" role="button" title="jimbarbour_0-1623280230539.png" alt="jimbarbour_0-1623280230539.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Wed, 09 Jun 2021 23:12:30 GMT</pubDate>
    <dc:creator>jimbarbour</dc:creator>
    <dc:date>2021-06-09T23:12:30Z</dc:date>
    <item>
      <title>SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746648#M234244</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Background&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We have a lot of Hive tables that we query using SAS.&amp;nbsp; The problem is that many of the Hive columns are type String.&amp;nbsp; SAS wants to convert each and every column to $32767, which kills performance.&amp;nbsp; We can set a default definition at the ODBC driver level, in our case $255.&amp;nbsp; However, many of our columns are one-position Y/N flags or the like -- which wastes 254 positions.&amp;nbsp; Some of our tables are close to a billion rows.&amp;nbsp; The majority of our columns are character, and we are using COMPRESS=CHAR.&amp;nbsp; Even with compression, we've found that space is being wasted.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Objective&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;I want something more space efficient.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Current means&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;I have a work-around, and it does save space, but it has performance problems.&amp;nbsp; More on that in a minute.&lt;/P&gt;
&lt;P&gt;My work-around:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;1&lt;/STRONG&gt;.&amp;nbsp; Run a normal SELECT column1, column2, ... columnN.&amp;nbsp; All of the type String columns will default to $255.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2&lt;/STRONG&gt;.&amp;nbsp; Prior to the next production run, execute a utility program that reads the output from step 1 and finds the maximum length of each column and creates a dataset listing the column name, type, and length (modeled after the output one can get from exporting a SAS dataset layout using SAS Enterprise Guide).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;3&lt;/STRONG&gt;.&amp;nbsp; In the next production run, a macro is run to read the layout created in step 2.&amp;nbsp; The macro generates explicit pass through HQL that does the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Selects the column&lt;/LI&gt;
&lt;LI&gt;Substrings the column using the length from the layout created in step 2.&lt;/LI&gt;
&lt;LI&gt;Casts the column as VARCHAR again using the length&amp;nbsp;from the layout created in step 2.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For example, if the maximum length of a type String column in Hive named Gmid is 40, then "SELECT Gmid" becomes&amp;nbsp;"CAST(SUBSTR(Gmid, 1, 40) AS CHAR(40)) AS Gmid".&amp;nbsp;&lt;/P&gt;
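&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In other words, the macro emits explicit pass-through along these lines (a simplified sketch -- the connection options and column names here are illustrative, not our real ones; we actually connect through ODBC):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;proc sql;
   connect to hadoop (server="hive-server" port=10000 schema=mydb);
   create table work.claims (compress=char) as
   select * from connection to hadoop
      ( select CAST(SUBSTR(Gmid,     1, 40) AS CHAR(40)) AS Gmid,
               CAST(SUBSTR(Clm_Flag, 1,  1) AS CHAR(1))  AS Clm_Flag
        from   mydb.claims );
   disconnect from hadoop;
quit;&lt;/CODE&gt;&lt;/PRE&gt;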
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As I say, the work-around does save space -- about 10 Gigabytes per run on the one job I've tried it on, and we have many jobs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The problem&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The problem is that performance suffers.&amp;nbsp; I was hoping that since Hadoop disperses processing across multiple nodes (Hadoop servers) there would not be a performance degradation due to the CASTing and SUBSTRing, but, alas, no.&amp;nbsp; My 3.5 hour job now runs in about 5 hours.&amp;nbsp; I do save disk space but at the cost of performance.&amp;nbsp; We already break the process into a dozen or so sub-processes run in parallel, one for each of our clients, so further sub-division and parallelization would be problematic from a maintenance and re-run/restart perspective.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The question&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Here's where you come in.&amp;nbsp; How can I accomplish the same thing more efficiently?&amp;nbsp; I have my layout "metadata" so I know the column names, types, and max lengths in advance.&amp;nbsp; Is there another (better) way?&amp;nbsp; Please brainstorm with me.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I thank you in advance,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 09 Jun 2021 03:28:44 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746648#M234244</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2021-06-09T03:28:44Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746687#M234266</link>
      <description>&lt;P&gt;What if you set the length in the outer (SAS-side) SQL?&lt;/P&gt;
&lt;P&gt;Too much data will be transmitted, but space will be OK.&lt;/P&gt;
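&lt;P&gt;A rough sketch of what I mean (column names invented; the inner query sent to Hive is unchanged):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;proc sql;
   create table work.claims as
   select Gmid     length=40,   /* LENGTH set on the SAS side only */
          Clm_Flag length=1
   from   hivelib.claims;
quit;&lt;/CODE&gt;&lt;/PRE&gt;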
&lt;P&gt;I would also ask the question on Hadoop forums about why cast() and substr() kill the speed so much.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;gt;The majority of our columns are character, and we are using COMPRESS=CHAR.&amp;nbsp; Even with compression, we've found that space is being wasted.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;That should not be the case, trailing spaces should not take much room at all. SAS is slow dealing with long strings though, so trimming the length asap is a must.&lt;/P&gt;</description>
      <pubDate>Wed, 09 Jun 2021 08:46:20 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746687#M234266</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2021-06-09T08:46:20Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746698#M234271</link>
      <description>&lt;P&gt;As an alternative, have you looked at the option to use the SASFMT attribute in Hive?&lt;/P&gt;
&lt;P&gt;Don't know if it's more efficient than CAST, but perhaps worth a try?&lt;/P&gt;
&lt;P&gt;Or the SAS option DBSASTYPE (in line with what&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;is suggesting).&lt;/P&gt;
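&lt;P&gt;Something like this, perhaps (names invented; DBSASTYPE= is a data set option applied on the Hive libref):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;data work.claims;
   set hivelib.claims (dbsastype=(Gmid='CHAR(40)' Clm_Flag='CHAR(1)'));
run;&lt;/CODE&gt;&lt;/PRE&gt;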
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For general performance I would recommend giving Impala a try. On the site I worked at, it was clearly faster than Hive. (But it has no support for SASFMT, though...)&lt;/P&gt;</description>
      <pubDate>Wed, 09 Jun 2021 09:53:43 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746698#M234271</guid>
      <dc:creator>LinusH</dc:creator>
      <dc:date>2021-06-09T09:53:43Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746893#M234369</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;What if you set the length in the outer (SAS-side) SQL?&lt;/P&gt;
&lt;P&gt;Too much data will be transmitted, but space will be OK.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;, thanks for your response.&amp;nbsp; I'm trying your suggestion now and will run some tests overnight.&amp;nbsp; I will post the results.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I originally discounted this because I thought the slowness was caused by how much data had to be transmitted from Hadoop to the SAS server, but perhaps I was wrong.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;I would also ask the question on Hadoop forums about why cast() and substr() kill the speed so much.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Good thought.&amp;nbsp; If I come up with anything, I will post here.&amp;nbsp; I did run an overnight test last night after our server use went down, and I got a run only about 10 minutes longer than a regular run, so the performance hit may not be as great as I originally thought.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;&lt;EM&gt;&amp;gt;The majority of our columns are character, and we are using COMPRESS=CHAR.&amp;nbsp; Even with compression, we've found that space is being wasted.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;That should not be the case, trailing spaces should not take much room at all. SAS is slow dealing with long strings though, so trimming the length asap is a must.&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;I'm finding about a 10% difference on average.&amp;nbsp; In other words, I use about 10% less space if I optimize the column lengths as compared to simply relying on COMPRESS=CHAR.&amp;nbsp; I pull in about 500 million rows in my monthly queries just for our Claims type (which is our largest category).&amp;nbsp; I save about 15 GB of disk space per run if I optimize the columns.&amp;nbsp; See chart, below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jimbarbour_0-1623270162045.png" style="width: 999px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/60201i33953E012161D1BD/image-size/large?v=v2&amp;amp;px=999" role="button" title="jimbarbour_0-1623270162045.png" alt="jimbarbour_0-1623270162045.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 09 Jun 2021 20:23:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746893#M234369</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2021-06-09T20:23:14Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746896#M234372</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13674"&gt;@LinusH&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Unfortunately, in this shop, I am restricted and cannot place SASFMT's in the tables themselves.&amp;nbsp; &lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I'm going to try the suggestion that&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;made of setting a length on the outer SQL (the SAS portion of the SQL) and see how the performance compares.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I&amp;nbsp;&lt;EM&gt;think&lt;/EM&gt; using DBSASTYPE and the suggestion that&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;made are doing roughly the same thing:&amp;nbsp; Converting the Hive type String to Character with a specified length on the SAS side.&amp;nbsp; However, if I can't get results with setting a LENGTH= as&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;suggests, maybe I'll try using DBSASTYPE next.&amp;nbsp; Thank you for that idea.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;</description>
      <pubDate>Wed, 09 Jun 2021 20:28:50 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746896#M234372</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2021-06-09T20:28:50Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746914#M234381</link>
      <description>&lt;P&gt;Update to&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13674"&gt;@LinusH&lt;/a&gt;:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I've done a couple more runs of my SUBSTR and CAST to character method of limiting columns of type String.&amp;nbsp; See chart, below.&amp;nbsp; I think something big must have been running yesterday which skewed the results.&amp;nbsp; The following two runs are much closer to my 3 hour benchmark -- and each of them uses about 10% less disk space than a run without any optimization of column lengths.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jimbarbour_0-1623280230539.png" style="width: 999px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/60202i7573E74E87D14F3D/image-size/large?v=v2&amp;amp;px=999" role="button" title="jimbarbour_0-1623280230539.png" alt="jimbarbour_0-1623280230539.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 09 Jun 2021 23:12:30 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/746914#M234381</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2021-06-09T23:12:30Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747219#M234489</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;What if you set the length in the outer (SAS-side) SQL?&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;, that turns out to do the trick.&amp;nbsp; It is both faster than non-optimized SQL and takes 10% less disk space. The fact that my optimizing column widths via a LENGTH parameter on the SAS side SQL actually saves space makes me wonder if SAS character compression isn't quite as efficient as I thought it was.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I also mention these results to&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13674"&gt;@LinusH&lt;/a&gt;, who is part of this thread, and to&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;, with whom I discussed some of these ideas in a separate thread about SQL Server, in case they might find them interesting.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Details are below for those who might be interested.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Steps&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Execute a normal Production run with non-optimized SQL.&lt;/LI&gt;
&lt;LI&gt;"Offline" (i.e. not as part of a Production run), run a macro that finds the maximum length of each column and creates a SAS dataset listing each column name, length, and type, one row per column&amp;nbsp; I also add the schema and the table name to each row.&lt;/LI&gt;
&lt;LI&gt;Add a macro to my Production processing that generates optimized SQL based on the SAS dataset created in step 2.&amp;nbsp; The optimization consists of a series of LENGTH parameters applied to the SAS side SQL.&amp;nbsp; The "inner" query -- the query that goes to the database -- does not change.&lt;/LI&gt;
&lt;LI&gt;Execute the optimized SQL.&lt;/LI&gt;
&lt;/OL&gt;
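&lt;P&gt;The generated, optimized SQL boils down to this shape (a simplified sketch with made-up names; the macro emits one LENGTH parameter per character column):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;proc sql;
   create table work.claims (compress=char) as
   select Gmid     length=44,   /* max observed 40 + 4 byte safety margin */
          Clm_Flag length=5
   from   hivelib.claims;       /* the inner (database) query is unchanged */
quit;&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;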
&lt;P&gt;Results are shown in the table below.&amp;nbsp; &lt;STRONG&gt;Synopsis:&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;There is on average about a 10% savings on disk space.&lt;/LI&gt;
&lt;LI&gt;The critical path component of this particular Production process runs 30 to 40 minutes &lt;EM&gt;faster&lt;/EM&gt; for the same data using optimized SAS side SQL.&lt;/LI&gt;
&lt;LI&gt;My previous attempt at optimization, using Substr and Cast to Char in the inner query, did have the same space savings but typically made the process run 10 to 30 minutes&amp;nbsp;&lt;EM&gt;slower,&lt;/EM&gt; and, in one case, 2 hours slower.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;I am quite happy with these results.&amp;nbsp; I thank you for your kind assistance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Final note:&amp;nbsp; This method is not without potential peril:&amp;nbsp; if the data changes, truncation could result.&amp;nbsp; But of course the same peril exists in Data steps or any SQL with LENGTH statements.&amp;nbsp; At least here this is a process that can be automated. In addition, I've added a 4 byte "safety margin" to each field.&amp;nbsp; Hopefully our DBAs will keep us up to date, but I may modify my macro that determines the maximum length of each column such that it issues a warning if the data is using all positions allocated -- which would indicate that the data has changed, given that I added a 4 byte safety margin.&amp;nbsp; I would need to run the macro periodically and investigate any such warnings.&lt;/P&gt;
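&lt;P&gt;A sketch of the kind of check I have in mind (macro, data set, and column names are illustrative only):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;%macro check_len(ds, col, alloc);
   proc sql noprint;
      select max(length(&amp;col.)) into :maxlen trimmed
      from   &amp;ds.;
   quit;
   %if &amp;maxlen. ge &amp;alloc. %then
      %put WARNING: &amp;col. is using all &amp;alloc. allocated positions -- the data may have changed.;
%mend check_len;

%check_len(work.claims, Gmid, 44);&lt;/CODE&gt;&lt;/PRE&gt;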
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jimbarbour_0-1623359091261.png" style="width: 999px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/60243i4FAB679948276A12/image-size/large?v=v2&amp;amp;px=999" role="button" title="jimbarbour_0-1623359091261.png" alt="jimbarbour_0-1623359091261.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 10 Jun 2021 21:21:06 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747219#M234489</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2021-06-10T21:21:06Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747253#M234504</link>
      <description>&lt;P&gt;Glad you have a solution that works.&lt;/P&gt;
&lt;P&gt;The V9 engine's compression routine is old. If you really want good compression store your data using the SPDE engine. Its binary compression typically yields the best compression ratio (80% to 90% is common).&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jun 2021 02:39:08 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747253#M234504</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2021-06-11T02:39:08Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747261#M234508</link>
      <description>&lt;P&gt;&lt;EM&gt;&amp;gt;&amp;nbsp;&amp;nbsp;I would need to run the macro periodically and investigate any such warnings.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You could maybe try putting your query in a WITH clause, and use that to add a final row to your extract, something like:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;with Q1 as (select ... )
select * from Q1
union
select max(length(VAR1)), max(length(VAR2)) from Q1 &lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;and then scan the last row for glitches before deleting it.&lt;/P&gt;
&lt;P&gt;I have no idea if Hive would be clever enough to avoid reading the table twice, as it should.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What's the bottleneck now? Network?&lt;/P&gt;
&lt;P&gt;Lastly, my experience is similar to&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13674"&gt;@LinusH&lt;/a&gt;&amp;nbsp;'s: Impala was faster than Hive for retrieving data when I was using Hadoop.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I created a &lt;A href="https://communities.sas.com/t5/SASware-Ballot-Ideas/Support-WITH-clauses-in-SQL/idi-p/747260#M4523" target="_self"&gt;new ballot entry&lt;/A&gt; too.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jun 2021 03:44:13 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747261#M234508</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2021-06-11T03:44:13Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747275#M234516</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;If you really want good compression store your data using the SPDE engine.&amp;nbsp;&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Well, I've seriously thought about it.&amp;nbsp; The 11 processes I cite above run in parallel and produce 11 SAS data sets, which are then combined via a Data step.&amp;nbsp; If I wrote out the results of each of the queries using the SPDE engine, I believe I could have all 11 processes writing to the same SPDE table, which would obviate the need for the follow-on Data step.&amp;nbsp; The combined data set is then split by year into four separate data sets.&amp;nbsp; These by-year files could be created in parallel via RSUBMIT, all simultaneously pulling from the SPDE table.&amp;nbsp; I believe fashioning the process flow in this manner could reduce the overall run time, but I have not (yet) experimented with it.&amp;nbsp; As it is, the Data step that combines the 11 result sets writes out the four by-year data sets as it reads each row, so it's not completely clear ahead of time what the overall performance benefit might be.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To your point, though, the disk space savings might warrant the use of SPDE even if I can't drop overall run time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jun 2021 05:38:58 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747275#M234516</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2021-06-11T05:38:58Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747279#M234518</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;You could maybe try putting your query in a WITH clause, and use that to add a final row to your extract, something like:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;with Q1 as (select ... )
select * from Q1
union
select max(length(VAR1)), max(length(VAR2)) from Q1 &lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;and then scan the last row for glitches before deleting it.&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Good idea.&amp;nbsp; I'd be afraid of a performance hit, but I won't know unless I try it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;What's the bottleneck now? Network?&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;I don't have a definitive means to know, but I don't suspect the network.&amp;nbsp; Were the network the bottleneck, then my SUBSTR and CAST AS CHAR optimization should have improved performance, which it did not.&amp;nbsp; Indeed, performance degraded.&amp;nbsp; I suspect that SAS's writing to overly large character fields is the performance bottleneck, a bottleneck that was alleviated when LENGTH statements were added to the "outer" (SAS side) query.&amp;nbsp; With the addition of LENGTH statements, I save about 10% disk space, and run time is reduced by about 14%.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;Lastly, my experience is similar to&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13674"&gt;@LinusH&lt;/a&gt;&amp;nbsp;'s: Impala was faster than Hive for retrieving data when I was using Hadoop.&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;We aren't using Impala in this shop.&amp;nbsp; I'll have to look into it though and see if it is available to us.&amp;nbsp; I have not heretofore considered it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jun 2021 06:12:33 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747279#M234518</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2021-06-11T06:12:33Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747308#M234530</link>
      <description>&lt;P&gt;&lt;EM&gt;&amp;gt; I believe I could have all 11 processes writing to the same SPDE table&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I don't think SPDE can do this on its own, like SPDS can. &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;SPDE is just a (better) data set engine. It'd be a nice feature if different processes could write different partitions, but that's much beyond the scope of the current engine afaik.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jun 2021 11:04:43 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747308#M234530</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2021-06-11T11:04:43Z</dc:date>
    </item>
    <item>
      <title>Re: SAS-Hive (Hadoop) Efficiency with type String</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747354#M234552</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/16961"&gt;@ChrisNZ&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;
&lt;P&gt;&lt;EM&gt;&amp;gt; I believe I could have all 11 processes writing to the same SPDE table&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I don't think SPDE can do this on its own, like SPDS can. &lt;/SPAN&gt;&lt;/P&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Ah.&amp;nbsp; Well, thank you for that.&amp;nbsp; Better I not go down a dead end.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It sounds like I need to pull out your book again and read through that section.&amp;nbsp; It's been a while since I've looked at it.&amp;nbsp; In practice, I find very few shops are using SPDE.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Jim&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 11 Jun 2021 14:30:11 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-Hive-Hadoop-Efficiency-with-type-String/m-p/747354#M234552</guid>
      <dc:creator>jimbarbour</dc:creator>
      <dc:date>2021-06-11T14:30:11Z</dc:date>
    </item>
  </channel>
</rss>

