<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Dealing with huge tables on SAS 9.4 Too Slow in SAS Programming</title>
    <link>https://communities.sas.com/t5/SAS-Programming/Dealing-with-huge-tables-on-SAS-9-4-Too-Slow/m-p/896363#M354172</link>
    <description>&lt;P&gt;Hi everybody,&lt;/P&gt;&lt;P&gt;I'm working with SAS 9.4, connecting to Hadoop/Cloudera through SAS/ACCESS to Hadoop. When I run a query with explicit pass-through, it resolves quickly in the Hive database, but the table is created very slowly in SAS. I think there may be opportunities to improve the I/O.&lt;/P&gt;&lt;P&gt;I would like some help improving I/O with a huge table (&amp;gt;20 million rows) on SAS 9.4. I have 3 compute servers in a SAS Grid, each with 8 CPU cores and 64&amp;nbsp;GB of RAM.&lt;/P&gt;&lt;P&gt;I know about the BUFSIZE, BUFNO, and BLKSIZE options, but I don't have the expertise to combine them into efficient parameter settings for better I/O.&lt;/P&gt;</description>
    <pubDate>Fri, 29 Sep 2023 03:43:23 GMT</pubDate>
    <dc:creator>claudiodejesusa</dc:creator>
    <dc:date>2023-09-29T03:43:23Z</dc:date>
    <item>
      <title>Dealing with huge tables on SAS 9.4 Too Slow</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Dealing-with-huge-tables-on-SAS-9-4-Too-Slow/m-p/896363#M354172</link>
      <description>&lt;P&gt;Hi everybody,&lt;/P&gt;&lt;P&gt;I'm working with SAS 9.4, connecting to Hadoop/Cloudera through SAS/ACCESS to Hadoop. When I run a query with explicit pass-through, it resolves quickly in the Hive database, but the table is created very slowly in SAS. I think there may be opportunities to improve the I/O.&lt;/P&gt;&lt;P&gt;I would like some help improving I/O with a huge table (&amp;gt;20 million rows) on SAS 9.4. I have 3 compute servers in a SAS Grid, each with 8 CPU cores and 64&amp;nbsp;GB of RAM.&lt;/P&gt;&lt;P&gt;I know about the BUFSIZE, BUFNO, and BLKSIZE options, but I don't have the expertise to combine them into efficient parameter settings for better I/O.&lt;/P&gt;</description>
      <pubDate>Fri, 29 Sep 2023 03:43:23 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Dealing-with-huge-tables-on-SAS-9-4-Too-Slow/m-p/896363#M354172</guid>
      <dc:creator>claudiodejesusa</dc:creator>
      <dc:date>2023-09-29T03:43:23Z</dc:date>
    </item>
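The BUFSIZE and BUFNO options the question mentions can be combined with explicit pass-through in one step; a minimal sketch, assuming a hypothetical Hive connection and an illustrative (not tuned) choice of values — server, port, schema, and table names are placeholders:

```sas
/* Hypothetical sketch: pull a large Hive result into SAS with a larger
   page size and more write buffers. Values are illustrative assumptions,
   not tuned recommendations for any particular system. */
options bufsize=128k bufno=20;   /* larger pages, more buffers for the output table */

proc sql;
  connect to hadoop (server="hive-server" port=10000);  /* placeholder connection */
  create table work.big (compress=yes) as    /* COMPRESS= can shrink I/O on wide tables */
  select * from connection to hadoop
    ( select * from mydb.big_table );        /* query executes in Hive; rows stream to SAS */
  disconnect from hadoop;
quit;
```

BUFSIZE only takes effect when the output data set is created, so it is worth benchmarking a few page sizes against the same query rather than assuming one setting fits all tables.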
    <item>
      <title>Re: Dealing with huge tables on SAS 9.4 Too Slow</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Dealing-with-huge-tables-on-SAS-9-4-Too-Slow/m-p/896364#M354173</link>
      <description>&lt;P&gt;How slow is too slow? I suggest you post the SAS logs of both the fast pass-through query and the slow SAS version of the same query. Without evidence we would just be guessing at what is happening.&lt;/P&gt;</description>
      <pubDate>Fri, 29 Sep 2023 03:52:01 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Dealing-with-huge-tables-on-SAS-9-4-Too-Slow/m-p/896364#M354173</guid>
      <dc:creator>SASKiwi</dc:creator>
      <dc:date>2023-09-29T03:52:01Z</dc:date>
    </item>
    <item>
      <title>Re: Dealing with huge tables on SAS 9.4 Too Slow</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Dealing-with-huge-tables-on-SAS-9-4-Too-Slow/m-p/896367#M354176</link>
      <description>&lt;P&gt;I concur with what&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;&amp;nbsp;wrote.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Just one thought, though: what potentially takes up the time is moving the data to the SAS side and writing the SAS table. Make sure that you don't have variables of type STRING on the Hadoop side that get mapped to a SAS CHAR(32767). If that happens, cast the STRING to a VARCHAR() of appropriate length on the Hive side via explicit pass-through SQL.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://go.documentation.sas.com/doc/en/pgmsascdc/9.4_3.5/acreldb/p1rj6miqsmhercn17lz0xatfqd4l.htm" target="_self"&gt;Issues When Converting Data from Hive to SAS&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 29 Sep 2023 05:08:05 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Dealing-with-huge-tables-on-SAS-9-4-Too-Slow/m-p/896367#M354176</guid>
      <dc:creator>Patrick</dc:creator>
      <dc:date>2023-09-29T05:08:05Z</dc:date>
    </item>
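The cast Patrick describes can be written directly in the explicit pass-through query; a hypothetical sketch in which the connection details, column names, and VARCHAR lengths are all placeholder assumptions:

```sas
/* Hypothetical sketch of the suggestion above: cast Hive STRING columns
   to bounded VARCHARs inside the pass-through SQL so they do not arrive
   in SAS as CHAR(32767). All names and lengths are placeholders. */
proc sql;
  connect to hadoop (server="hive-server" port=10000);  /* placeholder connection */
  create table work.slim as
  select * from connection to hadoop
    ( select id,
             cast(customer_name as varchar(100)) as customer_name,
             cast(comment_text  as varchar(500)) as comment_text
      from mydb.big_table );
  disconnect from hadoop;
quit;
```

Since the cast happens inside the parenthesized query, Hive does the work before any rows cross the wire, which is what keeps the SAS-side table narrow.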
  </channel>
</rss>

