<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Thoughts on using SAS compression in SAS Data Management</title>
    <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/621341#M18727</link>
    <description>&lt;P&gt;&lt;STRONG&gt;SAS® Certification Prep Guide - Advanced Programming for SAS®9 Fourth Edition&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;U&gt;Page 685:&lt;/U&gt; By default, a SAS data file is uncompressed. You can compress your data files in order to conserve disk space, although some files are not good candidates for compression.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;U&gt;Page 686:&lt;/U&gt; Remember that in order for SAS to read a compressed file, each observation must be uncompressed. This requires more CPU resources than reading an uncompressed file.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;U&gt;Page 688:&lt;/U&gt; A file that has been compressed using the BINARY setting of the COMPRESS= option takes significantly more CPU time to uncompress than a file that was compressed with the YES or CHAR setting. BINARY is more efficient with observations that are several hundred bytes or more in length. BINARY can also be very effective with character data that contains patterns rather than simple repetitions.&amp;nbsp; &lt;EM&gt;[binary = RDC, char = RLE]&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When you create a compressed data file, SAS compares the size of the compressed file to the size of the uncompressed file of the same page size. Then SAS writes a note to the log indicating the size reduction percent that is obtained by compressing the file.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When you use either of the COMPRESS= options, SAS calculates the size of the overhead that is introduced by compression as well as the maximum size of an observation in the data set that you are attempting to compress. 
If the maximum size of the observation is smaller than the overhead that is introduced by compression, SAS disables compression, creates an uncompressed data set, and issues a warning message stating that the file was not compressed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Comment&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;I work in a UK financial institution, where OPTIONS COMPRESS=BINARY is set as a default option. The previous UK financial institution I worked for had OPTIONS COMPRESS=NO.&amp;nbsp; My previous employer struggled so much with storage that we had to compress datasets with gzip at the Unix level. At 3am our ops team sent an email saying our server had 0 bytes available, with our daily batch starting at 5am. Datasets had to be deleted to make space to gzip other files, which in turn made space for the vital daily jobs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Only recently have I looked at improving run time on some jobs at my current employer.&amp;nbsp; I’ve seen the message in the log saying that, uncompressed, the file would take fewer pages. A temp dataset of 2.4bn obs x 64 byte record length (8 x 8-byte numbers) used 200Gb, but 2.4bn x 64 bytes is 145Gb.&amp;nbsp; The variables are a Customer ID (unique values), a month-end date (integer), an integer count and 5 floats. COMPRESS=BINARY on this dataset looks like a lose-lose.&amp;nbsp; Sorting this took 9 hours, but then proc sql (or data step by/if last) group processing takes only 17 mins to output 23m obs and summary variables.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My current site is not so worried about storage (but large datasets are monitored, and owners are respectfully asked if they are needed). &amp;nbsp;Most jobs run in good time, as the available resources have grown.&amp;nbsp; A few jobs take long enough to be a problem. 
The best way to improve these is by cutting down the time for I/O by reducing the size/number of pages in the file.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It does make me wonder how many of our 1000s of production datasets (with relatively short record lengths) are compressed and using more space, more I/O and more run time.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a project coming up using 1.2bn financial transaction observations, for 20+ users.&amp;nbsp; This includes a long text description. Whether to override COMPRESS=BINARY will be a choice I make. I’m giving serious thought to replacing this with a 5-byte surrogate key and using a Hash Object as a non-ordered lookup to reduce record length. Dates will be 5 (or 4) bytes, CUID will likely fit in 5 bytes, and if we have account number (char 15), that is another candidate for a surrogate key. I may even accept loss of precision on amounts.&amp;nbsp; With a proper sort and indexes to meet expected use, I have to get a 200Gb raw dataset to the point where user query run times are less than 5 mins.&amp;nbsp; I will likely create subsets of records to get the run time down to less than 1 minute. If only we were a Viya site and not 9.4, none of this would be needed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For me, the only reason for using COMPRESS=[BINARY, YES] is to reduce the number of pages used and create a smaller file and less I/O. If it does not, set COMPRESS=NO for that dataset. Given my company’s data (over 1000s of datasets) is probably more numeric than long text, BINARY seems a sensible default, even if it sometimes shoots you in the foot.&amp;nbsp; But you don’t need to pull the trigger.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Thu, 30 Jan 2020 23:33:50 GMT</pubDate>
    <dc:creator>PhilGee</dc:creator>
    <dc:date>2020-01-30T23:33:50Z</dc:date>
    <item>
      <title>Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614088#M18588</link>
      <description>&lt;P&gt;Hello SAS community. In my SAS shop we have many users and work with large datasets. We use SAS 9.4 on a Linux OS. Every so often we are reminded to delete datasets that are no longer needed, or to consider compressing datasets using SAS compression.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As I am sure many of you already know, SAS currently offers two compression algorithms: character (using the RLE algorithm) or binary (using the RDC algorithm). My understanding is that character compression generally works better when the data set is mostly character data, while binary compression generally works better when the data set has mostly numeric data.&lt;/P&gt;
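&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch of how one could test this (the dataset names below are just placeholders), writing the same data under each setting lets the compression note in the log do the comparison:&lt;/P&gt;
&lt;PRE&gt;/* sketch only: MYLIB.BIG stands in for a real dataset */
data work.try_char(compress=char);
  set mylib.big;
run;

data work.try_binary(compress=binary);
  set mylib.big;
run;

/* the log then reports something like:
   NOTE: Compressing data set WORK.TRY_CHAR decreased size by nn.nn percent. */&lt;/PRE&gt;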
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I would love to hear from the community how you determine whether SAS compression should be used. Is there an official policy? Or does each user decide for themselves if they want to compress their data sets? If you do use SAS compression, how are you deciding which option (char or binary) to use? Are there any good "rules of thumb" that can be applied?&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I understand the disadvantages of SAS compression can include increased CPU usage and, in some cases, actually increasing the size of the data set. Are there any other disadvantages you have come across using SAS compression?&amp;nbsp; Any other considerations that should be taken into account before compressing data sets?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 15:07:24 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614088#M18588</guid>
      <dc:creator>supp</dc:creator>
      <dc:date>2019-12-27T15:07:24Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614101#M18589</link>
      <description>&lt;P&gt;I've found that normalizing and compressing the datasets saves more space than compression alone.&amp;nbsp; Getting rid of redundant data really pays off.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 15:55:07 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614101#M18589</guid>
      <dc:creator>tomrvincent</dc:creator>
      <dc:date>2019-12-27T15:55:07Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614104#M18590</link>
      <description>&lt;P&gt;Thanks &lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/144199"&gt;@tomrvincent&lt;/a&gt;&amp;nbsp;, that makes a lot of sense. Are you using one of the SAS compression options when you compress your datasets? If so, how do you decide which one to use?&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 16:01:42 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614104#M18590</guid>
      <dc:creator>supp</dc:creator>
      <dc:date>2019-12-27T16:01:42Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614107#M18591</link>
      <description>&lt;P&gt;Hi, supp&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After never having used compression in 30 years, I recently bumped across a use case where it worked perfectly. This is a text analysis project, with a huge number of text strings, of which the length can vary wildly (most short, but a few very long).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I can't remember the exact number, but using the RLE algorithm reduced the size of the datasets by about 90%. I didn't notice any difference in processing time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Tom&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 16:14:32 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614107#M18591</guid>
      <dc:creator>TomKari</dc:creator>
      <dc:date>2019-12-27T16:14:32Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614109#M18592</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/15142"&gt;@TomKari&lt;/a&gt;&amp;nbsp;, that is a really good result!&amp;nbsp; Since your data is mostly (or all) character data, I am guessing RLE gives you the best result.&amp;nbsp;Out of curiosity, did you also try binary compression?&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 16:19:15 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614109#M18592</guid>
      <dc:creator>supp</dc:creator>
      <dc:date>2019-12-27T16:19:15Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614112#M18593</link>
      <description>What I do is compress each resulting normalized dataset (dimension/fact table, if you will) if doing so actually saves space (sometimes it doesn't).  I haven't bothered to try the different options just because I've already saved thru normalization.  The rest is gravy. &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;</description>
      <pubDate>Fri, 27 Dec 2019 16:35:40 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614112#M18593</guid>
      <dc:creator>tomrvincent</dc:creator>
      <dc:date>2019-12-27T16:35:40Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614116#M18594</link>
      <description>I agree that you are describing a good use case. The downside of SAS compressed datasets is that you are restricted to sequential data processing, i.e. no indexes, no SET ... POINT=. However, normalization without compression might save substantial space and still support direct access.</description>
      <pubDate>Fri, 27 Dec 2019 16:52:22 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614116#M18594</guid>
      <dc:creator>mkeintz</dc:creator>
      <dc:date>2019-12-27T16:52:22Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614118#M18595</link>
      <description>&lt;P&gt;No, given that the expected compression benefits would come from the long character fields, I only used the RLE algorithm.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 16:54:07 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614118#M18595</guid>
      <dc:creator>TomKari</dc:creator>
      <dc:date>2019-12-27T16:54:07Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614125#M18596</link>
      <description>&lt;P&gt;Almost all our production datasets are compressed with character (RLE). When writing a new batch job, I use compress=yes and look at what the log tells me. If I get less than a 10% compression rate, I omit compression.&lt;/P&gt;
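&lt;P&gt;As a sketch (library and dataset names are placeholders), that check is simply:&lt;/P&gt;
&lt;PRE&gt;/* sketch only: names are placeholders */
data etl.stage1(compress=yes);
  set raw.input;
run;
/* log: NOTE: Compressing data set ETL.STAGE1 decreased size by nn.nn percent.
   If that figure is under about 10 percent, recreate with compress=no. */&lt;/PRE&gt;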
&lt;P&gt;Not only does compression save disk space, it also speeds up the ETL processes which are (almost) always I/O bound and benefit from having to move less data from and to storage. Care must be taken when processing datasets with a high compression rate. Sorting can overload the SASUTIL location, so using tagsort may be necessary. And don't forget to also use compression in WORK when dealing with such datasets.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 17:47:06 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614125#M18596</guid>
      <dc:creator>Kurt_Bremser</dc:creator>
      <dc:date>2019-12-27T17:47:06Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614128#M18597</link>
      <description>&lt;P&gt;Great points&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/11562"&gt;@Kurt_Bremser&lt;/a&gt;&amp;nbsp;! Is there ever a scenario you would use binary compression instead of char? Or why do you prefer to use char compression?&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 18:01:09 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614128#M18597</guid>
      <dc:creator>supp</dc:creator>
      <dc:date>2019-12-27T18:01:09Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614148#M18598</link>
      <description>&lt;P&gt;Where I work we decided to set compression to BINARY as a default option when starting all SAS sessions via SAS AUTOEXEC programs. We have been doing this for both SAS 9.3 and 9.4 over many years. This has worked well for us by conserving disk space and also ensuring we almost never run out of WORK space unless someone does a really silly query.&lt;/P&gt;
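&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The AUTOEXEC entry itself is a one-liner (the file location varies by install):&lt;/P&gt;
&lt;PRE&gt;/* in autoexec.sas */
options compress=binary;&lt;/PRE&gt;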
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We consume a lot of data from SQL Server, and the BINARY setting means we don't need to worry about shortening long character variables as they are all compressed. Uncompressed tables can often be around 5 times larger than compressed ones! For the type of datasets we have, BINARY provides better compression than CHARACTER.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I also know of other SAS sites that compress their data by default.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 21:55:38 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614148#M18598</guid>
      <dc:creator>SASKiwi</dc:creator>
      <dc:date>2019-12-27T21:55:38Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614150#M18599</link>
      <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/13976"&gt;@SASKiwi&lt;/a&gt;&amp;nbsp;. You stated that for your type of datasets binary is the better option. Is this because you have a lot of numeric data in your datasets? Or how did you determine binary was the better option?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/31461"&gt;@mkeintz&lt;/a&gt;&amp;nbsp;mentioned a compressed dataset can't take advantage of an index, do you ever find this to be problematic?&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 22:18:55 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614150#M18599</guid>
      <dc:creator>supp</dc:creator>
      <dc:date>2019-12-27T22:18:55Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614163#M18600</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/18331"&gt;@supp&lt;/a&gt;&amp;nbsp; - I did some testing on some of our typical datasets and found BINARY gave around 10 percent better compression. I also found that SAS jobs processing large datasets ran faster because of the reduced IO. While elapsed time was less, CPU time increased, but only by a few percent. IMO universal compression is definitely worth considering if you process a lot of medium to large datasets.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Dec 2019 22:58:53 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614163#M18600</guid>
      <dc:creator>SASKiwi</dc:creator>
      <dc:date>2019-12-27T22:58:53Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614182#M18601</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/31461"&gt;@mkeintz&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;I agree that you are describing a good use case. The downside of sas compressed datasets is that you are restricted to sequential data processing. I.e. no indexes, no SET ... POINT=. However normalization without compression might save substantial space and still support direct access.&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/31461"&gt;@mkeintz&lt;/a&gt;&amp;nbsp;&amp;nbsp;Are you sure? That's not what the SAS log tells me.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;28         options ps=max msglevel=i;
29         data have(compress=yes index=(indvar));
30           length charvar1000 $1000;
31           call missing(charvar1000);
32           do i=1 to 100000;
33             if mod(i,10)=1 then indvar+1;
34             output;
35           end;
36           stop;
37         run;

NOTE: The data set WORK.HAVE has 100000 observations and 3 variables.
INFO: Multiple concurrent threads will be used to create the index.
NOTE: Simple index indvar has been defined.
NOTE: Compressing data set WORK.HAVE decreased size by 96.23 percent. 
      Compressed is 59 pages; un-compressed would require 1565 pages.
NOTE: DATA statement used (Total process time):
      real time           0.10 seconds
      cpu time            0.12 seconds
      

38         
39         data want1;
40           do point=1 to nobs by 1000;
41             set have point=point nobs=nobs;
42             output;
43           end;
44           stop;
45         run;

NOTE: The data set WORK.WANT1 has 100 observations and 3 variables.
NOTE: DATA statement used (Total process time):
      real time           0.00 seconds
      cpu time            0.00 seconds
      

46         
47         proc sql;
48           create table want2 as
49           select *
50           from have
51           where indvar=10
52           ;
INFO: Index indvar selected for WHERE clause optimization.
NOTE: Table WORK.WANT2 created, with 10 rows and 3 columns.

53         quit;
NOTE: PROCEDURE SQL used (Total process time):
      real time           0.00 seconds
      cpu time            0.01 seconds&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 28 Dec 2019 02:20:54 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614182#M18601</guid>
      <dc:creator>Patrick</dc:creator>
      <dc:date>2019-12-28T02:20:54Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614184#M18603</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/265744"&gt;@Patrick&lt;/a&gt;, thanks for sharing your findings! That is very interesting. Do you mind sharing what version of SAS and Operating System were used for your tests?&lt;/P&gt;</description>
      <pubDate>Sat, 28 Dec 2019 02:33:20 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614184#M18603</guid>
      <dc:creator>supp</dc:creator>
      <dc:date>2019-12-28T02:33:20Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614186#M18604</link>
      <description>&lt;PRE&gt;AUTOMATIC SYSVLONG 9.04.01M5P091317
AUTOMATIC SYSHOSTINFOLONG Linux LIN X64 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 CentOS Linux release 7.5.1804 (Core) &lt;/PRE&gt;</description>
      <pubDate>Sat, 28 Dec 2019 02:37:08 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614186#M18604</guid>
      <dc:creator>Patrick</dc:creator>
      <dc:date>2019-12-28T02:37:08Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614187#M18605</link>
      <description>&lt;P&gt;Before considering compression, think about whether you have good rules in place for the lengths used within the SAS data set.&amp;nbsp; Many times, I have seen SAS data sets use $200 characters for a field that only needs a few characters ... usually because the database definition for the field was VARCHAR(200).&amp;nbsp; Those who set up the field in the database took advantage of the fact that varchar automatically adjusts to the number of characters needed.&amp;nbsp; But when extracted, SAS uses the full length of 200 every time.&amp;nbsp; If you have processes to examine fields and the length that they actually require, you will be a step ahead of the game whether or not you add compression afterwards.&lt;/P&gt;</description>
      <pubDate>Sat, 28 Dec 2019 03:07:21 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614187#M18605</guid>
      <dc:creator>Astounding</dc:creator>
      <dc:date>2019-12-28T03:07:21Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614189#M18606</link>
      <description>&lt;P&gt;Well, I'm definitely not sure any more.&amp;nbsp; My understanding was that some compressions generated irregular observation lengths, making the implementation of direct access exceedingly tricky, and not implemented by SAS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The SAS index records the index value and each RID (record id) associated with the index value.&amp;nbsp; When every record is the same size, as would be the case with uncompressed SAS data sets, knowing the RID lets you know exactly which physical page (which are also of constant size) of data contains the record(s) of interest, which in turn means you can directly access only the pages needed. &amp;nbsp; I'm not clear on how knowing the RID for compressed data sets can let you know which pages to read, … unless the compression keeps the observations uniform in length.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks for the example.&amp;nbsp; I'll have to re-map my understanding of SAS compression.&lt;/P&gt;</description>
      <pubDate>Sat, 28 Dec 2019 04:20:15 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614189#M18606</guid>
      <dc:creator>mkeintz</dc:creator>
      <dc:date>2019-12-28T04:20:15Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614198#M18607</link>
      <description>&lt;P&gt;You "only" have to switch from observation &lt;EM&gt;number&lt;/EM&gt; to observation &lt;EM&gt;start position&lt;/EM&gt; to address the observation from an index. Given 64-bit processing, this is not so hard to do.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;PS it might be that binary compression puts observation boundaries within bytes, and then this would not work anymore. Maybe &lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/12447"&gt;@Patrick&lt;/a&gt;&amp;nbsp;could rerun his experiment with compress=binary to clear this up?&lt;/P&gt;</description>
      <pubDate>Sat, 28 Dec 2019 08:20:46 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614198#M18607</guid>
      <dc:creator>Kurt_Bremser</dc:creator>
      <dc:date>2019-12-28T08:20:46Z</dc:date>
    </item>
    <item>
      <title>Re: Thoughts on using SAS compression</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614226#M18608</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/18331"&gt;@supp&lt;/a&gt; - Regarding limitations with compression and indexes. As stated elsewhere, we universally compress with the BINARY option and never had any problems with indexes. I'm pretty sure we don't use direct access (POINT=), but if we did you can just add COMPRESS = NO to the DATA steps using it to avoid any problems.&lt;/P&gt;
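&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a sketch (dataset names are placeholders), that override goes on the step that creates the dataset you want to read directly:&lt;/P&gt;
&lt;PRE&gt;/* uncompressed copy for SET ... POINT= access; names are placeholders */
data work.big_nc(compress=no);
  set perm.big;
run;&lt;/PRE&gt;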
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please note that universal compression includes WORK libraries too, so the benefits of compressing also apply to controlling WORK space.&lt;/P&gt;</description>
      <pubDate>Sat, 28 Dec 2019 20:21:41 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Thoughts-on-using-SAS-compression/m-p/614226#M18608</guid>
      <dc:creator>SASKiwi</dc:creator>
      <dc:date>2019-12-28T20:21:41Z</dc:date>
    </item>
  </channel>
</rss>

