miriamm
Calcite | Level 5

I have two very large datasets that I am working with.  The first dataset (1.6 million records) has client IDs.  The second dataset, a .txt file with over 4 million records, has the demographic information for all the clients we have ever seen.  I need to create a SAS dataset from the raw data file for only those 1.6 million people from the first dataset.  I have tried this a couple of ways.

The first method was very simple because I was in a hurry: I just read in all the demographic data and saved it.  This is obviously not the best answer because it includes a lot of unnecessary data and takes up a huge amount of space (40G).

The second method I tried reduced the file size but increased the processing time and memory usage significantly.  I created macro variables for every unique client ID from ds1 and then did the INPUT statement for ds2 in two phases.  The first INPUT statement brought in only the ID variable and compared it to the macro variables to see if it existed.  If it matched a macro variable, all the other data was brought in; if not, the record was ignored.  It took 16 hours just to create the macro variables, not to mention the rest of the ds2 creation.

My question is... Does anyone have a better method of conditionally creating a SAS dataset from raw data?  One that might be more time- and memory-efficient while saving space in the long run?

Any and all thoughts are appreciated.


1 ACCEPTED SOLUTION

Accepted Solutions
AhmedAl_Attar
Rhodochrosite | Level 12

Hi Miriamm,

The best approach I can recommend would be to load the 1.6M IDs into a hash object and use it as a lookup table while reading through your 4M-record text file. For example:

data match;
    infile file-specification;
    input clientid @;

    if _n_ = 1 then
    do;
        declare hash h(dataset:'lookup');
        h.defineKey('clientid');  /* key must match the variable read from the file */
        h.defineDone();
    end;

    /* A match was found */
    if h.find() = 0 then
    do;
        /* Additional statements: read the remaining variables here */
    end;
    else
    do;
        delete;
    end;
run;


5 REPLIES 5
Tom
Super User Tom
Super User

You could define the INPUT statements as a VIEW.  Then, if your raw data is sorted by ID, you could merge this view with the list of patients and output only the selected records.

data big / view=big;
  infile 'big.txt' ;
  input patid ...... ;
  ....
run;

data want ;
  merge big patlist(in=in1);
  by patid;
  if in1;
run;

If it is not sorted, then you could use a hash object, or a SET with an index on the list of patients, to determine which records to output while reading the big raw data file.
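The indexed-SET alternative mentioned above might look something like the sketch below. The index, the $10. informat, and the variables x1/x2 are illustrative assumptions standing in for the real layout.

```sas
/* Sketch: keyed lookup via SET KEY=, assuming PATLIST can carry
   a simple index on PATID and x1/x2 stand in for the real vars. */
proc datasets library=work nolist;
   modify patlist;
   index create patid / unique;
quit;

data want;
   infile 'big.txt';
   input @1 patid $10. @;          /* read only the ID, hold the line */
   set patlist key=patid / unique; /* keyed lookup against the index  */
   if _iorc_ = 0 then do;          /* ID found in PATLIST             */
      input x1 x2;                 /* now read the rest of the record */
      output;
   end;
   else _error_ = 0;               /* clear the failed-lookup flag    */
run;
```

Because the single trailing @ releases the line at the end of each iteration, non-matching records are simply skipped without reading their remaining fields.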

Astounding
PROC Star

Another possibility is to create a format based on the patient IDs in PATLIST.  No sorting is necessary, but you need enough memory to be able to hold the format.

data keep_these;
   set patlist (keep=patid);
   retain fmtname '$keep'  label 'Keep Me';
   rename patid=start;
run;

proc format cntlin=keep_these;
run;

This creates a format that translates the desired patient IDs into the words Keep Me.  It assumes PATID is character (if it isn't, remove the dollar sign from the value of FMTNAME).  If the list is static, formats can be saved permanently for use by many later programs.
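Saving the format permanently might look like this sketch; the libref name and path are hypothetical placeholders:

```sas
/* Hypothetical sketch: build the format once into a permanent
   catalog so later programs can reuse it without rebuilding. */
libname fmtlib 'path-to-format-library';   /* placeholder path */

proc format cntlin=keep_these library=fmtlib;
run;

/* In later programs, point the format search path at the catalog: */
options fmtsearch=(fmtlib work);
```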

Applying the format is easier:

data want;
   infile 'big.txt';
   input @1 patid $10.  @;
   if put(patid, $keep.) = 'Keep Me';
   input ... the remaining variables ...;
run;

Good luck.

miriamm
Calcite | Level 5

Thank you very much for all the responses.  I tried many of them, and the hash object worked the best.  I tried to flag the answer but it was not letting me, so I thought I would just reply with a comment.  This group's help is always appreciated.  I was able to process the tables in less than 30 minutes using the hash object, a far cry from the several days it took to run when I first inherited the program!

chang_y_chung_hotmail_com
Obsidian | Level 7

I agree with Ahmed on using a hash object. Here is a working example. HTH.

  ods _all_ close;
  ods listing;
  options nocenter;
 
  %let seed = 1234567;
 
  /* 1.6M IDs */
  data one;
    do id = 1 to 1.6e6;
      output;
    end;
  run;
 
  /* 4M IDs and a var written out to a text file */
  data two;
    do id = 1 to 4e6;
      x = ceil(4e6 * ranuni(&seed));
      output;
    end;
  run;
  data _null_;
    file 'work.temp.two.source' catalog;
    set two;
    put (id x) (7.0);
  run;
 
  /* reading only the two obs whose id matches first -- using hash */
  data three;
    if _n_ = 1 then do;
      dcl hash h(dataset:"one");
      h.definekey('id');
      h.definedone();
      call missing(id);
    end;
 
    infile 'work.temp.two.source' catalog;
    input id 7.0 @;
    if h.find() = 0 then do;
      input x 7.0 @;
      output;
    end;
    input;
  run;
 
  /* check */
  proc sort data=three;
    by id;
  run;
  proc compare base=two(where=(id<=1.6e6)) compare=three;
  run;
  /* on lst
  NOTE: No unequal values were found. All values compared are exactly equal
  */


Discussion stats
  • 5 replies
  • 2613 views
  • 4 likes
  • 5 in conversation