03-08-2018 01:20 PM
I am working with some large datasets (billions of observations) and found duplicates of the key values; I want to remove all but the first occurrence of each. The file is sorted by the key variables (say A and B) and is indexed by them as well (call the index myndx).
For the first dataset I am working with, I have a list of the key values that have duplicates. I was thinking I could use a MODIFY statement with the KEY= option to step through it, removing the duplicates.
The Statements Reference manual refers to using a DO loop to process successive copies of a key, but gives no examples. Has anyone done this already and can share the details of the method used, or suggest an alternate approach? My preference is to update the existing file(s) in place rather than do a conventional DATA step replacement.
03-08-2018 01:32 PM
Does the table exist in a database, or is it a SAS table?
If it's a database table, then try using a pass-through query: upload the list of duplicate key values to the database and do the deletion there.
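As a rough sketch of that idea (assumptions: a connection named mydb via ODBC, a target table mytable keyed on columns a and b, and an Oracle-style ROWID to break ties; other databases need a different tie-breaker, e.g. ROW_NUMBER() in a dialect-specific form):

data _null_; run; /* placeholder so the sketch below is clearly separate */

proc sql;
  connect to odbc (dsn=mydb);
  /* Delete every row for which an earlier row with the same key exists,
     keeping one row per (a, b). ROWID is Oracle-specific; substitute an
     ordering column appropriate to your database. */
  execute (
    delete from mytable t
    where exists (
      select 1
      from mytable d
      where d.a = t.a
        and d.b = t.b
        and d.rowid < t.rowid
    )
  ) by odbc;
  disconnect from odbc;
quit;

This is only an outline of the pass-through approach, not tested against any particular database; the key point is that the deduplication runs inside the database instead of pulling billions of rows back into SAS.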
03-08-2018 03:37 PM
Something like the code below should work.
/* Create sample data: a list of keys known to have duplicates... */
data dupList;
  do id=2,5,7,9;
    output;
  end;
  stop;
run;

/* ...and a "big" table containing duplicates of those keys */
data big;
  var='some string';
  do id=2,7;
    output;
  end;
  do id=1 to 10;
    output;
  end;
  do id=2,5,7,9;
    output;
  end;
  stop;
run;

/* In-place dedup: on the first encounter of a duplicated key, move it
   from dupList to remList; every later encounter is in remList and
   gets removed */
data big;
  if _n_=1 then
    do;
      dcl hash dupList(dataset:'dupList');
      _rc=dupList.defineKey('id');
      _rc=dupList.defineDone();
      dcl hash remList(dataset:'dupList(obs=0)');
      _rc=remList.defineKey('id');
      _rc=remList.defineDone();
    end;
  modify big;
  if remList.check()=0 then
    do;
      remove;
    end;
  else if dupList.check()=0 then
    do;
      _rc=remList.add();
      _rc=dupList.remove();
    end;
run;

proc print data=big;
run;
03-08-2018 03:45 PM
And should you have sufficient memory to store all the keys, you could also take a more direct approach: keep every key seen so far in a hash, and remove any observation whose key is already there.
data big;
  if _n_=1 then
    do;
      /* Hash of all keys seen so far */
      dcl hash dupList();
      _rc=dupList.defineKey('id');
      _rc=dupList.defineDone();
    end;
  modify big;
  if dupList.check()=0 then
    do;
      /* Key already seen: this is a duplicate, delete it in place */
      remove;
    end;
  else
    _rc=dupList.add();
run;

proc print data=big;
run;
03-09-2018 03:20 AM - edited 03-09-2018 03:22 AM
Using the index to get at the duplicates is a very good idea; you might try something like this:
data big;
  /* dupes holds the list of key values known to have duplicates */
  set dupes(keep=<key variables>);
  first=1;
  do until(0);
    /* Repeated MODIFY with unchanged key values fetches successive
       observations matching the key via the index */
    modify big key=<your index name>;
    if _iorc_ then
      do;
        _error_=0; /* when _iorc_ is set, an error is provoked */
        leave;     /* no more observations with this key */
      end;
    if not first then
      remove;      /* delete every copy after the first */
    else
      first=0;     /* the next obs is not the first */
  end;
run;