
Removing duplicate observations in-place in large datasets

New Contributor
Posts: 3

Removing duplicate observations in-place in large datasets

Hi,

 

I am working with some large datasets (billions of observations) and have found duplicates of the key values; I want to remove all but the first occurrence of each. The file is sorted by the key variables (say A and B) and is indexed by them as well (call the index myndx).

 

For the first dataset I am working with, I have a list of the key values that have dupes. I was thinking I could use a MODIFY statement with the KEY= option to step through it, removing the dupes.

 

The Statements Reference manual refers to using a DO loop to process successive copies but gives no examples. Has anyone done this already and can share the details of the method used, or suggest an alternative approach? The preference is to work with the existing file(s) in place rather than the conventional DATA step replacement sketched below.
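
For reference, the conventional replacement approach I would rather avoid is something along these lines (it rewrites the entire file):

data big;   /* conventional dedup: rewrites all billions of obs */
  set big;
  by A B;
  if first.B;  /* keep only the first observation per A/B key */
run;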

 

Thanks!

 

--Ben

Valued Guide
Posts: 597

Re: Removing duplicate observations in-place in large datasets

Does the table exist in a database, or is it a SAS table?

If it's a database table, then try using a pass-through query, sending your table that contains the duplicate key values into the database.
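
For example, with explicit pass-through (a rough sketch assuming an Oracle backend and hypothetical names big_table, key_a, key_b; the connection options and the duplicate-removal SQL depend on your DBMS):

proc sql;
  connect to oracle (user=myuser password=XXXX path=mydb);  /* hypothetical connection */
  execute (
    delete from big_table             /* keep one row per key using Oracle rowids */
    where rowid not in (
      select min(rowid)
      from big_table
      group by key_a, key_b)
  ) by oracle;
  disconnect from oracle;
quit;

That way the deletion runs entirely inside the database instead of pulling billions of rows into SAS.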

 

Thanks,
Suryakiran
New Contributor
Posts: 3

Re: Removing duplicate observations in-place in large datasets

Posted in reply to SuryaKiran

Everything is in SAS...

 

--Ben

Respected Advisor
Posts: 4,736

Re: Removing duplicate observations in-place in large datasets

Something like the code below should work.

/* list of key values known to have duplicates */
data dupList;
  do id=2,5,7,9;
    output;
  end;
  stop;
run;

/* sample "big" table containing duplicate id values */
data big;
  var='some string';
  do id=2,7;
    output;
  end;
  do id=1 to 10;
    output;
  end;
  do id=2,5,7,9;
    output;
  end;
  stop;
run;

data big;
  if _n_=1 then
    do;
      /* keys known to have duplicates */
      dcl hash dupList(dataset:'dupList');
      _rc=dupList.defineKey('id');
      _rc=dupList.defineDone();
      /* keys whose first occurrence has already been seen */
      dcl hash remList(dataset:'dupList(obs=0)');
      _rc=remList.defineKey('id');
      _rc=remList.defineDone();
    end;

  modify big;

  /* key seen before: delete this later copy in place */
  if remList.check()=0 then remove;
  /* first occurrence of a duplicated key: keep it and remember the key */
  else if dupList.check()=0 then
    do;
      _rc=remList.add();
      _rc=dupList.remove();
    end;
run;

proc print data=big;
run;

Respected Advisor
Posts: 4,736

Re: Removing duplicate observations in-place in large datasets

And should you have sufficient memory to store all the key values, you could also take a more direct approach that does not require a pre-built list of the duplicated keys.

data big;
  if _n_=1 then
    do;
      /* empty hash: collects every key value seen so far */
      dcl hash dupList();
      _rc=dupList.defineKey('id');
      _rc=dupList.defineDone();
    end;

  modify big;

  /* key seen before: delete the duplicate in place */
  if dupList.check()=0 then remove;
  /* first occurrence: keep the obs and register its key */
  else _rc=dupList.add();
run;

proc print data=big;
run;
PROC Star
Posts: 266

Re: Removing duplicate observations in-place in large datasets


Using the index to get at the duplicates is a very good idea. You could try something like this:

 

data big;
  set dupes(keep=A B);    /* the list of duplicated key values; keys A and B per the original post */
  first=1;
  do until(0);
    modify big key=myndx; /* index myndx per the original post */
    if _iorc_ then
      do;
        _error_=0;  /* when _iorc_ is set, an error is provoked */
        leave;      /* no more observations with this key */
      end;
    if not first then
      remove;       /* delete every copy after the first, in place */
    else
      first=0;      /* the next obs retrieved is not the first */
  end;
run;
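
This relies on the documented behavior that repeated executions of MODIFY with the same KEY= values retrieve successive observations with that key, until _IORC_ signals there are no more matches. And should the index ever need to be (re)created, that can also be done in place, along these lines (a sketch assuming big lives in WORK; adjust the library as needed):

proc datasets library=work nolist;
  modify big;
  index create myndx=(A B);  /* composite index on the key variables */
quit;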

 
