Hi Everyone,
I'm trying to identify duplicates using both a DATA step and PROC SQL so I can compare their performance. Here is the dataset:
data have ;
input id $ date $;
cards;
1 1/1/2001
1 1/1/2001
1 1/2/2002
1 1/2/2002
1 1/2/2003
1 1/2/2003
2 1/1/2005
2 1/1/2005
2 1/1/2005
2 1/2/2005
2 1/2/2005
2 1/3/2006
2 1/3/2006
;
run ;
I want to create a counter that numbers any repeats within each id and date combination. The desired output looks like this:
id date count
1 1/1/2001 1
1 1/1/2001 2
1 1/2/2002 1
1 1/2/2002 2
1 1/2/2003 1
1 1/2/2003 2
2 1/1/2005 1
2 1/1/2005 2
2 1/1/2005 3
2 1/2/2005 1
2 1/2/2005 2
2 1/3/2006 1
2 1/3/2006 2
How can I create this count in both a DATA step and PROC SQL?
Appreciate any help!
There exists a fairly widespread illusion that one needs to know how to use the hash object only when one has a lot of data to process. Surely under many circumstances hash tables can speed things up quite a bit - for example, by making it unnecessary to sort large files when data need to be combined or aggregated. However, the main strength of the hash object in general is that it is an extremely flexible and convenient tool for dynamic programming, frequently lending itself to accomplishing in one step and/or single pass what otherwise would require several, and doing it using simpler and more straightforward logic to boot.
The task you've posted in this thread is a good illustration. Imagine that your data are unsorted and look at the hash program doing what you want:
data have_unsorted ;
input id $ date $ ;
cards ;
2 1/3/2006
2 1/1/2005
2 1/1/2005
2 1/1/2005
2 1/2/2005
1 1/1/2001
1 1/2/2002
2 1/2/2005
2 1/3/2006
1 1/2/2002
1 1/2/2003
1 1/2/2003
1 1/1/2001
;
run ;
data want_unsorted ;
  if _n_ = 1 then do ;
    dcl hash h () ;
    h.definekey ("id", "date") ;
    h.definedata ("count") ;
    h.definedone () ;
  end ;
  set have_unsorted ;
  if h.find() ne 0 then count = 1 ;
  else count + 1 ;
  h.replace() ;
run ;
If you're unfamiliar with the hash object, it may look like Greek to you, and yet its logic is exceedingly simple. Namely, for each record:
1. The FIND method searches the table for the (id, date) key-value pair from the current record.
2. If the pair is not in the table, this is its first occurrence, so count is set to 1.
3. Otherwise FIND has retrieved the count stored for the pair so far, and the sum statement adds 1 to it.
4. The REPLACE method stores the updated count back in the table under this pair.
5. The record, now carrying the correct count, is written out to WANT_UNSORTED.
It's that simple. The hash table just keeps track of all previous counts for every (id,date) key-value encountered thus far. And because it automatically grows by 1 item every time a new (id,date) is seen, there's no need to pre-process the input to size it up at compile time - as it would be necessary, for example, if an array were used as the count-tracking table instead. Furthermore, when you search the table for the current record's (id,date) value to find what the previous value of count has been, this act of lookup takes the same time regardless of how many items have been stored in the table (say, ten or a million), as this is one of the hash object's properties. If you're interested, a brief compendium on things of this nature can be found here (penned by @DonH and yours truly):
http://support.sas.com/resources/papers/proceedings17/0821-2017.pdf
Kind regards
Paul Dorfman
Why not do counting in a PROC that SAS created specifically to do counting? And then you have very little programming to do.
proc freq data=have;
tables id*date/list /* optional, to create a SAS data set add this: out=counts */;
run;
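For instance, a minimal sketch of that optional out= variant (COUNTS is just an illustrative data set name; its COUNT variable holds the total number of rows per id*date pair rather than a running counter):
proc freq data=have noprint;
  tables id*date / list out=counts (drop=percent);  /* COUNT = number of rows per id*date pair */
run;

proc print data=counts noobs;
run;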
By the way, I don't see how the row 2 1/2/2005 shows up in your output with a count of 4.
Proc SQL is not well suited for such sequential processing. Try this:
data want;
  do count = 1 by 1 until (last.date);
    set have;
    by id date;
    output;
  end;
run;
In the DATA step:
data have ;
input id $ date $ ;
cards;
1 1/1/2001
1 1/1/2001
1 1/2/2002
1 1/2/2002
1 1/2/2003
1 1/2/2003
2 1/1/2005
2 1/1/2005
2 1/1/2005
2 1/2/2005
2 1/2/2005
2 1/3/2006
2 1/3/2006
;
run ;
data want ;
  if _n_ = 1 then do ;
    /* instantiate the hash table once, keyed by (id, date), with count as its data */
    dcl hash h () ;
    h.definekey ("id", "date") ;
    h.definedata ("count") ;
    h.definedone () ;
  end ;
  set have ;
  /* FIND returns non-zero if (id, date) is not yet in the table: start a new count */
  if h.find() ne 0 then count = 1 ;
  /* otherwise FIND has retrieved the stored count for this key - increment it */
  else count + 1 ;
  /* store the updated count back into the table under this key */
  h.replace() ;
run ;
Note that the program makes no assumption that the input data are sorted. It works for unsorted data as well.
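As a side note, if the final total per (id, date) pair were also wanted, the table's contents could be dumped to a data set with the hash OUTPUT method once the last record has been read. A minimal sketch, with the keys added to DEFINEDATA so that OUTPUT writes them, and with totals_per_key as a purely illustrative data set name:
data want_plus_totals ;
  if _n_ = 1 then do ;
    dcl hash h (ordered: "a") ;  /* "a" = return items in ascending key order */
    h.definekey  ("id", "date") ;
    h.definedata ("id", "date", "count") ;  /* keys repeated here so OUTPUT writes them too */
    h.definedone () ;
  end ;
  set have end = eof ;
  if h.find() ne 0 then count = 1 ;
  else count + 1 ;
  h.replace() ;
  if eof then h.output (dataset: "totals_per_key") ;  /* one row per key with its final count */
run ;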
Kind regards
Paul D.
If your data are sorted by ID/DATE, then:
data have ;
input id $ date $;
cards;
1 1/1/2001
1 1/1/2001
1 1/2/2002
1 1/2/2002
1 1/2/2003
1 1/2/2003
2 1/1/2005
2 1/1/2005
2 1/1/2005
2 1/2/2005
2 1/2/2005
2 1/3/2006
2 1/3/2006
;
run ;
data want;
  set have;
  by id date;
  count + 1;
  if first.date then count = 1;
run;
If your data are not sorted by ID/DATE, you could
proc sort data=have; by id date; run;
and then apply the program above.
You can accomplish this with SQL pass-through using the ROW_NUMBER() window function, provided the data sit in a database whose SQL dialect supports it. Whether it would be faster than a DATA step is unknown to me, though.
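A minimal sketch of what that could look like, assuming the HAVE table lives in a database reachable through ODBC and that its SQL supports ROW_NUMBER(); the connection name DBCON, the data source MYDSN, and the output name WANT_SQL are all illustrative:
proc sql;
  connect to odbc as dbcon (datasrc="mydsn");  /* hypothetical connection details */
  create table want_sql as
    select * from connection to dbcon (
      select id,
             date,
             row_number() over (partition by id, date order by id) as count
      from have
    );
  disconnect from dbcon;
quit;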