Firstly, apologies if this question is in the wrong place.
Secondly, more apologies for being such a newbie and having to ask about such basic things!
A bit of a theoretical question for you all.
How would you go about deleting all duplicate observations from a dataset?
e.g. I have two datasets containing the same information, downloaded a month apart. The latest dataset contains all the observations that appear in the earlier set, plus a few dozen new observations.
I know NODUP and NODUPKEY only write the first occurrence of a duplicate to the set you’re creating. That’s great, but I want to remove both occurrences of the duplicates, leaving me with a dataset that contains only the new observations.
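One way to get exactly that (both copies of every duplicate removed) is to stack the two downloads and keep only the keys that occur once; a minimal sketch, assuming datasets `old` and `new` and a key variable `id` (all placeholder names):

```sas
proc sort data=old; by id; run;
proc sort data=new; by id; run;

data onlynew;
  set old new;               /* interleave the two downloads */
  by id;
  if first.id and last.id;   /* keep keys that appear exactly once */
run;
```

Because the earlier download is a subset of the later one, the only single-occurrence keys left are the new observations.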
How would you identify "..only the new observations"? If there is some event date/time variable, you can sort the file by your "base" BY-variable list along with that variable in DESCENDING order -- but don't code NODUPKEY on that first sort.
Then issue a second PROC SORT using NODUPKEY along with the EQUALS option, this time supplying only your "base" BY-variable list.
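The two-sort approach described above might look like this, assuming the key is `id` and the event variable is `eventdt` (placeholder names):

```sas
proc sort data=have;
  by id descending eventdt;   /* newest record first within each key */
run;

proc sort data=have out=latest nodupkey equals;
  by id;                      /* NODUPKEY keeps the first (= newest) record */
run;
```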
The other alternative is a DATA step: ensure that your input file is sorted into the desired sequence, then use a DATA step with a SET statement and a BY statement listing your "base" BY variables. In an IF/THEN OUTPUT statement, use the LAST. technique so that only the last occurrence of each BY group (presumably the newest obs?) will be output.
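A sketch of the LAST. version, again with placeholder names `id` and `eventdt`:

```sas
proc sort data=have;
  by id eventdt;              /* newest record last within each key */
run;

data latest;
  set have;
  by id;
  if last.id then output;     /* only the final (newest) obs per BY group */
run;
```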
To RickM: How would the PROC SQL example address the OP's stated objective with removing all duplicates within a given SAS file, while only retaining the "most recent" observation for a given "by variable list"?
From the OP:
"eg: I have two datasets containing the same information, downloaded a month apart. The latest dataset contains all the observations which appear in the earlier set plus a few dozen new observations. "
So the problem, as I interpreted it, is that they are starting out with two datasets. The other solutions assume the data is already combined, so removing duplicates within one dataset is the only way to solve the problem. Why not find the new values before the two datasets are put together?
data match;
  merge one (in=a) two (in=b);  /* both sorted by the key first */
  by PolicyNumber;              /* substitute your own key variable(s) */
  if a and b then from='M';
  if a and not b then from='A';
  if not a and b then from='B';
run;
proc freq data=match; tables from; run;
/* the observations where from='B' are in the 2nd set only */
data newobs;
  set match;
  if from ne 'B' then delete;
run;
I probably didn't explain things as clearly as I could, so to clear things up: my problem was two datasets ('this month' and 'last month') with identical variables (although the values of one variable, 'annualpremium', may differ from month to month). 'last month' contains an unknown number of observations that have been dropped and are not contained in 'this month'.
I needed to produce one dataset containing only the dropped observations, ie: those that appear in 'last month' that don't appear in 'this month'. No other criteria at all.
This is how I managed it in the end; probably not the most efficient method, but it seems to have generated the right results:
proc sql;
  create table combined as
  select t1.*,
         t2.PolicyNumber as PolicyNumber2
  from lastmonth as t1 left join thismonth as t2
    on t1.PolicyNumber = t2.PolicyNumber;
quit;

data droppedpolicy (keep = PolicyNumber OriginalStartDate AnnualPremium);
  set combined;
  /* matched policies get PolicyNumber2 filled in; dropped ones don't */
  if PolicyNumber = PolicyNumber2 then delete;
run;
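In case it helps anyone later: the same "in 'last month' but not in 'this month'" result can be produced in a single PROC SQL step with a NOT IN subquery (dataset and variable names taken from the post above):

```sas
proc sql;
  create table droppedpolicy as
  select PolicyNumber, OriginalStartDate, AnnualPremium
  from lastmonth
  where PolicyNumber not in
        (select PolicyNumber from thismonth);
quit;
```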