I have a dataset that is supposed to be unique by a variable called arrangement_key; however, on occasion it is not unique. It's easy to identify the records causing the duplication, but scanning all the variables in those duplicate records to find the culprit variable(s) is a pain, since my dataset has 500 columns. Is there an easy way to spit out a dataset that contains the culprit variable(s) causing the duplication? Thank you in advance for your help.
Accepted Solutions
Like this?
data HAVE;
array A[500] A1-A500;
do ID=1 to 10;
  output;
  if ID=5 then do; A55=2; output; A55=.; end; * second ID=5 record differs in A55;
end;
run;
data _null_;
set HAVE;
by ID;
array ARR A1--A500;          * index variables into an array;
array PREV[500] _temporary_; * holds the previous record's values;
if first.ID & ^last.ID then do I=1 to dim(ARR); * duplicate found, save first record;
  PREV[I]=ARR[I];
end;
if ^first.ID;                * keep repeated observation(s) only;
do I=1 to dim(ARR);          * seek different value(s);
  if ARR[I] ne PREV[I] then putlog _N_= ID= ARR[I]=;
  PREV[I]=ARR[I];            * so a third duplicate is compared against the second;
end;
run;
_N_=6 ID=5 A55=2
Replace
array ARR A1--A500;
with
array ARR firstvar--lastvar; * the first and last variable names in your table;
The double dash selects every variable positioned between the two names, which makes listing 500 variables easy. Look it up.
Hi,
I'm confused: if arrangement_key is supposed to be unique but sometimes isn't, what do you mean by scanning all the other variables to find the culprit?
If your data are sorted by arrangement_key, you could split out the duplicates with a step like:
data
want /* no duplicates */
dups /* all duplicates */
;
set have ;
by arrangement_key ;
if NOT (first.arrangement_key and last.arrangement_key) then output dups ;
else output want ;
run;
After identifying the duplicate records, what do you mean about scanning variables to find culprits?
Yes, it is supposed to be unique. I had to create the key by concatenating other fields from our data warehouse; however, on occasion, due to human error, we end up with two records pulled into our SAS dataset that have the same arrangement key. We then have to scan all the other fields to see why there are two records. Sometimes it's our split-rating indicator showing a split rating when there is no split rating on the record, so we manually remove the records showing "Y". Other times it's due to a miscoded zip code, or reg rating, or PD, or any number of issues. Unfortunately, I'm not in charge of our data warehouse or how they pull the data together, but we identify these issues and pass the info on to IT. In the meantime, we still need to remove the duplicates and remedy the column causing them.
A short program can do this, as long as you have at most two observations per ARRANGEMENT_KEY. It's trickier if you have three or more.
First, split the data. Remove all observations where ARRANGEMENT_KEY is unique. Then split the observations where there are duplicates. For example:
proc sort data=have;
by arrangement_key;
run;
data dup1 dup2;
set have;
by arrangement_key;
if first.arrangement_key and last.arrangement_key then delete;
if first.arrangement_key then output dup1;
else output dup2;
run;
So all original observations with a unique ARRANGEMENT_KEY have been deleted by the DELETE statement. Of those that remain, the first observation for each ARRANGEMENT_KEY is in DUP1 and the second (hopefully there are only two) is in DUP2. Then just run PROC COMPARE to compare the two data sets. The software will do the work for you.
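As an untested sketch, assuming both data sets are still sorted by ARRANGEMENT_KEY, that comparison step could look like this; the "Variables with Unequal Values" section of the PROC COMPARE report names the culprit columns:

```sas
proc compare base=dup1 compare=dup2 brief;
  id arrangement_key;
run;
```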
I'm assuming that you want to identify the culprit variables causing the duplication so that you can fix them. The code below will create the dataset you want, but it assumes there is at most one additional duplicate per key. You may have to run it multiple times, or wrap it in a macro, if your data contain more than two observations for an arrangement_key.
data a1 a2;
set a;
by arrangement_key;
if first.arrangement_key ne last.arrangement_key;
if first.arrangement_key then output a1;
else output a2;
run;
proc compare base=a1 compare=a2 outstats=diffs noprint; * note: OUTSTATS= captures numeric variables only;
id arrangement_key;
run;
data diffs (keep=_VAR_);
set diffs;
where _TYPE_='NDIF' and _BASE_ > 0;
run;
@jimbobob Has this worked?
Perfect, thanks Chris
Untested, just a "thought experiment"...
Would:
proc sort data=foo out=bar dupout=dups noduprecs;
by key1 key2 key3 _all_;
run;
also work?
Please post your question as a self-contained data step in the form of "have" (source) and "want" (desired results).
I won't contribute to your post if I can't cut-and-paste your syntactically correct code into SAS.
It should work, but PROC SORT is expensive, and the question is specifically how to find which variables cause the duplication, since there are hundreds to review.
PS sorry about not replying, will reply soon. 🙂
You don't say what process is creating the duplicate rows. Is it a single table you're concerned about, or the result of an SQL join (an undesired Cartesian product) across two or more tables? If you KNOW that your table has non-unique keys and it participates in a join, you should address that first.
Here is SQL code that will identify duplicate keys, although it is functionally equivalent to SAS code previously posted:
proc sql;
create table want as
select key1, key2, key3, count(0) as count
from have
group by key1, key2, key3
having count(0) > 1;
quit;
You can filter on the count column, looking for rows with > 2, > 3, > 4, etc. duplicates.
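For instance, assuming the WANT table and its count column from the query above, a quick filter might be:

```sas
proc sql;
select * from want
where count > 2; /* keys occurring three or more times */
quit;
```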
At this point I usually just "eyeball" the data, scrolling right and skimming for satellite (non-key) columns where the data changes.
But, again, your first step is to ensure your source table keys are, in fact, keys. https://www.google.com/search?q=what+is+a+key+column
You only need to scan for duplicates of the key variable(s); the others are irrelevant for the join.
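If it helps, here is an untested sketch (data set names assumed) that keeps one record per key and routes the extra key duplicates to their own table:

```sas
proc sort data=have out=clean dupout=dupkeys nodupkey;
by arrangement_key;
run;
```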
Please post the complete log of the code that causes the duplicates; use the </> button to post it.