🔒 This topic is solved and locked.
jimbobob
Quartz | Level 8

I have a dataset that is supposed to be unique by my variable called arrangement_key; however, on occasion it's not unique. It's easy to identify the records causing the duplication, but scanning all the variables in those duplicate records trying to find the culprit variable(s) is a pain, since my dataset has 500 columns. Is there an easy way to spit out a dataset that contains the culprit variable(s) that are causing the duplication? Thank you in advance for your help.

1 ACCEPTED SOLUTION

ChrisNZ
Tourmaline | Level 20

Like this?

 

data HAVE;
  array A[500] A1-A500;
  do ID=1 to 10;
    output;
    if ID=5 then do; A55=2; output; A55=.; end; * second, differing record for ID=5;
  end;
run;

data _null_;
  set HAVE;
  by ID;
  array ARR A1--A500;                  * index the variables into an array;
  array PREV[500] _temporary_;         * previous record's values (automatically retained);
  if ^first.ID then do I=1 to dim(ARR);      * duplicate found, seek different value(s);
    if ARR[I] ne PREV[I] then putlog _N_= ID= ARR[I]=;
  end;
  do I=1 to dim(ARR); PREV[I]=ARR[I]; end;   * save this record for the next comparison;
run;

_N_=6 ID=5 A55=2

 

Replace

array ARR A1--A500;

with the first and last variable names from your own table, e.g.

array ARR FIRSTVAR--LASTVAR; * FIRSTVAR/LASTVAR are placeholders for your variable names;

(the double dash selects every variable positioned between the two, which makes listing 500 variables easy; look it up).
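For readers outside SAS, the same idea can be sketched in plain Python: group records by the key, compare each extra record in a group against the first one, and report the columns that differ. This is a minimal illustration with hypothetical column names (arrangement_key, zip, split), not the poster's actual data:

```python
def culprit_columns(rows, key="arrangement_key"):
    """Return {key_value: [differing column, ...]} for keys with >1 record."""
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row)

    culprits = {}
    for key_value, group in groups.items():
        if len(group) < 2:
            continue  # unique key: nothing to report
        first = group[0]
        diff = sorted({col for other in group[1:]
                           for col in first
                           if other[col] != first[col]})
        culprits[key_value] = diff  # empty list = records are exact duplicates
    return culprits

rows = [
    {"arrangement_key": 1, "zip": "02134", "split": "N"},
    {"arrangement_key": 2, "zip": "02134", "split": "N"},
    {"arrangement_key": 2, "zip": "02134", "split": "Y"},
]
print(culprit_columns(rows))  # {2: ['split']}
```

With 500 columns the dictionary comprehension still only touches the few groups that actually have duplicates, which mirrors why the SAS step subsets to non-first records before comparing.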

 


11 REPLIES
Quentin
Super User

Hi,

 

I'm confused: if arrangement_key is supposed to be unique but sometimes isn't, what do you mean by scanning all the other variables to find the culprit?

 

If your data are sorted by arrangement_key, you could split out the duplicates with a step like:

 

data
  want   /* no duplicates */ 
  dups   /* all duplicates */
  ;
  set have ;
  by arrangement_key ;
  if NOT (first.arrangement_key and last.arrangement_key)  then output dups ;
  else output want ;
run;

After identifying the duplicate records, what do you mean about scanning variables to find culprits?

BASUG is hosting free webinars. Next up: Mark Keintz presenting History Carried Forward, Future Carried Back: Mixing Time Series of Differing Frequencies on May 8. Register now at the Boston Area SAS Users Group event page: https://www.basug.org/events.
jimbobob
Quartz | Level 8

Yes, it is supposed to be unique. I had to create the key by concatenating other fields from our data warehouse; however, on occasion, due to human error, two records with the same arrangement key get pulled into our SAS dataset. We then have to scan all the other fields to see why we have two records. Sometimes it's our split-rated indicator showing a split rating when there is no split rating on the record, so we manually remove the records showing "Y". Other times it's due to a miscoded zip code, or reg rating, or PD, or any number of issues. Unfortunately, I'm not in charge of our data warehouse and how they pull the data together, but we identify these issues and pass the info on to IT. In the meantime, we still need to remove the duplicates and remedy the column causing them.

Astounding
PROC Star

A short program can do this, as long as duplicated keys appear at most twice. It's trickier if you have three or more observations per ARRANGEMENT_KEY.

 

First, split the data.  Remove all observations where ARRANGEMENT_KEY is unique.  Then split the observations where there are duplicates.  For example:

proc sort data=have;
   by arrangement_key;
run;
data dup1 dup2;
   set have;
   by arrangement_key;
   if first.arrangement_key and last.arrangement_key then delete;
   if first.arrangement_key then output dup1;
   else output dup2;
run;

So all original observations with a unique ARRANGEMENT_KEY have been deleted by the DELETE statement. Of those that remain, the first observation for each ARRANGEMENT_KEY is in DUP1 and the second (hopefully there are only two) is in DUP2. Then just run PROC COMPARE on the two data sets; the software will do the work for you.
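The split described above (drop unique keys, route the first duplicate to one table and the second to another) can be sketched outside SAS as well. A minimal plain-Python version, assuming rows are already sorted by the key and duplicates come in pairs (hypothetical data):

```python
def split_duplicates(rows, key="arrangement_key"):
    """Drop rows with unique keys; return (dup1, dup2) like the SAS step."""
    counts = {}
    for row in rows:
        counts[row[key]] = counts.get(row[key], 0) + 1

    dup1, dup2, seen = [], [], set()
    for row in rows:                   # rows assumed sorted by the key
        if counts[row[key]] == 1:
            continue                   # unique key: the DELETE branch
        if row[key] not in seen:
            seen.add(row[key])
            dup1.append(row)           # first.arrangement_key -> dup1
        else:
            dup2.append(row)           # subsequent record(s)  -> dup2
    return dup1, dup2

rows = [
    {"arrangement_key": 1, "flag": "N"},
    {"arrangement_key": 2, "flag": "N"},
    {"arrangement_key": 2, "flag": "Y"},
    {"arrangement_key": 3, "flag": "N"},
]
dup1, dup2 = split_duplicates(rows)
print([r["arrangement_key"] for r in dup1],
      [r["arrangement_key"] for r in dup2])  # [2] [2]
```

Comparing dup1 against dup2 row by row then plays the role of PROC COMPARE.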

vellad
Obsidian | Level 7

I'm assuming that you want to identify the culprit variables causing the duplication so that you can fix them. The code below will create the dataset you want, but it assumes there is at most one additional duplicate. You may have to run it multiple times, or wrap it in a macro, if your data contain more than one extra observation per arrangement_key.

data a1 a2;
  set a;
  by arrangement_key;
  if first.arrangement_key ne last.arrangement_key;
  if first.arrangement_key then output a1;
  else output a2;
run;

proc compare base=a1 compare=a2 outstats=diffs noprint;
  id arrangement_key;
run;

data diffs (keep=_VAR_);
  set diffs;
  where _TYPE_='NDIF' and _BASE_ > 0;
run;
ChrisNZ
Tourmaline | Level 20

Like this?

 

data HAVE;
  array A[500] A1-A500;
  do ID=1 to 10;
    output;
    if ID=5 then do; A55=2; output; A55=.; end; * second, differing record for ID=5;
  end;
run;

data _null_;
  set HAVE;
  by ID;
  array ARR A1--A500;                  * index the variables into an array;
  array PREV[500] _temporary_;         * previous record's values (automatically retained);
  if ^first.ID then do I=1 to dim(ARR);      * duplicate found, seek different value(s);
    if ARR[I] ne PREV[I] then putlog _N_= ID= ARR[I]=;
  end;
  do I=1 to dim(ARR); PREV[I]=ARR[I]; end;   * save this record for the next comparison;
run;

_N_=6 ID=5 A55=2

 

Replace

array ARR A1--A500;

with the first and last variable names from your own table, e.g.

array ARR FIRSTVAR--LASTVAR; * FIRSTVAR/LASTVAR are placeholders for your variable names;

(the double dash selects every variable positioned between the two, which makes listing 500 variables easy; look it up).

 

jimbobob
Quartz | Level 8

Perfect, thanks Chris

ScottBass
Rhodochrosite | Level 12

@ChrisNZ wrote:

Like this?

 

data HAVE;
  array A[500] A1-A500;
  do ID=1 to 10;
    output;
    if ID=5 then do; A55=2; output; A55=.; end; * second, differing record for ID=5;
  end;
run;

data _null_;
  set HAVE;
  by ID;
  array ARR A1--A500;                  * index the variables into an array;
  array PREV[500] _temporary_;         * previous record's values (automatically retained);
  if ^first.ID then do I=1 to dim(ARR);      * duplicate found, seek different value(s);
    if ARR[I] ne PREV[I] then putlog _N_= ID= ARR[I]=;
  end;
  do I=1 to dim(ARR); PREV[I]=ARR[I]; end;   * save this record for the next comparison;
run;

_N_=6 ID=5 A55=2

 

Replace

array ARR A1--A500;

with the first and last variable names from your own table, e.g.

array ARR FIRSTVAR--LASTVAR; * FIRSTVAR/LASTVAR are placeholders for your variable names;

(the double dash selects every variable positioned between the two, which makes listing 500 variables easy; look it up).

 


Untested, just a "thought experiment"...

 

Would:

 

proc sort data=foo out=bar dupout=dups noduprecs;
   by _all_;   * sort by every variable so fully identical records end up adjacent;
run;

also work?


Please post your question as a self-contained data step in the form of "have" (source) and "want" (desired results).
I won't contribute to your post if I can't cut-and-paste your syntactically correct code into SAS.
ChrisNZ
Tourmaline | Level 20

@ScottBass 

It should work, but proc sort is expensive, and the question is specifically to find which variables cause the duplication, since there are hundreds to review.

PS sorry about not replying, will reply soon. 🙂

ScottBass
Rhodochrosite | Level 12

@jimbobob wrote:

I have a dataset that is supposed to be unique by my variable called arrangement_key; however, on occasion it's not unique. It's easy to identify the records causing the duplication, but scanning all the variables in those duplicate records trying to find the culprit variable(s) is a pain, since my dataset has 500 columns. Is there an easy way to spit out a dataset that contains the culprit variable(s) that are causing the duplication? Thank you in advance for your help.


You don't say what process is causing the duplicate rows.  Is it a single table you're concerned about or the result of an SQL join (undesired Cartesian Product) across two or more tables?  If you KNOW that your table has non-unique keys, and is participating in a join, you should address this first.

 

Here is SQL code that will identify duplicate keys, although it is functionally equivalent to SAS code previously posted:

 

 

proc sql;
   create table want as
   select key1, key2, key3, count(*) as count
   from have
   group by key1, key2, key3
   having count(*) > 1;
quit;

 

You can filter on the count column, looking for rows with > 2, > 3, > 4, etc. duplicates.
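The GROUP BY / HAVING query above can be sketched in plain Python with collections.Counter over hypothetical (key1, key2, key3) tuples, keeping only keys that occur more than once:

```python
from collections import Counter

# Hypothetical composite keys standing in for (key1, key2, key3) rows.
have = [("a", 1, "x"), ("a", 1, "x"), ("b", 2, "y")]

counts = Counter(have)                                # GROUP BY + count(*)
want = {key: n for key, n in counts.items() if n > 1} # HAVING count(*) > 1
print(want)  # {('a', 1, 'x'): 2}
```

Filtering `want` for counts > 2, > 3, etc. mirrors filtering the SQL count column.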

 

At this point I usually just "eyeball" the data, scrolling right and skimming for satellite (non-key) columns where the data changes.

 

But, again, your first step is to ensure your source table keys are, in fact, keys.  https://www.google.com/search?q=what+is+a+key+column

 


Kurt_Bremser
Super User

You only need to scan for duplicates of the key variable(s), the others are irrelevant for the join.

Please post the complete log of the code that causes the duplicates; use the </> button to post it.


Discussion stats
  • 11 replies
  • 2099 views
  • 1 like
  • 7 in conversation