☑ This topic is solved.
Ricardo96
Fluorite | Level 6

Good afternoon everyone,

My problem is as follows:

I have a workbook with no unique identifiers and data that appears to contain duplicates, but the client has asked that the information be kept as is; the records that look identical in the spreadsheet are not actually duplicates.

I have assigned unique identifiers to the records using COUNT(). The problem is that when we receive new files from the client, a new file can contain exactly the same information as the previous workbook plus one or two new records. How do I keep only the new records from the new file so that I can append them to my existing dataset? The rows that already appeared in the previous workbook need to be dropped from the new data, because I do not want those duplicates in my dataset, while still assigning my unique identifiers to the records.

 

Please find attached an example of what the spreadsheet looks like:

 

So I want to keep both instances of Handy Inc and assign unique identifiers to both records.

But if those same records appear in a spreadsheet newly sent by the client, those rows should be excluded and only the new records kept for appending.

 

Thank you in advance for any assistance provided.

 

[Attachment: Comm Example.PNG]

ACCEPTED SOLUTION
Kurt_Bremser
Super User

UNION in SQL removes duplicates on its own:

proc sql;
  /* UNION (without ALL) removes duplicate rows, so records that are
     already present in old_dataset are not repeated when the rows
     from update_dataset are added */
  create table new_dataset as
    select * from old_dataset
    union
    select * from update_dataset
  ;
quit;

Note that this involves an internal sort over all variables, so it can become very resource-intensive with larger data.

OTOH, spreadsheets from Excel are not "large" as they max out at ~ 1 million rows.
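
The original post also asks about assigning unique identifiers to the records. As a minimal follow-on sketch (not part of the accepted answer; new_dataset comes from the step above and record_id is just a placeholder name), a DATA step can number the combined rows with the automatic _N_ counter:

data new_dataset_ids;
  set new_dataset;
  /* _N_ counts DATA step iterations; with a single SET statement
     reading one row per iteration it gives 1, 2, 3, ... */
  record_id = _N_;
run;

Keep in mind these identifiers are positional: because UNION sorts the rows, re-running the step after another update can shift which row gets which number.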


3 REPLIES

Ricardo96
Fluorite | Level 6

Thank you very much for the assistance on this problem. Huge help.

