Ricardo96
Fluorite | Level 6

Good Afternoon Everyone, 

My problem is as follows:

I have a workbook with no unique identifiers and data that appears to contain duplicates, but the client has asked that the information be kept as is; the rows that look identical in the spreadsheet are not actually duplicates.

 

I have assigned unique identifiers to the records using COUNT(), but the problem is that when we receive new files from the client, a new file can contain exactly the same information as the previous workbook plus one or two new records. How do I keep only the new records from the new file so that I can append them to my existing dataset? The repeated rows are duplicates of the previous workbook, so I do not want them in my dataset, but I still need to assign my unique identifiers to the records I do keep.

 

Please find attached an example of what the spreadsheet looks like:

 

So I want to keep both instances of Handy Inc and assign unique identifiers to both records,

but if these same records appear in a spreadsheet the client sends later, those rows should be excluded and only the new records kept for appending.

 

Thank you in advance for all assistance provided 

 

Comm Example.PNG

1 ACCEPTED SOLUTION
Kurt_Bremser
Super User

UNION in SQL checks for duplicates on its own:

proc sql;
create table new_dataset as
  select * from old_dataset
  union
  select * from update_dataset
;
quit;

Note that this involves an internal sort over all variables, so it can become very resource-intensive with larger data.

OTOH, spreadsheets from Excel are not "large" as they max out at ~ 1 million rows.
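
In case it helps to see the full flow, here is a minimal sketch of an incremental load that keeps only the genuinely new rows and continues the identifier sequence. The dataset names (old_dataset, update_dataset, new_only) and the identifier variable record_id are assumptions for illustration; like UNION, the EXCEPT operator compares on every column.

proc sql noprint;
  /* Rows in the update file that do not already exist in the old data.  */
  /* record_id (an assumed name) is dropped from the comparison because  */
  /* the files from the client arrive without identifiers.               */
  create table new_only as
    select * from update_dataset
    except
    select * from old_dataset(drop=record_id)
  ;
  /* Highest identifier assigned so far, so numbering can continue. */
  select max(record_id) into :max_id from old_dataset;
quit;

/* Assign identifiers to the new rows, continuing the existing sequence. */
data new_only;
  set new_only;
  record_id = &max_id + _n_;
run;

/* Append only the new rows to the master dataset. */
proc append base=old_dataset data=new_only;
run;

One caveat: EXCEPT (without the ALL keyword) also collapses rows that are identical to each other within the update file, so two genuinely new rows that look the same would come through as one.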


Ricardo96
Fluorite | Level 6

Thank you very much for the assistance on this problem... huge help.
