09-07-2017 09:01 AM
Hello everyone, I have a database with a lot of rubbish. Soon we will change our database, and I'd like to clean the data as much as possible before the migration.
I'm trying to identify possible duplicate records inside one dataset — records that are not exact matches but are similar.
For example :
Record 1 - Name=John, Surname=Doe, Address= Fake Street
Record 2 - Name=Jonh, Surname= Doe Joe, Address=F. Street
My idea is to create a single string from name, surname, and address, without spaces (for example johndoefakestreet), then compare every record one-to-one against every other record in the same dataset (approximately 800k records) using the COMPGED function, and keep only the pairs with the smallest values in order to identify possible duplicates (which I know are present).
I don't know how to perform this operation, or whether there is an easier way to do it.
I'm using SAS 9.4. I hope it's clear what I'm trying to do.
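A minimal sketch of the approach described above. The dataset name (work.people), the id variable, and the COMPGED cutoff are all assumptions you would adapt to your own data:

```sas
/* Build one lowercase comparison key per record, spaces removed,
   e.g. "johndoefakestreet". Variable names are placeholders. */
data keyed;
    set work.people;
    key = lowcase(compress(catt(name, surname, address)));
run;

/* Cross-compare records pairwise with COMPGED and keep close pairs. */
proc sql;
    create table candidate_dups as
    select a.id as id_a,
           b.id as id_b,
           compged(a.key, b.key) as ged
    from keyed as a, keyed as b
    where a.id < b.id                 /* each pair only once */
      and calculated ged < 200        /* cutoff: tune to your data */
    order by ged;
quit;
```

Be aware that a full cross join of 800k records is roughly 320 billion pairs, which is not practical; in practice you would restrict the join to records that already agree on some blocking field (first letter of surname, postcode, etc.) before applying COMPGED.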
09-07-2017 09:12 AM
Do a PROC FREQ on your data; this way you will get a list of distinct values and how many times each value appears. You can then use that output to see where cleaning can be done. Iterate the process — as the list shrinks, it should go quicker.
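For example, a frequency list of one field, most common values first (dataset and variable names are placeholders):

```sas
/* Distinct values of surname with their counts, descending,
   including missing values, written to an output dataset. */
proc freq data=work.people order=freq;
    tables surname / missing out=surname_counts;
run;
```

Scanning the low-count tail of such a list is often where the typos and near-duplicates show up.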
09-07-2017 10:35 AM
Combining everything into one field may introduce more problems than you think.
When I have a project like this, I use a free tool developed by the CDC called Link Plus, a probabilistic matching program available at https://www.cdc.gov/cancer/npcr/tools/registryplus/lp_tech_info.htm
The program returns a probability of a match and indicators of which records it may match.
09-07-2017 10:52 AM
I'll second @ballardw's suggestion.
Otherwise, for fuzzy matches look at:
the SPEDIS and COMPGED type functions, as well as this post, which has a good SQL example of doing this type of multiple match in a semi-brute-force way.
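To get a feel for how the two functions score near-matches, you can try them on the example names from the original post (output goes to the log; the exact scores depend on each function's default operation costs):

```sas
/* Compare the two similarity measures on a known near-duplicate. */
data _null_;
    a = 'John';
    b = 'Jonh';
    ged = compged(a, b);   /* generalized edit distance: cost of edits */
    sp  = spedis(a, b);    /* spelling distance: asymmetric, scaled by
                              the length of the first argument */
    put ged= sp=;
run;
```

Note that SPEDIS is asymmetric — spedis(a, b) and spedis(b, a) can differ — so with either function you'll want to experiment on known duplicates to pick a sensible cutoff before running it over the whole dataset.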