tediest
Calcite | Level 5

Hi guys. I have a variable recording the titles of publications, and I need to remove duplicate observations from this variable and create a list of unique titles. Because of misspellings and improper citation, many duplicate titles are not exactly the same. How can I remove duplicates in this case?

I figured out a solution, but it soon proved problematic. Here is what I did. First, I created the Cartesian product of this variable with itself. Next, I calculated the edit distance between each pair and decided which pairs should be considered the same. Finally, I kept only the matched pairs, and the resulting table tells me which observations are the same. However, there is a serious problem with this procedure, illustrated by a simple example. Suppose I have five observations in my data set. Going through the matching steps described above, I get the table below:

obs1  obs2  match_result
1     2     exact match
1     3     fuzzy match
1     5     exact match
2     5     exact match
3     4     fuzzy match

In this example, the comparison between 2 and 5 is unnecessary, since 1=2 and 1=5 are already established. Worse still, 3 is recognized as a fuzzy duplicate of 1, but later 4 becomes a fuzzy duplicate of 3 as well. However, the edit distance between 1 and 4 is quite large, and they are not matched by any means. In this example, I do not know how to categorize the duplicate observations. Sure enough, 1, 2, and 5 are duplicates. But what about 3 and 4? Therefore, I set another restriction on this procedure: once an observation is matched, whether fuzzy or exact, it should not be used to match other observations. In the example above, after the first loop, 2, 3, and 5 should be excluded from further matching. I feel a series of macro variables would be needed for this procedure, but I do not know how to implement it. Also, do you think the last restriction I set is reasonable? Thanks a lot!

6 REPLIES
art297
Opal | Level 21

Can you post a small sample of your data and the code you used?  That would make it a lot easier to evaluate.

tediest
Calcite | Level 5

Here is the code I wrote for this problem; the sample data is attached. Thanks!

%let maxscore=1000;

data title_deldup;
    set title(rename=(title=title1 order=order1)) nobs=nobs1;
    if _n_ = 1 then do;
        /* set custom operation costs used by COMPGED */
        call compcost('fdelete=',200, 'finsert=',200, 'freplace=',100,
                      'delete=',100, 'insert=',100, 'replace=',100,
                      'append=',200, 'truncate=',200,
                      'double=',20, 'single=',20, 'swap=',20,
                      'blank=',10, 'punctuation=',10,
                      'match=',0);
    end;
    /* compare each title with every later title (upper triangle only) */
    do i = 1 to nobs1;
        if _n_ < i then do;
            set title(rename=(title=title2 order=order2)) point=i;
            gedscore = compged(title1, title2, 'iL');
            if gedscore < &maxscore then output;
        end;
    end;
run;

PGStats
Opal | Level 21

You could try the following procedure:

1) create a dataset PAIRS of all pairs (obs1, obs2) where obs1 < obs2 and the edit distance is less than a given value (an exact or fuzzy match);

2) run the macro %SubGraphs(PAIRS,from=obs1,to=obs2,out=CLUSTERS); the macro is given here:

3) look at the CLUSTERS dataset; the variable clust identifies each group of similar titles.
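A minimal sketch of steps 1 and 2, assuming the title_deldup output from the code earlier in this thread (with variables order1, order2, and gedscore) and assuming the %SubGraphs macro has already been compiled:

```sas
/* Step 1: build PAIRS from the pairwise-distance output above.
   Variable names follow the OP's code; 1000 is the OP's threshold. */
proc sql;
    create table PAIRS as
    select order1 as obs1, order2 as obs2, gedscore as dist
    from title_deldup
    where order1 < order2 and gedscore < 1000;
quit;

/* Step 2: group connected titles into clusters */
%SubGraphs(PAIRS, from=obs1, to=obs2, out=CLUSTERS);
```

Each row of CLUSTERS should then map one observation number (node) to a cluster id (clust).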

Good luck.

PG

tediest
Calcite | Level 5

Hi PG, thanks. I learned some basics of graph theory and more or less understand your macro. My only concern is that this macro treats every pair equally; I am thinking about the possibility of using the edit distance as a weight between two nodes. I guess this would give me more precise subgraphs.
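One simple way to let the edit distance influence the result, assuming a PAIRS dataset with a dist column as described in this thread, is to drop the weakest links before clustering, so a marginal fuzzy match can no longer chain two otherwise unrelated groups together:

```sas
/* Sketch: keep only strong links before running %SubGraphs.
   The cutoff 500 is an illustrative value, not from the thread. */
%let tight=500;

data PAIRS_TIGHT;
    set PAIRS;
    if dist < &tight;   /* discard weak fuzzy links */
run;

%SubGraphs(PAIRS_TIGHT, from=obs1, to=obs2, out=CLUSTERS);
```

This is cruder than true weighted clustering, but it directly addresses the chaining problem (1~3~4) described at the top of the thread.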

PGStats
Opal | Level 21

If your PAIRS dataset contains the variables obs1, obs2, and dist (the edit distance), you could merge the CLUSTERS dataset with PAIRS to get back the title pairs grouped by cluster, as follows:

proc sql;
    create table DISTANCES as
    select C1.clust, P.obs1, P.obs2, P.dist
    from PAIRS as P inner join
         CLUSTERS as C1 on C1.node=P.obs1 inner join
         CLUSTERS as C2 on C2.node=P.obs2 and C1.clust=C2.clust
    order by C1.clust;
quit;
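To finish with the list of unique titles the original post asked for, one hedged approach is to keep a single representative per cluster (here, the lowest-numbered observation) plus every title that never matched anything; this assumes the source dataset is named title with variables title and order, as in the OP's renames:

```sas
/* Sketch: one representative title per cluster, plus unmatched titles,
   which form singleton groups of their own. */
proc sql;
    create table UNIQUE_TITLES as
    select T.title
    from title as T
    where T.order in (select min(node) from CLUSTERS group by clust)
       or T.order not in (select node from CLUSTERS);
quit;
```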

PG

tediest
Calcite | Level 5

Hi PG, thanks!

Your subgraph macro did its job very well on my data. The only trick was that I needed to set a smaller threshold for the fuzzy match; once I did so, your macro gave a pretty clear-cut division. Thanks!

Ted

