09-07-2012 05:18 PM
Hi guys. I have a variable recording the titles of publications, and I need to remove duplicate observations from this variable and create a list of unique titles. Because of misspellings and improper citation, many duplicate titles are not exactly identical. How can I delete duplicates in this case?
I worked out a solution, but it soon proved problematic. Here is what I did. First I created the Cartesian product of this variable with itself. Next, I calculated the edit distance between each pair and decided which pairs should be treated as the same title. Finally, I kept only the matched pairs, and the resulting table tells me which observations are the same. However, there is a serious problem with this procedure, illustrated by the simple example below. Suppose I have 5 observations in my data set. Going through the matching steps described above, I get the table below:
obs1 obs2 match_result
1 2 exact match
1 3 fuzzy match
1 5 exact match
2 5 exact match
3 4 fuzzy match
In this example, the comparison between 2 and 5 is unnecessary, since 1=2 and 1=5 are already established. Worse, 3 is recognized as a fuzzy duplicate of 1, and later 4 becomes a fuzzy duplicate of 3. However, the edit distance between 1 and 4 is quite large, so they would not be matched by any means. In this situation, I do not know how to categorize the duplicate observations. Sure enough, 1, 2 and 5 are duplicates. But what about 3 and 4? Therefore, I set another restriction on this procedure: once an observation is matched, whether the match is fuzzy or exact, it should not be used to match other observations. In the example above, after the first loop, 2, 3 and 5 would be excluded from further matching. I feel a series of macro variables would be needed to implement this, but I do not know how. Also, do you think this last restriction is reasonable? Thanks a lot!
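The restriction above amounts to a greedy, one-pass clustering: each unmatched observation seeds a cluster and absorbs its matches, and absorbed observations never seed or join another cluster. A minimal Python sketch of that idea (the thread itself works in SAS; the pair list below is just the 5-observation example, with `greedy_clusters` a hypothetical helper name):

```python
# Greedy one-pass clustering: once an observation is matched (exact or fuzzy),
# it is excluded from seeding or joining any further comparison.

matched_pairs = [(1, 2), (1, 3), (1, 5), (2, 5), (3, 4)]  # pairs judged "same"
observations = [1, 2, 3, 4, 5]

def greedy_clusters(obs, pairs):
    pair_set = {frozenset(p) for p in pairs}   # order-insensitive pair lookup
    assigned = set()
    clusters = []
    for seed in obs:
        if seed in assigned:
            continue
        cluster = [seed]
        assigned.add(seed)
        for other in obs:
            if other not in assigned and frozenset((seed, other)) in pair_set:
                cluster.append(other)
                assigned.add(other)
        clusters.append(cluster)
    return clusters

print(greedy_clusters(observations, matched_pairs))
# -> [[1, 2, 3, 5], [4]]
```

Note the consequence of the restriction: because 3 is absorbed by 1's cluster in the first pass, the (3, 4) fuzzy match is never used, and 4 ends up alone.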
09-07-2012 09:52 PM
Here is the code I wrote for this problem; the sample data is attached. Thanks!
data pairs;
   set title(rename=(title=title1 order=order1)) nobs=nobs1;
   if _n_ = 1 then do;
      call compcost('fdelete=',200, 'finsert=',200, 'freplace=',100,
                    'delete=',100, 'insert=',100, 'replace=',100,
                    'double=',20, 'single=',20, 'swap=',20);
   end;
   do i = 1 to nobs1;
      if _n_ < i then do;
         set title(rename=(title=title2 order=order2)) point=i;
         gedscore = compged(title1, title2);
         if gedscore < &maxscore then output;
      end;
   end;
run;
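For readers outside SAS, the data step above compares every pair (i, j) with i &lt; j and keeps pairs whose generalized edit distance falls under a threshold. A rough Python sketch of the same pairwise logic, using plain Levenshtein distance instead of COMPGED's configurable cost model (the function names and sample titles here are illustrative, not from the thread):

```python
# Compute an edit distance for every pair (i, j) with i < j and keep
# pairs under a threshold -- the same shape as the SAS step above.
# Plain Levenshtein here; SAS COMPGED uses a richer, tunable cost model.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def candidate_pairs(titles, maxdist):
    pairs = []
    for i in range(len(titles)):
        for j in range(i + 1, len(titles)):
            d = levenshtein(titles[i], titles[j])
            if d <= maxdist:
                pairs.append((i, j, d))
    return pairs

titles = ["Fuzzy Matching", "Fuzy Matching", "Graph Theory"]
print(candidate_pairs(titles, 2))
# -> [(0, 1, 1)]
```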
09-07-2012 08:43 PM
You could try the following procedure:
1) create a dataset PAIRS of all pairs (obs1, obs2) where obs1 < obs2 and the edit distance is less than a given value (an exact or fuzzy match)
2) run the subgraph macro on PAIRS to build a CLUSTERS dataset, with one row per node (observation) assigning it to a connected subgraph
3) look at the CLUSTERS dataset; the variable clust identifies each group of similar titles.
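The idea behind the subgraph approach: treat each observation as a node, each matched pair as an edge, and call a cluster a connected component of that graph. A short Python sketch of that computation (a simplified stand-in for the macro, not its actual SAS code; `components` is a hypothetical helper name):

```python
# Connected-components clustering: observations are nodes, matched pairs
# are edges, and a cluster is a connected component of the graph.

from collections import defaultdict

def components(pairs):
    graph = defaultdict(set)
    for a, b in pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, clusters = set(), []
    for node in sorted(graph):
        if node in seen:
            continue
        stack, comp = [node], []
        while stack:                      # iterative depth-first search
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.append(n)
            stack.extend(graph[n] - seen)
        clusters.append(sorted(comp))
    return clusters

# The example pairs from the first post: note that transitivity pulls
# 3 and 4 into the same cluster as 1, 2 and 5.
print(components([(1, 2), (1, 3), (1, 5), (2, 5), (3, 4)]))
# -> [[1, 2, 3, 4, 5]]
```

This makes the original poster's dilemma concrete: with full transitive closure, the fuzzy links through 3 merge everything into one cluster, which is why the fuzzy threshold matters so much.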
09-07-2012 10:12 PM
Hi PG, thanks. I learned some basics of graph theory and sort of understand your macro. My only concern is that this macro treats every pair equally; I am thinking about using the edit distance as a weight between two nodes. I guess this would give me more precise subgraphs.
09-07-2012 10:40 PM
If your PAIRS dataset contains variables obs1, obs2, and dist (the edit distance), you could merge the CLUSTERS dataset with PAIRS to get back the title pairs grouped by cluster, as follows:
proc sql;
create table DISTANCES as
select C1.clust, P.obs1, P.obs2, P.dist
from PAIRS as P inner join
     CLUSTERS as C1 on C1.node=P.obs1 inner join
     CLUSTERS as C2 on C2.node=P.obs2 and C1.clust=C2.clust
order by C1.clust;
quit;
09-09-2012 09:27 PM
Hi PG, thanks!
Your subgraph macro did its job very well on my data. The only trick was that I needed to set a smaller threshold for fuzzy matches; when I did so, your macro gave a pretty clear-cut division. Thanks!