Hello, everyone
I have two datasets to merge using a common string variable, customer name.
The main dataset (customer data) contains the firm ID, year, the firm's sales to each of its customers, and the customer ID and name. Each firm can have multiple customers in each year. This is a panel dataset with 455,000 records.
supplierid | supplier | customername | salestocustomer | fyear |
1797 | MM corp | Enserco Energy Inc | 20 | 2006 |
1797 | MM corp | Calpine Corp | 30 | 2006 |
1891 | Hilton Inc | International Business Machi | 40 | 2006 |
1891 | Hilton Inc | Xilinx Inc | 50 | 2006 |
1001 | Alcoa | X Incorporated | 20 | 1990 |
1001 | Alcoa | Sumblet corp | 30 | 1990 |
The firm data: this dataset contains all U.S. publicly traded firms between 1972-2012, with their names, years, and accounting data. If the firmname in this dataset is considered close enough to customername in the main dataset, I want to join the two datasets together. This one has 256,000 observations, among which are 24,000 unique firmnames (note: each firmname can appear in multiple years).
firmid | firmname | xvar |
1291 | Enserco Energy Incorporated | 0.1 |
1081 | Calpine | 1 |
1123 | Xilin corp | 110 |
My plan is :
My concern is: given the number of records in these datasets, proc sql does a full Cartesian join that will be several GB in size; it froze my computer each time.
Can someone comment on :
If you could show some sample code, that would be great!
Thank you !
Lan
Take a look at the approach I used for the following presentation: Expert Panel Solution MWSUG 2013-Tabachneck - sasCommunity
particularly the part about adjusting company names.
The approach requires creating a file of unique company names, and then using compged (as I recall) to first clean up the company names before doing any joins.
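For context, compged returns an edit-distance-style score where lower means closer; a toy illustration (the names here are made up, not from either dataset):

```sas
/* Toy illustration of compged: lower scores mean closer strings. */
data _null_;
  close = compged('ENSERCO ENERGY INC', 'ENSERCO ENERGY INCORPORATED');
  far   = compged('ENSERCO ENERGY INC', 'XILINX INC');
  put close= far=;  /* the first score should be much smaller than the second */
run;
```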
Thank you, Arthur!
Yes, you used compged.
Do you have the sample data for which you presented the materials? I found your code starting on page 24, but I do not completely follow what you did.
Best,
Lan
Lan,
Better than sample data, if you click on the link in the paper's first paragraph (problem statement), you can download full copies of all of the datasets that were used.
Thank you Art!
I downloaded the paper and the zipped data. I did the first two steps following your paper: 1. create a unique set of bank names using the customer data; 2. get the number of records in the bankinfo dataset.
By page 6 of the paper, I could not quite follow the block of code, due to my insufficient knowledge of SAS. I pasted it here.
Let me call the sequence of code below "block 1", so I can refer back to it later.
data fmtDataset (keep=fmtname start label type);
  retain fmtname '$banks' type 'C';
  /* Load every bank name from bankinfo into a temporary array */
  array bank(&numrec) $57;
  do i=1 to &numrec;
    set bankinfo;
    bank(i)=BankName;
  end;
  /* For each unique name in banks, find the closest bankinfo name */
  do until (eof);
    set banks (rename=(BankName=start)) end=eof;
    if length(start) le 4 then label=start;
    else do;
      lowscore=5000;
      do i=1 to &numrec;
        score=compged(start,bank(i));
        if score le lowscore then do;
          lowscore=score;
          closest=i;
        end;
      end;
      label=bank(closest);
    end;
    output;
  end;
run;
1. I used compged in the past, but I generally set a low score cutoff to ensure close matches. Your code has the line:
lowscore=5000;
I do not know whether that means you allow very distant matches, i.e., two names that are not a close match.
2. My dataset has company names from two databases; some are easier to match,
e.g., AB INC. vs. AB INCORPORATED. I could use SAS code such as name = tranwrd(name, "INCORPORATED", "INC"); to account for those,
but I notice you use
/*Create the necessary format*/
proc format cntlin=fmtDataset;
run;
/*recode bank names*/
data dcandh;
set dcandh (rename=(BankName=_BankName));
BankName=put(_BankName,$banks.);
run;
My question is: should I use the block 1 code for my case? My two datasets are: dataset one has a group of firms that are customers of other firms; dataset two has all publicly traded firms in the U.S. market. The two data sources could use different abbreviations, such as Inc. and Corp., spelled out or not, and different cases (lower case, upper case) in the name spellings. To account for these, I can make both datasets lower case and spell out the abbreviations I can think of. Besides these, should I use your block 1 to build the format?
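As a sketch of that kind of cleanup (the dataset name, variable names, and abbreviation list below are all assumptions, not part of Art's code):

```sas
/* Hypothetical standardization pass before fuzzy matching:
   upcase, collapse common abbreviations, and strip punctuation. */
data firms_clean;
  set firmdata;  /* assumed input with a firmname variable */
  name_std = upcase(firmname);
  name_std = tranwrd(name_std, 'INCORPORATED', 'INC');
  name_std = tranwrd(name_std, 'CORPORATION', 'CORP');
  name_std = tranwrd(name_std, 'COMPANY', 'CO');
  name_std = tranwrd(name_std, 'LIMITED', 'LTD');
  name_std = compress(name_std, '.,');  /* drop periods and commas */
run;
```

The same pass would be applied to the customer names in the other dataset so both sides feed compged in a consistent form.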
Sincerely,
Lan
Lan,
The 5000 was just an arbitrary starting point. The code matches all of the records in one file, with all of the records in the other file, and selects the closest match.
I created a format because one of the files had a million records in it, but only about 10000 unique bank names. Thus, rather than match all million records, I first eliminated all duplicates, ran the code against the file of unique bank names, created a format, and applied the format to the full non-duplicated file.
You likely won't need to create a format, but could just modify block 1 so that it does the entire job.
As for the score, you could always check the scores at the end of the process to see if any large scores resulted. Obviously, if you do end up with any large scores, adequate matches weren't found for those records.
Thank you Art!
1. I do not know how to modify block 1 code (not using format).
2. you wrote "you could always check the scores at the end of the process to see if any large scores resulted. Obviously, if you do end up with any large scores, adequate matches weren't found for those records.", how do I do that in your block 1 ?
Best,
Lan
It's been a while since I've looked at that code, thus I'm a bit rusty.
I'm re-running it now and, after looking at it, I've changed my mind: building and using the format would probably be the easiest approach.
To answer your question, in block one just change the line:
data fmtDataset (keep=fmtname start label type);
to
data fmtDataset (keep=fmtname start label type lowscore);
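Once lowscore is kept on fmtDataset, the high-scoring (i.e., most weakly matched) pairs can be listed for review with something like this sketch (the cutoff of 500 is only an example, not a recommendation):

```sas
/* List the matches whose compged scores exceed an arbitrary cutoff,
   so the weakest matches can be reviewed by eye. The cutoff of 500
   is an assumption; adjust it to your data. */
proc print data=fmtDataset;
  where lowscore > 500;
  var start label lowscore;
run;
```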
I just re-ran the code. It turns out that it was matching 33,514 records with 10,000 bank names. Depending upon your system's processing speed that could take between 30 minutes and almost 3 hours.
The highest compged score it produced was 860, which, in this case, was quite satisfactory. The results with a lowscore of 860 included the following matches:
GIANR FARM OFFICE OF CLE=GIANT FARM SAVINGS OFFICE OF CLEVELAND
GIANT FARM OFFICE OF CLW=GIANT FARM SAVINGS OFFICE OF CLEVELAND
GIANY FARM OFFICE OF CLE=GIANT FARM SAVINGS OFFICE OF CLEVELAND
The next highest score was 840 and it was assigned to:
SOUTHWEST MILITSRY OF WAS=SOUTHWEST MILITARY SAVINGS OF WASHINGTON
Thank you Art!
I plan to run your code as soon as I finish another program. In the meantime, I looked at the bankname values in the bankinfo and DCANDH data; the spellings are not subject to issues/inconsistencies such as lower-case/upper-case mismatches, Limited vs. Ltd, or Corp vs. Co.
My case is slightly different: my dataset two is a larger set of firm names. It could overlap with dataset one in firm names, but the spelling could be inconsistent (such as lower-case/upper-case mismatches, Limited vs. Ltd, or Corp vs. Co., but not limited to these).
I plan to apply your block 1 to my data, if I can make it work.
If you think of other things I should clean up in the data before the proc sql match, please let me know.
Best,
Lan
Apply the Upcase function to the name variables in each file before attempting to run block 1.
Hi Art,
Your block 1 code works very well for my small sample testing. I am moving on to large sample and will keep you posted on my progress.
Thanks a bunch !
Lan
One thing to keep in mind: prior to running the block 1 code, use a proc sort by company_name (or whatever the field is called) with the nodupkey option, and output a new file (e.g., companies) to use in the block 1 code.
You only need to create the format for each variant of a company's name and the reason for creating the format is to then apply it to the full file.
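The dedup step described above might look like this (dataset and variable names here are assumptions):

```sas
/* One record per unique customer name, written to a new file
   that the block 1 code can read. Names here are hypothetical. */
proc sort data=maindata (keep=customername) out=companies nodupkey;
  by customername;
run;
```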
Yes, Art. I ran proc sort nodupkey on both of my datasets. I think your bankinfo data has only one name per bank, so you did not have to run nodupkey on that one; you ran proc sort nodupkey on the customer theft data. Both of my datasets are panels, so my customer names can appear multiple times. Therefore I ran proc sort nodupkey, then used the output datasets to run your block 1 code.
Lan
Hi Art,
As my earlier post indicated, I was able to run your code (with minor edits) successfully.
My matched sample has over 35,000 records, with the highest score up to 1070. About 2,000 records have a zero score, i.e., a perfect name match across the two datasets. However, even at very low scores, I still have names that are not good matches on visual inspection, e.g.:
I have one additional question regarding the formatting.
CIMM INC and ICM INC is a pair, with a lowscore of 40;
another example:
ESD CO and EDS CORP is a pair, with a lowscore of 40.
My question is: should I change the formatting or other aspects of your posted code (Expert Panel Solution MWSUG 2013-Tabachneck - sasCommunity)
to enhance the match results? As it stands, I have 33,000 records with a non-zero score, and it is time-consuming to do a visual inspection of each pair.
For those that are interested, code starts on the bottom of page 5.
http://www.sascommunity.org/mwiki/images/f/f8/Expert_Panel_Tabachneck_MWSUG_2013.pdf
Thanks !
Lan