12-04-2014 10:01 AM
I am grouping by ID1 and ID2, but Var1 and Var2 contain defective data.
I would like to correct them as shown in the WANT dataset.
(Var1 and Var2 values may be under different ID1 and ID2 combinations.)
Thanking you in advance
12-04-2014 10:09 AM
Could you clarify exactly, with code/test data, what you are trying to do? It looks like the data in var_1 actually has more than one distinct value, i.e. "green" is not the same as "green_c", so a grouping would still bring these out. However, if you are after distincts on ID1/ID2 then do a proc sort nodupkey. The question then is what you want to get back. You could also standardize your var1/var2 data before grouping.
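A minimal sketch of the "standardize first, then dedupe" idea suggested above; the dataset name HAVE and the assumption that the defects are "_suffix" variants (e.g. green_c) are taken from the question, not confirmed:

data standardized;
  set have;
  /* strip any "_suffix" so e.g. green_c becomes green (assumed defect pattern) */
  var1 = scan(var1, 1, "_");
  var2 = scan(var2, 1, "_");
run;

/* then keep a single row per ID1/ID2 group */
proc sort data=standardized out=deduped nodupkey;
  by id1 id2;
run;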
12-04-2014 10:15 AM
I am trying to "correct" Var1 and Var2.
The "correct" pairs are (red, cold) and (green, hot), which should replace the "defective" ones.
I am trying to solve that with proc sql (find the Min and Max for each group), then proc sort the distinct cases and get the "dupout" dataset so I can merge it back into the original and "correct" the values, but the code becomes too long.
I wonder whether there is a shortcut for that.
12-04-2014 10:20 AM
This should convert var1/2 to only have the first part of the string; the distinct will then remove any duplicates.
proc sql;
  create table WANT as
  select distinct scan(VAR1,1,"_") as VAR1,
                  scan(VAR2,1,"_") as VAR2
  from HAVE;
quit;
12-04-2014 12:33 PM
Are the distinct values you expect to see only green/red and hot/cold? If so:
data want;
  set have;
  if index(var1,"green") > 0 then do;
    var1 = 'green'; var2 = 'hot';
  end;
  else if index(var1,"red") > 0 then do;
    var1 = 'red'; var2 = 'cold';
  end;
run;
proc sort data=want nodupkey; by id1 id2; run;
12-04-2014 12:58 PM
For clean up, how about:
1) use proc sql to create a list of distinct values for Var1
2) use data step to detect and translate defective values, write the corresponding desired value to a new var
3) merge the translation table with the original table (by the original var1 values)
The result of the merge should give you the original vars plus your new alternatives side by side. Repeat for var2.
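The three steps above might be sketched as follows; the dataset and variable names (HAVE, VAR1_LIST, VAR1_MAP, VAR1_CLEAN) are placeholders, and the green/red translation rule is only an assumption based on the examples in this thread:

proc sql;
  /* 1) distinct values of Var1 */
  create table var1_list as
  select distinct var1 from have;
quit;

data var1_map;
  set var1_list;
  /* 2) translate defective values to the desired value */
  if index(var1, "green") then var1_clean = "green";
  else if index(var1, "red") then var1_clean = "red";
run;

proc sql;
  /* 3) merge the translation table back by the original Var1 values */
  create table want as
  select h.*, m.var1_clean
  from have as h left join var1_map as m
    on h.var1 = m.var1;
quit;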
12-04-2014 01:13 PM
While there is no substitute for proper cleaning of defective data, a shortcut approach would be to take the most common combination of values to be the "correct" set and apply those to your data. There is no guarantee that this will be correct but the "errors" should be spread across the wide variety of ways it is possible to be wrong so the valid data is anticipated to be the most populous.
In your particular instance it might be more appropriate to handle id1 and var1 separately from id2 and var2, but this might be food for thought:
proc sql noprint;
  /*First create distinct combinations of id1, id2, var1 and var2*/
  create table work.groups as select distinct
    id1, id2, var1, var2, count(*) as counter
  from work.have
  group by id1, id2, var1, var2;

  /*Then select the most popular combination (on the assumption that this is the one most likely to be correct)*/
  create table work.best_group as select
    id1, id2, var1, var2
  from work.groups
  group by id1, id2
  having counter = max(counter);

  /*Finally join the original dataset back to the "best" grouping*/
  create table work.want as select
    h.id1, h.id2, h.Transaction_Date,
    bg.var1, bg.var2
  from work.have as h,
       work.best_group as bg
  where h.id1 = bg.id1
    and h.id2 = bg.id2;
quit;