10-13-2017 11:07 AM
Sorting with PROC SORT, or PROC SQL with DISTINCT, on 18 variables and 55 million records is failing on a Windows PC. How can I sort the dataset?
Operating System: WX64_WKS.
10-13-2017 11:22 AM
Hi Paige Miller,
ERROR: An I/O error has occurred on file WORK.'SASTMP-000000030'n.UTILITY.
ERROR: File WORK.'SASTMP-000000030'n.UTILITY is damaged. I/O processing did not complete.
10-13-2017 11:13 AM
Best if you can share your code and the log. And using "distinct"? Does that mean that you expect to have < 55 million records at the end? What's your end goal -- summarization? Subset of distinct records? Or just a no-duplicate-key situation? There are lots of ways to get there -- we just need to know what you're after.
10-13-2017 11:16 AM
55 million records * 18 variables is quite a large dataset; are you running out of resources or temporary work area?
Perhaps reconsider your problem and use techniques associated with big data. SQL can be quite resource hungry, writing to temporary files and doing other work behind the scenes. PROC SORT should be better in this respect, but sorting 55 million records could still be an issue.
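One thing worth checking before changing the algorithm is whether the sort simply has too little memory or utility space. A minimal sketch (the 4G value and the BY variables var1/var2 are assumptions, not from the thread; UTILLOC can only be set at SAS startup, so only SORTSIZE is shown here):

```sas
/* Raise the ceiling on memory available to the sort so fewer
   utility-file spills are needed. Pick a value that fits your PC. */
options sortsize=4G;

proc sort data=mylib.dsn out=work.sorted;
   by var1 var2;   /* hypothetical key variables */
run;
```

If WORK sits on a nearly full drive, pointing it (and UTILLOC) at a larger disk via the startup configuration is often the real fix for the damaged-utility-file error shown above.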
10-13-2017 01:37 PM
@SASPhile I don't think TAGSORT will be of much benefit if the sort is BY _ALL_. I suspect it saves disk space only if the BY variables are a small subset of all the variables.
10-13-2017 04:38 PM
proc sort data=mylib.dsn tagsort nodupkey out=target.dsn dupout=target.dsn_dups;
   by _all_;
run;
From the documentation:
When the total length of BY variables is small compared with the record length, TAGSORT reduces temporary disk usage considerably. However, processing time might be much higher.
And since you are not using any subset of the variables for your by variables you are adding a complexity that likely isn't improving performance at all.
10-13-2017 01:24 PM
I take it your data are not already sorted (even with duplicates); otherwise a simple DATA step could de-dupe. But does your data have any subset of variables (say, up to 5 of the 18) that are the most discriminating? Even an ID variable for which, say, only 5% of the IDs have duplicates would help.
In another context, I'm working on a solution using hash objects that can severely reduce memory requirements while removing duplicates. I'd suggest it if your situation seems analogous.
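For the already-sorted case, the simple DATA step de-dupe looks like this (the key variable id is a hypothetical stand-in for whatever the data are sorted by):

```sas
/* Data already sorted by id: FIRST.-processing keeps one row per key. */
data target.deduped;
   set mylib.dsn;
   by id;          /* must match the existing sort order */
   if first.id;    /* output only the first row of each id group */
run;
```

This pass reads the data once and needs no additional sort or utility space, which is why it's worth establishing whether the data already have an order.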
10-16-2017 01:41 PM
Deduplication of rows in a very large SAS data set turns out to be easy to do with a hash object. The difficulties arise in deciding how to deduplicate.
The DISTINCT operator in SAS SQL eliminates rows that have exactly the same values in every column. PROC SORT with NODUPKEY keeps only the first row among multiple rows with the same key value(s). The hash object lets you define a key of one or more variables, look each incoming row's key up in the hash table as a DATA step reads the data set, and decide how to write or suppress key duplicates. As a safeguard, it often makes sense to write the row carrying the first instance of a key value to a main data set, and rows with subsequent occurrences of that same key value to a data set of excluded duplicates.
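The hash-based approach described above can be sketched as follows (the key variable id and the output data set names are assumptions for illustration; a real key could be several variables):

```sas
/* Hash de-dupe on an unsorted data set: first occurrence of each key
   goes to MAIN, later occurrences go to DUPS. No sort required. */
data target.main target.dups;
   if _n_ = 1 then do;
      declare hash h();      /* key-only hash of values seen so far */
      h.defineKey('id');     /* hypothetical key variable(s) */
      h.defineDone();
   end;
   set mylib.dsn;
   if h.add() = 0 then output target.main;  /* ADD returns 0: new key */
   else output target.dups;                  /* nonzero: key already seen */
run;
```

Memory use is driven by the number of distinct key values, not the row count or record length, which is why this can succeed where a full BY _ALL_ sort runs out of utility space.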
How do you prefer to handle the duplicates? Note that if one is eliminating key duplicates, the duplicates may contain data not found in the row being kept.