Sorting with PROC SORT, or PROC SQL with DISTINCT, on 18 variables and 55 million records is failing on a Windows PC. How can I sort the dataset?
Version:9.04.01M3P062415
Operating System: WX64_WKS.
"Failing"?
Please be specific, give us the details.
Hi Paige Miller,
ERROR: An I/O error has occurred on file WORK.'SASTMP-000000030'n.UTILITY. ERROR: File WORK.'SASTMP-000000030'n.UTILITY is damaged. I/O processing did not complete.
Best if you can share your code and the log. And using "distinct"? Does that mean that you expect to have < 55 million records at the end? What's your end goal -- summarization? Subset of distinct records? Or just a no-duplicate-key situation? There are lots of ways to get there -- we just need to know what you're after.
to get to no duplicate situation
55 million records * 18 variables is quite a large dataset. Are you running out of resources/temporary space?
Perhaps consider your problem and use techniques associated with big data. SQL can be quite resource hungry, writing to temporary files and doing other things behind the scenes. PROC SORT should be better in this respect, but sorting 55 million records could still be an issue.
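For example, resource pressure can sometimes be eased by compressing the output data set and raising the sort's memory ceiling. A hedged sketch (the 2G value is a placeholder to adjust for your machine, and TAGSORT/NODUPKEY are left out to keep the illustration minimal):

```
/* A sketch, not a definitive fix: compress the output data set
   and give the sort more memory. Sizes are placeholders. */
options compress=binary sortsize=2G;

proc sort data=mylib.dsn out=target.dsn;
  by _all_;
run;
```

The utility files named in the error go to the location set by the UTILLOC system option at SAS invocation; pointing it at a drive with more free space is another common remedy.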
Show the code that you ran for Proc Sort.
proc sort data=mylib.dsn tagsort nodupkey out=target.dsn dupout=target.dsn_dups;
  by _all_;
run;
@SASPhile I don't think TAGSORT will be of much benefit if the sort is BY _ALL_. I suspect it saves disk space only if the BY vars are a small subset of all vars.
@SASPhile wrote:
proc sort data=mylib.dsn tagsort nodupkey out=target.dsn dupout=target.dsn_dups;
  by _all_;
run;
From the documentation:
When the total length of BY variables is small compared with the record length, TAGSORT reduces temporary disk usage considerably. However, processing time might be much higher.
And since you are using all of the variables, rather than a small subset, as your BY variables, TAGSORT adds complexity that likely isn't improving performance at all.
I take it your data are not already sorted (even with duplicates). Otherwise a simple DATA step could de-dupe. But does your data have any subset of variables (say, up to 5 of your 18 variables) that are the most discriminating? Even if you have only an ID var, for which (say) only 5% of the IDs have duplicates.
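If the data were already sorted by a key, the simple DATA step de-dupe mentioned above might look like this (a sketch only; `id` stands in for whatever key variable(s) you actually have):

```
/* Assumes mylib.dsn is already sorted by id (hypothetical key).
   The first row per id goes to the clean set; later rows with
   the same id go to a dupout-style set. */
data target.dsn target.dsn_dups;
  set mylib.dsn;
  by id;
  if first.id then output target.dsn;
  else output target.dsn_dups;
run;
```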
In another context, I'm working on a solution using hash objects that can severely reduce memory requirements while removing duplicates. I'd suggest it if your situations seems analogous.
Deduplication of rows in a very large SAS data set turns out to be easy to do with a hash object. Difficulties arise when deciding on how to deduplicate.
The DISTINCT operator in SAS SQL eliminates rows that have exactly the same values in every column. SORT NODUPKEY keeps only the first row among multiple rows with the same key value(s). The hash object allows one to define a key of one or more variables, look that key up in the hash table as a DATA step reads each row of the data set, and decide whether to write or suppress key duplicates. As a safeguard, it often makes sense to write the rows with the first instance of a key value to a main data set and rows with subsequent instances of that same key value to a data set of excluded duplicates.
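The safeguard described above might be sketched like this (hedged: `id` is a hypothetical key variable, and memory use grows with the number of distinct key values, not the number of rows):

```
/* Keep the first row per key in MAIN; divert later key
   duplicates to EXCLUDED. Only key values are held in memory. */
data main excluded;
  if _n_ = 1 then do;
    declare hash h();
    h.defineKey('id');   /* hypothetical key variable */
    h.defineDone();
  end;
  set mylib.dsn;
  if h.add() = 0 then output main;   /* key seen for the first time */
  else output excluded;              /* key already in the hash table */
run;
```

The ADD method returns 0 when the key was successfully added (i.e., not seen before), which is what drives the first-instance test.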
How do you prefer to handle the duplicates? Note that if one is eliminating key duplicates, the duplicates may contain data not found in the row being kept.