DATA Step, Macro, Functions and more

Super Contributor
Posts: 717

proc sort/proc sql failing

Sorting with PROC SORT, or with PROC SQL using DISTINCT, on 18 variables and 55 million records is failing on a Windows PC. How can I sort the dataset?

Version: 9.04.01M3P062415
Operating System: WX64_WKS

 

Respected Advisor
Posts: 3,279

Re: proc sort/proc sql failing

"Failing"?

 

Please be specific, give us the details.

--
Paige Miller
Super Contributor
Posts: 717

Re: proc sort/proc sql failing

Posted in reply to PaigeMiller

Hi Paige Miller,
ERROR: An I/O error has occurred on file WORK.'SASTMP-000000030'n.UTILITY.
ERROR: File WORK.'SASTMP-000000030'n.UTILITY is damaged. I/O processing did not complete.

Community Manager
Posts: 3,463

Re: proc sort/proc sql failing

Best if you can share your code and the log.  And using "distinct"? Does that mean that you expect to have < 55 million records at the end?  What's your end goal -- summarization? Subset of distinct records?  Or just a no-duplicate-key situation?  There are lots of ways to get there -- we just need to know what you're after.

Super Contributor
Posts: 717

Re: proc sort/proc sql failing

Posted in reply to ChrisHemedinger

To get to a no-duplicate situation.

Super User
Super User
Posts: 9,853

Re: proc sort/proc sql failing

55 million records * 18 variables is quite a large dataset; are you running out of resources or temporary space?

Perhaps reconsider your problem and use techniques associated with big data. SQL can be quite resource-hungry, writing to temporary files and doing other things behind the scenes. PROC SORT should be better in this respect, but sorting 55 million records could still be an issue.
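The I/O error on the WORK utility file typically means the disk holding the sort's temporary files filled up. A hedged sketch of options that can help, assuming a second drive with more free space exists (the D:\ paths are hypothetical):

```sas
/* UTILLOC and WORK can only be set at SAS invocation, e.g. in
   sasv9.cfg or on the command line (hypothetical paths):
     sas.exe -work "D:\saswork" -utilloc "D:\sasutil"          */

/* Within a running session, you can still shrink the footprint: */
options compress=yes;   /* compress output data sets on disk   */
options sortsize=2G;    /* cap memory PROC SORT tries to use   */
```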

Super User
Posts: 13,941

Re: proc sort/proc sql failing

Show the code that you ran for Proc Sort.

Super Contributor
Posts: 717

Re: proc sort/proc sql failing

proc sort data=mylib.dsn tagsort nodupkey out=target.dsn dupout=target.dsn_dups;
  by  _all_;
run;

Trusted Advisor
Posts: 1,394

Re: proc sort/proc sql failing

@SASPhile  I don't think TAGSORT will be of much benefit if the sort is BY _ALL_. I suspect it saves disk space only if the BY vars are a small subset of all vars.

Super User
Posts: 13,941

Re: proc sort/proc sql failing


SASPhile wrote:

proc sort data=mylib.dsn tagsort nodupkey out=target.dsn dupout=target.dsn_dups;
  by  _all_;
run;


From the documentation:

When the total length of BY variables is small compared with the record length, TAGSORT reduces temporary disk usage considerably. However, processing time might be much higher.

 

And since your BY variables are not a small subset of the variables but all of them, you are adding complexity that likely isn't improving performance at all.
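Under that reasoning, the usual advice is simply to drop TAGSORT; a sketch of the same sort without it (and with the stray period after out=target.dsn removed):

```sas
/* Plain NODUPKEY sort over every variable; TAGSORT omitted
   because the BY vars here span the entire record. */
proc sort data=mylib.dsn nodupkey
          out=target.dsn dupout=target.dsn_dups;
  by _all_;
run;
```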

 

 

Trusted Advisor
Posts: 1,394

Re: proc sort/proc sql failing

I take it your data are not already sorted (even with duplicates); otherwise a simple DATA step could de-dupe them. But does your data have any subset of variables (say, up to 5 out of 30) that are the most discriminating? Even an ID variable for which (say) only 5% of the IDs have duplicates would help.
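The "simple DATA step" de-dupe alluded to above relies on FIRST. processing over already-sorted data. A sketch, assuming the data are sorted by a hypothetical key variable id (_ALL_ is not valid in a DATA step BY statement, so the keys must be listed):

```sas
/* Assumes mylib.dsn is already sorted by id.  Keeps the first row
   per id and routes later rows to a duplicates data set. */
data target.dsn target.dsn_dups;
  set mylib.dsn;
  by id;                          /* id is a hypothetical key */
  if first.id then output target.dsn;
  else output target.dsn_dups;
run;
```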

 

In another context, I'm working on a solution using hash objects that can sharply reduce memory requirements while removing duplicates. I'd suggest it if your situation seems analogous.

Occasional Contributor _s_
Posts: 5

Re: proc sort/proc sql failing

Deduplication of rows in a very large SAS data set turns out to be easy with a hash object. The difficulties arise in deciding how to deduplicate.

 

The DISTINCT operator in SAS SQL eliminates rows that have exactly the same column values. SORT with NODUPKEY keeps only the first row among multiple rows with the same key value(s). The hash object lets one define a key of one or more columns, look it up in the hash as a DATA step reads each row of a SAS data set, and decide how to write or suppress key duplicates. As a safeguard, it often makes sense to write the rows with the first instance of a key value to a main data set and rows with subsequent instances of that same key to a data set of excluded duplicates.
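That safeguard can be sketched in a single pass, no sort required; id here is a hypothetical key variable, and since the hash stores only the key, memory grows with the number of distinct keys rather than with the full record length:

```sas
data target.dsn target.dsn_dups;
  if _n_ = 1 then do;
    declare hash h();            /* keys seen so far */
    h.defineKey('id');           /* id is a hypothetical key */
    h.defineDone();
  end;
  set mylib.dsn;
  if h.check() ne 0 then do;     /* key not seen yet: keep the row */
    rc = h.add();
    output target.dsn;
  end;
  else output target.dsn_dups;   /* later instance of the same key */
  drop rc;
run;
```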

 

How do you prefer to handle the duplicates? Note that if one is eliminating key duplicates, the duplicates may contain data not found in the row being kept.

S

 

 

Discussion stats
  • 11 replies
  • 229 views
  • 0 likes
  • 7 in conversation