Hi,
I am using SAS and my code has been running fine until now. Tonight I got the error message below while running a PROC SQL step. I have looked at previous answers to questions on this topic and tried troubleshooting along those lines, but it still doesn't work. Do you have any other suggestions?
NOTE: The query requires remerging summary statistics back with the original data.
ERROR: Sort execution failure.
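(For context: PROC SQL prints that remerge NOTE when a summary function appears alongside columns that are not in the GROUP BY clause; the remerge forces an internal sort, and that sort is what fails when work space runs out. The original query isn't shown, but a minimal hypothetical query that produces the NOTE looks something like this, with made-up dataset and variable names:)

/* Hypothetical illustration only - the original query is not shown.    */
/* Selecting a column (amount) that is neither grouped nor summarized   */
/* alongside SUM() makes PROC SQL remerge the group totals back onto    */
/* every detail row, which triggers an internal sort of the data.       */
proc sql;
    create table want as
    select customer_id,
           amount,                        /* detail column -> forces remerge */
           sum(amount) as total_amount    /* group-level summary             */
    from work.have
    group by customer_id;
quit;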
Have you tried using Google to find a solution?
Search query: site:sas.com proc sql Sort execution failure
First link: https://communities.sas.com/t5/SAS-Procedures/ERROR-Sort-execution-failure-in-PROC-SQL/td-p/262581
Looks like you ran out of space because of data growth. Optimize your SQL code, or replace it with DATA/SORT steps where possible.
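For example, one possible rewrite (assuming the goal is a per-group total; dataset and variable names here are hypothetical) is to compute the summary separately and merge it back on, instead of letting PROC SQL remerge:

/* Step 1: compute the summary once, without remerging */
proc summary data=work.have nway;
    class customer_id;
    var amount;
    output out=work.totals (drop=_type_ _freq_) sum=total_amount;
run;

/* Step 2: sort the detail rows and merge the totals back on */
proc sort data=work.have;
    by customer_id;
run;

data work.want;
    merge work.have work.totals;
    by customer_id;
run;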
This is an old thread but I wanted to add my recent experience with the "Sort Execution Failure" error.
First, this troubleshooting page should be read:
http://support.sas.com/kb/39/705.html
In my recent experience with this error, the problem appeared to be insufficient disk space/storage, even though there was ample disk available for the failing process (200 GB).
After trying various remedies, we eventually identified that there was a user disk quota (of 75 GB). Our job was executing as part of a batch of jobs on Unix, all running as the same user ('sasbatch'). The disk quota applies to total usage for that user, so even though 200 GB was available at the filesystem level, our user had reached its quota and the job was failing.
After increasing the user's disk quota, all of our batch jobs were able to complete successfully.
Hope this helps anyone who may come across the same situation.
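For anyone hitting the same situation, here is a rough sketch (Unix-oriented; paths, filerefs, and options shown are examples to adapt) of how to check from inside the SAS session where the sort utility files are written and how much space is actually free there:

/* Show the relevant options and the WORK location */
proc options option=(work utilloc sortsize memsize) value;
run;

%put WORK library path: %sysfunc(pathname(work));

/* Pipe the filesystem usage for the WORK path into the log (Unix).     */
/* Note: df shows filesystem free space, not a per-user quota - a       */
/* quota limit has to be checked with the Unix quota command or with    */
/* your sysadmin.                                                        */
filename diskchk pipe "df -h %sysfunc(pathname(work))";

data _null_;
    infile diskchk;
    input;
    put _infile_;
run;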
I'd even go so far as not having any disk quota for userids used to run batch jobs from the scheduler.
If I were a sysadmin, I'm not sure I'd want no quota on userid BATCH.
But it shouldn't be too hard for a sysadmin to set up a number of userids (BATCH1, BATCH2, ...), each with a quota that would not be reduced by concurrent disk usage from other userids. Let the operating system dynamically allocate batch userids to batch submissions, so users don't have to choose.
Of course, you'd still be at risk of multiple simultaneous demands for disk space exceeding overall disk availability. But at least you'd avoid the synthetic disk shortage created by sharing a quota well short of actual disk space.