We are using SAS (64-bit) on a machine running Sun Solaris 10 (64-bit).
When users start SAS programs on the machine from an ssh session, e.g. "nohup sas <programname>.sas &", the processes spend a long time in the sleep state (LCK).
The programs read flat files from the UNIX filesystem for analysis. We have been investigating but could not find a cause for this.
Does anyone have an idea how we could optimize execution, e.g. with options such as -memsize, -paging etc., or by changing the SAS code itself, to speed things up?
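Purely for illustration, an invocation with some of the tuning options mentioned might look like the sketch below. The paths and option values are placeholders, not recommendations; sensible values depend on the machine's free memory and the size of the flat files.

```shell
# Hypothetical batch invocation with memory/I-O tuning options.
# <programname> and all values are placeholders for illustration only.
nohup sas -sysin /path/to/<programname>.sas \
    -memsize 4G \          # raise the memory ceiling for this session
    -sortsize 1G \         # memory available to PROC SORT
    -bufsize 128k \        # page size for SAS data set I/O
    -bufno 10 \            # number of I/O buffers per data set
    -fullstimer \          # log detailed CPU/memory/I-O statistics per step
    -log /path/to/<programname>.log &
```

The -fullstimer statistics in the log are a cheap first step: they show per-step real vs. CPU time, which helps tell an I/O or lock wait apart from genuine computation.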
I don't have example code, but I can provide more information if required.
I can only say that I have used SAS in a similar environment to the one you describe, and batch processing was not an issue. Does this sleep-state issue affect all SAS batch jobs, or only specific ones?
Do you know what priority these processes run at (are there other higher-priority processes around), and how many SAS processes can run in parallel? I'm just thinking it might be an OS setup issue for the SAS executables.
And if it's only SAS jobs accessing external files: is this an FTP connection or something similar? Could there be a tight limit on how many connections you can open in parallel?
It sounds to me very much like something a sysadmin needs to look into.
If you can't resolve this issue within a reasonable time, it might also be worth contacting SAS Technical Support.
nohup writes an output file, nohup.out; do you keep all the input SAS files in one folder?
Your Solaris admin should be able to tell you the reason for the locks (prstat -mL, plockstat, mount options).
On the command line, along with the input file (-sysin), -noterminal, and -nonews, I would also specify a log file (-log) and an output file (-print).
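Put together, such a command line might look like this (paths are placeholders; giving each job its own log and listing file also avoids every job appending to a shared nohup.out):

```shell
# Hypothetical batch invocation with explicit log and output destinations.
nohup sas -sysin /path/to/program.sas \
    -log /path/to/program.log \      # SAS log instead of the terminal
    -print /path/to/program.lst \    # procedure output (listing)
    -noterminal \                    # no terminal attached; fail rather than prompt
    -nonews &                        # skip the SAS news file at startup
```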
The SAS Help topic "Optimizing SAS I/O" can give you some hints, too.