11-10-2016 04:18 PM
Hello friends, please advise...
We have SAS 9.4 on Linux with a couple of nodes. We gave users 1 TB of temp (WORK) space; however, a few specific nodes keep running out of temp space even though only a few jobs are running on the host. Those hosts' specs are below:
ncpus -> 10 | maxmem -> ~95000 MB | maxswp -> ~96000 MB
Also, when those hosts run out of temp space, CPU utilization stays normal (only ~30%), but pg climbs to around 200 and swp to around 90 GB, and mem goes up to around 80 GB as well...
Would just increasing the temp space resolve the issue?
11-10-2016 06:38 PM
I would first look to see if there are differences in the tasks the users are doing. Possibly some of them need more space OR need to learn how to clean up temporary files.
11-11-2016 03:18 AM
Just throwing more resources at the issue will only be a stopgap measure at best in most cases. Add 1 TB on Monday, and your users will complain on Tuesday.
You need to identify where a certain process runs out of space, and look for the reasons. Often the causes are bad housekeeping in WORK, not using the COMPRESS option on datasets with long character variables, improper use of SQL, and so on.
Optimizing the processes might very well remove the need for more resources, and make those processes run much faster.
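To see where the space is actually going, it helps to rank the per-session WORK directories by size. A minimal sketch, assuming the WORK library lives under an example path /saswork (each SAS session creates a SAS_work* subdirectory there; adjust WORKROOT to your actual SASWORK location):

```shell
#!/bin/sh
# Rank SAS session WORK directories by size so the heavy consumers
# stand out. /saswork is an assumed path, not necessarily yours.
WORKROOT="${WORKROOT:-/saswork}"

# du -sk gives per-session usage in KB; sort largest first and show
# the top 10. Errors (e.g. no matching directories) are suppressed.
du -sk "$WORKROOT"/SAS_work* 2>/dev/null | sort -rn | head -n 10
```

Running this periodically (or when free space drops) quickly shows which session, and hence which user, is eating the space.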
For shared resources that a multitude of users write to, where a disk-full condition is a showstopper, activating the quota system and setting reasonable quotas is a must. Then misbehaving users run out of space individually without causing trouble for the others, and their bad processes can be identified much more easily.
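On Linux this can be done with the standard disk-quota tools. A minimal sketch, assuming WORK is a dedicated mount at /saswork with usrquota enabled in /etc/fstab; the user names and limits are purely illustrative:

```shell
#!/bin/sh
# Illustrative per-user limits: 150 GB soft, 200 GB hard.
# setquota takes block limits in KB.
SOFT_KB=$((150 * 1024 * 1024))
HARD_KB=$((200 * 1024 * 1024))

# setquota needs root and a quota-enabled filesystem; guard so the
# sketch is safe to run anywhere.
if [ "$(id -u)" -eq 0 ] && command -v setquota >/dev/null 2>&1; then
    for user in alice bob; do      # hypothetical user names
        # args: block-soft block-hard inode-soft inode-hard mountpoint
        # (0 = no inode limit)
        setquota -u "$user" "$SOFT_KB" "$HARD_KB" 0 0 /saswork
    done
    repquota -s /saswork           # report usage vs. limits per user
else
    echo "run as root on a mount with usrquota enabled"
fi
```

With a hard limit in place, a runaway job fails with its own quota error instead of filling the shared filesystem for everyone.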