We have SAS 9.4 on Linux with a couple of nodes. We gave users 1 TB of temporary WORK space, but a few specific nodes keep running out of temp space even though only a few jobs run on those hosts. The specs of those hosts are below.
Just throwing more resources at the issue is a stopgap at best. Add 1 TB on Monday, and your users will complain on Tuesday.
You need to identify where a given process runs out of space and look for the reasons. Common culprits are poor housekeeping in WORK, not using the COMPRESS= option on datasets with long character variables, and improper use of SQL.
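As a minimal sketch of the two housekeeping points above (the dataset names `temp1` and `temp2` are placeholders, not from the original post):

```sas
/* Compress datasets; CHAR compression (RLE) helps most with long
   character variables, BINARY with wide, mostly-numeric rows. */
options compress=char;

/* Delete intermediate WORK datasets as soon as they are no
   longer needed, instead of letting them sit until the session ends. */
proc datasets library=work nolist;
    delete temp1 temp2;   /* placeholder dataset names */
quit;
```

The SAS log reports the compression ratio per dataset, which makes it easy to check whether COMPRESS= is actually paying off for a given table.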
Optimizing the processes might very well remove the need for more resources, and make those processes run much faster.
For shared resources that a multitude of users write to, where a disk-full condition is a showstopper, activating the quota system and setting reasonable quotas is a must. Then a misbehaving user runs out of space individually without causing trouble for the others, and their bad processes are much easier to identify.
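On Linux this can be done with filesystem quotas on the shared WORK mount. A sketch for an XFS-backed volume (the mount point `/saswork`, user name, and limit values are examples, not taken from the post):

```
# Mount the WORK filesystem with user quotas enabled (e.g. in /etc/fstab):
#   /dev/sdb1  /saswork  xfs  defaults,uquota  0 0

# Example: 200 GB soft / 250 GB hard block limit per user
xfs_quota -x -c 'limit bsoft=200g bhard=250g someuser' /saswork

# Review current usage per user to spot the space hogs
xfs_quota -x -c 'report -h' /saswork
```

With quotas in place, a runaway job fails with a quota error in its own SAS log instead of taking down every session on the node.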