☑ This topic is solved.
bhaskarkothavt
Obsidian | Level 7
I am seeking urgent help with an issue that is impacting all SAS Studio users on our Viya 4 environment. Since 9:30 AM EST today, users have been encountering error messages such as "ERROR: Insufficient disk space is full, or quota has been exceeded." or "ERROR: Insufficient space in file WORK.BDXFS124.DATA." They are unable to run even simple steps such as PROC CONTENTS or reading data from SASHELP into the WORK directory. This is causing significant disruption to our daily operations, as it prevents users from performing any task in SAS Studio, and we have also noticed that the SAS Studio compute context keeps disconnecting.
 
Our SAS administrator verified the status of the pods and all are running fine, so we are unable to identify what caused the issue all of a sudden. Unfortunately, the impact is even more critical because we have fully transitioned to SAS Studio and do not have access to Base SAS at this time; we depend on SAS Studio for our daily operations.
 
Given the severity of the problem and its impact on our daily operations, any guidance or solution you can provide would be greatly appreciated.
 
Please let me know if there are any immediate steps we can take to free up disk space or identify the cause of the quota being exceeded.  We are available to discuss this matter further at your earliest convenience.

Sincerely, 

Bhaskar

1 ACCEPTED SOLUTION

Accepted Solutions
gwootton
SAS Super FREQ
1. By default, SAS will store output in the current working directory. SAS Studio runs initialization code that changes the current working directory to the WORK path to ensure it is writable. In code that generates output to a file, you can typically define where to send it.
2. When an individual SAS session terminates gracefully, it will remove its WORK path. If the SAS session ends in such a way that this cannot be done, the WORK location remains and must be cleaned up manually.
3. No, this would need to be done outside of Viya.
4. In the WORK path typically you'd have a directory for each compute session and within that a WORK directory, a UTILLOC directory, and any output files. These would each be owned by the user who owns that compute session and only writable by them, so any cleanup activity would need to be done as root.

By default, WORK is set to an emptyDir volume, which is a temporary volume tied to the pod, so regardless of how the pod ends it would be removed. However, this can be undesirable because it uses the node's local storage and so could risk bringing down the node if no node-level disk quota is in place.
The transformer I mentioned allows you to change WORK to use any Kubernetes volume definition, so this could be a local path on the nodes (which would be lost if the node was reprovisioned by the cloud provider, and could be a different storage device than the root), an NFS share or permanent PVC (where no automated cleanup would occur), or a generic ephemeral volume (a PVC that gets deleted when the pod is removed).
--
Greg Wootton | Principal Systems Technical Support Engineer

View solution in original post

9 REPLIES 9
antonbcristina
SAS Employee

@bhaskarkothavt, I strongly recommend contacting Tech Support https://support.sas.com/en/technical-support.html#contact as soon as possible, if you haven't done so already. They would be able to help with this and escalate the issue accordingly.   

bhaskarkothavt
Obsidian | Level 7

Yes, I raised a SAS Tech Support ticket, but there has been no response yet.

Best Regards,

Bhaskar

antonbcristina
SAS Employee

The support site mentions reporting critical problems by phone. Give them a call or contact them through the chat for a quicker resolution.

SASKiwi
PROC Star

The error you are getting indicates you are running low on WORK disk storage. You can locate where that is in SAS Studio by running this - if you are able:

proc options option = work;
run;
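
If your compute context allows shell access (XCMD and pipe access are often disabled by default in Viya 4, so treat that as an assumption), a minimal sketch like the following can also report the current WORK path and how full its filesystem is:

/* Sketch: report the current WORK path and its filesystem usage.    */
/* Assumes a Linux compute pod and that XCMD/pipe access is enabled  */
/* in the compute context (often disabled by default in Viya 4).     */
%let workpath = %sysfunc(getoption(WORK));
%put NOTE: WORK is currently &workpath;

filename diskchk pipe "df -h &workpath";

data _null_;
   infile diskchk;
   input;
   put _infile_;
run;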
Ksharp
Super User
This is not a SAS issue; it is about the OS quota for each user.
If your SAS is installed on a UNIX/AIX/Linux OS, talk to your OS admin about giving all users a larger or unlimited quota.
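
If the compute image happens to include the quota utility and pipe access is enabled (both are assumptions; neither may be true in a Viya 4 pod), a similar pipe can surface the calling user's current quota usage from within SAS:

/* Sketch: display the calling user's disk quota, assuming the 'quota' */
/* utility exists in the compute container and pipe/XCMD access is on. */
filename quotachk pipe "quota -s 2>&1";

data _null_;
   infile quotachk;
   input;
   put _infile_;
run;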
gwootton
SAS Super FREQ
As others have pointed out:
1. For urgent issues you should engage SAS Technical Support by phone.
2. You are running out of disk space in your WORK location. By default in Viya 4 this is the node's local file system, but it could be reconfigured to an NFS share or a PVC. Whatever it is, it is running out of space. You can use the transformer:
$deploy/sas-bases/examples/sas-programming-environment/storage/change-viya-volume-storage-class.yaml
to configure the volume used for WORK by default.
--
Greg Wootton | Principal Systems Technical Support Engineer
bhaskarkothavt
Obsidian | Level 7
This is very helpful. Thank you.

I do have some questions:
1. Why are the result files saved in the saswork folder instead of the users' home/data directories?
2. When we reboot SAS Viya, won't the saswork folder be emptied? I thought the WORK directory was a temporary folder that gets wiped when we log out or reboot.
3. Is there a way to clean up those files from within Viya? I don't see them in the WORK library in SAS Studio.
4. There are some directories in the /saswork folder that we don't have permission to access. Are those system-created folders? Should we leave them alone, or can we delete them?
gwootton
SAS Super FREQ
1. By default, SAS will store output in the current working directory. SAS Studio runs initialization code that changes the current working directory to the WORK path to ensure it is writable. In code that generates output to a file, you can typically define where to send it (see the sketch after this list).
2. When an individual SAS session terminates gracefully, it will remove its WORK path. If the SAS session ends in such a way that this cannot be done, the WORK location remains and must be cleaned up manually.
3. No, this would need to be done outside of Viya.
4. In the WORK path typically you'd have a directory for each compute session and within that a WORK directory, a UTILLOC directory, and any output files. These would each be owned by the user who owns that compute session and only writable by them, so any cleanup activity would need to be done as root.
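
To illustrate point 1, here is a minimal sketch of sending file output to an explicit location rather than letting it land in the current working directory; the /home/myuser path is a hypothetical placeholder, not a real location in your environment:

/* Sketch: write output to an explicit, user-chosen path instead of   */
/* the current working directory (which SAS Studio points at WORK).   */
/* "/home/myuser" is a hypothetical placeholder path.                 */
ods html file="/home/myuser/contents_report.html";

proc contents data=sashelp.class;
run;

ods html close;

/* The same idea applies to PROC EXPORT, FILE statements, and so on. */
proc export data=sashelp.class
   outfile="/home/myuser/class.csv"
   dbms=csv replace;
run;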

By default, WORK is set to an emptyDir volume, which is a temporary volume tied to the pod, so regardless of how the pod ends it would be removed. However, this can be undesirable because it uses the node's local storage and so could risk bringing down the node if no node-level disk quota is in place.
The transformer I mentioned allows you to change WORK to use any Kubernetes volume definition, so this could be a local path on the nodes (which would be lost if the node was reprovisioned by the cloud provider, and could be a different storage device than the root), an NFS share or permanent PVC (where no automated cleanup would occur), or a generic ephemeral volume (a PVC that gets deleted when the pod is removed).
--
Greg Wootton | Principal Systems Technical Support Engineer
bhaskarkothavt
Obsidian | Level 7
Yes, we were unaware of this. Thanks; we have implemented a schedule to clean this directory periodically.