
Some SASWORK storage options for SAS Viya on Kubernetes


Introduction

 

SAS compute sessions have always been heavily dependent on good disk I/O performance, and this requirement has not changed with the latest SAS platform. Seasoned SAS administrators might not be surprised to hear this, especially knowing that the SAS compute engine in SAS Viya on Kubernetes is still based on a SAS 9.4 kernel. Choosing a suitable storage configuration for SASWORK usually makes a key difference in compute performance, because most SAS programs rely heavily on the WORK library for storing temporary data while executing their data processing steps.

 

In this blog I’d like to describe some common configuration options for SASWORK when running SAS Viya on Azure’s managed Kubernetes service (AKS).

 

Storage options

 

In this blog I’d like to talk about 4 alternatives for configuring SASWORK storage:

 

  • emptyDir volumes,
  • hostPath mounts,
  • generic ephemeral volumes and
  • RWX storage (“shared storage”)

The following picture tries to summarize these options:

 

[Image: storage1.jpg – overview of the four SASWORK storage options]

Each option comes with benefits and disadvantages and in some cases the latter actually outweigh the former. Before we discuss these options in more detail, let’s briefly cover some basics about SAS compute sessions in SAS Viya on Kubernetes.

 

SAS compute sessions are Kubernetes Jobs which are launched on request – for example by users starting a session in SAS Studio, by running a pipeline in SAS Model Studio or by starting a batch job using the SAS Viya CLI. Like any other Kubernetes Job, SAS compute sessions “run to completion”, which means that the compute pods are terminated and removed once the SAS session completes – for example when a user logs off from SAS Studio or when a batch job finishes execution. The runtime configuration of these Jobs is specified by PodTemplates which (among many other things) also describe the storage configuration of the compute pods.
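If you are curious which of these PodTemplates exist in your deployment, you can simply list them (assuming the SAS Viya namespace is called viya4, as in all examples below):

$ kubectl get podtemplates -n viya4

The exact list depends on your release, but you should at least see sas-launcher-job-config, which is used in the following examples.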

 

The SAS developers have taken precautions to enable SAS administrators to easily adjust the storage configuration to their needs. When taking a closer look at this configuration, you’ll notice that the SASWORK location is mounted from a volume which is mapped into the pod’s filesystem when the session starts. The mount path of this volume has been carefully chosen to be a parent folder of the default location for SASWORK, which means that data in SASWORK will not be written to the pod’s (virtual) filesystem but to an externalized drive (or folder).

 

The following command shows some details of how the persistent volume is mounted to the SAS compute pod:

 

$ kubectl get podtemplates sas-launcher-job-config -n viya4 -o jsonpath="{.template.spec.containers[].volumeMounts}"

[ {
    "mountPath": "/opt/sas/viya/config/var",
    "name": "viya"
  }
(...) ]

 

The output above tells us that a volume named “viya” will be mounted into the pod’s filesystem at /opt/sas/viya/config/var. The code snippet below displays the physical location of SASWORK (as seen from the pod’s point of view) when submitted from a SAS Studio session:

 

%let work_path=%sysfunc(pathname(work));
%put "SASWORK: &work_path";

"SASWORK:
/opt/sas/viya/config/var/tmp/compsrv/default/932f131e-...-f0bcce9ded4f/SAS_work1C5C000000B9_sas-launcher-9dfdf749-...-r4ltl"

 

So the SASWORK folder of this (and any other) SAS session is a subfolder of /opt/sas/viya/config/var and hence will be written to the “viya” volume we just saw. Obviously this raises some questions about this mysterious “viya” volume. It turns out that there is a default configuration in place which can (and should) be overwritten during the deployment to ensure optimal performance for SAS sessions.

 

This is where the different configuration options mentioned above can be applied. In other words: re-configuring SASWORK storage is done by re-defining the “viya” volume in the PodTemplates used by SAS compute sessions.
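If you want to check how the “viya” volume is currently defined across all of these PodTemplates, a jsonpath query along these lines can help (just a sketch – adjust the namespace to your environment):

$ kubectl get podtemplates -n viya4 \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.template.spec.volumes[?(@.name=="viya")]}{"\n"}{end}'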

 

emptyDir

 

This is the default configuration if not changed during the deployment. Assuming you did not follow the instructions given in $deploy/sas-bases/examples/sas-programming-environment/storage/README.md, the SAS compute sessions will use emptyDir volumes for SASWORK.

 

It’s easy to check if you’re running with the default or not by examining another section of the PodTemplate (abbreviated output):

 

$ kubectl get podtemplates sas-launcher-job-config -n viya4 -o jsonpath="{.template.spec.volumes}"

[ {
    "emptyDir": {},
    "name": "viya"
  }
(...) ]

 

This tells us that in this case the “viya” volume is of type emptyDir, i.e. the default configuration is in place. emptyDir seems to be the easiest approach, so why not stick with this option? The major disadvantages are that Kubernetes usually creates the emptyDir on the node’s operating system disk and also limits the capacity of this volume. The limit for emptyDir on AKS is 50 GB, and the Kubernetes scheduler will evict your session if you create more temporary data than that in your SAS session – simply speaking, your session will be terminated forcefully. Moreover, other issues can show up because the node’s OS disk might be too small to host more than a few concurrent SAS sessions (even if each of them stays below the 50 GB limit), and this will also cause errors.
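To get an idea of how much ephemeral storage a node can offer to all of its pods combined, you can query its allocatable resources; evictions caused by exceeding the limit will show up as events (replace the node name with one of your compute nodes):

$ kubectl get node <node-name> -o jsonpath='{.status.allocatable.ephemeral-storage}'
$ kubectl get events -n viya4 --field-selector reason=Evicted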

 

Just as a side note: emptyDir does not actually need to be created on the node’s disk. Following the definition of emptyDir in the Kubernetes documentation, these volumes can be created as RAM-backed filesystems as well:

 

Depending on your environment, emptyDir volumes are stored on whatever medium that backs the node such as disk or SSD, or network storage. However, if you set the emptyDir.medium field to "Memory", Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead.

(https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
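For illustration, the “viya” volume definition would then look roughly like this (the sizeLimit is an assumption – the temporary data has to fit into the memory of your compute nodes):

    - name: viya
      emptyDir:
        medium: Memory
        sizeLimit: 16Gi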

 

While this can speed up the “I/O” of the volume, it of course does not remove all of the disadvantages that come with the use of emptyDir. In short: running SAS Viya with the default setting is ok for small test environments, but it’s definitely not recommended for any serious workloads.

 

hostPath

 

As of today hostPath is the recommended approach for SASWORK in most cases. Like the name suggests, hostPath allows a pod to directly mount a directory (or drive) from the host’s filesystem. To avoid the issues we just discussed for emptyDir, the hostPath directory (or drive) should obviously not be on the node’s OS disk. In other words: hostPath usually requires that the node has at least two disks attached to it.

 

Not all VM types on Azure provide this additional temporary storage disk out of the box. The ones that do have a lowercase “d” in their names and are slightly more expensive. Their additional disk is directly attached to the machine, which provides very good I/O throughput rates. SAS often recommends the Edsv5 series of machines, especially for the SAS compute and CAS node(s), for this reason. Here’s an excerpt from Microsoft’s documentation showing the specs of some typical machine sizes:

 

Size               vCPU   Memory   Temp storage   Max data   Max temp storage          Max    Max network
                          (GiB)    (SSD, GiB)     disks      throughput (IOPS/MBps)    NICs   bandwidth (Mbps)
Standard_E8d_v5       8       64          300         16                 38000/500        4              12500
Standard_E16d_v5     16      128          600         32                75000/1000        8              12500
Standard_E32d_v5     32      256         1200         32               150000/2000        8              16000

 

 

The additional disk is referred to as “Temp storage (SSD)” in the table above. Azure VMs mount this drive at /mnt by default, and this is the mount point which can be used for defining the hostPath volume for SASWORK. The required kustomize patch is not complicated to create and is documented in $deploy/sas-bases/examples/sas-programming-environment/storage/README.md. Here’s an example of what the patch could look like:

 

apiVersion: v1
kind: PodTemplate
metadata:
  name: change-viya-volume-storage-class
template:
  spec:
    volumes:
    - $patch: delete
      name: viya
    - name: viya
      hostPath:
        path: /mnt
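The patch file alone has no effect until it is referenced from the kustomization.yaml of your deployment. The README mentioned above lists the exact entry to add; conceptually it looks roughly like the sketch below (the path and the target labelSelector are assumptions – copy the values from the README of your release):

patches:
- path: site-config/change-viya-volume-storage-class.yaml
  target:
    kind: PodTemplate
    labelSelector: "sas.com/template-intent=sas-launched-workloads"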

 

The I/O throughput of volumes mounted like this is usually very good. Unfortunately there are at least two potential disadvantages which may prevent this configuration from being used.

 

To begin with, the use of hostPath is often discouraged. It can be seen as an anti-pattern in the Kubernetes world because it weakens the isolation of the pod, making it easier for malicious software to escape the container sandbox and take control of the node. The situation is probably not that dangerous given that SAS sessions run in LOCKDOWN mode by default: LOCKDOWN disables the SAS language features that provide direct or indirect access to the operating system shell. Nevertheless, Kubernetes administrators in general will not be overly excited when asked to provide a hostPath configuration for SAS, and there might be business requirements which demand that the LOCKDOWN mode gets relaxed to some extent.

 

Secondly, relying on storage which is directly attached to the Kubernetes nodes brings some restrictions regarding the available storage volumes. When running on public cloud infrastructure you need to choose between VM instance types of different sizes, which means that storage, CPU and memory resources are tightly coupled. This negatively impacts the overall infrastructure costs, forcing you to select larger (and maybe more) nodes than you would choose if CPU and memory could be sized independently from the storage.

 

Generic ephemeral volumes

 

This is a rather new option which became generally available with Kubernetes v1.23. The Kubernetes documentation states that these volumes can be compared to emptyDir but provide some advantages, most importantly:

 

  • The storage can be local or network-attached
  • The volumes can have a fixed size (quota) that pods are not able to exceed

Using ephemeral volumes avoids the coupling discussed above – the storage requirements no longer drive which VM instance types you use for the worker nodes hosting SAS compute workload. This is likely to result in more reasonably sized and more cost-effective cluster topologies. Note that the actual implementation of this API depends on the infrastructure where your Kubernetes cluster runs. This blog focuses on Azure AKS, where ephemeral volumes are implemented using Azure managed disks. Here’s how the kustomize configuration patch for SASWORK could look:

 

apiVersion: v1
kind: PodTemplate
metadata:
  name: change-viya-volume-storage-class
template:
  spec:
    volumes:
    - $patch: delete
      name: viya
    - name: viya
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: ephemeral-saswork-volume
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "managed-csi-premium"
            resources:
              requests:
                storage: 100Gi

 

This example uses a pre-defined storage class on AKS clusters which is configured to provide Premium SSD disks. Also note the requested “storage” size parameter – this is how a SAS administrator could set up a quota for users, as it sets the maximum size of their data in SASWORK. Let’s see how this looks from the cloud provider’s point of view. Imagine that a SAS session has been started and you run the “df” command in a node shell on the worker machine:

 

# df -h

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda1   124G   41G   84G  33% /
/dev/sdb1   464G  156K  440G   1% /mnt
/dev/sdg    98G   24K   98G    1% /var/lib/kubelet/plugins/
                                  kubernetes.io/csi/pv/
                                  pvc-4711.../globalmount

 

The device /dev/sdg represents the managed disk which has been requested for SASWORK by one SAS session (i.e. by the volumeClaimTemplate section in the PodTemplate). This disk also shows up in the Azure portal:

 

[Image: portal1.jpg – the dynamically provisioned managed disk shown in the Azure portal]
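On the Kubernetes side, the same disk shows up as a PersistentVolumeClaim carrying the label we defined in the volumeClaimTemplate, so it is easy to find (output will differ in your environment):

$ kubectl get pvc -n viya4 -l type=ephemeral-saswork-volume

The claim name is derived from the pod name plus the volume name (“viya”), and the claim is removed together with the pod.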

 

As a side note: the “df” output shown above was taken from an Edsv5 instance, so /dev/sdb1 refers to the additional local disk mentioned in the previous section – we could alternatively have used it for SASWORK in a hostPath configuration. There is one fundamental difference though: with /dev/sdb1 and hostPath, all concurrent SAS sessions would need to fit into the 464 GB of storage offered by the local disk of this VM instance type. Using ephemeral volumes, on the other hand, allows us to grant 100 GB of storage to each SAS session, regardless of how many concurrent sessions we have (well, almost – there is a maximum number of data disks that can be attached to a single virtual machine at the same time, as shown in the table above).

 

Ephemeral volumes behave exactly as their names suggest – once the SAS session terminates, the associated volume will be unmounted and discarded automatically.

 

Sounds too good to be true? What about disadvantages – are there any at all? It depends… Taking a second look at the screenshot above you might notice that the max IOPS and throughput rates are quite low. In fact, this particular configuration will probably show pretty bad performance. Why’s that? Well, the volume was requested from the built-in managed-csi-premium StorageClass which provides Premium SSD disks. Since we had defined a quota of 100 GB, the CSI storage provisioner provided a P10 disk with a capacity of 128 GB, as this was the smallest disk which satisfied our request. And if you check the specifications of the Premium SSD disks you’ll see the same provisioned IOPS (500) and throughput (100 MBps) rates as in the screenshot above. These rates are coupled to the disk sizes (P10, P20 etc.) – larger disks provide better performance. This means that we might have to request larger disks just because we want to see performance improvements.

 

The good news is that this can be addressed with some extra effort. The configuration details are covered in another blog, but here’s the short summary: switching from the built-in StorageClasses to a custom class leveraging Ultra SSD disks and tuning the IOPS and throughput rates results in much better performance.
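To give an idea of the direction, a custom StorageClass for Ultra SSD disks could look like the sketch below. The parameters follow the Azure Disk CSI driver; the IOPS and throughput values are placeholders only, and Ultra disks need to be enabled for your cluster’s region and zone before this works:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: saswork-ultra
provisioner: disk.csi.azure.com
parameters:
  skuName: UltraSSD_LRS
  cachingMode: None
  DiskIOPSReadWrite: "20000"
  DiskMBpsReadWrite: "750"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

The volumeClaimTemplate from the previous example would then simply reference storageClassName: "saswork-ultra" instead of "managed-csi-premium".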

 

RWX storage

 

I’d like to cover this option only briefly as it is usually not a recommended approach. Configuring “shared storage” (i.e. storage allowing “RWX” or “ReadWriteMany” access) is easy, but it will often not meet the I/O throughput requirements for SASWORK. There are ways to improve the performance (by preferring SAN over NAS devices, by using high-performance filesystem services like NetApp and more), but that again increases complexity and infrastructure costs.

 

Let’s take the file.csi.azure.com storage provisioner as an example, although I would not recommend using it for this purpose. This CSI driver supports dynamic and static provisioning, so it can create the required Azure Files file shares “on request” or it can use shares which have been created beforehand. In any case, check the SAS Operations Guide for an example of how to create the custom StorageClass so that the required mount options are set correctly. Assuming you want to refer to an existing file share via a PersistentVolumeClaim, this is how you would configure the file share to be used for SASWORK:

 

apiVersion: v1
kind: PodTemplate
metadata:
  name: change-viya-volume-storage-class
template:
  spec:
    volumes:
    - $patch: delete
      name: viya
    - name: viya
      persistentVolumeClaim:
        claimName: pvc-saswork
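The claimName pvc-saswork refers to a PersistentVolumeClaim which you need to create yourself. A minimal sketch could look like this – the StorageClass name and the requested size are placeholders and depend on how you set up the Azure Files share:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-saswork
  namespace: viya4
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: sas-azurefile
  resources:
    requests:
      storage: 1Ti

If the claim should bind to a pre-existing file share (static provisioning), you would additionally create a matching PersistentVolume that points at that share.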

 

Conclusion

 

In this blog I have covered some typical choices for configuring the SASWORK storage for SAS compute sessions. The most important take-away is probably that you really need to change the default configuration: staying with the emptyDir configuration will get you into trouble sooner rather than later. The tests I have run using the different options showed that hostPath seemed to provide the best performance, but ephemeral volumes actually came close (when configured to use Ultra SSDs). Given that Kubernetes administrators might object to the use of hostPath, ephemeral volumes can be a suitable alternative.

Comments

Hi @HansEdert,

thank you for summarizing the results of the different storage options!

 

In the findings from our tests (with 2022.1 LTS) we also noticed these disadvantages:

  • SASWORK folders on shared disks are not automatically deleted when the SAS session ends and the pod is destroyed. So cleaning up SASWORK folders is now your responsibility!
  • file permissions on SASWORK folders could become an issue. In our tests the users were able to read and write all SASWORK folders on the disk (also from other user sessions)!

 

So the overall trade-off for higher performance seems to be less comfort, higher costs and an impact on security.

Best Regards

Andreas

    - name: viya
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "standard"
            resources:
              requests:
                storage: 500Gi

Hello Hans,

thank you very much for this article, I was searching for something like this. We do not have “type” defined in our spec – can this be a problem? It is good to know that it is a size per session; we have defined 500 GB, maybe that's too much. Regards, Karolina

Thanks. Gives a good understanding of SASWORK on Viya4.
Do we have anything like this with AWS in mind?

@thesasuser  - Reviewing the SAS Viya System Requirements for resources including storage would be a good starting point:

https://documentation.sas.com/doc/en/itopscdc/v_052/itopssr/n0ampbltwqgkjkn1j3qogztsbbu0.htm

Great Post! 

I keep coming back to this post multiple times when I plan the storage option for viya.

It would be really helpful to have the same guide for other platforms such as AWS and GCP.

Thanks,

Caili

The limit for emptyDir on AKS is 50GB 

@HansEdert 
I think this statement is not true anymore.

I have tested LTS 2024.09 on AKS 1.29 with the default emptyDir setting, and I could write 100 GB of data to SASWORK without problems.

  • hostPath

If we create the node pool with KubeletDiskType=Temporary, emptyDir will pick up the SSD as its volume, so there is no need to mount the SSD using hostPath.

Link: feat: Add OS and Kubelet disk type options. by Carus11 · Pull Request #385 · sassoftware/viya4-iac-a...

  • Quota on SASWORK

I found that we can set a quota on emptyDir by setting ephemeral-storage limits in the PodTemplate definition.

Example:

I set up an ephemeral-storage limit of 50 GB; the sas-compute-server-xxx pod will be deleted by k8s when SASWORK exceeds 50 GB.

---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: add-viya-volume-requests-limits-sas-batch-cmd-pod-template
patch: |-
  # Ensure resources exist
  - op: add
    path: /template/spec/containers/0/resources
    value: {}

  # Ensure requests exist
  - op: add
    path: /template/spec/containers/0/resources/requests
    value: {}

  # Add ephemeral-storage to requests if missing
  - op: add
    path: /template/spec/containers/0/resources/requests/ephemeral-storage
    value: "1Gi"

  # Ensure limits exist
  - op: add
    path: /template/spec/containers/0/resources/limits
    value: {}

  # Add ephemeral-storage to limits if missing
  - op: add
    path: /template/spec/containers/0/resources/limits/ephemeral-storage
    value: "50Gi"
target:
  kind: PodTemplate
  labelSelector: "sas.com/pod-container-image=sas-programming-environment"

 

Version history
Last update:
‎10-22-2022 08:02 AM
Updated by:
Contributors


