The SAS Viya Platform runtimes (SAS Compute Server, CAS) are I/O-intensive by design: they constantly move data into and out of the CPU for compute loads such as data management and analytics.
Data is the fuel of analytics computing engines. The faster SAS can read, process, and write the data, the sooner SAS can produce the intelligence to make the smartest decisions.
This principle remains just as true and important in the new world of clouds and containers.
There are two kinds of storage required by SAS:
In this article, we want to look at the latter kind of storage and see what solutions are available in the cloud (more specifically in Azure Kubernetes Service, aka "AKS", which seems to be the most popular platform for SAS Viya as of today).
To make the most of this article, it is assumed that you have basic sysadmin skills and understand the principles of Kubernetes storage. If that is not the case, a great starting point is my colleague Rob Colum’s article on this topic.
Before diving into the details, let’s agree on terminology for the temporary storage, to make sure we are talking about the same thing.
The terms “ephemeral storage” or "temporary storage" can correspond to very different things and lifetimes depending on the context in which you use them:
In the Cloud ecosystem, “ephemeral storage” can take a variety of forms (Azure temporary SSD disks, Azure Lsv2 NVMe flash disks, Azure ephemeral OS disk support in AKS, local SSDs for GKE, etc.). Each form of storage usually comes with its own rules, constraints, sizes, and performance characteristics.
We’ll come back to the available possibilities for the Azure Kubernetes Service in the following sections.
Attention: the information about instance characteristics is subject to change. The instance specifics discussed here are as of October 2021 (always refer to the latest version of the Azure VM sizes and pricing pages).
After a fresh standard deployment of SAS Viya, if you look at the type of volumes used for the SAS temporary files (SASWORK and CAS Disk Cache), you will quickly find out that the Kubernetes “emptyDir” volume type is used. It is the default configuration with SAS Viya 4.
As an example, we can see in the site.yaml extract below that emptyDir is used for the path that corresponds to SASWORK in the Pod Template definition of sas-compute-server.
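To give an idea, here is a simplified sketch of such a Pod Template definition (the volume name and mount path are illustrative, not necessarily the exact values generated in your site.yaml):

apiVersion: v1
kind: PodTemplate
metadata:
  name: sas-compute-job-config        # illustrative name
template:
  spec:
    containers:
      - name: sas-programming-environment
        volumeMounts:
          - name: viya                # illustrative volume name
            mountPath: /viya          # SASWORK is created under this path
    volumes:
      - name: viya
        emptyDir: {}                  # default: backed by the node's OS disk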
While emptyDir is ephemeral in the Kubernetes sense (it gets deleted as soon as the pod instance is scaled down to 0 or deleted), it usually corresponds to a fixed location on the operating system disk.
The content of emptyDir volumes physically resides under /var/lib/kubelet/pods/<PodID>/volumes/kubernetes.io~empty-dir/ on the node.
In Azure:
For all these reasons, the default emptyDir volume type should not be used for SASWORK and CAS Disk Cache in production deployments of SAS Viya in Azure.
Just like with SAS 9 and SAS Viya 3.5, changing the default SASWORK and CAS Disk Cache locations to dedicated file systems will very likely be one of the first things required at the customer site.
So, the good news is that Kustomize PatchTransformer examples and associated README files are provided (in the sas-bases folder) to change the default SASWORK and CAS Disk Cache locations.
Here is an extract of the transformer example to patch the sas-compute pod templates so they use a different volume type for SASWORK.
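Conceptually, the patch looks like the following sketch (the target label selector, the patch path, and the /sastmp/saswork hostPath are assumptions for illustration; the sas-bases examples also show other volume types, so refer to the associated README files for the exact content):

apiVersion: builtin
kind: PatchTransformer
metadata:
  name: add-saswork-hostpath
patch: |-
  # Replace the default emptyDir "viya" volume with a hostPath volume.
  # Assumes the "viya" volume is the first entry in the volumes list.
  - op: replace
    path: /template/spec/volumes/0
    value:
      name: viya
      hostPath:
        path: /sastmp/saswork     # must exist on the node
target:
  kind: PodTemplate
  labelSelector: "sas.com/template-intent=sas-launcher"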
And here is another one (from the SAS documentation) to mount a specific volume for the CAS Disk Cache and change its location.
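It follows the same logic; here is a simplified sketch (the /sastmp/casdiskcache path is an example value, and the exact patch targets may differ from the documented example):

apiVersion: builtin
kind: PatchTransformer
metadata:
  name: cas-add-disk-cache
patch: |-
  # Add a hostPath volume for the CAS Disk Cache, mount it in the CAS
  # container, and point the CAS Disk Cache environment variable at it.
  - op: add
    path: /spec/controllerTemplate/spec/volumes/-
    value:
      name: cas-disk-cache
      hostPath:
        path: /sastmp/casdiskcache
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
    value:
      name: cas-disk-cache
      mountPath: /sastmp/casdiskcache
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/env/-
    value:
      name: CASENV_CAS_DISK_CACHE
      value: /sastmp/casdiskcache
target:
  group: viya.sas.com
  kind: CASDeployment
  name: .*
  version: v1alpha1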
nfsPath will generally not be a good option, because we are looking for high-performance storage that supports a high level of concurrent read and write operations in these SAS temporary areas. The NFS protocol is usually not suited to this type of access.
In addition, nfsPath typically corresponds to persistent storage, which does not really match our needs.
So, the hostPath volume type is likely a better option for these SAS temporary file locations.
But now the question becomes: "What medium can I use in Azure for the node’s hostPath volume?"
The answer to this question really depends on the infrastructure capabilities. So, in this case, we need to explore and review the local storage options in Azure 🙂
As explained in the Azure documentation, “there are three main disk roles in Azure: the data disk, the OS disk, and the temporary disk”.
These roles map to disks that are attached to your virtual machine.
The official documentation tells us a little bit more about these three roles; for instance, on Linux VMs the temporary disk typically appears as /dev/sdb and is mounted on /mnt.
When you provision your AKS cluster with the IaC (Infrastructure as Code) tool for Azure, you can specify the OS disk size with the os_disk_size value as shown below:
As a result, if you connect to your AKS CAS node, you will see that the sda disk, which corresponds to your 200GB OS disk, is mounted on the root mountpoint (“/”).
But... if you look carefully at the screenshot, you might also notice that there is an additional 64GB disk mounted on /mnt.
It corresponds to the Azure instance “temporary disk”, and its size depends on the instance type and size. Some instance types don’t have any temporary storage; others provide it on local SSD drives whose size depends on the VM size. If you check out the instance type page there, you will see that the temporary storage for the “Standard_E4s_v3” instance (set in the terraform variable) is 64GB.
Here is an example for the E<X>ds_v4-series instances (currently recommended by the Global Sizing team and in recent Technical Papers on storage best practices in Azure).
X corresponds to the number of vCPUs on the Azure instance; as it increases, the size of the temporary disk also increases (as does the cost, of course…).
So, while we would generally prefer to use managed disks for our persistent volumes, it seems that this temporary storage could be a good solution for our SAS temporary files (SASWORK/UTILS and CAS Disk Cache).
One caveat is that the available disk space could be somewhat limited if the customer has a lot of data to process. In such a case, using an Lsv2-series instance could be an interesting alternative.
According to the Azure documentation: “The Lsv2-series features high throughput, low latency, directly mapped local NVMe storage running on the AMD EPYC™ 7551 processor… There is 8 GiB of memory per vCPU, and one 1.92TB NVMe SSD M.2 device per 8 vCPUs, with up to 19.2TB (10x1.92TB)”
Let’s have a look at the available Lsv2 instances.
Interesting! Local NVMe storage and up to almost 20 TB!
Here is what you’ll see if you connect to an Lsv2 instance:
Unlike with the temporary disk, there aren’t any volumes from the NVMe disk mounted (there aren’t any “partition" entries shown for the nvme0n1 disk). So it can’t be used directly after the VM has started; we’ll need to perform some storage operations first (for example, format and stripe the disks, then mount the volume as a new file system and create some directories).
It looks like Lsv2 might be an interesting instance type for our CAS and compute nodes, if we can redirect our SASWORK and/or CAS Disk Cache onto these NVMe drive(s).
One factor to consider is that, unlike the other instance types, the Lsv2 instances use AMD processors rather than Intel (AMD processors are technically supported, but Intel Xeon is strongly recommended for CAS servers, as noted there).
Using Cloud ephemeral storage for the SASWORK and CAS Disk Cache is not something new. It has already been done many times in the field either with SAS 9 or Viya 3.5 and Cloud Virtual Machines.
It requires two things:
It is similar with Viya 4: we need a “File system preparation” script that can run without manual intervention in case of node restart, failure, or automatic scaling. Remember that in AKS we are using Node pools with a minimum and maximum number of nodes, which means that nodes could be decommissioned and re-provisioned on the fly.
However, what is different with Viya 4, is that it runs inside a managed Kubernetes cluster. Injecting a bootstrap script on a managed node is not really possible or even allowed by the Cloud provider…
So instead of integrating our “File system preparation” script into the Linux “init” services, the idea is to use a Kubernetes object.
According to the Kubernetes documentation “A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected.”
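To illustrate the idea, here is a minimal sketch of such a DaemonSet, assuming a single NVMe device (/dev/nvme0n1) and a /sastmp mount point on the host; a real implementation would also handle multiple devices (striping), idempotency, and node selection:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepare-local-storage
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: prepare-local-storage
  template:
    metadata:
      labels:
        app: prepare-local-storage
    spec:
      tolerations:
        - operator: Exists             # run on all nodes, including tainted CAS/compute nodes
      containers:
        - name: prepare
          image: alpine:3
          securityContext:
            privileged: true           # required to format/mount host devices
          command:
            - /bin/sh
            - -c
            - |
              # Install the ext4 tools, format the NVMe device if it has no
              # file system yet, then mount it under /sastmp on the host.
              apk add --no-cache e2fsprogs
              if ! blkid /dev/nvme0n1 >/dev/null 2>&1; then
                mkfs.ext4 -F /dev/nvme0n1
              fi
              mkdir -p /host/sastmp
              mountpoint -q /host/sastmp || mount /dev/nvme0n1 /host/sastmp
              mkdir -p /host/sastmp/saswork /host/sastmp/casdiskcache
              chmod 777 /host/sastmp/saswork /host/sastmp/casdiskcache
              # Keep the pod alive so the DaemonSet stays "running" on the node.
              while true; do sleep 3600; done
          volumeMounts:
            - name: host-root
              mountPath: /host
              mountPropagation: Bidirectional   # so the new mount is visible on the host
      volumes:
        - name: host-root
          hostPath:
            path: /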
Several open-source projects provide this DaemonSet implementation to automate the temporary storage preparation on the AKS nodes, for example:
However, as of today, there is no officially supported or documented SAS method to use such techniques to configure performant ephemeral storage (e.g., SSD or NVMe).
As explained in the Azure Kubernetes Service documentation: “By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss…. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write latency.
By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. This provides lower read/write latency, along with faster node scaling and cluster upgrades.”
So using ephemeral OS disk presents several benefits in AKS:
However, the constraint of the ephemeral OS disk is that, when using it, the OS disk must fit in the VM cache.
This means that the node’s OS disk size (which you set when you define your node pools) cannot exceed the VM cache value, which is specific to the instance type (usually about two thirds of the space provided by the temporary storage device) and is listed in the instance description table.
When we look at the instance types tables, the VM cache value is the number in parentheses next to IO throughput ("cache size in GiB").
As an example, on a Standard E32ds_v4 instance, the ephemeral OS disk size cannot exceed 800GB. It also means that when using the IaC tool and setting the value of the os_disk_size variable, you must ensure it stays below the “cache size” number for the chosen instance.
Keep in mind that this space is also used for the operating system and the Docker images; it cannot be dedicated in its entirety to SASWORK and/or CAS Disk Cache.
When using the ephemeral OS disk, we still run the risk of filling up the node/VM’s root file system (via /var/lib/kubelet) when users process "large" data sets. With the ephemeral OS disk, the SASWORK and CAS Disk Cache areas are not isolated in distinct file systems.
Note: while it has not been tested, the pods can be defined with "ephemeral-storage" resource requests and limits. Implementing such definitions might prevent over-utilization of the node/VM’s OS disk space; instead, it would trigger node autoscaling when there is no more ephemeral storage available on the node.
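For example (untested, values purely illustrative), the container resources could be patched with something like:

resources:
  requests:
    ephemeral-storage: "50Gi"    # the scheduler only places the pod on nodes with this much allocatable
  limits:
    ephemeral-storage: "100Gi"   # the pod is evicted if it writes more than this to its local volumes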
When using the ephemeral OS disk, it is possible to keep the default emptyDir volume definition and avoid the PatchTransformer and DaemonSet configuration steps altogether. However, in such a case we still have the 50 GB limit per emptyDir in AKS.
Here is a little table that summarizes the available options for SAS temporary file storage in Azure.
Links to instances types : Edsv4-series and Lsv2-series
There are also some additional security considerations to take into account when choosing the local storage: concerns have been raised regarding the hostPath volume type. In Azure, Gatekeeper and Azure Policy could be used to address issues like the hostPath problem.
Another potential solution, if hostPath volumes are not allowed would be to use a storage provisioner and a PV instead (see this page for additional details).
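As a rough illustration, such a provisioner ultimately exposes the local disk as a PersistentVolume similar to this sketch (node name, path, capacity, and storage class name are example values):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-sastmp
spec:
  capacity:
    storage: 1500Gi                      # example capacity
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage        # example storage class name
  local:
    path: /sastmp                        # the prepared local file system
  nodeAffinity:                          # pins the PV to the node that owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - aks-cas-00000000-vmss000000   # example node name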
As we can see, there is no perfect solution; each option has benefits and concerns.
Finally, as is often the case, it will be a matter of choosing the option that best suits the customer’s needs and requirements.
However, as my dear colleague Rob told me the other day, “for SASWORK and CAS Disk Cache, we should be able to take advantage of any ephemeral storage available on the node.” This I think is a good summary of this article 😊.
In a follow-up article, I’m planning to dive a little deeper into the implementation details and show examples of how these solutions can be implemented.
Find more articles from SAS Global Enablement and Learning here.