Last month, our team was contacted by a colleague in the field to help with a customer request. The customer was interested in discussing the PSS (Pod Security Standards) settings in relation to a default SAS Viya installation.
More specifically, they were asking for guidance regarding several perceived security violations in their environment after the deployment of SAS Viya 2025.01.
Looking into the details of the "policy violations" report, and with the help of colleagues experienced in this area (Alexander Koller, Carus Kyle), we were able to provide some guidance.
We've since had a number of queries about Kubernetes Pod Security Standards and other kinds of pod policies in a Kubernetes cluster, so we thought it was the right time to write a little blog discussing this topic through a customer use case. 🙂
First, a little reminder about PSS
As a Kubernetes administrator in charge of cluster security, you should implement security standards for your pods. Until version 1.25, "Pod Security Policies" were the mechanism provided in Kubernetes to implement these standards. However, while they allowed very granular control of the pods, there were complaints about the complexity of these fine-grained policies.
As a result, the "Pod Security Policies" were removed in Kubernetes 1.25, and administrators now have the choice between:
Pod Security Admission: a built-in Kubernetes mechanism to enforce the Pod Security Standards with 3 levels of isolation: privileged, baseline, and restricted (see the example after this list).
a third-party admission plugin that they can deploy and configure themselves.
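For example, with Pod Security Admission, a profile is enforced simply by labeling the namespace. Here is a minimal sketch on a hypothetical namespace:

```yaml
# Enforce the baseline profile on this namespace, and emit warnings
# for anything that would additionally violate the restricted profile
apiVersion: v1
kind: Namespace
metadata:
  name: sasviya
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```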
So, the security rules that exist in a Kubernetes cluster (and that can prevent a SAS Viya deployment) come either from the Kubernetes PSA configuration or from third-party plugins (such as Gatekeeper/OPA, Kyverno, etc.).
One major difference between the deprecated "Pod Security Policies" and the new "Pod Security Admission with the Pod Security Standards" is that, once a PSS profile (baseline or restricted) is enforced on a namespace, it is not possible to define exceptions for individual pods.
List of policy violations
Now let’s come back to our customer’s case, so we can illustrate the PSS concept.
Here is the “policy violations” report as provided by the customer:
1. container-must-have-limits — 1 violation

violations:
- enforcementAction: deny
  group: ''
  kind: Pod
  message: >-
    container <sas-cas-server> memory limit <28Gi> is higher than the maximum allowed of <8Gi>
  name: sas-cas-server-default-controller
  namespace: sasviya
  version: v1

2. psp-pods-allowed-user-ranges — 879 violations

violations:
- enforcementAction: deny
  group: ''
  kind: Pod
  message: >-
    Container hydrator is attempting to run without a required
    securityContext/runAsGroup. Allowed runAsGroup: {"ranges": [{"max": 1337, "min": 0}], "rule": "MustRunAs"}
  name: sas-annotations-6644dc6f99-cmz97
  namespace: sasviya
  version: v1
- enforcementAction: deny
  group: ''
  kind: Pod
  message: >-
    Container hydrator is attempting to run without a required securityContext/runAsUser
  name: sas-annotations-6644dc6f99-cmz97
  namespace: sasviya
  version: v1

3. psp-allow-privilege-escalation-container — 1 violation

violations:
- enforcementAction: deny
  group: ''
  kind: Pod
  message: 'Privilege escalation container is not allowed: sas-opendistro-sysctl'
  name: sas-opendistro-default-0
  namespace: sasviya
  version: v1

4. psp-host-filesystem — 1 violation

violations:
- enforcementAction: deny
  group: ''
  kind: Pod
  message: >-
    HostPath volume {"hostPath": {"path": "/mnt/nvme-disks", "type": ""}, "name": "cas-disk-cache"} is not allowed, pod: sas-cas-server-default-controller. Allowed path: [{"pathPrefix": "/foo", "readOnly": true}]
  name: sas-cas-server-default-controller
  namespace: sasviya
  version: v1

5. psp-privileged-container — 1 violation

violations:
- enforcementAction: deny
  group: ''
  kind: Pod
  message: >-
    Privileged container is not allowed: sas-opendistro-sysctl, securityContext: {"allowPrivilegeEscalation": true, "capabilities": {"drop": ["ALL"]}, "privileged": true, "readOnlyRootFilesystem": true, "runAsNonRoot": false}
  name: sas-opendistro-default-0
  namespace: sasviya
  version: v1
Let’s review each type of violation and try to understand what it means.
#1 container-must-have-limits
As the name suggests, this policy seems to enforce restrictions on the resource limits that can be set for a container.
The description (or violation message) is pretty clear here: the memory limit is set to 28Gi for our sas-cas-server-default-controller container, but it is higher than the maximum allowed of 8Gi.
The CAS server has unique attributes and requirements. Because it has been engineered to use all available resources to complete each request in the shortest amount of time, CAS is usually configured to run on dedicated nodes, using most of the available resources.
By default, CAS operates with auto-resourcing enabled, allowing the server to dynamically adjust resource usage. In that case, the CAS Operator scans the node specifications and allocates around 80% of the available memory to the CAS worker pods. This is likely the reason why the memory limit is set to 28Gi, which, in this case, does not meet this custom and general policy of not exceeding 8Gi for any container.
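If the limit really had to be lowered, the CAS resources could be set explicitly instead of relying on auto-resourcing (SAS provides ready-made examples for managing CAS CPU and memory under sas-bases). Below is a minimal sketch of what such a kustomize PatchTransformer could look like; the file name, values, and patch paths are illustrative and must be adapted to your CASDeployment spec:

```yaml
# Hypothetical PatchTransformer setting an explicit CAS memory limit
# (illustrative values and paths; not the official sas-bases example)
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: cas-manage-memory
patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/resources/limits/memory
    value: 8Gi
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/resources/requests/memory
    value: 8Gi
target:
  group: viya.sas.com
  version: v1alpha1
  kind: CASDeployment
  name: .*
```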
#2 psp-pods-allowed-user-ranges
According to its name and the associated description, this policy is related to the user account used to run the container.
When looking at the description, it looks, at first, like there is a specific issue with the sas-annotations pod and its hydrator container... But in hindsight, it is likely that this violation was actually observed for most of the SAS Viya containers, because there are 879 violations of the policy 😊 (so it probably also includes the init containers referenced in the Viya pods).
According to the violation message, the broken rules here are that neither the runAsGroup nor the runAsUser specification is set in the Security Context of the containers.
In Kubernetes, a Security Context "defines privilege and access control settings for a Pod or Container". It can be used to force all the processes inside a container to run under the same user, to add or remove Linux capabilities for a container, and for many other security controls. Refer to the Kubernetes documentation for more details on Security Contexts.
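For illustration, here is a generic example (not a SAS Viya manifest) showing that a securityContext can be defined at both the pod level and the container level:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:              # pod level: applies to all containers
    runAsNonRoot: true
    fsGroup: 2000
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:            # container level: overrides the pod level
      runAsUser: 1000
      runAsGroup: 3000
      allowPrivilegeEscalation: false
```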
When looking at the securityContext specification of the hydrator container in the sas-annotations pod, we can indeed confirm the absence of the runAsGroup and runAsUser specifications.
I'm using k9s in this example, but you could use a kubectl command to get the same information.
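For example, to extract the securityContext of the hydrator container from the pod reported above (adjust the pod name and namespace to your environment):

```sh
kubectl -n sasviya get pod sas-annotations-6644dc6f99-cmz97 \
  -o jsonpath='{.spec.containers[?(@.name=="hydrator")].securityContext}'
```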
You can even use the script below to run a report across all the SecurityContexts that are set for the pods and containers in a given namespace:
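The sketch below shows the general idea of such a report, using kubectl and jq (a minimal illustration, not the full script):

```sh
#!/bin/bash
# Report pod- and container-level securityContext settings in a namespace.
# Minimal sketch using kubectl + jq (both assumed to be installed).
NS="${1:-sasviya}"

kubectl get pods -n "$NS" -o json | jq -r '
  .items[] |
  "POD: \(.metadata.name)",
  "  podSecurityContext: \(.spec.securityContext // "not set")",
  (.spec.containers[] |
    "  CONTAINER: \(.name)",
    "    securityContext: \(.securityContext // "not set")")
'
```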
The Security Context can be set at both the Pod level and the Container level. The report tells you, for each pod, whether a PodSecurityContext is set and, for each container inside the pod, whether a SecurityContext is set. When set, the securityContext specification is provided.
For most of the SAS Viya pods, you'll see that the runAsGroup and runAsUser attributes are not set in the SecurityContext.
The only pods (in this 2025.02 Viya deployment) with these attributes set are the CAS and sas-pyconfig pods.
Example:
Here, the runAsUser field specifies that, for any container in the CAS controller pod, all processes run with user ID 1001.
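Roughly, the relevant fragment of the pod specification looks like this (a simplified sketch, not the complete CAS controller spec):

```yaml
# Pod-level securityContext similar to what is set on the CAS controller pod
spec:
  securityContext:
    runAsUser: 1001      # all container processes run with UID 1001 (the sas account)
    runAsNonRoot: true
```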
Although runAsUser is not set for (almost) all the SAS Viya pods, if you run the ps -ef command inside a SAS Viya container, you will see that all the processes run under the sas account. That's because, in Kubernetes, when runAsUser is not set in the securityContext, the main process of the container runs with the user ID specified in the container image (the USER instruction in the image's Dockerfile).
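You can verify the effective user without opening an interactive shell; for instance (assuming the id command is available in the image):

```sh
kubectl -n sasviya exec deploy/sas-annotations -c hydrator -- id
```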
Additionally, while the runAsGroup and runAsUser attributes are not set, what we see in the report is that most of the SAS Viya pods have the runAsNonRoot attribute set to true in their pod Security Context (which requires all the containers in the pod to run as non-root users).
Interestingly, setting this runAsNonRoot attribute to true meets one of the policy requirements of the restricted isolation level in the Kubernetes Pod Security Standards definition (see the profile details page linked in the references below).
#3 psp-allow-privilege-escalation-container
This policy is related to privilege escalation (such as via the set-user-ID or set-group-ID file modes), which may sometimes be required for some of the SAS Viya components or for specific Viya customizations.
The message tells us that "Privilege escalation container is not allowed" for the sas-opendistro-sysctl container inside the sas-opendistro-default-0 pod.
You can confirm this by looking at the container's Security Context.
It is a known and documented issue with the deployment of OpenSearch inside the SAS Viya platform.
The offending container is created by the sysctl-transformer.yaml that is set by default in the "transformers:" section of the kustomization.yaml file:
sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
As documented in the SAS Viya Platform Operations guide, the purpose of the transformer is to set the vm.max_map_count kernel parameter on the hosts underlying OpenSearch, to ensure there is adequate virtual memory available for accessing the search indices.
The SAS Documentation explains that, instead of using the transformer, it is possible to set the vm.max_map_count in advance on the hosts where the OpenSearch pods will be running (stateful nodes).
While you could do it manually on the nodes using the commands provided in the SAS documentation (basically, run the sysctl -w vm.max_map_count=262144 command as root on the host), you can also plan for it in the infrastructure provisioning, using a custom script (or "user data") to be run when the VM is provisioned (see examples for Azure, Google Cloud, or AWS).
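As an illustration, a minimal user-data sketch could look like this (assuming a cloud-init-style script run as root at first boot; the file name under /etc/sysctl.d is arbitrary):

```sh
#!/bin/bash
# Set vm.max_map_count for OpenSearch at provisioning time,
# and persist it across reboots (illustrative sketch).
echo 'vm.max_map_count=262144' > /etc/sysctl.d/99-opensearch.conf
sysctl --system
```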
Also note that a PR (Pull Request) has been submitted in the IAC for Azure GitHub repository to enable Terraform to implement this setting when the stateful nodes are provisioned.
#4 psp-host-filesystem
Once again, the policy violation is pretty clear: HostPath volume {"hostPath": {"path": "/mnt/nvme-disks", "type": ""}, "name": "cas-disk-cache"} is not allowed.
While it’s often the most performant option, the usage of the hostPath volume type is generally seen as a security vulnerability in Kubernetes.
According to the violation description, the hostPath volume type has been configured for the CAS Disk Cache location.
A possible workaround is not to mount the hostPath location directly into the pod but, instead, to create a PersistentVolumeClaim pointing to a PersistentVolume that uses the local-storage or manual storage classes (Alexander Koller also provides nice examples of this approach).
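A minimal sketch of that pattern could look like this (the names, capacity, and node name are illustrative):

```yaml
# Local PersistentVolume backed by the NVMe mount on a specific node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cas-disk-cache-pv
spec:
  capacity:
    storage: 200Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/nvme-disks
  nodeAffinity:                      # a local PV must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["cas-node-1"]     # illustrative node name
---
# Claim that CAS can mount instead of the hostPath volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cas-disk-cache-pvc
  namespace: sasviya
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 200Gi
```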
If you are deploying SAS Viya in open source Kubernetes, a local-storage class should already exist for you.
If you are deploying in the cloud, you can also explore different options for the CAS Disk Cache, such as Kubernetes generic ephemeral volumes. If SAS Viya is deployed in Azure, you could also use the emptyDir volume type with an ephemeral OS disk (as demonstrated in an open Pull Request that has been submitted in the IAC for Azure GitHub repository).
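For reference, a generic ephemeral volume for the CAS disk cache could be declared roughly like this (the storage class and size are illustrative):

```yaml
# Illustrative generic ephemeral volume declaration for cas-disk-cache
volumes:
- name: cas-disk-cache
  ephemeral:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-csi   # illustrative cloud storage class
        resources:
          requests:
            storage: 200Gi
```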
But I’ll stop here, as the topic of "avoiding the use of hostPath and NFS volumes in a SAS Viya deployment to comply with security standards" would deserve its own post 😊.
#5 psp-privileged-container
Once again, the issue here is with the OpenSearch sysctl container.
According to the message, what triggers the violation here are the privileged: true and runAsNonRoot: false settings in the container's securityContext specifications. Note that, in the Kubernetes built-in PSS, these settings are forbidden by the baseline and restricted profiles respectively.
As explained above, this violation is the result of the sysctl-transformer.yaml transformer that is included by default in the kustomization.yaml for the OpenSearch container, and the same solutions/workarounds apply.
Summary and answers for the security violations
So, let's summarize the violations and their solutions or workarounds in a table (as they were provided to our colleague working with the customer).
This information will help the customer understand why some constraints were not honored during the Viya deployment and how to fix the issues.
1. container-must-have-limits

message: >-
  container <sas-cas-server> memory limit <28Gi> is higher than the maximum allowed of <8Gi>

Policy source: Gatekeeper custom policy.
Notes/Solution: The CAS CPU/memory resource settings (limits/requests) can be changed, and the memory limit can be set below 8GB. However, it will likely impact the CAS topology, workload placement, and performance, and potentially limit the amount of data that can be loaded. To be clear: yes, we can tune the CAS container so that its memory limit meets the 8GB maximum allowed, but does it fit with the architecture design and requirements agreed with the customer regarding CAS? (Most of the time, CAS is the component in SAS Viya that is expected to fully utilize the host RAM.) If it does not, the customer could consider adding an exception to the policy for this specific pod.

2. psp-pods-allowed-user-ranges

message: >-
  Container hydrator is attempting to run without a required securityContext/runAsGroup. Allowed runAsGroup: {"ranges": [{"max": 1337, "min": 0}], "rule": "MustRunAs"}
message: >-
  Container hydrator is attempting to run without a required securityContext/runAsUser

Policy source: Gatekeeper custom policy.
Notes/Solution: This policy controls the user and group IDs of the container (more specifically the runAsUser, runAsGroup, supplementalGroups, and fsGroup specs). In SAS Viya, except for the CAS and Compute pods, the runAsUser and runAsGroup attributes are not set. As a workaround, you can write a PatchTransformer that adds the runAsUser attribute to the pod definition with the correct user ID, 1001 (see the sketch after this table). Since most of the SAS Viya pods have the runAsNonRoot parameter set to true, asking the customer for an exception is another option to consider.

3. psp-allow-privilege-escalation-container

message: 'Privilege escalation container is not allowed: sas-opendistro-sysctl'

Policy source: Kubernetes PSS, restricted isolation level.
Notes/Solution: See the OpenSearch section in the SAS Viya Platform Operations guide.

4. psp-host-filesystem

message: >-
  HostPath volume {"hostPath": {"path": "/mnt/nvme-disks", "type": ""}, "name": "cas-disk-cache"} is not allowed, pod: sas-cas-server-default-controller.

Policy source: Kubernetes PSS, baseline isolation level.
Notes/Solution: If hostPath is not allowed, then the customer needs to find another Kubernetes volume type for the CAS disk cache. Using a generic ephemeral volume with an AWS managed disk may be a good solution. Performance may be degraded compared to using local NVMe drives, though.

5. psp-privileged-container

message: >-
  Privileged container is not allowed: sas-opendistro-sysctl, securityContext: {"allowPrivilegeEscalation": true, "capabilities": {"drop": ["ALL"]}, "privileged": true, "readOnlyRootFilesystem": true, "runAsNonRoot": false}

Policy source: Kubernetes PSS, baseline and restricted isolation levels.
Notes/Solution: See the OpenSearch section in the SAS Viya Platform Operations guide.
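As mentioned in item 2, here is a hypothetical sketch of such a PatchTransformer. The name, target selection, and label are illustrative and must be reviewed against your own manifests (and note that a few pods, like CAS, already define their own user settings):

```yaml
# Hypothetical PatchTransformer adding runAsUser/runAsGroup
# to the pod templates of SAS Viya deployments (illustrative)
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: add-run-as-user
patch: |-
  - op: add
    path: /spec/template/spec/securityContext/runAsUser
    value: 1001
  - op: add
    path: /spec/template/spec/securityContext/runAsGroup
    value: 1001
target:
  kind: Deployment
  labelSelector: "sas.com/deployment=sas-viya"   # illustrative selector
```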
Conclusion
More and more customers investing in Kubernetes are now implementing cluster security policies, using the Kubernetes built-in Pod Security Standards and/or third-party tools.
In this particular case, the customer was deploying SAS Viya in AWS. They had a mix of PSS at the restricted isolation level and specific Gatekeeper constraints.
Note that we have seen some customer cases where additional security rules were also enforced by turning on the cloud vendor's built-in policy engine, such as the Azure AKS built-in policies.
On the other hand, the SAS Viya platform has very specific requirements and supports a wide range of customizations that may not meet the generic policies implemented in the cluster by the customer.
That’s the reason why it is important to understand exactly what the constraints are, in order to determine workarounds or required configuration changes in the SAS Viya platform (when available).
Sometimes, the violations of the policies in place are indicative of a real problem. Other times, like here, they simply need an explanation and sometimes an exception to the policies can be implemented.
While it is possible, in general, for SAS Viya to run in a cluster where the Kubernetes built-in restricted PSS is enforced, it often requires finding alternatives to commonly used implementations (e.g., CAS and Compute access to data through NFS or hostPath, enabling host accounts on CAS), and some specific customizations may require workarounds or may even not be possible (e.g., enabling SAS Watchdog to monitor SAS sessions, executing Python code in MAS, etc.).
That's it for today, I hope it was helpful!
References
SAS Viya Platform Operations documentation: Pod Security Admissions
Security Context: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Pod Security Standards, profile details: https://kubernetes.io/docs/concepts/security/pod-security-standards/#profile-details
Find more articles from SAS Global Enablement and Learning here.