
5 SAS Viya Workload Management Customizations You Didn’t Know You Needed


Every SAS Viya platform deployment includes the SAS Workload Orchestrator, a powerful tool for managing SAS programming runtime workloads. But every organization is different, and you can tailor it to your needs with a suite of deployment customizations. In this article, we’ll walk through each customization option, exploring not just the “how,” but also the “why”.

 

 

Getting Started: What Can You Customize?

 

By default, SAS Workload Orchestrator (SWO) is deployed as part of the SAS Viya platform. However, you’re not locked into a one-size-fits-all setup. SAS Viya provides optional transformers that let you tailor the deployment to your needs. The main areas you can customize are:

 

  • Disabling the service
  • Defining ClusterRole and ClusterRoleBinding
  • Customizing manager and server pod resource requests and limits
  • Mounting volumes with user-defined scripts
  • Loading an initial configuration at deployment time

 

Let’s dive into each of these, drawing on the practical notes and recommendations from both the official documentation and the README file that you can find in your deployment assets at $deploy/sas-bases/examples/sas-workload-orchestrator/configure/README.md.

 

 

1. Disabling the Service: When Less is More

 

Not every customer needs workload management. If you’re not leveraging workload management capabilities, you might prefer not to run the SAS Workload Orchestration pods at all. SAS Viya lets you disable SAS Workload Orchestrator either automatically (permanently) or manually (temporarily). The distinction is important: automatic disabling persists even after updates, while manual disabling is reset if you update the deployment.

 

Both methods involve disabling the SAS Workload Orchestrator StatefulSet and DaemonSet and setting an environment variable in the SAS Launcher pods to communicate the service’s status. To disable it automatically, add the patch transformer to your main kustomization.yaml file; this ensures no SAS Workload Orchestrator pods are created:

 

transformers:
...
- sas-bases/overlays/sas-workload-orchestrator/enable-disable/sas-workload-orchestrator-disable-patch-transformer.yaml

 

For manual control, you can use the kubectl patch command to apply the patch files found in the $deploy/sas-bases/overlays/sas-workload-orchestrator/enable-disable directory.
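As a rough sketch of what those manual commands look like (the namespace and patch file names below are placeholders; the README in that directory lists the exact files and resources to patch):

# Illustrative only: substitute your namespace and the actual patch file names
# from the enable-disable directory
kubectl -n <viya-namespace> patch statefulset sas-workload-orchestrator \
  --patch-file $deploy/sas-bases/overlays/sas-workload-orchestrator/enable-disable/<statefulset-disable-patch>.yaml
kubectl -n <viya-namespace> patch daemonset sas-workload-orchestrator \
  --patch-file $deploy/sas-bases/overlays/sas-workload-orchestrator/enable-disable/<daemonset-disable-patch>.yaml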

 

The takeaway: SAS Workload Orchestrator is enabled by default, so unless you specifically disable it, it’ll be running. This option is especially useful for customers who currently prefer not to use SAS Workload Orchestrator to manage their SAS programming runtime workloads.

 

 

2. Defining ClusterRole and ClusterRoleBinding: Unlocking Full Functionality

 

To get the most out of SAS Workload Orchestrator, you need to grant it permission to gather information about resources on your Kubernetes nodes. This is done by defining a ClusterRole and binding it to the sas-workload-orchestrator service account with a ClusterRoleBinding. Why does this matter? Without these permissions, SAS Workload Orchestrator can’t accurately assess node resources, which can limit its scheduling capabilities.

 

The deployment guide recommends including the $deploy/sas-bases/overlays/sas-workload-orchestrator overlay in your kustomization.yaml. This step is automated if you use the SAS Viya Deployment GitHub project.
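If you maintain the kustomization.yaml yourself, the change mirrors the transformer example above, except that this overlay is added to the resources block:

resources:
...
- sas-bases/overlays/sas-workload-orchestrator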

 

What information does this unlock? SAS Workload Orchestrator can now see:

 

  • Node presence (is the node known to Kubernetes?)
  • Allocatable cores and memory (what’s available for scheduling?)
  • Node labels (used for matching nodes to host types and integrating with the Cluster Autoscaler)
  • Node ‘unschedulable’ status (has the node been cordoned off?)
  • Resources used by pods from other namespaces (to avoid overcommitting resources)

 

If you skip this step, workload management still works, but you’ll lose some automation. Administrators may then have to tune or manage nodes manually, for example by preventing pods from third-party application namespaces from running on SAS Viya nodes, or by closing hosts (nodes) in SAS Environment Manager when a node is cordoned.

 

 

3. Customizing Manager and Server Pod Resource Requests and Limits: Scaling for Your Needs

 

Kubernetes pods have resource requests and limits for CPU and memory, and SAS Workload Orchestrator is no exception. SWO Manager pods handle REST API calls and manage host, job, and queue information, while SWO Server pods interact with Kubernetes to manage resources and jobs on individual nodes.

 

Default values support hundreds of concurrent jobs, but if you need to scale further, you can increase these pods’ resource requests and limits. SWO Manager pods typically use more resources than SWO Server pods, so plan accordingly.

 

If you’re running many hundreds or even thousands of jobs concurrently, you can tune these settings. SAS provides sample transformers in the $deploy/sas-bases/examples/sas-workload-orchestrator/configure folder to help with this configuration. The key is to set identical values for requests and limits: this gives the SAS Workload Orchestrator pods the Guaranteed Quality of Service class.
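To make this concrete, here is a minimal sketch of what such a patch transformer could look like for the manager StatefulSet. It is modeled loosely on the samples in that folder; the transformer name, the container index, and the CPU and memory values are illustrative, so copy the exact structure from the sample files:

apiVersion: builtin
kind: PatchTransformer
metadata:
  name: swo-manager-resources            # illustrative name
patch: |-
  # Identical requests and limits give the pod the Guaranteed QoS class
  - op: replace
    path: /spec/template/spec/containers/0/resources/requests/cpu
    value: "2"
  - op: replace
    path: /spec/template/spec/containers/0/resources/limits/cpu
    value: "2"
  - op: replace
    path: /spec/template/spec/containers/0/resources/requests/memory
    value: 2Gi
  - op: replace
    path: /spec/template/spec/containers/0/resources/limits/memory
    value: 2Gi
target:
  kind: StatefulSet
  name: sas-workload-orchestrator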

 

 

4. Mounting Volumes with User-Defined Scripts: Custom Metrics for Smarter Scheduling

 

Sometimes, you need to schedule jobs based on custom metrics, like the number of GPUs on a node or available disk space. SAS Workload Orchestrator supports scripts that measure user-defined resources.

 

SWO Manager pods run scripts for global resources (e.g., total licenses of some product that can be used anywhere on the cluster), while SWO Server pods handle node-specific metrics (e.g., GPU count or available disk space). This flexibility lets you tailor scheduling to your environment’s unique needs.

 

Here’s how it works: place your scripts in a Kubernetes volume named “scripts,” and mount the volume in the StatefulSet or DaemonSet definition. SAS Workload Orchestrator will automatically map the volume and look for scripts in the “/scripts” folder.

 

You can find sample transformers to configure the volume mounts for both the StatefulSet and the DaemonSet in the $deploy/sas-bases/examples/sas-workload-orchestrator/configure folder.
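As an illustration, a transformer that adds a ConfigMap-backed “scripts” volume to the server DaemonSet could look roughly like this. It is a sketch only: the transformer and ConfigMap names are hypothetical, and the sample transformers in the configure folder show the exact structure and supported volume types:

apiVersion: builtin
kind: PatchTransformer
metadata:
  name: swo-server-scripts-volume        # illustrative name
patch: |-
  - op: add
    path: /spec/template/spec/volumes/-
    value:
      name: scripts                      # SAS Workload Orchestrator expects this volume name
      configMap:
        name: swo-user-scripts           # hypothetical ConfigMap containing your scripts
        defaultMode: 0755                # scripts must be executable
target:
  kind: DaemonSet
  name: sas-workload-orchestrator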

 

 

5. Loading an Initial Configuration at Deployment Time: Automation-Friendly Deployments

 

Starting with SAS Viya 2024.09, the default SAS Workload Orchestrator configuration is loaded at deployment time from a Kubernetes ConfigMap called sas-workload-orchestrator-initial-configuration. If you want to preload your own configuration, say, to integrate with automation pipelines, you can modify this ConfigMap with a patch transformer. SAS provides a sample patch transformer … yes, you guessed right, in the $deploy/sas-bases/examples/sas-workload-orchestrator/configure folder.
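Conceptually, that patch transformer just replaces the data of the ConfigMap with your own JSON. A rough sketch, assuming a hypothetical data key (take the real key name and JSON structure from the sample file, not from this snippet):

apiVersion: builtin
kind: PatchTransformer
metadata:
  name: swo-initial-configuration        # illustrative name
patch: |-
  - op: replace
    path: /data
    value:
      configuration.json: |              # hypothetical key; replace {} with your exported JSON
        {}
target:
  kind: ConfigMap
  name: sas-workload-orchestrator-initial-configuration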

 

A practical tip: export a JSON configuration from a test environment and load it into the ConfigMap during the deployment of another environment, such as production. Remember, the ConfigMap is only read during initial deployment. After that, you can load custom configurations via SAS Environment Manager or the sas-viya CLI.

 

 

Summary

 

SAS Viya’s Workload Orchestrator is designed for flexibility. Whether you’re fine-tuning resources, providing custom metrics, or targeting specialized hardware, these customizations empower you to optimize your analytics environment. With thoughtful configuration, you can scale, automate, and target resources for maximum efficiency. Are you ready to try it yourself? Enroll in the Architecture and Administration for SAS® Workload Management on SAS® Viya® course, available on learn.sas.com.

 

 

Find more articles from SAS Global Enablement and Learning here.
