
SAS Workload Management on SAS Viya: Deployment Architecture

Started ‎01-12-2022 · Modified ‎01-12-2022

SAS Workload Management is generally available on SAS Viya starting with the October/November 2021 releases, including the 2021.2 Long Term Support cadence and the 2021.1.6 Stable cadence.


It’s an add-on offering in the SAS Viya portfolio: this means that it can be added to any SAS Viya product to provide capabilities currently present in SAS Grid Manager on SAS 9, as explained in the post SAS Grid and SAS Viya: together to provide advanced workload management.


How is the architecture impacted when SAS Workload Management is added to a SAS Viya environment? What additional pods are deployed in Kubernetes?


SAS Workload Management Deployment Architecture at a Glance


SAS Workload Management provides multiple additions to a SAS Viya deployment:


  1. SAS Workload Orchestrator Manager – implemented as a Kubernetes StatefulSet
  2. SAS Workload Orchestrator Daemon – implemented as a Kubernetes DaemonSet
  3. SAS Workload Orchestrator Page in SAS Environment Manager
  4. SAS Workload Orchestrator Dashboards in Grafana

Plus additional Kubernetes artifacts such as supporting ConfigMaps, Secrets, ServiceAccounts, etc.


It’s worth noting that, just like with SAS Grid Manager on SAS 9, a deployment of SAS Workload Management happens “inside” the corresponding SAS Viya environment, i.e. it is deployed in the same SAS Viya namespace as the pods it will manage. For this reason, there is a one-to-one relationship between each SAS Workload Management deployment and its SAS Viya deployment.


The following diagram shows a sample Viya environment deployed on 8 nodes. In this example, nodes 1-2-3 are dedicated to stateful and stateless pods; nodes 4-5 are for compute servers, and nodes 6-7-8 host an MPP CAS instance. The blue horizontal band highlights that, when SAS Workload Management is added to the license, additional pods are deployed on the nodes.





Let’s review the 4 components listed above, and the pods that host them.


SAS Workload Orchestrator Manager

SAS Workload Orchestrator (SWO) is the main component of SAS Workload Management. This managing component hosts the “business logic”, i.e. the engine of SAS Workload Orchestrator in charge of receiving job submissions, making scheduling decisions, maintaining the state of the system, and so on.


It is implemented as a Kubernetes StatefulSet, represented in the diagram above by the red pods on the left. For high availability, the StatefulSet is defined with 2 replicas by default: the environment will have two pods, called sas-workload-orchestrator-0 and sas-workload-orchestrator-1. By default, sas-workload-orchestrator-0 is active, while sas-workload-orchestrator-1 is on standby, ready to step in if the other fails. They have the same labels and tolerations as other SAS stateful pods, so Kubernetes starts them on the same nodes.
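As a quick post-deployment check, the manager pods and their StatefulSet can be listed with kubectl. The namespace and label selector below are illustrative assumptions, not guaranteed values; adjust them to your deployment:

```sh
# List the SAS Workload Orchestrator manager pods (the two StatefulSet replicas).
# "sas-viya" is an example namespace; the label selector is an assumption.
kubectl -n sas-viya get pods -l app=sas-workload-orchestrator -o wide

# Inspect the StatefulSet itself to confirm the 2-replica HA setup.
kubectl -n sas-viya get statefulset sas-workload-orchestrator
```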


SAS Workload Orchestrator Daemon

SAS Workload Orchestrator Server instances run on every compute server node to monitor the node's resources and manage the locally running jobs. Each instance shares this information with the SAS Workload Orchestrator Manager and receives from the Manager the commands to start/stop jobs on its node.


The SAS Workload Orchestrator Server Daemon is implemented as a Kubernetes DaemonSet that requests to run its pods only on nodes carrying the compute workload label (workload.sas.com/class=compute).
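The targeting mechanism is a standard Kubernetes nodeSelector. The following is a simplified sketch of what the DaemonSet spec expresses, not the shipped manifest; the object and container names are illustrative:

```yaml
# Simplified sketch of the daemon's node targeting (not the shipped manifest).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sas-workload-orchestrator   # illustrative name
spec:
  selector:
    matchLabels:
      app: sas-workload-orchestrator
  template:
    metadata:
      labels:
        app: sas-workload-orchestrator
    spec:
      # Daemon pods are scheduled only on nodes carrying the compute label.
      nodeSelector:
        workload.sas.com/class: compute
      containers:
        - name: sas-workload-orchestrator
          image: sas-workload-orchestrator   # placeholder image reference
```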


It is important to note that when SAS Workload Management is licensed, it becomes the only entity in charge of starting new SAS compute sessions, leveraging the SAS Workload Orchestrator Daemon to perform the start/stop commands on the nodes where it is running. For this reason, it becomes mandatory to assign the workload.sas.com/class=compute label to at least one node, and possibly more. Otherwise, the daemon pods will remain pending forever, and no compute jobs will ever be started.
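Assigning the label is a one-line kubectl operation (the node name is a placeholder; the label key/value matches the compute workload class described above):

```sh
# Label a node so the SAS Workload Orchestrator daemon (and compute jobs) can run on it.
kubectl label node <node-name> workload.sas.com/class=compute

# Verify which nodes carry the compute label.
kubectl get nodes -l workload.sas.com/class=compute
```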


SAS Workload Orchestrator currently does not manage any workload running inside CAS. For this reason, there are no SAS Workload Orchestrator Server Daemon pods running on nodes dedicated to CAS.


SAS Workload Orchestrator page in SAS Environment Manager


SAS administrators interact with SAS Workload Orchestrator using a new dedicated page in SAS Environment Manager:




This page becomes available when SAS Workload Management is licensed and lets administrators manage, monitor, and configure SAS Workload Orchestrator. In detail, SAS administrators can:


  • monitor and manage jobs that are sent to SAS Workload Orchestrator queues
  • monitor, manage, and define queues
  • monitor hosts running jobs from queues
  • define host types
  • view SAS Workload Orchestrator logs and manage log levels

Administrators used to working with SAS Workload Orchestrator on SAS 9 will be immediately familiar with this tool: most of the functionality has been ported to this new release, and the interface stays similar to streamline the migration experience between the two versions.

SAS Workload Orchestrator Dashboards in Grafana

A SAS Workload Management license gives access to dedicated dashboards in Grafana to monitor the status of cluster nodes and jobs.




If you choose to install the SAS Viya monitoring components, the SAS Launched Jobs - Node Activity and SAS Launched Jobs - User Activity Grafana dashboards are provided to display information about SAS jobs. You can filter the information in the dashboards by queue or by job name, as well as by other criteria.


For more information, see Use the SAS Launched Jobs Dashboards in the official SAS documentation.


Deployment Considerations

Deploying SAS Workload Management is as simple as adding its license to the SAS Viya environment that you are going to install. After that, follow the normal SAS Viya installation instructions, and all the components described above will be deployed.


To enable full functionality, though, a Kubernetes administrator may have to perform a few additional steps.

  1. Define ClusterRoles and ClusterRoleBindings
    The SAS Workload Orchestrator daemons require information about resources on the nodes that can be used to run jobs. In order to obtain accurate resource information, add a ClusterRole and a ClusterRoleBinding to the SAS Workload Orchestrator service account. This can be done during the deployment by adding the sas-workload-orchestrator overlay to the resources block of the base kustomization.yaml file:


    - sas-bases/overlays/sas-workload-orchestrator

    This can also be done by a Kubernetes administrator by applying the ClusterRole and ClusterRoleBinding YAML files found in the same directory.

  2. Tune CPU and memory requests and limits for SAS Workload Orchestrator pods.
    The memory and CPU request and limit values for the SAS Workload Orchestrator StatefulSet or DaemonSet containers might need to be increased so that more jobs can be processed. The default settings provide reasonable values for an initial deployment but may need tuning as more jobs are submitted to the environment.

  3. Use node labels to control where SAS jobs run.
    We have already seen that SAS Workload Orchestrator can only run jobs on nodes where the SAS Workload Orchestrator DaemonSet pods are present, and that the DaemonSet requires the workload.sas.com/class=compute label.
    Kubernetes administrators can define additional labels and assign them to nodes; SAS administrators can then reference those labels in queue definitions via host type definitions. In this way, it is possible to segregate certain types of processing to specific nodes – for example, batch or interactive jobs, or jobs that require specific hardware capabilities.
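For step 1 above, the overlay reference goes into the resources block of the base kustomization.yaml. A minimal sketch follows; the other entries are abbreviated examples of a standard SAS Viya deployment layout, not a complete file:

```yaml
# Excerpt of kustomization.yaml (other entries omitted for brevity).
resources:
  - sas-bases/base
  - sas-bases/overlays/sas-workload-orchestrator   # adds ClusterRole/ClusterRoleBinding
```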
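For step 2, resource tuning can be expressed as a kustomize patch over the manager StatefulSet. This is only a hedged sketch: the object and container names are assumptions, the values are placeholders rather than recommendations, and the supported tuning mechanism for your release should be taken from the SAS documentation:

```yaml
# Illustrative patch raising manager container resources.
# Names and values are placeholders, not recommended settings.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sas-workload-orchestrator
spec:
  template:
    spec:
      containers:
        - name: sas-workload-orchestrator
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1"
```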
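For step 3, segregating workloads starts with an extra node label. The label key and value below are invented for illustration; a SAS administrator would then reference the labeled node in a host type definition in SAS Environment Manager:

```sh
# Hypothetical custom label dedicating a node to batch processing.
kubectl label node <node-name> example.com/sas-job-class=batch

# Confirm the assignment.
kubectl get nodes -l example.com/sas-job-class=batch
```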

All these tasks are described in the official SAS documentation, in the page SAS Workload Orchestrator Tasks for Kubernetes Administrators.




SAS Workload Management extends the workload management capabilities of Kubernetes, bringing SAS Grid Manager capabilities to SAS Viya. Including SAS Workload Management in your SAS Viya license adds pods to the deployment architecture and provides dedicated management interfaces for SAS administrators.

You can read @FrederikV's hands-on experience with SAS Workload Management in his recently published article "SAS Workload Management".


Find more articles from SAS Global Enablement and Learning here.
