
How Infrastructure for Kubernetes Helps to Manage SAS Viya Workloads

Started ‎08-04-2023 by
Modified ‎08-04-2023 by

Management of Kubernetes workload is often focused on how pods are scheduled to nodes of the cluster. Factors which affect those assignments include the use of labels (and node selectors), taints (and tolerations), pod (anti-)affinities, requests, limits, and more. At another level, SAS Viya Workload Management extends the ability to manage workload into the user domain with the concepts of jobs, queues, resources, and more.

 

Now let's take a step back and review how the infrastructure that was provisioned for SAS Viya has an impact on the distribution of workload in the Kubernetes cluster.

 

As discussed in earlier posts, Kubernetes is responsible for distributing jobs across the cluster and SAS Workload Management further extends this capability to accommodate the unique needs of the SAS Viya platform. As powerful as they are, they can only act by utilizing the resources that are provided by the infrastructure.

 

 

[Image: rc_1_workloadhierarchy.png - the workload management hierarchy, from infrastructure through Kubernetes to SAS Workload Management]


 

The infrastructure itself can vary widely. The variability across the major cloud providers is notable, but sites also have the option of providing hardware through their own data center, subject to their own standards and practices. For this post, we'll focus on the idea of a managed cloud-based deployment of SAS Viya.

 

Instance Type > Node

 

Provisioning host machines in a cloud environment is typically done by specifying the desired instance (or machine) type. That is, a classification of machine with the desired CPU, RAM, disk, I/O throughput and other capabilities or features. Once the host machine is provisioned as part of the Kubernetes cluster, it's referred to as a node.

 

Instance types come in a range of sizes and capabilities. Some are optimized for general-purpose use, whereas others prioritize CPU, memory, disk, or network I/O, or target specialized workloads such as databases. This offers the opportunity to provision machines best suited to your workload.

 

From a Kubernetes perspective, the amount of work a given node will be asked to handle is managed in several ways. For example, Kubernetes defaults to a maximum of 110 pods per node. In a cloud-managed environment, that limit is often much lower, based on the size of the instance type as well as optional enhancements that might be in play, like which container network interface (CNI) is installed.
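The per-node pod limit mentioned above is a kubelet setting. As a minimal sketch (values illustrative - managed Kubernetes services often derive this for you from instance size and CNI choice), it appears in the kubelet configuration like this:

```yaml
# Kubelet configuration fragment (illustrative):
# maxPods caps how many pods a node will accept.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110   # the Kubernetes default; cloud-managed defaults are often lower
```

On a running cluster, the effective value surfaces as the node's pod capacity (for example, via `kubectl describe node`).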

 

Another example specific to SAS Viya is when automatic resource management is enabled for CAS. In that case, each CAS pod (whether worker or controller) requests the vast majority of the CPU and RAM resources on its node. This effectively dedicates each node to a single CAS pod, with the goal of achieving the best possible in-memory analytics performance.
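To illustrate the effect (not the exact mechanism - the pod name and resource figures below are hypothetical for a 16-vCPU / 128 GiB node), the resulting CAS pod spec requests nearly the whole node, so the scheduler can fit only one CAS pod per node:

```yaml
# Illustrative effect of CAS auto-resources: the container requests
# most of the node, so a second CAS pod cannot be scheduled there.
apiVersion: v1
kind: Pod
metadata:
  name: sas-cas-server-default-worker-0   # name is illustrative
spec:
  containers:
    - name: cas
      resources:
        requests:
          cpu: "14"        # most of the node's 16 vCPUs
          memory: 115Gi    # most of the node's 128 GiB
        limits:
          cpu: "14"
          memory: 115Gi
```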

 

Selecting the appropriate size instance type to host your Kubernetes cluster involves numerous considerations and should be undertaken with care. For customers looking for help with getting the right hardware to run their SAS Viya workloads, their SAS account representative can begin an engagement with the World-Wide Sizings Team.

 

Nodes > Node Pools

 

A node pool is essentially a group of nodes in the cluster with the same resources and configuration (that is, the same instance type). One aspect of their design is to provide a definition of scalability. Defining a node pool allows for multiple machine instances to run in support of a specific need. Node pools can be defined to have a minimum, maximum, and desired number of nodes running. These values set the bounds for scaling the number of machines running to have enough for any given demand.
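As a concrete sketch of those scaling bounds, here is a hypothetical node group definition using eksctl's ClusterConfig schema (the pool name, instance type, and counts are assumptions for illustration):

```yaml
# Hypothetical eksctl node group showing min/max/desired scaling bounds
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: viya-cluster
  region: us-east-1
nodeGroups:
  - name: compute
    instanceType: m5.4xlarge
    minSize: 1          # never scale below one node
    maxSize: 8          # upper bound for the cluster autoscaler
    desiredCapacity: 2  # starting node count
```

Other providers express the same three values in their own tooling (for example, Azure AKS node pools and GKE node pools have equivalent settings).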

 

Scaling up a single machine instance to have more CPU, RAM, etc. typically requires an outage. Alternatively, if your software supports running multiple copies across different host machines, then you can scale up host resources by starting additional nodes (via the node pool definition). This can be a very effective way to scale up without service interruption and also helps to address high availability requirements as well.

 

There are two major classifications of node pools:

  • System node pool - one or more node pools for running the Kubernetes control plane services
  • User node pool - zero or more node pools for running applications

 

A user node pool is essentially where anything that's not directly involved in managing the Kubernetes cluster is run. Any node pools defined for SAS Viya specifically are considered user node pools.

 

Node Pools > SAS Workload Classes

 

From a SAS Viya perspective, workload classes correspond to node pools provisioned for the environment.

 

SAS defines four basic classes of workload for the SAS Viya platform:

  • CAS = high-performance, in-memory analytics engine
  • Compute = traditional SAS Programming Runtime Environment engine
  • Stateful = critical infrastructure services to support SAS Viya operations
  • Stateless = resilient components committed to specific functionality

And don't forget the fifth workload class:

  • The Kubernetes control plane

Workload classes are realized in the Kubernetes cluster through the use of labels and taints on the nodes (and corresponding node selectors and tolerations on the pods, respectively) as well as pod affinities. Essentially this means there's a lot of variability that Kubernetes considers in how jobs get scheduled to run on nodes. Taking those factors into consideration, it's possible to tune the SAS Viya platform to right-size the environment for effective use.
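As a sketch of how a workload class maps onto Kubernetes primitives, the pairing looks like this (the `workload.sas.com/class` key follows SAS Viya's documented convention - verify the exact keys and values against your deployment assets):

```yaml
# Node side: a label and taint marking a node for the compute class
apiVersion: v1
kind: Node
metadata:
  name: compute-node-1          # illustrative node name
  labels:
    workload.sas.com/class: compute
spec:
  taints:
    - key: workload.sas.com/class
      value: compute
      effect: NoSchedule        # repel pods that don't tolerate this taint
---
# Pod side: the matching node selector and toleration
apiVersion: v1
kind: Pod
metadata:
  name: sas-compute-example     # illustrative pod name
spec:
  nodeSelector:
    workload.sas.com/class: compute   # steer the pod to labeled nodes
  tolerations:
    - key: workload.sas.com/class
      operator: Equal
      value: compute
      effect: NoSchedule              # permit scheduling despite the taint
  containers:
    - name: main
      image: example/image:latest     # placeholder image
```

The taint keeps unrelated pods off the pool; the label and node selector pull the intended pods onto it.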

 

Minimum Scale

 

It is technically feasible to run a Kubernetes "cluster" on a single host machine. If that machine has sufficient CPU, RAM, and other resources for the given workload, it could provide the functionality desired. However, that's counter to SAS Viya's expected use of Kubernetes to manage workloads across multiple host machines not only for workload optimization but also to provide robust availability of service.

 

With that in mind, defining node pools should be weighed from a cost perspective in that each node pool must be represented by at least one host machine for it to handle the associated workload class(es). That is, with four SAS workload classes (and a fifth to run the Kubernetes control plane, if desired), then at least 4 (or 5) machines must be up in order for those jobs to have a place to run. If a node pool is scaled down such that zero nodes are up, then some (or maybe even all) of that associated workload has no place to execute.

 

Starting at Standard Scale

 

By default, the SAS Viya platform is configured with four workload classes and they're expected to correspond to four node pools. This helps separate the major service types from each other and allows them to scale independently.

 

Furthermore, high availability for some processes might require two (or more) replica pods to ensure continuity of service. And some processes require at least three functionally identical pods to run the service - with at least two online to reach quorum and sustain operational service. While it's often technically feasible to run two or more replica pods on a single node, that doesn't achieve the goal of high availability, so this is another aspect to consider when tallying the minimum number of host machines running (and hence, cloud costs) to support your SAS Viya platform.
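The mechanism that spreads those replicas across distinct nodes is pod anti-affinity. A minimal sketch (names are illustrative, not taken from SAS Viya's manifests):

```yaml
# Sketch: three replicas forced onto different nodes so that the loss
# of a single node cannot break quorum.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-service-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateful-service-example
  template:
    metadata:
      labels:
        app: stateful-service-example
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: stateful-service-example
              topologyKey: kubernetes.io/hostname   # one replica per node
      containers:
        - name: main
          image: example/image:latest   # placeholder image
```

With a *required* anti-affinity rule like this, three replicas demand three nodes - which is exactly why replica counts feed directly into minimum node counts.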

 

[Image: rc_2_workloadclasses.png - node pools for the SAS workload classes and the Kubernetes control plane]

 

In the illustration above, we show:

  • N nodes for CAS (3 for MPP CAS with basic HA, 4 for full HA, more as needed)
  • X nodes for SAS Compute (2 or more as needed for HA and workload)
  • 3 nodes for stateful services (some require 3 instances for HA)
  • 2 nodes for stateless services (where 2 instances are sufficient for HA)
  • 3 nodes for the Kubernetes control plane (not visible if using cloud-managed Kubernetes)

These values are not requirements, but should be considered as a starting point in planning for any highly available deployment of the SAS Viya platform. The actual values per node pool will vary based on a number of factors.

 

Also, bear in mind that some offerings, like SAS Viya with SingleStore, define additional workload classes per their specific architecture requirements. 

 

Starting at Larger Scale

 

The minimum and maximum number of nodes in a given node pool will depend on many considerations. Let's look at a few from the SAS Viya platform's perspective where we might need to ensure more are running than what's outlined for a typical starting point as shown above.

 

CAS achieves its scalability and availability goals by running across multiple hosts (referred to as MPP mode). If running an MPP CAS server with full availability features and sizable data and/or concurrency requirements, then two controller nodes and several worker nodes will be necessary. At its smallest, then, expect at least four nodes (2 controllers + 2 workers), with additional workers added as workload demands.
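Changing the worker count is typically done declaratively. The sketch below follows the shape of SAS's published kustomize examples for managing CAS workers - treat the group, version, and field path as assumptions and verify them against your deployment assets:

```yaml
# Hypothetical kustomize PatchTransformer raising MPP CAS workers to 4;
# verify the CASDeployment API group/version and field path locally.
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: cas-manage-workers
patch: |-
  - op: replace
    path: /spec/workers
    value: 4
target:
  group: viya.sas.com
  kind: CASDeployment
  name: .*
  version: v1alpha1
```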

 

For some sites, there might be a need to run multiple CAS servers (either SMP or MPP). For example, some analytic tasks might be better served on nodes that have GPUs. Those machines usually cost more to run and so a dedicated workload class (and associated node pool) might be established to manage it.

 

Similarly, for workloads in support of the SAS Programming Runtime Environment (that is, the compute workload class) - which includes SAS Compute, SAS Batch, and SAS Connect servers - different types of host machines with access to specialized resources and/or data might be useful. Combine this with the SAS Workload Management offering so that jobs can be easily configured to run on the best node for their goals (required resources, priority execution, lower cost, etc.).

 

In managing and extending the standard workload classes, it might be worthwhile to establish additional workload classes for other analytic workloads if needed to accommodate specialization required at the site. For example, the SAS Micro Analytic Service is a memory-resident, high-performance program execution service intended to support real-time operations (more info). By default, its pods are configured with a node selector to run in the stateless node pool (although it is certainly not a stateless service). If its use is significant, then defining a workload class for MAS will help to ensure it gets appropriate resources for the expected response time.
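Redirecting MAS to its own pool amounts to changing the pod template's node selector. A hypothetical kustomize patch (the `realtime` class value is an assumption, and the deployment name should be confirmed against your manifests; `~1` is the JSON-pointer escape for `/` in the label key):

```yaml
# Hypothetical patch moving SAS Micro Analytic Service off the stateless
# pool onto a dedicated node pool labeled workload.sas.com/class=realtime.
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: sas-microanalytic-score-node-selector
patch: |-
  - op: replace
    path: /spec/template/spec/nodeSelector/workload.sas.com~1class
    value: realtime
target:
  kind: Deployment
  name: sas-microanalytic-score
```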

 

Starting at Smaller Scale

 

Not all sites require high redundancy and maximum performance with critical workload separation. It's possible to run the SAS Viya platform using a more restrained approach with a smaller footprint to reduce overhead and maintenance costs.

 

In that case, the standard set of workload classes might be consolidated to run on fewer node pools with the goal of running on fewer (but possibly larger) host machines. This reduction of complexity won't magically lessen SAS Viya's resource requirements, but for sites which have established that resource consumption will be modest, then optimizing to a smaller footprint is often desirable.

 

We can combine the infrastructure components of the environment to run together. This means creating a single node pool to host the SAS Viya stateless and stateful workload classes. It could also run the Kubernetes control plane if you're responsible for that as well.
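One way to sketch that consolidation: the shared pool keeps (or drops) its taints, and the pods scheduled there carry tolerations for each class they host. The keys below follow SAS Viya's `workload.sas.com/class` convention and should be verified against your deployment assets:

```yaml
# Sketch: a pod that tolerates both infrastructure-class taints,
# allowing stateful and stateless work to share one node pool.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pool-example   # illustrative pod name
spec:
  tolerations:
    - key: workload.sas.com/class
      operator: Equal
      value: stateful
      effect: NoSchedule
    - key: workload.sas.com/class
      operator: Equal
      value: stateless
      effect: NoSchedule
  containers:
    - name: main
      image: example/image:latest   # placeholder image
```

Alternatively, simply removing the taints from the shared pool lets any pod land there - simpler, but it gives up the scheduling control the taints provided.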

 

That leaves the CAS and compute workload classes. The SAS analytic engines can consume significant resources when worked hard, so it's worth keeping their workload classes assigned to dedicated node pools for efficient and effective performance. However, if your site can tolerate lower performance, then combining the CAS and compute workload classes into a single node pool might yield some cost savings.

 

The objective of combining workload classes to run on fewer node pools is to allow for running fewer host machines in minimal circumstances to save money. Be careful: if you're still running numerous hosts (that is, the number of running nodes is greater than the number of node pools defined), then you have less control over the workload and are likely not running efficiently or cost-effectively.

 

Coda

 

This post continues the series discussing what it means to scale the SAS Viya platform along a continuous spectrum. Understanding how the infrastructure is sized, provisioned, managed, and utilized is part of the foundation of ensuring SAS Viya runs efficiently and effectively. Along the way, this post also linked to several related posts from my GEL team colleagues.

 

 

Find more articles from SAS Global Enablement and Learning here.

