
Creating custom SAS Viya topologies – Part 2 (using custom node pools for the compute pods)

Started 03-13-2022
Modified 03-13-2022

In my last post, I described how to realize your SAS Viya workload placement plan (see here). In that article I discussed creating node pools to dedicate nodes to running SAS Micro Analytic Service (MAS) pods and the CAS Servers when running multiple SAS Viya environments (namespaces) in a shared Kubernetes cluster.

 

Both before that post and more recently, I have been asked about dedicating nodes to the Compute Server or, more correctly, the ‘sas-programming-environment’ pods. In this blog I will share the required configuration changes and my thoughts on creating custom node pools to support the compute workloads.

 

First, we will look at the new workload placement plan, the target topology.

 

In this example, we will once again look at running two SAS Viya environments (production and discovery). As per last time, the stateless, stateful, connect and realtime nodes are shared by both SAS Viya environments, and the CAS Servers are running on dedicated, or separate, node pools for each environment.

 

But now we will add a new node pool for the programming workloads for the discovery environment. The ‘compute’ node pool is dedicated to the production environment and the ‘comp2’ node pool is dedicated to the discovery environment. Figure 1 illustrates my new workload placement plan.

MG_1_compute_topology.png



Figure 1. Target topology running two SAS Viya environments.

 

A key driver for using this configuration would be the need to use different instance types for the two SAS Viya environments (remember that instance types vary in the type and number of CPUs, the amount of RAM, local disk, and so on). For example, perhaps the production workload is more controlled and better understood in terms of its resource demand profile: it is predictable in terms of the CPUs/cores and RAM required to complete the workloads within a given SLA. The discovery workload, by contrast, is more variable and needs larger nodes in terms of CPUs/cores and RAM to support that variability.

 

Here you are focusing on the resource (capacity) requirements of each workload, and on cost optimization.

 

Another driver for this topology might be the need to completely separate (isolate) the production and discovery processing, so that a “rogue” discovery job can’t impact any production processing. This workload separation may still be needed even when using the new workload orchestration features (SAS Workload Management), as the orchestration works at a namespace (SAS Viya environment) level, not across multiple namespaces.

 

SAS Workload Management for SAS Viya was GA in November 2021, with Stable 2021.2.1 and Long-Term Support 2021.2.

 

Creating the cluster

In my last blog, I discussed creating a naming scheme and the recommendation not to over-taint the nodes. For my testing I used the following labels and taints.

 

| Node pool name | Labels | Taints |
| --- | --- | --- |
| cas | workload.sas.com/class=cas, environment/prod=cas | workload.sas.com/class=cas |
| casnonprod | workload.sas.com/class=cas, environment/discovery=cas | workload.sas.com/class=cas |
| realtime | workload/class=realtime | workload/class=realtime |
| compute | workload.sas.com/class=compute, environment/prod=compute | workload.sas.com/class=compute |
| comp2 | workload.sas.com/class=compute, environment/discovery=compute | workload.sas.com/class=compute |

 

As can be seen from the table above, I have only used the standard SAS taints for the CAS and compute nodes. Again, I did my testing in Azure and used the SAS Viya 4 Infrastructure as Code (IaC) for Microsoft Azure GitHub project to create the cluster.
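To make the scheduling concrete, an individual ‘comp2’ node should end up carrying metadata along these lines. This is an illustrative excerpt of the node object only; node names and most other fields are omitted, and the taint effect shown is the standard SAS NoSchedule effect:

```yaml
# Illustrative excerpt of a 'comp2' node object.
# The environment label value ('compute') is what the discovery
# patch transformers later test for.
apiVersion: v1
kind: Node
metadata:
  labels:
    workload.sas.com/class: compute   # standard SAS compute label
    environment/discovery: compute    # custom environment label
spec:
  taints:
  - key: workload.sas.com/class       # standard SAS compute taint
    value: compute
    effect: NoSchedule
```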

 

To confirm the configuration of the nodes, that is, the labels that have been assigned, I used the following command to list the node labels:

 

kubectl get nodes -L workload.sas.com/class,workload/mas,environment/prod,environment/discovery

 

This gave the following output (Figure 2).

MG_2_compute_labels2.png



Figure 2. Displaying node labels.

 

To confirm the taints that have been applied, use the following command:

 

kubectl get node -o=custom-columns=NODE:.metadata.name,TAINTS:.spec.taints

 

This gave the following output for my AKS cluster.

 

MG_3_compute_taints.png



Figure 3. Displaying node taints.

 

Updating the SAS Viya Configuration

In my last blog I discussed preferred scheduling versus strict scheduling, and the ability to steer pods to a node by using the node label(s). In the following examples I have used the ‘requiredDuringSchedulingIgnoredDuringExecution’ node affinity definition, which specifies rules that must be met for a pod to be scheduled onto a node.

 

As I haven’t added any environment taint to the compute nodes, both SAS Viya deployments must be updated to stop “pod drift” across the two node pools. With the default configuration, the compute pods could make use of both node pools. This wasn’t my desired state, so I updated the SAS Viya configuration for both environments.

 

In Kubernetes the word ‘drift’ is used in several contexts. For example, “configuration drift” refers to a running cluster that becomes increasingly different from its intended state over time, usually due to manual changes and updates to the cluster. The term “container drift”, usually used in a security context, refers to running containers that change after deployment and so deviate from the images they were created from.

 

In this context, “pod drift” refers to pods that end up running on nodes that are not the target, or desired, location: a drift away from the target topology.

 

Controlling the use of the compute nodes is more complex than for the CAS or MAS configuration, because the ‘sas-programming-environment’ has several components. If you look at the site.yaml you will see that the following configuration needs to be updated:

  • sas-compute-job-config
  • sas-batch-pod-template
  • sas-launcher-job-config
  • sas-connect-pod-template

I will not go into the details here, but the different ‘sas-programming-environment’ components are explained in the SAS Viya Administration documentation and this SAS Communities blog.

 

In the following examples, the patch transformers will make the following changes:

  • Remove the preferred scheduling to simplify the manifest, and
  • Add the definition in the required scheduling section for the node selection.

The discovery configuration is shown in the examples below. In all cases I have tested for the value of the environment label, but I could simply have tested for the existence of the label, in this case ‘environment/discovery’.

 

Testing for existence would look like the following:

 

- op: add
  path: /template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/-
  value:
    key: environment/discovery
    operator: Exists

 

Once you have created the patch transformers shown here, you need to update the kustomization.yaml to refer to the new configuration.
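For example, the transformers section of the discovery environment's kustomization.yaml might look like the fragment below. The file names here are my own and should match whatever you called the patch transformer files:

```yaml
# kustomization.yaml fragment (file names are illustrative)
transformers:
  # ... any existing transformer entries ...
  - site-config/compute-job-transformer.yaml
  - site-config/batch-pod-transformer.yaml
  - site-config/launcher-job-transformer.yaml
  - site-config/connect-pod-transformer.yaml
```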

 

Compute Server (sas-compute-job-config) configuration

The following example is a patch transformer to update the sas-compute-job-config PodTemplate.

 

# Patch to update the sas-compute-job-config pod configuration
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: set-compute-job-label
patch: |-
  - op: remove
    path: /template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution
    value:
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: In
            values:
            - compute
          matchFields: []
        weight: 100
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: NotIn
            values:
            - cas
            - connect
            - stateless
            - stateful
          matchFields: []
        weight: 50
  - op: add
    path: /template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/-
    value:
      key: environment/discovery
      operator: In
      values:
      - compute

target:
  kind: PodTemplate
  version: v1
  name: sas-compute-job-config

 

To view the changes made I used ‘icdiff’ to compare the default configuration (site.yaml) and the new configuration that was produced (compute-job-site.yaml). This is shown in Figure 4.

 

MG_4_compute_job_label.png



Figure 4. Review the compute server change.

 

As can be seen, the preferred scheduling section has been removed (shown in red) and the new required scheduling definition is shown in green.

 

SAS Batch Job (sas-batch-pod-template) configuration

The following example is a patch transformer to update the sas-batch-pod-template PodTemplate.

 

# Patch to update the sas-batch-pod-template configuration
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: set-batch-compute-label
patch: |-
  - op: remove
    path: /template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution
    value:
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: In
            values:
            - compute
          matchFields: []
        weight: 100
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: NotIn
            values:
            - cas
            - connect
            - stateless
            - stateful
          matchFields: []
        weight: 50
  - op: add
    path: /template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/-
    value:
      key: environment/discovery
      operator: In
      values:
      - compute

target:
  kind: PodTemplate
  version: v1
  name: sas-batch-pod-template

 

Once again, to view the changes made I used ‘icdiff’ to compare the default configuration (site.yaml) and the new configuration that was produced (batch-site.yaml). This is shown in Figure 5.

 

MG_5_batch_compute_label.png



Figure 5. Review the batch job change.

 

Again, you can see the deletion in red and the additional configuration in green.

 

SAS Launcher Job (sas-launcher-job-config) configuration

The following example is a patch transformer to update the sas-launcher-job-config PodTemplate.

 

# Patch to update the sas-launcher-job-config pod configuration
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: set-launcher-job-label
patch: |-
  - op: remove
    path: /template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution
    value:
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: In
            values:
            - compute
          matchFields: []
        weight: 100
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: NotIn
            values:
            - cas
            - connect
            - stateless
            - stateful
          matchFields: []
        weight: 50
  - op: add
    path: /template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/-
    value:
      key: environment/discovery
      operator: In
      values:
      - compute

target:
  kind: PodTemplate
  version: v1
  name: sas-launcher-job-config

 

Connect Server (sas-connect-pod-template) configuration

The following example is a patch transformer to update the sas-connect-pod-template PodTemplate.

 

# Patch to update the sas-connect-pod-template pod configuration
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: set-connect-template-label
patch: |-
  - op: remove
    path: /template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution
    value:
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: In
            values:
            - compute
          matchFields: []
        weight: 100
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: NotIn
            values:
            - cas
            - connect
            - stateless
            - stateful
          matchFields: []
        weight: 50
  - op: add
    path: /template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/-
    value:
      key: environment/discovery
      operator: In
      values:
      - compute

target:
  kind: PodTemplate
  version: v1
  name: sas-connect-pod-template

 

Verifying the configuration

After both environments were running, I started four SAS Studio sessions and then used Lens to confirm that the compute server pods were running on the correct nodes. This is illustrated in Figure 6.

 

MG_6_launcher_jobs_overview2.png



Figure 6. Verifying the configuration.

 

If you look closely you will see there are three SAS Studio sessions in the discovery environment (namespace). This is shown by the three pods running on the ‘aks-comp2-3018…’ node, while there is one production SAS Studio session running on the ‘aks-compute-301…’ node. (Remember that SAS Compute Servers run as “sas-launcher-“ pods, and here we’re looking at those Controlled By “Job”, not “ReplicaSet”.)

 

Conclusion

Here we have looked at some of the drivers for using separate node pools for the compute pods and seen how to implement this (with the ‘compute’ and ‘comp2’ node pools) for two SAS Viya environments.

 

The examples shown above rely on updating both SAS Viya deployments, as I haven’t created a custom taint for the new ‘comp2’ node pool. If you wanted to keep the production deployment as “vanilla” as possible, the minimal approach would be to add an environment taint to the new compute (comp2) node pool for the discovery deployment.
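For example, assuming a custom taint such as environment/discovery=compute:NoSchedule on the comp2 nodes (the taint key and value here are my own choice), only the discovery deployment would then need a patch to tolerate it, along these lines:

```yaml
# Illustrative patch op adding a toleration for a custom taint
# on the comp2 nodes (the taint key/value are assumptions)
- op: add
  path: /template/spec/tolerations/-
  value:
    key: environment/discovery
    operator: Equal
    value: compute
    effect: NoSchedule
```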

 

But remember, if you use preferred scheduling you could end up with “pod drift” across all the available compute node pools unless you add additional taints to keep unwanted pods away.

 

Using the four patch transformers, it would be possible to tailor the deployment further to meet a customer’s specific needs, allowing the different pod types to use specific node pools (node types).

 

Finally, if you were to share a single compute node pool across multiple SAS Viya environments, the node pool must be sized appropriately to support the combined workload of all the SAS Viya environments.

 

This doesn’t just mean selecting the right instance type (node size); you should also consider elements such as the number of nodes in the node pool (the min and max values) and the “max_pods” setting for the nodes. Setting “max_pods” can help stop the nodes from becoming overloaded, but may mean you incur higher costs for running the Kubernetes cluster.
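As an illustration, the sizing for the comp2 node pool in the IaC project configuration might look something like the fragment below. Treat the key names and values as assumptions, and check the viya4-iac-azure documentation for the exact variable names:

```yaml
# Illustrative sizing fragment for the comp2 node pool
# (key names and values are assumptions - see the IaC project docs)
comp2:
  machine_type: Standard_E16s_v3   # larger instance type for discovery
  min_nodes: 1                     # autoscaler floor
  max_nodes: 5                     # autoscaler ceiling
  max_pods: 50                     # cap pods per node to avoid overload
```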

 

This may need some tuning once you understand the workloads and system performance.

 

I hope this is useful and thanks for reading.

Comments

Hi Michael,

 

Appreciate the detail you're putting into these posts & they will certainly be useful out in the real world.

 

Can I pose a question? 

 

I understand that the 'Connect Node' is used for 'receiving' connections into SAS Viya 4 from SAS 9.4 / SAS Viya 3.5 - is that correct? If so, if you have an installation that's solely SAS Viya 4, does this make this node redundant? Does it do anything else? Which then leads me to wonder what the best way is to remove it from the configuration/installation if that is the case.

 

However, if it does need to exist, could it be added to the Compute Node as they share a similar purpose?

 

Ok, that's several questions, but hopefully you follow my thoughts 🙂

 

Thanks

 

Alan 

Hi Alan, good questions. You are right that SAS/CONNECT is used for sessions from SAS 9 and other SAS Viya environments. If you only have a single SAS Viya environment and there is no requirement for sessions from other SAS environments, then yes, having a connect node pool is not needed.


The topology shown is for a "fully" scaled-out deployment, or what I call "separation by tier". In fact, unless you have a lot of SAS/CONNECT sessions using the spawner, I would just have a 'Compute' node pool to support the SAS/CONNECT and Compute (sas-programming-environment) pods.

 

For your environment, a three node pool topology is probably fine. That is, a shared node pool for the stateless and stateful pods, a shared node pool for Connect and Compute pods, and a CAS node pool.

To implement this, you need to create a patch transformer to update the 'sas-connect-spawner' deployment to use the Compute node pool.

 

Below is an example (that uses strict scheduling).

 

I hope that helps.

 

# This transformer changes the sas-connect-spawner pod to run on the compute nodes
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: add-compute-label
patch: |-
  - op: remove
    path: /spec/template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution
    value:
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: In
            values:
            - connect
          matchFields: []
        weight: 100
      - preference:
          matchExpressions:
          - key: workload.sas.com/class
            operator: NotIn
            values:
            - compute
            - stateless
            - stateful
          matchFields: []
        weight: 50
  - op: add
    path: /spec/template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/-
    value:
      key: workload.sas.com/class
      operator: In
      values:
        - compute
  - op: replace
    path: /spec/template/spec/tolerations
    value:
      - effect: NoSchedule
        key: workload.sas.com/class
        operator: Equal
        value: compute
target:
  kind: Deployment
  name: sas-connect-spawner

 

Thanks Michael, exactly what I was after 😊
