With SAS Container Runtime (SCR) you can publish a SAS rule set, decision, or a model in a container image. You can deploy the SCR image to Azure Kubernetes Service (AKS). AKS simplifies the implementation of Kubernetes clusters. It also provides all the orchestration features you need to manage cloud-native applications.
This three-part series shows you how to create and set up a simple AKS cluster, and how to deploy and expose your scoring container through a deployment, a service, and an ingress. Finally, you will learn how to score the SAS decision or model.
In this post, the focus is on the set-up of the AKS cluster.
Imagine you work for a growing company that provides a cloud-based scoring service. The service is experiencing increased customer demand, and management has tasked you with assessing which Azure service would be appropriate to meet that demand.
Your company uses SAS Viya. With SAS Container Runtime (SCR), you can now publish a decision or a model as a container image.
You have identified Azure Kubernetes Service (AKS) as a potential deployment solution. AKS allows you to deploy your container image and handle the increasing demand. You want to understand how to deploy your SAS decision to an AKS cluster and allow access to the scoring services.
To replicate the example in this post, you will need an Azure subscription and, for the later posts in the series, a SAS decision or model published as an SCR container image.
As a first step, you will need to create an AKS cluster to meet the demand of your many customers using the scoring service.
You decide to use the AKS architecture with a single control plane and multiple nodes, because it provides a straightforward way to create and manage workload resources.
AKS supports both Linux and Windows node pools. Because SAS Viya runs on Linux, Linux is the natural choice.
Go to your Azure subscription and sign in to the Azure portal.
Open Azure Cloud Shell.
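If your account has access to more than one Azure subscription, it is worth confirming which subscription Cloud Shell is targeting before you create any resources. A quick check with standard Azure CLI commands (the subscription placeholder below is yours to fill in):
# Show the subscription the Azure CLI is currently using
az account show -o table
# Switch to another subscription if needed
az account set --subscription <subscription-name-or-id>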
Create the following shell variables: resource group, cluster name and a location close to you.
# Replace SASUserID with your user ID or a short string of your choice
export RESOURCE_GROUP=SASUserID-AKS
export CLUSTER_NAME=SASUserID-score-aks
export LOCATION=eastus
# Choose a location closer to you. List locations with:
# az account list-locations -o tsv
echo $CLUSTER_NAME
echo $RESOURCE_GROUP
echo $LOCATION
Create a new resource group.
az group create -l $LOCATION -n $RESOURCE_GROUP
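If you want to confirm that the resource group was created in the expected location, a quick check such as the following should do it:
az group show -n $RESOURCE_GROUP -o table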
Run the az aks create command to create an AKS cluster in the resource group.
az aks create \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--node-count 2 \
--enable-addons http_application_routing \
--generate-ssh-keys \
--node-vm-size Standard_B2s \
--network-plugin azure
The cluster will have two nodes, as defined by the --node-count parameter. We're using only two nodes here for cost reasons. These nodes will be part of the system node pool. The --node-vm-size parameter configures the cluster nodes as Standard_B2s virtual machines.
The HTTP application routing add-on is enabled via the --enable-addons flag. The HTTP application routing solution makes it easy to access applications that are deployed to your AKS cluster.
When the add-on is enabled, it configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
Caution
The HTTP application routing add-on is designed to let you quickly create an ingress controller and access your applications. It is not designed for use in a production environment and is not recommended for production use. For production-ready ingress deployments that include multiple replicas and TLS support, see Create an HTTPS ingress controller.
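Behind the scenes, the add-on creates an Azure DNS zone for the cluster, which is where the publicly accessible DNS names for your application endpoints will live. If you want to note the zone name for later, a query along these lines should return it (assuming the add-on provisioned successfully):
# Retrieve the DNS zone name created by the HTTP application routing add-on
az aks show \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName \
-o tsv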
Run the az aks nodepool add command to add an additional Linux node pool to the existing AKS cluster.
az aks nodepool add \
--resource-group $RESOURCE_GROUP \
--cluster-name $CLUSTER_NAME \
--name userpool \
--node-count 2 \
--node-vm-size Standard_B2s
This new node pool can be used to host applications and workloads, instead of the system node pool.
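To confirm that both node pools now exist, you can list them; the output should show the original system pool alongside the new userpool:
az aks nodepool list \
--resource-group $RESOURCE_GROUP \
--cluster-name $CLUSTER_NAME \
-o table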
Run the following command in Azure Cloud Shell.
az aks get-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP
This command adds an entry to your ~/.kube/config file, which holds all the information needed to access your clusters. kubectl enables you to manage multiple clusters from a single command-line interface.
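If you manage several clusters, you can check which context kubectl is currently pointing at with standard kubectl commands:
# List all contexts known to kubectl; the active one is marked with an asterisk
kubectl config get-contexts
# Show only the active context
kubectl config current-context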
Run the kubectl get nodes command to check that you can connect to your cluster:
kubectl get nodes
You should receive a list of four available nodes in two node pools.
NAME STATUS ROLES AGE VERSION
aks-nodepool1-38791398-vmss000000 Ready agent 115m v1.20.9
aks-nodepool1-38791398-vmss000001 Ready agent 115m v1.20.9
aks-userpool-38791398-vmss000000 Ready agent 111m v1.20.9
aks-userpool-38791398-vmss000001 Ready agent 111m v1.20.9
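If you want to see which node pool each node belongs to, you can ask kubectl to display the agentpool label that AKS assigns to its nodes:
kubectl get nodes -L agentpool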
In this first step, you created an Azure Kubernetes Service cluster, added a node pool to host your workload, and connected kubectl to the Kubernetes cluster.
Read the next post in the series to find out how to deploy a containerized SAS model or decision as a Kubernetes workload using YAML files.
Thank you for your time reading this article. If you liked the article, give it a thumbs up! Please comment and tell us what you think about SAS Container Runtime and deployment to AKS.