SAS Container Runtime (SCR) allows you to publish a SAS model as a container image in Azure. SAS Container Runtime (SCR, pronounced "soccer") and Azure as a publishing destination are available as of SAS Viya version 2021.1.3. This post details all the configuration steps, in Azure and in SAS, needed to publish a SAS model as an Azure container image and to validate the publishing.
SAS Container Runtime (SCR) benefits:
Image: SAS Viya on Azure. Source: SAS Research & Development.
Remarks:
List the Kaniko example files from your deployment folder:
# export PRJ=fill-your-path-to-deployment-folder-here
ls $PRJ/sas-bases/examples/sas-model-publish/kaniko/
# Expected output:
# kaniko-transformer.yaml  kustomization.yaml  podtemplate.yaml  storage.yaml
# 1. Use the files in the `$PRJ/site-config/sas-model-publish/kaniko` directory
echo "kaniko folder permissions"
cd ~
chmod -R 755 $PRJ/site-config/sas-model-publish/kaniko
# 2. Modify the parameters in the podtemplate.yaml file if you need to implement customized requirements, such as the location of the Kaniko image.
# Nothing to change
# 3. Modify the parameters in storage.yaml. For more information about PersistentVolume Claims (PVCs), see [Persistent Volume Claims on Kubernetes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims).
# Azure storage: 4Gi should be enough for roughly 10 SCR models at the same time
# storageClassName: sas-azurefile
# You can find the Azure storage class by checking other PVCs, e.g. kubectl get pvc -n ${current_namespace} -o wide | { head -1; grep "cas"; }
echo "Update storage.yaml"
cd ~
cd $PRJ/site-config/sas-model-publish/kaniko/
echo "Update the Azure storage capacity '{{ STORAGE-CAPACITY }}'"
sed -i 's/{{ STORAGE-CAPACITY }}/4Gi/' ./storage.yaml
echo "Update the storage class name"
sed -i 's/{{ STORAGE-CLASS-NAME }}/sas-aks/' ./storage.yaml
echo "List changes in storage.yaml"
cat storage.yaml
cd ~
# 4. Make the following changes to the base kustomization.yaml file in the $PRJ deployment directory.
echo "Remove readme.md file"
cd ~
cd $PRJ/site-config/sas-model-publish/kaniko/
rm -Rf *.md
echo "Backup kustomization.yaml first"
cd ~
cd $PRJ
cp kustomization_template.yaml kustomization_template_before_kaniko.yaml
# * Add site-config/sas-model-publish/kaniko to the resources block:
#   resources:
#   - site-config/sas-model-publish/kaniko
echo "Add Kaniko to resources"
cd ~
printf "
- command: update
path: resources[+]
value:
site-config/sas-model-publish/kaniko # 2021.1.3 Kaniko mount patch
" | yq -I 4 w -i -s - $PRJ/kustomization_template.yaml
# * Add sas-bases/overlays/sas-model-publish/kaniko/kaniko-transformer.yaml to the transformers block:
#   transformers:
#   - sas-bases/overlays/sas-model-publish/kaniko/kaniko-transformer.yaml
echo "Add Kaniko to transformers"
cd ~
printf "
- command: update
path: transformers[+]
value:
sas-bases/overlays/sas-model-publish/kaniko/kaniko-transformer.yaml # 2021.1.3 Kaniko transformer patch
" | yq -I 4 w -i -s - $PRJ/kustomization_template.yaml
cd ~
echo "Kaniko patch end"
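After both yq patches, the resources and transformers blocks of kustomization_template.yaml should contain entries like these (a sketch; the other entries already in your file are omitted):

```yaml
resources:
    # ... existing entries ...
    - site-config/sas-model-publish/kaniko # 2021.1.3 Kaniko mount patch
transformers:
    # ... existing entries ...
    - sas-bases/overlays/sas-model-publish/kaniko/kaniko-transformer.yaml # 2021.1.3 Kaniko transformer patch
```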
# 1. Verify the PVC
kubectl get pvc | grep kan
# You should see
# sas-model-publish-kaniko Bound pvc-91494c0d-07c1-466e-b45a-925934a72ae2 4Gi RWX sas-aks 102m
# 2. Run the following command to verify whether the overlays have been applied:
kubectl get pods | grep sas-model-publish
# You should see something similar to
# sas-model-publish-5d96cd6f44-22h5m 1/1 Running 0 28m
# get the name of the pod and add it to the command below
kubectl describe pod <sas-model-publish-pod-name> | grep models
# 3. Verify that the output contains the following mount directory paths:
# /models/kaniko from kaniko (rw)
# Check you have the AZ CLI
az version
# Customize variables
RG=MYSCR # choose a name for your resource group
LOCATION=eastus # change it to suit your needs
ACR=sascontainers # choose a name for your Azure Container Registry
# Create an Azure resource group RG
az group create --name $RG --location $LOCATION
# Create an Azure Container Registry ACR
az acr create --resource-group $RG --name $ACR --sku Basic --location $LOCATION
Before sharing, it is best to check that your notebooks, scripts, documents, or external Git projects do not contain any confidential elements or credentials. Keep your resources and your company safe.
With the Azure app created, assign it the roles needed to push and pull images in the Azure Container Registry. Go to your Azure Container Registry > Access control (IAM) > Role assignments > Add > search for the app you registered and add the AcrPush and AcrPull roles.
RESOURCEGRP=sasuser-azuredm-rg # choose the resource group name where the AKS cluster is deployed
# find the internal AKS resource group
AKS_RG=$(az aks list -g $RESOURCEGRP --query [].nodeResourceGroup -o tsv)
echo "AKS resource group: $AKS_RG"
# find the LB IP address of the AKS cluster (outbound)
OUTBOUND_IP=$(az network public-ip list --query "[].{tags: tags.type, address: ipAddress}" -o tsv -g $AKS_RG | grep aks-slb | cut -f2)
echo "AKS outbound IP: $OUTBOUND_IP"
# find the IP address of the AKS cluster (inbound)
INBOUND_IP=$(az network public-ip list --query "[].{tags: tags.type, address: ipAddress}" -o tsv -g $AKS_RG | grep None | cut -f2)
echo "AKS inbound IP: $INBOUND_IP"
# Get NSG of AKS resource group
AKS_NSG=$(az network nsg list -g $AKS_RG --query [].name -o tsv)
echo "AKS NSG: $AKS_NSG"
# Create inbound nsg rule
az network nsg rule create -g $AKS_RG --nsg-name $AKS_NSG -n AllowNodePortRange \
--priority 100 \
--source-address-prefixes $OUTBOUND_IP/32 \
--source-port-ranges '*' \
--destination-address-prefixes $INBOUND_IP \
--destination-port-ranges '30000-32767' --access Allow \
--protocol Tcp --description "Allow access to pods via nodeport"
# Rule is created in aks-agentpool-59212252-nsg
# from Load Balancer IP of AKS cluster (OUTBOUND IP) to IP address of the AKS Cluster (INBOUND IP).
# Integrate an existing ACR with an existing AKS cluster by supplying a valid value for acr-name or acr-resource-id, as below.
RESOURCEGRP=sasuser-azuredm-rg
VALIDATE_AKS_CLUSTER=sasuser-azuredm-aks
ACR_NAME=sascontainers
az aks update -n $VALIDATE_AKS_CLUSTER -g $RESOURCEGRP --attach-acr $ACR_NAME
# login with the sas-viya cli profile already created
cd ~
export SAS_CLI_PROFILE=${namespace}
export SSL_CERT_FILE=~/.certs/${namespace}_trustedcerts.pem
sas-viya -k auth login --user sasuser --password **********
# Use the CLI - invoke the help
sas-viya --profile ${SAS_CLI_PROFILE} models destination createAzure --help
# define the needed variables
NS=$GELENV_NS # namespace
INGRESS_FQDN=${PREFIX}.gelenablesyou.sas.com # SAS Viya URL
SASUSER=sasadmin@gelenablesyou.sas.com # SAS User
DOMAIN_NAME="AzureDomain" # credentials domain name
ACR_DEST_NAME="Azure" # destination name
ACR_DEST_DESC="Azure SCR"
ACR_SERVER=${PREFIXNODASH}acr.azurecr.io # the Azure Container Registry you will publish to
AKS_NAME=$PREFIX-aks # the SAS AKS cluster where you will perform publishing validation. You integrated it with the ACR.
# list variables which will be used
printf "\nThe destination will be created with these parameters \n"
export ACR_DEST_NAME_CLI=AzureCLI
echo "Name: $ACR_DEST_NAME_CLI"
echo "Ingress FQDN: $INGRESS_FQDN"
echo "Prefix: $PREFIX / $PREFIXNODASH"
echo "ACR name: ${PREFIXNODASH}acr"
echo "baseRepoURL: $ACR_SERVER"
echo "Subscription: $SUBSCRPTN"
echo "Tenant: $TENANTID"
echo "Region: $LOCATION"
echo "Cluster Name: $PREFIX-aks"
echo "Resource group: $RG"
echo "Identity: $SASUSER"
echo "App Client ID: $APP_CLIENT_ID" # app registration client id
echo "App Client Secret: $APP_CLIENT_SECRET" # app registration client secret
sas-viya --profile ${SAS_CLI_PROFILE} models destination createAzure \
    --name ${ACR_DEST_NAME_CLI} \
    --description "ACR with SAS Viya CLI" \
    --baseRepoURL ${ACR_SERVER} \
    --subscriptionId ${SUBSCRPTN} \
    --tenantId ${TENANTID} \
    --region ${LOCATION} \
    --kubernetesCluster ${PREFIX}-aks \
    --resourceGroupName ${RG} \
    --credDomainID "ACRCredDomainCLIRomeo" \
    --credDescription "Azure ACR credentials CLI Romeo" \
    --clientId ${APP_CLIENT_ID} \
    --clientSecret ${APP_CLIENT_SECRET} \
    --identityType user \
    --identityId ${SASUSER}
It was a long journey, but by now you have learned how to configure Azure as a publishing destination for SAS models, how to publish a model to Azure, and how to validate the publishing.
Now that your models have been published as container images in Azure, you could choose to stop the SAS Viya deployment in Azure to save cloud costs. To score the published SAS models, you no longer need all the SAS resources and the SAS Viya Azure Kubernetes cluster up and running. You can deploy your container in a number of ways and score the model. Read more about it in future posts.
SAS Container Runtime is certainly an important step allowing us to reimagine how we can work with SAS Viya in the cloud. Models can now be shipped outside SAS Viya as "containerized intelligence". To paraphrase Neil Armstrong, SCR is one small step for SAS, one giant leap for SAS customers.
This post builds on very good work done previously by:
Thank you for your time reading this post. If you found it useful, like it. Please comment and tell us what you think about the new SCR and Azure publishing destination.