In this blog I’d like to discuss a practical approach to implementing ModelOps by combining SAS Viya’s Container Runtime framework (SCR) with OpenShift GitOps.
When thinking about implementing “ModelOps”, you’d primarily think of the set of practices and tools used to build the automated workflows needed for operationalizing analytical models. However, ModelOps has a non-technical side as well: there are organizational and governance aspects to keep in mind. And finally, for a ModelOps strategy to be successful, usability must remain a key focus. While CI/CD tools are inherently complex, especially when deployed on Kubernetes platforms, the goal should still be to enable business users (the internal “customers”) to operate independently without requiring deep expertise in Kubernetes. By empowering these users, we aim to eliminate IT operations as a potential bottleneck, thereby accelerating business processes and enhancing overall efficiency.
The ModelOps approach described in this blog is based on OpenShift GitOps, which is built on Argo CD, one of the most commonly used Continuous Integration and Continuous Delivery (CI/CD) tools for managing the lifecycle of deployments on Kubernetes. As previously noted, emphasis is placed on finding a solution that provides effective automation and high usability at the same time. Let’s start by taking a closer look at Argo CD.
Argo CD (“Argo Continuous Delivery”) is a well-known continuous delivery tool for Kubernetes. It is used to manage and automate the deployment of applications to Kubernetes clusters, which makes it a core utility for any ModelOps architecture. Argo CD typically uses a Git repository as the “source of truth”. In its standard configuration, it monitors a Git repository for changes and synchronizes any detected modifications with Kubernetes. The diagram below illustrates this basic process:
At the core of Argo CD is the concept of an “Application”, which, in short, represents a Kubernetes deployment that is managed by Argo CD. An “Application” is defined as a Kubernetes custom resource. Here’s a simple example:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  source: # where to find the manifests
    repoURL: https://github.com/example/repo
    path: manifests
    targetRevision: main
  destination: # what cluster to deploy to
    server: https://kubernetes.default.svc
    namespace: default # what namespace to deploy into
  syncPolicy: # how to sync and
    automated: # manage the lifecycle
      prune: true
      selfHeal: true
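Since an Application is just a Kubernetes custom resource, you can create and inspect it like any other resource. As a quick sketch (assuming Argo CD’s default “argocd” namespace; on OpenShift GitOps this would be “openshift-gitops”):

# create the Application and check that Argo CD picked it up
kubectl apply -n argocd -f my-app.yaml
kubectl get applications -n argocd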
Given this context, we can already see the first and simplest option for designing a ModelOps architecture: why not just ask the business teams to create and manage these manifests on their own? Although this method is straightforward, it is certainly not intuitive for business users and can be error-prone.
In this blog, I’ll focus on a different strategy, but I want to mention the “App of Apps” pattern since it is quite popular in the Argo CD world. It allows you to organize your deployments hierarchically: as the name suggests, a “parent” application uses a Git repository as its source where additional “child” applications are stored, as sketched below.
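As a hedged sketch of this pattern (the repository URL and the “apps” folder are hypothetical), a parent application could look like this:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: parent-app
spec:
  project: default
  source:
    repoURL: https://github.com/example/team-apps
    path: apps # folder holding the child Application manifests
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd # child Applications are created in Argo CD's own namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true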
This approach provides some advantages. Most importantly, it is well suited for multi-team setups, as it is very scalable and distributes responsibility for the individual applications to the authoring teams. On the downside, however, this pattern can introduce complexity when debugging issues across multiple layers of applications, and navigating between parent and child apps in the Argo CD UI can become cumbersome, especially as the number of applications grows.
Additionally, your business users will still be tasked with writing and maintaining application manifests, even though they may not fully understand the underlying Kubernetes or GitOps concepts, leading to potential misconfigurations and increased dependency on platform teams.
Enter ApplicationSets: these are advanced configuration objects which are best described as “application generators”. To put it another way: ApplicationSets process configuration data, using it as input parameters to generate new Applications from predefined templates. Following that concept, each ApplicationSet defines a “generator” and a “template” section. In our setup, a Git repository acts as the generator, while Helm serves as the templating engine.
From a usability perspective, the big advantage of ApplicationSets is that they can be configured to use simple JSON input files and there is no predefined structure for the content of these JSON files. This becomes especially powerful if combined with templates that are based on Helm charts.
Let’s take a closer look at what is needed for deploying an SCR image (using Helm) first, before continuing with the Argo CD configuration.
When you publish an analytical model or decision flow from SAS Viya using the SAS Container Runtime (SCR) destination, whether from SAS Model Manager, SAS Decisioning, or even Python code, all resources required for executing the model are packaged into the container image. This makes deploying the model (to a Kubernetes cluster or to a container runtime platform like AWS ECS, for example) very simple.
However, the truth is that nothing is “really simple” when it comes to Kubernetes … You still must provide a lot of deployment manifests to make it happen:
Kubernetes Resource | Used for …
Deployment | Ensures the desired number of app instances (pods) are running and updated
Service | Exposes a pod as a network service, enabling stable access via a DNS name and load balancing
Route or Ingress | Provides external access to services in the cluster
Image Pull Secret | Stores credentials for pulling container images
Ingress Certificate | Provides TLS encryption for Ingress or Route resources
CA Certificates | Trusted certificate authorities used to validate TLS certificates
Luckily, deployments of SCR container images are very similar to each other: if you’ve done it right once, it’s easy to repeat. This is an invitation to invest some time in creating a Helm chart, because Helm provides a very powerful templating mechanism that can generate the individual deployment manifests from just a few input parameters, such as the location of the SCR container image. The exact details of how to create a Helm chart are out of scope for this blog, since the process is simple and there is plenty of documentation available on the internet.
To summarize briefly, a good starting configuration for your Helm chart could look like this (I have attached this example to the blog):
sas-helm-develop/
├── helm-sas-scr-deploy
│ ├── Chart.yaml # metadata for this chart
│ └── templates # folder containing manifest templates
│ ├── deployment.yaml
│ ├── NOTES.txt
│ ├── registry-image-pull-secret.yaml
│ ├── route.yaml
│ ├── sas-scr-ingress-certificate.yaml
│ └── service.yaml
├── index.yaml # metadata for chart repository
├── sas-scr-helm-1.0.0.tgz # versioned Helm packages
├── sas-scr-helm-1.0.1.tgz
├── sas-scr-helm-1.0.2.tgz
└── test # for testing the chart locally
└── values.yaml
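For reference, a minimal Chart.yaml for this layout might look like the following sketch (the description text is an assumption; the version matches the latest package shown above):

apiVersion: v2
name: sas-scr-helm
description: Deploys SAS Container Runtime (SCR) images to Kubernetes
type: application
version: 1.0.2 # chart version, also used in the package file name
appVersion: "1.0"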
To get a basic understanding of how the templating mechanism works, look at this snippet from the Deployment YAML template:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      imagePullSecrets:
        - name: registry-image-pull-secret-{{ .Values.model.name }}
      containers:
        - name: {{ .Values.model.name }}
          image: {{ .Values.deploy.imageurl }}
You can see that it expects two input parameters, named “model.name” and “deploy.imageurl” (you can ignore the “.Values” prefix). After packaging and distributing the chart, you typically set these parameters on the command line or in a “values.yaml” input file when executing the helm binary, as shown below:
model:
  name: test-model
deploy:
  imageurl: your-container-registry.local/sas-model-images/your-model:1.0
Once you’re finished preparing the templates, you need to package them into a chart tarball, create the chart catalog (index.yaml), and upload both to a location from which the chart can be installed, e.g. a web server:
# create package and transfer it to web server
helm package ./helm-sas-scr-deploy/
scp sas-helm-develop/*.tgz user@web-server:/var/www/html/helm-repository/
# create index file and transfer it to web server
helm repo index . --url=https://web-server/helm-repository/
scp sas-helm-develop/index.yaml user@web-server:/var/www/html/helm-repository/
To check if this has worked, you can use the “--dry-run” option:
helm repo add sas-scr-helm https://web-server/helm-repository/
helm repo update
helm install scr-test sas-scr-helm/sas-scr-helm --dry-run -f test/values.yaml
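If the dry run renders the manifests as expected, the actual installation is the same command without the flag (the target namespace here is an assumption):

# install the chart for real into the target namespace
helm install scr-test sas-scr-helm/sas-scr-helm -f test/values.yaml --namespace default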
With the Helm chart in place (serving as the ApplicationSet’s template), it’s time to return to Argo CD. Let’s look at how the Git repository that drives the generation of new Applications could be set up. Keep in mind that the information in this Git repository is supposed to be managed by business users, not by IT professionals. Hence, it should focus on the most relevant information and leave out all technical details. Here’s an example of a Git repository named “sas-model-deploy-assets”, using fictional banking business teams:
sas-model-deploy-assets/
├── models
│ ├── corequant
│ │ ├── coresim-config.json
│ │ └── liquimodel-config.json
│ ├── opsintel
│ │ └── sigmatrack-config.json
│ ├── riskgrid
│ │ ├── horizonmap-config.json
│ │ ├── riskdna-config.json
│ │ └── stressline-config.json
│ └── secureflow
│ └── datasentry-config.json
└── README.md
As illustrated, there are four fictional business teams, each with several analytical models they intend to have deployed. Each model is described by a separate configuration file, following the naming scheme <team>/<model>-config.json. Looking at one of these configuration files, you’ll recognize the Helm parameters we talked about before, along with a few others (check the attached Helm chart sources to see how they are actually used):
{
  "model": {
    "name": "stressline",
    "author": "u12345",
    "team": "riskgrid"
  },
  "deploy": {
    "imageurl": "your-container-registry.local/sas-model-images/your-model:1.0",
    "namespace": "default"
  },
  "network": {
    "hostname": "riskgrid-stressline.apps.openshift.cluster.local",
    "urlsubpath": "/score"
  },
  "options": {
    "replicas": "3",
    "debuglevel": "INFO"
  }
}
Again, this is just an example; there is a decision to make about how much of the deployment infrastructure you want to expose to business users. It is probably best to keep this as trimmed down as possible. From that perspective, you might actually want to keep parameters such as “namespace” or “hostname” internal ...
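To illustrate how these parameters end up in the generated manifests, here is a hedged sketch of what the route template of the chart might look like (the service name and TLS settings are assumptions; check the attached chart for the actual implementation):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: {{ .Values.model.name }}-route
spec:
  host: {{ .Values.network.hostname }}
  path: {{ .Values.network.urlsubpath }}
  to:
    kind: Service
    name: {{ .Values.model.name }}-service # assumed naming convention
  tls:
    termination: edge # TLS terminated at the OpenShift router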
By now we’ve seen both ends of the story: the JSON input and the template generating the final output YAML manifests. Let’s look at how this will be stitched together by Argo CD. Here’s the Argo CD ApplicationSet:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sas-scr-applications
  namespace: openshift-gitops
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators: # (1)
    - git:
        repoURL: https://your-git-server.local/sas/sas-model-deploy-assets.git
        revision: HEAD
        requeueAfterSeconds: 10
        files:
          - path: "models/**/*-config.json" # (2)
  template: # (3)
    metadata:
      name: "{{.path.basename}}-{{normalize .model.name}}-app"
      namespace: openshift-gitops
    spec:
      project: default
      source: # (4)
        chart: sas-scr-helm
        repoURL: https://web-server/helm-repository/
        targetRevision: 1.0.2
        helm:
          values: | # (5)
            model:
              name: "{{normalize .model.name}}"
              author: "{{normalize .model.author}}"
              team: "{{normalize .model.team}}"
            deploy:
              imageurl: "{{.deploy.imageurl}}"
              namespace: "{{.deploy.namespace}}"
            network:
              hostname: "{{.network.hostname}}"
              urlsubpath: "{{.network.urlsubpath}}"
            options:
              replicas: "{{.options.replicas}}"
              debuglevel: "{{.options.debuglevel}}"
      destination: # (6)
        server: https://kubernetes.default.svc
        namespace: 'default'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
To provide more details:
(1) The “generators” section defines where the input data comes from: here, a Git generator that monitors the repository holding the model configuration files.
(2) The path pattern selects all JSON configuration files below the “models” folder; each matching file yields one set of input parameters.
(3) The “template” section defines the Application that is generated for each input file.
(4) The “source” points to the Helm chart repository and pins the chart version to use.
(5) The values taken from the JSON file are passed on as Helm parameters.
(6) The “destination” defines the target cluster and namespace for the deployment.
Think of the ApplicationSet as looping through all JSON files it finds in Git and creating (or updating) an Application for each file. Based on the example given before, your Argo CD UI will show a separate Application for each model once it has fully synced with the Git repository:
And each Application deploys all manifests needed for the SCR container image:
ApplicationSets are Kubernetes custom resources and can be created, updated and deleted using regular kubectl or oc commands. At present, unlike the Applications they generate, these entities are not yet visible in the Argo CD UI. However, you can manage them using the Argo CD CLI:
$ argocd appset get sas-scr-applications
Name: openshift-gitops/sas-scr-applications
Project: default
Server: https://kubernetes.default.svc
Namespace: default
Source:
- Repo: https://web-server/helm-repository/
Target: 1.0.2
SyncPolicy: Automated (Prune)
CONDITION STATUS MESSAGE LAST TRANSITION
ErrorOccurred False Successfully generated para... 2025-10-09 08:06:43
ParametersGenerated True Successfully generated para... 2025-10-08 20:00:58
ResourcesUpToDate True ApplicationSet up to date 2025-10-09 08:06:43
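Alternatively, since ApplicationSets are plain custom resources, a quick check with oc works just as well (assuming the openshift-gitops namespace):

# list the ApplicationSet and the Applications it has generated
oc get applicationsets -n openshift-gitops
oc get applications -n openshift-gitops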
And if you’re using Red Hat OpenShift, there is one additional option using the API Explorer:
ApplicationSets support the Separation of Concerns principle: they eliminate the need for end users to possess elevated permissions when deploying applications, while also abstracting the specifics of the cluster infrastructure from their view.
However, using Argo CD alone does not guarantee security, because as a pure CD tool, Argo CD does not offer ways to validate the user’s input. For example, a malicious user might try to inject a non-SCR container image through the JSON file. For added security, consider using a pipelining tool like Jenkins or Tekton (OpenShift Pipelines in OpenShift) to pre-scan user input.
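As a minimal sketch of such a pre-scan (the registry prefix and repository layout are assumptions taken from the examples above), a pipeline step could verify that every configuration file only references images from your trusted registry:

#!/bin/bash
# fail the pipeline if any config file references an image outside the trusted registry
shopt -s globstar
ALLOWED_PREFIX="your-container-registry.local/sas-model-images/"
for f in models/**/*-config.json; do
  imageurl=$(jq -r '.deploy.imageurl' "$f")
  if [[ "$imageurl" != "$ALLOWED_PREFIX"* ]]; then
    echo "ERROR: $f references an untrusted image: $imageurl"
    exit 1
  fi
done
echo "All image URLs point to the trusted registry."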
Aside from that, there is one easy safeguard you should implement: set up branch protection rules for the Git repository to prevent end users from committing directly to the main branch and to enforce the use of pull requests. Since the Argo CD ApplicationSet can be configured to monitor only the main branch, nothing happens on the cluster before you (as admin) approve the pull requests.
In this blog I’ve discussed using Argo CD ApplicationSets for setting up a ModelOps infrastructure. Using end-user-friendly JSON input data and a self-written Helm chart for deploying SCR container images, this approach allows for scalability while keeping workload complexity at a manageable level. As we’ve seen, Argo CD ApplicationSets provide a great way to give your business users full control over the lifecycle of their models and decision flows without requiring them to become Kubernetes or Argo CD experts.
I hope you found this blog useful and please let me know if you have questions or other feedback.