In this post we will look at running SingleStore Studio in the Kubernetes cluster, within the SAS Viya namespace. For context, I’m talking about SAS with SingleStore deployments (orders). The SingleStore tools and SingleStore Studio are not shipped as part of the SAS order and are usually installed on a machine external to the Kubernetes cluster.
There are many benefits to running SingleStore Studio (S2 Studio) within the SAS Viya namespace, but there are also some challenges; chief among them, SingleStore do not provide a standalone container image for deploying SingleStore Studio. Note that the SingleStore documentation also uses the term SingleStoreDB Studio.
Here we will look at creating a container image to run the SingleStore Client (command-line) and S2 Studio, and deploying it to the SAS Viya namespace.
I would like to start by saying that SingleStore do provide an image containing S2 Studio: the ‘singlestore/cluster-in-a-box’ image. As the name suggests, this image contains a complete environment and is targeted at developers.
SingleStore have several images on Docker Hub (see: https://hub.docker.com/u/singlestore), but they do not provide an image for just running S2 Studio, nor do SAS include one as part of the SAS Viya with SingleStore order.
As some background, with a SAS with SingleStore order, all the SingleStore components (the memsql cluster) run within the Viya namespace.
Let’s start by discussing the benefits of running S2 Studio on Kubernetes, within the Viya namespace.
The key benefits of running S2 Studio in the Kubernetes cluster are simplified networking and security, as the S2 Studio server application connects directly to the SingleStore services running within the SAS Viya namespace.
However, for a secure connection to the SingleStore cluster a WebSocket Proxy implementation is used. This means that a direct connection from the user’s browser to the backend is required. I will talk more about this in a follow-up post on enabling TLS security for the S2 Studio application.
The SingleStore documentation states the following:
“For situations where REQUIRE SSL is not mandatory, and if the additional configuration required to use a direct WebSocket connection becomes a bottleneck, it may be simpler to use the existing Studio architecture, where Studio is served over HTTPS and the singlestoredb-studio server is co-located with the Master Aggregator.”
The REQUIRE SSL attribute is a memsql user setting.
Therefore, running the singlestoredb-studio server within the Viya namespace effectively co-locates it with the memsql cluster (the Master Aggregator). The communication over port 3306 (which is unencrypted) is contained within the Kubernetes cluster and not exposed to the outside world.
The SingleStoreDB Studio Architecture page also states that multiple S2 Studio instances can communicate with an individual cluster, so you can easily scale out S2 Studio by creating new instances to manage user load. Hence, running S2 Studio as a Kubernetes deployment is another advantage of running it in the cluster, rather than installing it on a host machine outside of the K8s cluster.
To run S2 Studio on Kubernetes you first need to build a container image. For this you need to select a base image that contains the packages S2 Studio needs to run. This became a process of research (looking at what SingleStore were using for their images) and trial and error. The CentOS image works well and contains utilities like systemctl, but the resulting image ends up being very large, at over 600MB.
In the end I settled on almalinux/8-init as my base. The nice thing about this and the CentOS image is that they allow the standard install process for the SingleStore Client (CLI) and Studio to be used when building the container image.
Remember, when selecting an OS image for the container build it is important to do due diligence on the security of that image: can it be trusted?
You must create your own Docker build file (Dockerfile); the following image shows my build file. As mentioned above, I decided to build an image that contained the SingleStore CLI and Studio.
In the image, lines 3 to 6 install and update the required packages. Once that is in place the SingleStore Studio and CLI are installed (lines 8 to 13). Line 10 sets the permissions on the ‘singlestoredb-studio.hcl’ configuration file. This is required as the install runs under root, while the container will run as the memsql user (this is set on line 16).
In lines 18 – 20 I added several labels for the image. Lines 22 and 23 show the ports that are exposed. Note, I could have also used ports 80 and 443.
Finally, line 25 specifies the command to run within the container, which starts the S2 Studio server (application).
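Since the build file is only shown as a screenshot, the following is a minimal sketch of what such a Dockerfile could look like. It is not the exact file from the image (so its line numbers will not match those referenced above), and the repository URL, package names, exposed ports, and Studio binary path are assumptions based on SingleStore’s standard install documentation.

# A sketch of the build file described above; details are assumptions
FROM almalinux/8-init

# Install and update the required packages
RUN dnf -y update && \
    dnf -y install yum-utils && \
    dnf clean all

# Add the SingleStore repository, then install Studio and the CLI.
# The install runs as root, so give the memsql user (assumed to be
# created by the package install) ownership of the Studio configuration
RUN yum-config-manager --add-repo https://release.memsql.com/production/rpm/x86_64/repodata/memsql.repo && \
    dnf -y install singlestoredb-studio singlestore-client && \
    chown -R memsql:memsql /var/lib/singlestoredb-studio

# The container runs as the memsql user, not root
USER memsql

LABEL app="singlestore-tools" \
      description="SingleStore Client and Studio"

# Studio listens on port 8080 by default (80 and 443 could also be used)
EXPOSE 8080
EXPOSE 8443

# Start the S2 Studio server (application)
CMD ["/usr/bin/singlestoredb-studio"]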
At this point I would like to acknowledge the assistance of Marc Price (Senior Principal Technical Support Engineer) in getting the Docker build file configuration finalised.
The next step is to build the image from the Dockerfile. The following is an example build command:
docker build --tag singlestore-tools --file singlestore-tools .
Note, it is important to include the dot (the build context) at the end of the command.
This produced an image that was 479MB in size.
Once the image has been built you can use the ‘docker history’ command to review the image layers.
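For example, using the tag from the build command above:

docker history singlestore-tools:latest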
Now that I had an image, I tested it by running it on the Docker server. For example:
docker run -d -p 8080:8080 --name singlestore-tools singlestore-tools:latest
Here you can see SingleStore Studio running on my Docker server.
Once I was happy with the image, I tagged it and pushed it to my container registry.
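For example, the following tags the image for an Azure Container Registry and pushes it, using the registry name referenced in the deployment manifest below (substitute your own registry):

docker tag singlestore-tools:latest myregistry.azurecr.io/singlestore-tools:latest
docker push myregistry.azurecr.io/singlestore-tools:latest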
Now that you have an image, the next step is to create the deployment manifests. You need to create the configuration for deploying the S2 Studio application, along with a service and ingress definition. To pre-configure the ‘studio.hcl’ file a Kubernetes ConfigMap is also required.
The S2 Studio application can be deployed as a single pod, or as a Kubernetes deployment that allows you to scale it. In this example I will show how to use a K8s deployment for S2 Studio. An overview of the configuration is shown in the diagram below.
A key decision is where the S2 Studio application should run.
In this example, it is configured to run on the stateful nodes, using nodeAffinity. But I could have also configured it to run in the singlestore node pool, as this is where the SingleStore Master Aggregator is running.
With that decided, the next decision is how many replicas to run; here I specified two replicas. I was testing in Microsoft Azure using an Azure Container Registry.
---
# singlestore-tools deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: singlestore-tools
    workload.sas.com/class: singlestore
  name: singlestore-tools
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: singlestore-tools
  template:
    metadata:
      labels:
        app: singlestore-tools
        app.kubernetes.io/name: singlestore-tools
        workload.sas.com/class: singlestore
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.azure.com/mode
                operator: NotIn
                values:
                - system
              - key: workload.sas.com/class
                operator: In
                values:
                - stateful
      containers:
      - image: myregistry.azurecr.io/singlestore-tools:latest
        imagePullPolicy: Always # IfNotPresent or Always
        name: s2tools
        resources:
          requests: # Minimum amount of resources requested
            cpu: 1
            memory: 128Mi
          limits: # Maximum amount of resources allowed
            cpu: 2
            memory: 256Mi
        ports:
        - containerPort: 8080 # The container exposes this port
          name: http # Name the port "http"
        volumeMounts:
        - name: studio-files-volume
          mountPath: /tmp/s2studio-files
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - '-c'
              - |
                cp /tmp/s2studio-files/studio.hcl /var/lib/singlestoredb-studio/studio.hcl
      tolerations:
      - effect: NoSchedule
        key: workload.sas.com/class
        operator: Equal
        value: stateful
      volumes:
      - name: studio-files-volume
        configMap:
          name: studio-files
A consideration for creating the deployment manifest is that when a ConfigMap is mounted as a volume it becomes read-only. Therefore, you can’t directly mount the studio.hcl file into the target location (as the S2 Studio server requires read-write access to the studio.hcl file).
Above you can see the ‘studio-files’ ConfigMap is mounted as the volume: ‘studio-files-volume’, with a mountPath of ‘/tmp/s2studio-files’.
So, the ConfigMap file(s) are loaded into a temporary location, then copied into the configuration location. This is achieved with the following copy command:
cp /tmp/s2studio-files/studio.hcl /var/lib/singlestoredb-studio/studio.hcl
This copies my pre-configured cluster definition, studio.hcl file, into the Studio server configuration with the required permissions.
Another consideration when deploying multiple replicas is whether to define Pod Affinity / AntiAffinity rules.
For my test environment I defined a single node pool, called services, for the Viya stateful and stateless services. It had the stateful label and taint applied to the nodes. Below you can see that even though I hadn’t defined any podAntiAffinity rules, I ended up with the S2 Studio pods (singlestore-tools) running on different nodes.
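If you want to guarantee that spread, a preferred podAntiAffinity rule could be merged into the existing affinity block of the pod template. This is a minimal sketch, not part of my tested configuration:

      # Sketch: prefer spreading the S2 Studio replicas across nodes
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: singlestore-tools
              topologyKey: kubernetes.io/hostname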
To be able to access the S2 Studio application, a service and an ingress definition are required. We will first look at the service definition.
---
apiVersion: v1
kind: Service
metadata:
  name: s2studio-http-svc
  labels:
    app.kubernetes.io/name: s2studio-http-svc
spec:
  selector:
    app.kubernetes.io/name: singlestore-tools
  ports:
  - name: s2studio-http
    port: 80
    protocol: TCP
    targetPort: 8080
  type: ClusterIP
Here you can see the service definition; the service is called s2studio-http-svc, and it maps port 80 to port 8080 on the container(s).
To access the S2 Studio application, I also needed a DNS name that would resolve for the S2 Studio application; this is the host name used in the ingress definition. In my environment I had a DNS wildcard for:
*.camel-a20280-rg.gelenable.sas.com
Therefore, I used a host name of: s2studio.camel-a20280-rg.gelenable.sas.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: s2studio-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app.kubernetes.io/name: s2studio-ingress
spec:
  rules:
  - host: s2studio.camel-a20280-rg.gelenable.sas.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: s2studio-http-svc
            port:
              number: 80
Here you can see that the ingress is targeting the service: s2studio-http-svc.
Given that the S2 Studio application is running in the Kubernetes cluster with SAS Viya it is possible to use the internal service name for the memsql cluster. The key advantage of using the service name is that it keeps the connection from the S2 Studio application to the memsql cluster internal to the K8s cluster.
The service name is also a known value for a SAS Viya with SingleStore deployment, which means it is possible to pre-configure the studio.hcl file with a connection profile for the memsql cluster.
The DDL service name is: svc-sas-singlestore-cluster-ddl
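You can confirm the service name in your environment with a standard kubectl query, for example (here ‘viya4’ is just a placeholder for your Viya namespace):

kubectl -n viya4 get service svc-sas-singlestore-cluster-ddl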
The following is the ‘studio.hcl’ definition that I created:
version = 1

cluster "ViyaS2Profile" {
  name              = "SAS Viya DDL Connection"
  description       = "Connection using port 3306"
  hostname          = "svc-sas-singlestore-cluster-ddl"
  port              = 3306
  profile           = "DEVELOPMENT"
  websocket         = false
  websocketSSL      = false
  kerberosAutologin = false
}
Once the file has been created, the following command can be used to create the ConfigMap.
kubectl -n namespace create configmap configmap_name --from-file=file_name
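For example, using the names from the manifests above (‘viya4’ again being a placeholder for your Viya namespace):

kubectl -n viya4 create configmap studio-files --from-file=studio.hcl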
Note, it would have been possible to create an inline definition for the studio.hcl file in the S2 Studio deployment YAML. However, I prefer to keep this separate, as it provides more flexibility and makes it easier to load (define) multiple files. We will see this in Part 2 of this post.
In my opinion it also makes it easier to create the files, as you don’t have to worry about YAML indentation. You just create the files as required.
The only consideration for this approach is that the ConfigMap must be in place prior to applying the deployment for the S2 Studio application.
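For example, once the ConfigMap from the previous step exists, the manifests can be applied in order (the file names are placeholders for however you have saved the YAML shown above):

kubectl -n viya4 apply -f singlestore-tools-deploy.yaml
kubectl -n viya4 apply -f s2studio-service.yaml
kubectl -n viya4 apply -f s2studio-ingress.yaml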
With the above configuration in place, you are set to start using SingleStore Studio. Below you can see the SingleStore Studio home page with the pre-configured cluster definition.
To review the configuration: the studio.hcl file has a pre-configured profile, and the S2 Studio pods connect to the SingleStore Master Aggregator on port 3306 using the DDL service (svc-sas-singlestore-cluster-ddl).
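As the image also contains the SingleStore Client, connectivity to the memsql cluster can be verified from one of the S2 Studio pods. A quick sketch, assuming ‘viya4’ as the namespace and ‘admin’ as the database user:

kubectl -n viya4 exec -it deploy/singlestore-tools -- singlestore -h svc-sas-singlestore-cluster-ddl -P 3306 -u admin -p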
Here we have looked at how to create a container image for the SingleStore Client and Studio. The configuration shown uses HTTP to connect to S2 Studio. In Part 2 I will show how to implement TLS using the SAS Viya secrets.
Finally, it is important to remember that the SingleStore Studio application is not maintained by SAS, and it is not shipped with the SAS Viya with SingleStore order. As such, SAS Technical Support will not provide support for this type of deployment.
Thanks for reading…
Michael Goddard
Find more articles from SAS Global Enablement and Learning here.