
SAS SpeedyStore – keeping a watchful eye on your Singlestore cluster


 

Introduction

 

Having the proper observability tools in place can make your life as a SAS administrator easier. They help you identify potential issues in your environment and act on them to keep your environment running smoothly.

 

SAS Viya Monitoring for Kubernetes can provide these insights for the SAS Viya platform. With the release of SAS SpeedyStore, which combines SAS Viya and Singlestore, I thought it would be interesting to have a closer look at what monitoring capabilities Singlestore provides.

 

In this blog post I will look at how to set up the monitoring solution for Singlestore and as a bonus integrate the visualizations provided by Singlestore into the SAS Viya monitoring for Kubernetes framework.

 

That way you can monitor both components from within SAS Viya Monitoring for Kubernetes!

 

Monitoring your Singlestore cluster

 

[Image: high-level overview of the Singlestore monitoring architecture]

 

The picture above provides a nice high-level overview of how the monitoring solution works on a Singlestore cluster.

 

At a high level, the monitoring solution provided by Singlestore can be broken down into the following components:

 

  • The exporter process, which is responsible for collecting metrics from the Singlestore cluster.
  • The pipelines that load information from the exporter process into the metrics database.
  • The metrics database that stores all the data collected by the pipelines.
  • A set of Grafana dashboards to visualize the information stored within the metrics database.

Analyzing the data through the Grafana dashboards can help you identify trends and act when necessary. Let’s look at which dashboards are included in the solution by default.

 

What dashboards are included?

 

The dashboards provided by Singlestore include:

 

  • Cluster view: Bird’s-eye view of the Singlestore cluster
  • Detailed cluster view by node: Resource utilization per node
  • Historical workload monitoring: Statistics for parameterized query execution, including run count, time spent, and resource utilization
  • Pipeline performance: Insights into the performance of pipelines
  • Pipeline summary: Bird’s-eye view of pipelines
  • Query history: Runtime of queries, and how many have failed or succeeded
  • Resource pool monitoring: Information about queries running in a resource pool
  • Memory usage: Breakdown of memory usage by node
  • Disk usage: Disk usage within the cluster

 

Review the Singlestore documentation here for a detailed breakdown of these dashboards.

 

Enabling monitoring on your Singlestore cluster

Enabling the monitoring solution involves a couple of steps:

 

  1. Configure the exporter process running on the master aggregator
  2. Set up the user for the metrics database
  3. Run the monitoring job to create the database and the pipelines needed to extract data from the Singlestore cluster and load it into the metrics database
  4. Add the dashboards to SAS Viya Monitoring for Kubernetes

After step 3 you can deploy Grafana to the Kubernetes cluster and load the dashboards manually as described on the Singlestore website here. I thought it would be more interesting to integrate these dashboards into the existing SAS Viya Monitoring for Kubernetes framework. That way you can monitor components of both SAS Viya and the Singlestore cluster.

 

In the remainder of this blog, I will go through each of the above steps in more detail.

 

Configure the exporter process

 

By default, the exporter process that is responsible for collecting data about the Singlestore cluster is already running on the master aggregator node. Taking a closer look at the master aggregator pod, you will notice that there are two containers running: node and exporter. The exporter container runs the exporter process, which needs to be further configured before it can be used.
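You can verify this yourself by listing the containers in the master aggregator pod (a quick check, assuming the viya namespace and the master pod name used later in this post):

 

kubectl get pod node-sas-singlestore-cluster-master-0 -n viya \
  -o jsonpath='{.spec.containers[*].name}'
# expected output: node exporter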

 

When you deploy SAS SpeedyStore, all communication between the different services on the platform is done through HTTPS. The exporter process, however, is not configured to use HTTPS by default, so that is the first thing to address.

 

The other item to address when configuring the exporter process is adding a Kubernetes service through which other services on the platform can access the exporter.

 

You could access the exporter process directly through the pod hostname. However, a Kubernetes service is an abstraction that provides a stable endpoint over a set of pods and isn’t tied to a specific pod, which makes it the better option for reaching the exporter.

 

Let’s have a closer look at how we can configure HTTPS and create the Kubernetes service.

 

Adding HTTPS arguments to the exporter process

 

---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: s2operatorpatches-exporter
patch: |-
  - op: add
    path: /spec/template/spec/containers/0/args/-
    value: "--master-exporter-parameters"
  - op: add
    path: /spec/template/spec/containers/0/args/-
    value: "--config.ssl-cert=/etc/memsql/ssl/server-cert.pem --config.ssl-key=/etc/memsql/ssl/server-key.pem --config.use-https --config.user=metrics --no-cluster-collect.info_schema.tables" --no-collect.info_schema.tablestats
target:
  kind: Deployment
  name: sas-singlestore-operator

 

The above patch needs to be applied to the Singlestore operator. This kustomize patch adds two additional arguments to the operator that configure the exporter process: they tell it to use HTTPS, which certificates to use, and which user to connect as (see the kustomization sketch after the parameter list below).

 

  • The config.ssl parameters point to a certificate that is available within the Singlestore cluster and is used for the HTTPS connection.
  • The config.user parameter points to a database user on the Singlestore cluster that will be created in a separate step in this blog.
  • The no-cluster-collect and no-collect parameters tell the exporter process not to collect information about specific tables; these are excluded from collection.
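To apply the patch, one approach (a sketch, assuming a standard SAS Viya kustomize layout with a site-config directory) is to save it as site-config/singlestore-exporter-patch.yaml and reference it under transformers in your kustomization.yaml:

 

transformers:
- site-config/singlestore-exporter-patch.yaml

 

After rebuilding and applying the deployment manifest, the operator picks up the new exporter parameters.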

Creating a Kubernetes service to access the exporter process

 

namespace=viya
# Look up the UID of the Singlestore cluster that owns the DDL service,
# so the new exporter service can reference the same owner
uid=$(kubectl get svc svc-sas-singlestore-cluster-ddl -o jsonpath='{.metadata.ownerReferences}' -n $namespace | jq -r '.[].uid')

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: sas-singlestore-cluster
    app.kubernetes.io/name: memsql-cluster
    custom: label
  name: svc-sas-singlestore-cluster-exporter
  namespace: $namespace
  ownerReferences:
  - apiVersion: memsql.com/v1alpha1
    controller: true
    kind: MemsqlCluster
    name: sas-singlestore-cluster
    uid: $uid         # Update with ownerReferences UID
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: prometheus
    port: 9104
    protocol: TCP
  selector:
    app.kubernetes.io/instance: sas-singlestore-cluster
    app.kubernetes.io/name: memsql-cluster
    statefulset.kubernetes.io/pod-name: node-sas-singlestore-cluster-master-0
  sessionAffinity: None
  type: ClusterIP
EOF

 

The above step creates a Kubernetes service through which other services on the platform can interact with the exporter process.
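Before moving on, you can check that the exporter answers over HTTPS through the new service, for example with a throwaway curl pod (a quick sanity check; -k skips certificate verification for brevity):

 

kubectl run curl-test -n viya --rm -it --restart=Never --image=curlimages/curl -- \
  curl -ks https://svc-sas-singlestore-cluster-exporter:9104/metrics | head

 

If everything is wired up correctly, this should print the first few Prometheus-formatted metric lines.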

Now that the exporter process is configured to use HTTPS and a Kubernetes service is created to access it, let’s move on to the next step, in which we create the metrics database and a user with the proper permissions.

 

Setting up the user for the metrics database

 

The information collected by the exporter process needs to be stored in a database before it can be visualized in a dashboarding tool like Grafana. This database doesn’t exist by default on the Singlestore cluster and therefore needs to be created.

 

This can be done automatically through what Singlestore calls the monitoring job. You will have to provide the monitoring job with a Singlestore username and password that it can use to create the database. This can be root or another user that has the appropriate permissions. Note that this user will become the owner of the database.

 

Because I like to apply the principle of least privilege, I’m creating the database and user upfront and granting the user the necessary permissions. That way I’m not using a superuser like root in the monitoring job to interact with the metrics database.

 

The SQL statements below need to be executed by a user that has the proper permissions to create a database and grant permissions to users.

 

CREATE DATABASE IF NOT EXISTS metrics;

CREATE USER 'metrics' IDENTIFIED BY '<password>';

GRANT SELECT, CREATE, INSERT, UPDATE, DELETE, EXECUTE, INDEX, ALTER, DROP, CREATE DATABASE, LOCK TABLES, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE ON metrics.* TO 'metrics'@'%';

GRANT CREATE PIPELINE, DROP PIPELINE, ALTER PIPELINE, START PIPELINE, SHOW PIPELINE ON metrics.* TO 'metrics'@'%';
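These statements can be executed with any MySQL-compatible client. A minimal sketch using a temporary client pod against the DDL service (service name and port taken from the data source definition later in this post; connecting as root is just one option):

 

kubectl run sql-client -n viya --rm -it --restart=Never --image=mysql:8.0 -- \
  mysql -h svc-sas-singlestore-cluster-ddl -P 3306 -u root -p

 

Paste the statements above at the resulting SQL prompt.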

 

Now that the metrics database and user are created with the proper permissions, let’s have a look at the setup required for running the monitoring job.

 

Running the monitoring job

 

The monitoring job is a Kubernetes job that deploys a pod to the Kubernetes cluster. That pod runs a container with the tools needed to populate the metrics database and to create the Singlestore pipelines, which connect to the exporter process, extract data in JSON format, and load it into the metrics database.

 

For this job to run, it requires specific permissions on the Kubernetes cluster. The job is given these permissions through a Kubernetes service account, which needs to be created before executing the job. Once that is in place, the job can be launched to create the database and pipelines.

 

Let’s have a look at how to create that service account.

 

Create the necessary service account, role, and role binding

 

Use the commands below to create the required service account, role, and role binding.

 

namespace=viya

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tools
  namespace: $namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: $namespace
  name: tools-role
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - namespaces
    verbs:
      - get
      - list
  - apiGroups: [ "" ]
    resources: [ "pods/exec" ]
    verbs: [ "create" ]
  - apiGroups:
      - apps
    resources:
      - statefulsets
    verbs:
      - get
      - list
  - apiGroups:
      - memsql.com
    resources:
      - '*'
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tools
  namespace: $namespace
subjects:
  - kind: ServiceAccount
    name: tools
roleRef:
  kind: Role
  name: tools-role
  apiGroup: rbac.authorization.k8s.io
EOF

 

Create and start the monitoring job

 

Now that the required parts are in place, the monitoring job can be executed using the command below.

 

namespace=viya

kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: toolbox-start-monitoring
  namespace: $namespace
spec:
  template:
    spec:
      serviceAccountName: tools
      containers:
      - name: toolbox-start-monitoring
        image: singlestore/tools:alma-v1.11.6-1.18.3-dd55b2ba3f38e7b4baef8230386754464ec08c1a
        imagePullPolicy: IfNotPresent
        command: ["sdb-admin",
                  "start-monitoring-kube",
                  "--user=metrics",
                  "--password=<password>",
                  "--collect-event-traces",
                  "--exporter-host=svc-sas-singlestore-cluster-exporter",
                  "--ssl-ca=/etc/memsql/ssl/server-cert.pem",
                  "--yes"
                  ]
      restartPolicy: Never
  backoffLimit: 2
EOF

 

Inspect the log of the pod. A successful run should produce a log similar to the screenshot below.

 

[Screenshot: log output of a successful monitoring job run]
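A sketch of how to pull that log yourself, assuming the job name from the manifest above:

 

kubectl logs -n viya job/toolbox-start-monitoring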

 

Additionally, you can validate that the pipelines responsible for pulling data from the exporter process are running by logging on to the cluster and executing the command shown on the screenshot below.

 

[Screenshot: validating that the monitoring pipelines are running]

 

If you see these three pipelines running, then you are good to go!
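If you prefer checking from a SQL prompt rather than reading the screenshot, the same validation can be done directly in the metrics database (exact pipeline names may vary between Singlestore versions):

 

USE metrics;
SHOW PIPELINES;

 

All three monitoring pipelines should report a state of Running.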

 

Adding the dashboards into SAS Viya Monitoring for Kubernetes

 

The exporter process is configured, a database called metrics has been created, and the pipelines that load the data are up and running. All components are now in place except the dashboards used to visualize the information.

 

Singlestore provides a set of dashboards to visualize metrics. These dashboards are available here.

 

SAS Viya Monitoring for Kubernetes is available on GitHub and is a framework containing a set of components that allow you to monitor your SAS Viya deployment. Among those components is a set of dashboards for monitoring specific parts of the platform, such as SAS Cloud Analytic Services.

 

Wouldn’t it be great if the Singlestore dashboards could be added to this framework to get a complete view of SAS Viya and the Singlestore cluster, which are both part of SAS SpeedyStore?

 

The framework allows you to add user-provided Grafana dashboards. Let’s find out how we can use this to our advantage by adding the dashboards provided by Singlestore.

 

Cloning the GitHub repository

 

The first step is to clone the GitHub repository. The branch option locks the code to a specific version of SAS Viya Monitoring for Kubernetes.

 

deploy_dir=<directory>
cd $deploy_dir
git clone https://github.com/sassoftware/viya4-monitoring-kubernetes.git --branch 1.2.39

 

Setting the necessary environment variables

 

Once the code has been cloned the next step is to configure the deployment by adding environment variables. These environment variables control specific aspects of the deployment.

 

mkdir -p $deploy_dir/viya4-monitoring-user
export USER_DIR=$deploy_dir/viya4-monitoring-user
cat << EOF > $USER_DIR/user.env
# Enables tolerations and pod affinity to enable the monitoring
# components to participate in the SAS Viya workload node placement strategy
MON_NODE_PLACEMENT_ENABLE=true

# Namespace of NGINX ingress controller (if applicable)
NGINX_NS=ingress-nginx

# Name of NGINX ingress controller (if applicable)
NGINX_SVCNAME=ingress-nginx-controller

AUTOGENERATE_INGRESS=true
BASE_DOMAIN=<fqdn>
ROUTING=path
INGRESS_CERT=<ingress-certificate>
INGRESS_KEY=<ingress-certificate-key>
EOF

 

Let’s review them in a bit more detail:

 

  • MON_NODE_PLACEMENT_ENABLE: if you are using SAS Viya workload node placement, this option makes sure that the necessary tolerations are added to the deployment. Set it to false if you are not using workload placement.
  • AUTOGENERATE_INGRESS: when set to true, this feature automatically generates all the required ingress rules based on the input provided through the other environment variables. See the documentation for more information on this feature.
  • BASE_DOMAIN: the host domain used for constructing URLs to the web applications on this cluster.
  • ROUTING: set this to either path or host. Setting it to path generates path-based ingress rules for accessing applications on this cluster (see the example after this list).
  • INGRESS_CERT: the path to the ingress certificate, including the filename of the certificate. This is used to create a secret that contains your certificate.
  • INGRESS_KEY: the path to the ingress certificate key, including the filename of the key. This is used to create a secret that contains your certificate and key.
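For example, with ROUTING=path and a hypothetical BASE_DOMAIN of mycluster.example.com, Grafana would be published at https://mycluster.example.com/grafana, whereas host-based routing would publish it at https://grafana.mycluster.example.com.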

Now that the environment variables are set, let’s look at getting our hands on the Singlestore Grafana dashboards.

 

Download Singlestore dashboards

 

As mentioned earlier in the blog, Singlestore provides a set of dashboards which can be downloaded from their website. To deploy these dashboards as a part of the SAS Viya monitoring for Kubernetes framework, the dashboards need to be put in a specific location. The process is described in detail here.

 

rm -rf $USER_DIR/monitoring/dashboards/tmp
mkdir -p $USER_DIR/monitoring/dashboards/tmp 
curl https://assets.contentstack.io/v3/assets/bltac01ee6daa3a1e14/blta06deeec3a96070e/k8s_dashboards_85_and_later.zip -o $USER_DIR/monitoring/dashboards/tmp/k8s_dashboards_85_and_later.zip
unzip $USER_DIR/monitoring/dashboards/tmp/k8s_dashboards_85_and_later.zip -d $USER_DIR/monitoring/dashboards/tmp
mv $USER_DIR/monitoring/dashboards/tmp/k8s_dashboards_*/*.json $USER_DIR/monitoring/dashboards/tmp

 

The above code creates the required location, downloads the dashboards, unzips them, and moves them to a temporary location, since the dashboards need to be modified before they can be deployed.

 

Preparing dashboards for deployment

 

Once the dashboards are unpacked from the zip file, you will notice that the filenames contain spaces and uppercase letters. The framework creates Kubernetes configmaps, loads the content of the dashboards into them, and uses the filenames as the configmap names.

 

There are specific rules for the characters that can be used in the name of a Kubernetes configmap: names are not allowed to contain spaces or uppercase characters. Therefore, these file names need to be modified using the code below.
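You can see the constraint for yourself by trying to create a configmap with an invalid name (a hypothetical example; Kubernetes enforces RFC 1123 subdomain names):

 

kubectl create configmap "Cluster View" -n monitoring --from-literal=demo=1
# fails with an error similar to:
# Invalid value: "Cluster View": a lowercase RFC 1123 subdomain must consist
# of lower case alphanumeric characters, '-' or '.'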

 

As a bonus I’m adding a Singlestore tag to the dashboards and modifying the title of these dashboards to include the prefix Singlestore. That makes it easier to find these when opening them in Grafana.

 

cd $USER_DIR/monitoring/dashboards/tmp

#remove spaces
for file in *.json
do
  mv -- "$file" "${file// /}"
done

#make lowercase
for f in *; do 
  if [ -f "$f" ]; then # Check if it's a regular file
    mv "$f" "${f,,}"
  fi
done

#add tag to dashboards
for file in *.json
do
  cat $file | jq '.tags = ["singlestore"]' > $USER_DIR/monitoring/dashboards/tmp/tmp_$file
done

for file in tmp*.json
do
  name=$(echo $file | cut -f 2 -d '_')
  cat $file | jq '.title = "Singlestore \(.title)"' > $USER_DIR/monitoring/dashboards/$name
done

#clean up
rm -rf $USER_DIR/monitoring/dashboards/tmp

 

Deploy Grafana

 

Everything is now ready, and Grafana can be deployed into the Kubernetes cluster. This will also include all the Singlestore dashboards that were added in the previous step.

 

Pretty cool, huh!

 

cd $deploy_dir/viya4-monitoring-kubernetes
monitoring/bin/deploy_monitoring_cluster.sh

monitoring/bin/deploy_monitoring_viya.sh

 

The next step is to define the data source for our Singlestore dashboards. The SAS Viya Monitoring for Kubernetes framework needs to be deployed first, as there currently is no way to provide additional data sources as part of the deployment.

 

Add additional data source to Grafana

 

Almost there! For the Singlestore dashboards to work, we need to define a data source; in this case, a MySQL data source called monitoring. Review the information below and adjust it to your own situation.

 

Make sure that the url, database, user and password match your specific situation.

cat << EOF > /tmp/grafana-datasource-mysql.yaml
apiVersion: 1
deleteDatasources:
- name: monitoring
prune: true
datasources:
  - name: monitoring
    type: mysql
    url: svc-sas-singlestore-cluster-ddl.viya.svc.cluster.local:3306
    user: metrics
    jsonData:
      database: metrics
      maxOpenConns: 100
      maxIdleConns: 100
      maxIdleConnsAuto: true
      connMaxLifetime: 14400
    secureJsonData:
      password: <password of database user>
EOF

kubectl create cm -n monitoring grafana-datasource-mysql --from-file "/tmp/grafana-datasource-mysql.yaml"
kubectl label cm -n monitoring grafana-datasource-mysql grafana_datasource=1 sas.com/monitoring-base=kube-viya-monitoring

 

Restart Grafana

 

The data source has been defined, but it’s not yet available to Grafana. The only thing left to do is restart Grafana using the commands below.

 

kubectl delete pods -n monitoring -l "app.kubernetes.io/instance=v4m-prometheus-operator" -l "app.kubernetes.io/name=grafana"
kubectl -n monitoring wait pods --selector "app.kubernetes.io/name=grafana" --for condition=Ready --timeout=2m
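Once the Grafana pod is Ready again, you can confirm that the provisioning sidecar picked up the new data source (a sketch; the sidecar container name grafana-sc-datasources is an assumption based on the standard Grafana Helm chart):

 

kubectl logs -n monitoring -l app.kubernetes.io/name=grafana -c grafana-sc-datasources | grep -i mysql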

 

Conclusion

 

Identifying potential issues by analyzing trends, and acting on those insights, makes the life of a SAS administrator easier. It helps you operate your environment more efficiently.

 

SAS Viya Monitoring for Kubernetes provides the observability tools to monitor the SAS Viya platform.

 

With the help of this blog, you can extend that framework with the additional Singlestore dashboards. This helps realize a complete view of both SAS Viya and the Singlestore cluster when deploying SAS SpeedyStore.

 

References

 

Monitoring your Kubernetes Cluster

SAS Viya Monitoring for Kubernetes - GitHub

SAS Viya Monitoring for Kubernetes - SAS Help Center
