On 14 June 2022, the team behind the SAS Viya Monitoring for Kubernetes project (let's call it v4m for short) released version 1.2.0. It's a major release with several changes over v4m 1.1.8, described on the version 1.2.0 release page. Two changes I want to highlight are the replacement of Open Distro for Elasticsearch (ODfE) and Kibana with OpenSearch and OpenSearch Dashboards in the logging solution, and the knock-on changes that brings to the customization files and deployment script.
You can read a nice explanation here, but in short: no new versions of ODfE are being released, while OpenSearch is a fork of ElasticSearch that is actively developed and supported by Amazon, so it makes sense for the v4m project to switch to it. And now it has. To an end user the two look and behave similarly, and the user experience transition is smooth. The development team have documented Differences between Open Distro for Elasticsearch and OpenSearch in great detail, if it's important for you to know. The team have also gone to considerable effort to make the upgrade process smooth and simple.
However, if it's your job to deploy v4m, I'd like to share a couple of things I learned while updating the code we maintain to deploy SAS Viya Monitoring for Kubernetes (or v4m) in our GEL workshop environments. Perhaps they will save someone a bit of head-scratching or a call to support.
For context, there are several ways to deploy SAS Viya Monitoring for Kubernetes (v4m), and I really only ever deploy it in one of those ways - I maintain a script which, when run on a new set of host machines, clones the v4m project from GitHub, creates the customization files containing our preferred settings, and calls the v4m project's monitoring and logging deployment scripts. If you deploy SAS Viya Monitoring for Kubernetes as part of using the SAS Viya 4 Deployment project, I expect all this should be taken care of for you. If you deploy it into Red Hat OpenShift, I'm not sure how much the changes below apply to you, as I have barely any experience of OpenShift.
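For the curious, here is a minimal sketch of what that script does. The customization directory shown is just what we happen to use, and the exact script names and options are best checked against the project's own documentation:

# Minimal sketch of our deployment flow; paths shown are illustrative
git clone https://github.com/sassoftware/viya4-monitoring-kubernetes.git
cd viya4-monitoring-kubernetes

# Tell the v4m scripts where our customization files live
export USER_DIR=~/gel-v4m-customizations

# (our script writes user.env and the user-values-*.yaml files into ${USER_DIR}/logging here)

# Deploy the monitoring solution, then the logging solution (v4m 1.2.0 script name)
monitoring/bin/deploy_monitoring_cluster.sh
logging/bin/deploy_logging.sh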
Bear in mind that the v4m project has two main solutions (or modules, or stacks - the terminology varies a bit): monitoring and logging. This post concentrates on the logging solution.
The table below lists the logging solution customization files our script creates just before it deploys v4m versions 1.1.8 and 1.2.0. In this table, ${USER_DIR} is a shell variable containing the location of our v4m customization files.
v4m 1.1.8 | v4m 1.2.0 | Description |
---|---|---|
${USER_DIR}/logging/user.env | ${USER_DIR}/logging/user.env | We use this to specify the logging namespace name, log retention period, the admin password, the logadm password, the nginx namespace and service name. There are no important changes to this file at this release. |
(none) | ${USER_DIR}/logging/user-values-osd.yaml | New for v4m 1.2.0, we use this to specify the ingress configuration for OpenSearch Dashboards. |
${USER_DIR}/logging/user-values-elasticsearch-open.yaml | ${USER_DIR}/logging/user-values-opensearch.yaml | Note: this configuration file's name changed for version 1.2.0. We use this to specify ElasticSearch or OpenSearch's log data PVC size, and ingress configuration. For v4m 1.1.8, this file covered the ingresses for both ElasticSearch and Kibana. For v4m 1.2.0 this file only covers the ingress for OpenSearch. |
${USER_DIR}/logging/user-values-es-exporter.yaml | ${USER_DIR}/logging/user-values-es-exporter.yaml | We use this only to specify an image pull policy. Note: despite switching from ElasticSearch to OpenSearch, the two-letter abbreviation in this filename remains 'es'. |
There are several ways in which you might customize your deployment of the logging solution in v4m; the table above just represents how we needed and wanted to customize it for our GEL workshop students' hands-on environments in RACE.
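As an aside, our script writes each of these files with tee and a heredoc, so that shell expansions such as $(hostname -f) and ${_logDataSizePerNodeGiB} are resolved at the moment the file is created. Here is a minimal sketch using the es-exporter file from the table above; the image.pullPolicy key path is my reading of that chart's values layout, so treat it as an assumption:

# Sketch: write a customization file, expanding shell variables as it is created
mkdir -p ${USER_DIR}/logging

tee ${USER_DIR}/logging/user-values-es-exporter.yaml > /dev/null << EOF
# We only use this file to set an image pull policy (key path assumed)
image:
  pullPolicy: "IfNotPresent"
EOF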
SAS Viya Monitoring for Kubernetes uses Helm for deployment.
The YAML files used to configure the logging and monitoring solutions in SAS Viya Monitoring for Kubernetes contain values that override defaults (or provide values where there is no default) in the underlying Helm charts. Therefore we sometimes see the absence of a value in a configuration file as meaning 'use the default'. It's a somewhat complex topic that, candidly, I'm not an expert on.
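If you want to see which defaults a particular user-values file is overriding, Helm can show you a chart's default values. For example, assuming v4m 1.2.0 uses the OpenSearch project's published Helm chart (check the v4m sources if this matters to you):

# Add the OpenSearch project's chart repository and inspect the chart's defaults;
# any key we leave out of user-values-opensearch.yaml falls back to these values
helm repo add opensearch https://opensearch-project.github.io/helm-charts/
helm repo update
helm show values opensearch/opensearch | less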
However, I learned that whether we are using Transport Layer Security (TLS, i.e. the HTTPS protocol) or not (plain HTTP), HTTPS must be specified as the backend protocol for the ElasticSearch and OpenSearch components of the logging solution. They will not work otherwise.
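You can see this for yourself from inside the cluster. In the sketch below, the 'logging' namespace is an assumption (ours is set in user.env), and the service name is looked up rather than hard-coded because it differs between releases:

# Find the ElasticSearch/OpenSearch service in the logging namespace
OS_SVC=$(kubectl -n logging get svc -o name | grep -i search | grep -vi dash | head -1)

# Port-forward to it in the background (9200 is the usual REST API port)
kubectl -n logging port-forward "${OS_SVC}" 9200:9200 &
sleep 3

# Plain HTTP either fails or is told that TLS is required...
curl -s http://localhost:9200/ || echo "plain HTTP refused"

# ...whereas HTTPS (ignoring the self-signed certificate) responds, albeit asking for credentials
curl -sk https://localhost:9200/
echo

kill $!   # stop the background port-forward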
Here's how that worked out for our setup.
In the deployment configuration files we created for v4m 1.1.8, ${USER_DIR}/logging/user-values-elasticsearch-open.yaml covered the ingresses for both ElasticSearch and Kibana. As part of this, we specified that HTTPS should be used for ElasticSearch, but we did not specify a protocol for Kibana, which results in Kibana not using TLS and being served over plain HTTP:
elasticsearch:
  imagePullPolicy: "IfNotPresent"
  data:
    persistence:
      size: ${_logDataSizePerNodeGiB}Gi
  client:
    ingress:
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      enabled: true
      hosts:
        - elasticsearch.$(hostname -f)
kibana:
  imagePullPolicy: "IfNotPresent"
  service:
    type: ClusterIP
    nodePort: null
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx
    enabled: true
    hosts:
      - kibana.$(hostname -f)
Of course, you absolutely can and probably should use TLS for Kibana - but in our low-security classroom deployments, we hadn't had any pressing need to do so.
In the deployment configuration files we created for v4m 1.2.0, ${USER_DIR}/logging/user-values-osd.yaml covers the ingress for OpenSearch Dashboards (the equivalent of Kibana), and ${USER_DIR}/logging/user-values-opensearch.yaml now only covers OpenSearch (the equivalent of ElasticSearch).
Note: now that each of these files configures only one product, the 'top level' YAML keys ('elasticsearch' and 'kibana') from the earlier user-values-elasticsearch-open.yaml are no longer necessary, and the keys that used to sit beneath them are now at the top level.
user-values-opensearch.yaml was easy enough:
image:
  pullPolicy: "IfNotPresent"
persistence:
  size: ${_logDataSizePerNodeGiB}Gi
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  hosts:
    - opensearch.$(hostname -f)
However, having seen HTTPS specified in the ElasticSearch configuration file for v4m 1.1.8, I initially created a user-values-osd.yaml which similarly specified HTTPS for OpenSearch Dashboards, even though we had not otherwise configured TLS for our logging or monitoring stacks:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  hosts:
    - host: osd.$(hostname -f)
      paths:
        - path: /
When I tried browsing to osd.the-relevant-ingress-controller-hostname.sas.com, I got a 502 Bad Gateway error.
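If you hit the same thing, the NGINX ingress controller's logs are the quickest way to see what is happening; in our case it was attempting a TLS handshake with a backend that only speaks plain HTTP. A rough sketch of the checks I'd run now (the logging and ingress-nginx namespace and deployment names are assumptions about a typical installation):

# Confirm the OpenSearch Dashboards ingress exists and points at the expected service
kubectl -n logging get ingress

# Look for handshake/upstream errors in the ingress controller logs around the time of the 502
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller --tail=200 | grep -iE 'osd|handshake|upstream'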
The solution turned out to be simple, though it was not obvious to me at the time (in hindsight it is a little more obvious): we haven't configured TLS (HTTPS) for our logging or monitoring stacks, so we should access them over HTTP, and our user-values-osd.yaml should tell the ingress to use HTTP, like this:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
  hosts:
    - host: osd.$(hostname -f)
      paths:
        - path: /
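With the corrected file in place, re-running the logging deployment script (which in our experience can safely be re-run) applied the change, and a quick check confirms the ingress now answers instead of returning a 502. The customization directory and hostname below are just our pattern:

# Re-deploy logging with the corrected customization files, then check the ingress
cd viya4-monitoring-kubernetes
USER_DIR=~/gel-v4m-customizations logging/bin/deploy_logging.sh

# Expect a 200 or a redirect to the login page, rather than a 502
curl -s -o /dev/null -w '%{http_code}\n' http://osd.$(hostname -f)/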
Thanks as ever to my colleague Greg Smith for spotting my error.
The learning point is to configure your ingress to always use HTTPS for ElasticSearch or OpenSearch, as they only work using TLS for internal interprocess communication, but to configure external access to Kibana or OpenSearch Dashboards with HTTPS or HTTP according to whether or not you have set up TLS-enabled monitoring and logging. And when you haven't, be aware of it!
The final change to the logging stack at 1.2.0 is that if you used to call logging/bin/deploy_logging_open.sh (where the 'open' referred to Open Distro for Elasticsearch) from within the viya4-monitoring-kubernetes project directory, you should now call logging/bin/deploy_logging.sh instead. Deployment on Red Hat OpenShift still uses its own separate deployment scripts, which I can't comment on.
I did not need to make any modifications to the part of our GEL workshop setup script that deploys the v4m monitoring solution when we updated from v4m 1.1.8 to v4m 1.2.0; that carried on working just the same as before.
I hope sharing this helps someone else avoid the simple mistakes I made, and thereby saves them a bit of time. Overall, the process was not complicated at all, and I like the new version and its clean new look. See you next time!
Find more articles from SAS Global Enablement and Learning here.