
Planning the URL names of your SAS Viya deployment – part 2


This article is the direct follow-up to the first article, where I explained how to choose the {{ NAME-OF-INGRESS-HOST }} value in your kustomization.yaml as part of your SAS Viya (2020.1 and later) deployment.

 

I also showed how to deal with the need to access an additional SAS Viya deployment running in the same Kubernetes cluster.

 

Now, because you may also want to deploy and expose the monitoring and logging tools, as well as TCP services such as CAS and SAS/CONNECT, we will explain how to do it and what is required (here is a clue: it starts with "D" 😊).

 

If you haven’t already read it, please start with the first article before reading this one.

 

How to access and expose the monitoring tools?

Here is a quick reminder of how Kubernetes Ingresses work.

 

As explained in this article, Ingress rules can be based on paths and/or hostnames, and the selected route is determined by the content of the HTTP URL, for example:

 

 

rp_1_ingressfulestypes.png

 

That's what's cool about Ingresses: they understand the URL structure, and you can build rules to address various scenarios. So when you deploy the SAS Viya Monitoring for Kubernetes tools, by default the deployment creates Ingress rules using what is called "name-based virtual hosting", as illustrated below in the official Kubernetes documentation.

 

rp_2_ingressK8Sdiagram.png


 

The Ingress rules tell the backing load balancer to route requests based on the "Host" header.
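
For illustration, here is a minimal sketch of a host-based Ingress rule, assuming a hypothetical Grafana service (the names, namespace and port are illustrative placeholders, not the exact resources created by the monitoring deployment):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress          # hypothetical name
  namespace: monitoring          # hypothetical namespace
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.example.com    # requests carrying this Host header...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana        # ...are routed to this backing service
            port:
              number: 3000
---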

 

What it means for our deployment is that when you deploy the SAS Viya Monitoring tools, several Ingresses with different hostname-based URLs will be created: kibana.example.com, grafana.example.com, etc.

 

Here is how it looks in Lens:

 

rp_3_lens_figure2.png

 

 

These URLs resolve to the very same external IP address as the one used for the SAS Viya web applications.

But of course, if you want to be able to serve these monitoring applications (Grafana, Prometheus, Kibana) outside of the cluster, you will need yet more DNS aliases.

 

We can add these new external access routes to our diagram:

 

Figure 3 – Access to logging, monitoring tools and SAS Web Apps in 2 namespaces

 

Note: if you are automating your SAS Viya deployment with the SAS Viya Deployment GitHub tool, you can specify, in your vars.yml file, the base domain under which you will create your DNS names (with the V4M_BASE_DOMAIN variable).
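
As a sketch, assuming the SAS Viya Deployment GitHub tool and a made-up domain, the relevant vars.yml entry could look like this:

# Hypothetical excerpt from vars.yml (the domain is a placeholder, adjust to your own DNS zone)
V4M_BASE_DOMAIN: viya.giga.com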

 

 

Wildcard DNS alias

 

As we’ve seen, our diagram is getting busy… we may have many DNS entries to add for our deployment and the associated tools, and the customer IT team might get tired of creating a new DNS entry for you each time.

 

An option, in such a case, is to create a wildcard DNS record to support as many Viya environments as needed in the same Kubernetes cluster.

 

For example, it could be something like “*.viya.giga.com” that points to the Ingress Controller external IP address.

 

So all the URLs, no matter how they start (dev.viya.giga.com, lab.viya.giga.com, grafana-prod.viya.giga.com, kibanadev.viya.giga.com, etc.), would send the requests to the same external IP. It is then up to the Ingress Controller to route the requests to the proper namespace and services (just look at the diagram again if you’re lost 😊).

 

Here is an example, where we use a corporate DNS service GUI to simply add a "wildcard" DNS alias pointing to the Azure DNS name.

 

rp_5_SASDNSNAME.png

 

Note: if the customer wants to use their own TLS certificates, using wildcard DNS aliases also means that they will have to provide wildcard certificates, which can be used with multiple sub-domains of a domain. Compared with conventional certificates, a single wildcard certificate can be cheaper and more convenient than one certificate per sub-domain, but the customer might not have planned to have one in place for your deployment.

 

How to access and expose CAS and SAS/CONNECT?

Until now, we have only talked about accessing web applications through HTTP(S).

 

But how does it work if:

 

  • I want to allow my data scientists to connect to the CAS binary port from their "SWAT" Python clients? Which hostname should they use to connect?
  • I have SAS 9 users who want to use SAS/CONNECT to reach the SAS Viya (2020.1 and later) environment?

 

According to the NGINX documentation, it is possible to configure the Ingress Controller to route TCP requests (see the sketch below), but the usual and simplest way to expose a TCP service to the outside is to create a new load-balancer service.
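
For reference, here is a minimal sketch of the ingress-nginx TCP routing approach, assuming a hypothetical "viya" namespace and CAS service name; the NGINX Ingress Controller must also be started with the --tcp-services-configmap flag pointing to this ConfigMap:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # map external port 5570 to port 5570 of the CAS service in the "viya" namespace
  "5570": "viya/sas-cas-server-default-client:5570"
---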

 

For example, you can create this new "load-balancer" service for CAS by applying the following manifest:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: default
  name: sas-cas-server-default-lb
spec:
  ports:
  - name: cas-cal
    port: 5570
    protocol: TCP
    targetPort: 5570
  - name: cas-gc
    port: 5571
    protocol: TCP
    targetPort: 5571
  selector:
    casoperator.sas.com/controller-active: "1"
    casoperator.sas.com/node-type: controller
    casoperator.sas.com/server: default
  type: "LoadBalancer"
  loadBalancerSourceRanges:
  - 10.123.0.0/16 # Headquarters (example client network)
  - 203.0.113.224/27 # UK office (example client network)
---
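
To apply it, you would typically save this manifest to a file and apply it in your SAS Viya namespace, for example with "kubectl apply -n name-of-viya-namespace -f cas-loadbalancer-service.yaml" (the file name is just an illustration).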

 

When the manifest is applied, an additional load-balancer service is created, another external IP address is dynamically provisioned for CAS by the cloud provider, and that IP address is attached to the cloud load-balancer.

 

At the physical level, the new external IP will send requests on port 5570 to any of the nodes on a specific NodePort to reach the CAS server service, which will then route the requests to the active CAS controller pod (thanks to the "selector" section of the service manifest).

 

You can also notice, with the "loadBalancerSourceRanges" specification, that we can filter the source client IP addresses to only allow specific ranges of IP addresses to contact the CAS service.

 

If you use the SAS Viya Deployment GitHub tool to automate the SAS Viya deployment and set the Ansible variable V4_CFG_CAS_ENABLE_LOADBALANCER to "true", the same technique is applied and the additional LoadBalancer service is created for you.

 

Here is the result in Lens for CAS:

 

rp_7_lens_figure3.png

 

It works exactly the same way for SAS/CONNECT. You will have another LoadBalancer service, which generates another external IP address.

 

You can either create the service manifest yourself or, if you use the SAS Viya Deployment GitHub tool, set the Ansible variable V4_CFG_CONNECT_ENABLE_LOADBALANCER to true, as shown in the sketch below.
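
As a sketch (showing only these two variables, with the rest of vars.yml omitted), enabling both LoadBalancer services in the vars.yml of the SAS Viya Deployment GitHub tool could look like this:

# Hypothetical excerpt from vars.yml
V4_CFG_CAS_ENABLE_LOADBALANCER: true       # expose the CAS binary ports (5570/5571) through a LoadBalancer service
V4_CFG_CONNECT_ENABLE_LOADBALANCER: true   # expose SAS/CONNECT through a LoadBalancer service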

 

So finally, you end up with three LoadBalancer services and three associated IP addresses attached to the cloud load-balancer: one for NGINX (web applications), one for CAS, and one for SAS/CONNECT.

 

Here is a view of the Loadbalancer services from the Azure Portal:

 

rp_8_loadbalancer-view.png

 

So, in both cases we get a new external IP address that we can use to reach our TCP services (CAS and SAS/CONNECT). But, again, who wants to use an IP address to contact a remote service?

 

So yes, we need yet more DNS aliases for these as well.

 

Finally, if we add the CAS and SAS/CONNECT access to our diagram, it REALLY gets busy 😊.

 

It will look like this:

 

Figure 4 – Access to CAS, CONNECT, logging, monitoring tools and SAS Web Apps in 2 namespaces

 

 

Conclusion


These network topics are complex and new. I hope this article's explanations and associated diagrams helped a little bit in understanding them in the context of a SAS Viya deployment.

 

The key “takeaway” here is that you need to think about the URLs that the end users will use to connect to the SAS Viya applications, and about the associated DNS requirements.

 

If you are preparing a deployment, make sure you have a meeting planned with your customer where you can discuss the DNS requirements, based on the number and types of external access points that are needed for your SAS Viya implementation.

 

While in the BareOS world it was possible to access the SAS application simply with the server name, that no longer works in the Kubernetes world (where your web applications run in pods, accessed through services, themselves accessed through Ingresses and/or external load-balancers).

 

Thanks for reading!

 

Find more articles from SAS Global Enablement and Learning here.
