
The Azure CNI Overlay and SAS Viya


In April 2023, Microsoft announced the general availability of the Azure CNI Overlay in Azure Kubernetes Service as an improvement over the standard Azure CNI. Then in January 2024, SAS took steps to ensure support for the Azure CNI Overlay with SAS Viya, going so far as to backport that support to the LTS-2023.10 release. In general, this technology underpins the infrastructure at a level lower than SAS documents in Viya's requirements. In other words, it's a decision a site can make without special guidance from SAS beyond basics like ensuring a sufficient range of IP addresses, which is why SAS doesn't offer hard requirements or direction on implementing the Azure CNI Overlay in its documentation.

 

Since networking expertise is often critical to a successful deployment, let's take a quick look at what the Azure CNI Overlay offers and what that means for Viya. We'll also touch on configuring Azure Kubernetes Service to implement the Azure CNI Overlay as driven by the SAS Infrastructure-as-Code project, viya4-iac-azure.

 

Kubernetes networking

 

Without getting carried away with a lot of detail, suffice it to say that Kubernetes provides its own full-fledged virtualized networking functionality and management model. It essentially operates its own DNS for services and pods. The idea is that each pod in a cluster gets its own unique cluster-wide IP address. This ensures pods can communicate with all other pods running on any node without needing network address translation. Services are networking interfaces for reaching pods. The ClusterIP service type is used for communication inside the cluster, while NodePort and LoadBalancer services allow communication from outside the cluster.
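To make those service types concrete, here's a minimal sketch using the Terraform kubernetes provider (keeping with the Terraform/HCL approach used later in this article). The resource names, labels, and ports are hypothetical placeholders, and it assumes an existing kubeconfig for the cluster.

provider "kubernetes" {
  config_path = "~/.kube/config"   # assumes an existing kubeconfig for the cluster
}

# A ClusterIP service: reachable only from inside the cluster.
resource "kubernetes_service_v1" "internal" {
  metadata {
    name = "example-internal"
  }
  spec {
    type = "ClusterIP"
    selector = {
      app = "example-app"          # hypothetical pod label
    }
    port {
      port        = 80             # port exposed by the service
      target_port = 8080           # port the pods listen on
    }
  }
}

# A LoadBalancer service: exposed outside the cluster (in AKS, this
# provisions an Azure load balancer with a routable IP address).
resource "kubernetes_service_v1" "external" {
  metadata {
    name = "example-external"
  }
  spec {
    type = "LoadBalancer"
    selector = {
      app = "example-app"
    }
    port {
      port        = 443
      target_port = 8443
    }
  }
}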

 

It's the interaction with the world outside the cluster that we want to look at more closely here. There are choices available when implementing a network model offered through the Container Network Interface.

 

Container Network Interface

 

The Container Network Interface (CNI) is often referred to in Kubernetes deployments, but it's not a Kubernetes-specific technology. It's vendor-neutral and can be used to extend networking capability for any containerized workload on any network. The CNI acts as a framework that defines a basic flow and configuration format for network operations, which the chosen CNI plug-in(s) then implement.

 

Microsoft offers its Azure CNI as an option for use with Azure Kubernetes Service. Before we dig into that, let's look at the Kubernetes default networking components.

 

Kubenet

 

Kubenet is the default networking model (or plugin) used by Kubernetes, and it's the default selected by managed cloud offerings like Azure Kubernetes Service as well. With kubenet in place, AKS creates an Azure virtual network (VNET) and subnet for you, and the nodes are assigned IP addresses from the VNET. Pods receive IP addresses from a different address space than the VNET. Network address translation (NAT) is set up so pods can reach resources on the Azure VNET (like external databases), meaning the pod's source IP address is translated to the node's IP address for communication outside the cluster. This approach helps reduce the number of IP addresses needed in the VNET.
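As a rough illustration, here's what a kubenet-based AKS cluster might look like if you declared it directly with Terraform's azurerm provider. This is a minimal sketch only; the names, region, VM size, and CIDR ranges are hypothetical, and the IAC project discussed later handles all of this for you.

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "rg-aks-cni-example"      # hypothetical resource group
  location = "eastus"
}

resource "azurerm_kubernetes_cluster" "kubenet_example" {
  name                = "aks-kubenet-example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "akskubenet"

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D4s_v5"
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin = "kubenet"         # nodes get VNET IPs...
    pod_cidr       = "10.244.0.0/16"   # ...while pods draw from this separate range
    service_cidr   = "10.0.0.0/16"
    dns_service_ip = "10.0.0.10"
  }
}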

 

Kubenet is a good start, but it's also pretty basic. For example, kubenet doesn't enforce any network policies - meaning that pods are reachable from any source - deferring that responsibility to a CNI plug-in. Also, network address translation adds another hop of latency into the mix, which might not be desirable for high-performance applications.

 

Azure CNI

 

The Azure CNI is an optional plugin we can employ. Indeed, some SAS offerings, like SAS Viya with SingleStore, require the use of the Azure CNI in AKS due to the underlying demands of the SingleStore database.

 

With the Azure CNI, the IP addresses for both nodes and pods are assigned from the VNET. By default, each node is configured with its primary IP address plus an additional 30 IP addresses for pods scheduled to run on that node. This means the AKS cluster is limited by the size of the available IP address range, so additional planning is required to ensure the VNET and its subnet can accommodate the total number of nodes and pods the cluster might scale up to run. With large deployments, this kind of network planning is likely needed to ensure throughput rates anyway.
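To see why planning matters, here's a small back-of-the-envelope calculation expressed as Terraform locals. The node and pod counts are hypothetical; substitute your own peak sizing.

locals {
  max_nodes         = 24   # hypothetical peak node count after autoscaling
  max_pods_per_node = 30   # Azure CNI's default extra IP allocation per node

  # Each node consumes one IP for itself plus one per potential pod.
  required_subnet_ips = local.max_nodes * (1 + local.max_pods_per_node)
}

output "required_subnet_ips" {
  # 24 * 31 = 744 addresses before any headroom for upgrades or scale-out,
  # so a /22 subnet (1,024 addresses) would be a comfortable fit here.
  value = local.required_subnet_ips
}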

 

Removing the NAT layer that kubenet utilizes means that the Azure CNI performs network operations faster and more efficiently. It also helps with transparency, since traffic from a pod can be identified by the pod's own VNET IP address (not obscured behind NAT). Another nice feature of the Azure CNI is that it supports Azure network policies, so you can define rules about the kind of traffic that can reach the pods in your cluster.
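Note that the policies themselves are authored as standard Kubernetes NetworkPolicy objects; Azure (or Calico) simply enforces them. As a hedged sketch, the following Terraform resource would restrict a set of hypothetical application pods so that only traffic from an ingress namespace can reach them. The namespace names, labels, and port are placeholders, and it assumes the same kubeconfig-based kubernetes provider as the earlier service sketch.

resource "kubernetes_network_policy_v1" "allow_from_ingress_only" {
  metadata {
    name      = "allow-from-ingress-only"
    namespace = "example-app"                      # hypothetical namespace
  }
  spec {
    pod_selector {
      match_labels = {
        app = "example-app"                        # hypothetical pod label
      }
    }
    policy_types = ["Ingress"]

    ingress {
      from {
        namespace_selector {
          match_labels = {
            "kubernetes.io/metadata.name" = "ingress-nginx"   # assumed ingress namespace
          }
        }
      }
      ports {
        port     = "8080"
        protocol = "TCP"
      }
    }
  }
}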

 

There are a couple of significant challenges, however. Proper planning is required to prevent the exhaustion of the IP address space. And the use of Azure NetApp Files for storage can be problematic in some cases, since it only allows up to 1,000 IP addresses in a VNET and some Kubernetes deployments might require more than that.

 

Azure CNI Overlay

 

The Azure CNI Overlay is intended to address the shortcomings of the standard Azure CNI while keeping most of its benefits, too. Its main contribution is to reduce the size of the IP address space required in the VNET.

 

As with kubenet, the cluster nodes are assigned IP addresses from the VNET, but the pods get theirs from a different private range. A separate routing domain in the Azure networking stack allows for direct communication between pods. NAT is again used to translate a pod's private IP address to the node's VNET address when communicating outside the cluster.
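In Terraform terms, switching the earlier kubenet sketch over to the Azure CNI Overlay is mostly a change to the network_profile block. A hedged sketch follows, reusing the hypothetical resource group from the kubenet example and keeping the same placeholder sizing.

resource "azurerm_kubernetes_cluster" "overlay_example" {
  name                = "aks-overlay-example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "aksoverlay"

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D4s_v5"
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin      = "azure"            # Azure CNI...
    network_plugin_mode = "overlay"          # ...running in overlay mode
    network_policy      = "azure"            # network policy support is retained
    pod_cidr            = "192.168.0.0/16"   # private overlay range for pods
    service_cidr        = "10.0.0.0/16"
    dns_service_ip      = "10.0.0.10"
  }
}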

 

Unlike kubenet, though, Azure network policies are still supported, so a proper level of control over externally-sourced communication to pods in the cluster can be maintained.

 

Basically, Azure CNI Overlay effectively eliminates the primary cause of IP address exhaustion in the VNET and subnet for large AKS implementations while offering functionality beyond kubenet. This is a very good thing for clusters running numerous projects and supporting large numbers of users or other activities in AKS.

 

CNI Advice for SAS Viya

 

Officially, SAS doesn't offer a recommendation on the networking model or approach chosen for AKS. We simply require fundamental attributes in line with Viya's operations, like adequate IP address space for the number of pods and nodes, as well as latency that's low enough to keep analytics and interface performance meeting expectations. Ultimately, the goal is to ensure sufficient I/O throughput and performance. To that end, you can use kubenet, the Azure CNI, or the Azure CNI Overlay as best suits the needs of your site. You could also choose another CNI plugin entirely, assuming it supports the underlying fundamentals that Viya relies on.

 

Another item to understand is that SAS doesn't exhaustively test different networking models in Azure, in other managed cloud providers, or in on-premises installations of upstream, open-source Kubernetes. So, be certain to refer to the documented requirements and capacity expectations when planning a SAS Viya implementation at your site and ensure they can be met by your chosen approach.

 

Ideally, you want to choose your CNI approach before standing up the cluster in AKS. Once Viya is deployed to the cluster, changing the underlying CNI is not supported; doing so would require a fresh re-deployment of the software (and associated backup/migration of production content).

 

Deploying a CNI for SAS Viya

 

SAS Viya releases LTS-2023.10 and later support the use of Azure CNI Overlay in AKS in addition to kubenet and Azure CNI.

 

The site is responsible for provisioning the infrastructure, including the Kubernetes control plane and CNI, for running Viya software. Meeting that objective can be daunting for folks new to cloud-managed or on-premises Kubernetes deployments. So, SAS offers an open-source project hosted in GitHub to help sites get started. In particular, the SAS Infrastructure-as-Code project, viya4-iac-azure, can be referenced as an example for standing up AKS.

 

And beginning with release 9.0.0 of the IAC project for Azure, it can be configured to deploy any of the three CNI plugins discussed here. The IAC defaults to kubenet, offers configuration to deploy the Azure CNI with either Calico or Azure as the network policy manager, and adds the option to choose the Azure CNI Overlay.

 

The particular settings you'll use are defined as Terraform variables in the IAC project's CONFIG-VARS documentation. To get the Azure CNI Overlay, specify appropriate values for the aks_network_plugin, aks_network_policy, and aks_network_plugin_mode variables.
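As a quick reference, the three approaches map to those variables roughly as follows. This is a sketch based on the IAC's documented defaults; double-check the CONFIG-VARS documentation for your release.

# Pick one of these combinations in your TFVARS file.

# 1) kubenet - the IAC default, so these variables can simply be omitted
# aks_network_plugin      = "kubenet"

# 2) Standard Azure CNI, with Azure or Calico enforcing network policy
# aks_network_plugin      = "azure"
# aks_network_policy      = "azure"      # or "calico"

# 3) Azure CNI Overlay (the combination shown later in this article)
# aks_network_plugin      = "azure"
# aks_network_policy      = "azure"      # or "calico"
# aks_network_plugin_mode = "overlay"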

 

Try it for yourself

 

In the SAS Viya: Deployment on Azure Kubernetes Service workshop, you're provided with a virtualized environment where you can stand up AKS and deploy SAS Viya, with helpful explanations at every step along the way.

 

As of late May 2024, you'll need to make some minor modifications to the hands-on instructions to get assets suitable for including the Azure CNI Overlay:

 

  • Get Terraform version 1.8.0 (or later)
    The workshop currently pins to Terraform version 1.6.6.1 (or older), which doesn't support the Azure CNI Overlay:

[Screenshot: 01_RC_aCNIo-1a.png - the workshop's Terraform install command]

 

Modify this command to install "terraform-1.8.0" or later.

 

  • Obtain Terraform templates from release 9.0.0 (or later) of the viya4-iac-azure project
    The workshop currently pins to IAC release 8.5.0 (or older):

[Screenshot: 02_RC_aCNIo-2a.png - the workshop's IAC_AZURE_TAG variable definition]

 

Change the IAC_AZURE_TAG variable definition to specify release 9.0.0 or later.

 

Making these two simple edits to the exercise will ready your environment to deploy the Azure CNI Overlay. So now, let's specify that...

 

In the TFVARS file:

 

[Screenshot: 03_RC_aCNIo-3a.png - the workshop's step for copying the TFVARS template]

 

Note the destination of the TFVARS template file (in "${iac_dir}"). Once that template file has been copied to its final location on your system, the instructions walk through additional changes to make it work for your deployment. We just need to add three lines along the way:

 

## Networking
aks_network_plugin      = "azure"
aks_network_policy      = "azure"            # or "calico" if preferred
aks_network_plugin_mode = "overlay"

 

Those lines can be inserted anywhere in the TFVARS file.

 

With these changes in place, proceed normally with the remaining instructions in the exercise to stand up AKS and then deploy SAS Viya. The IAC (via Terraform) will configure AKS to use the Azure CNI Overlay as the networking model. And Viya doesn't need any special configuration - it simply relies on Kubernetes to handle the networking however it's configured.
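If you'd like to confirm what the IAC produced, one option is to read the cluster back with the azurerm provider's data source and inspect its network profile. A minimal sketch, assuming an azurerm provider block like the one shown earlier and placeholder names for the cluster and resource group created by your IAC run:

data "azurerm_kubernetes_cluster" "viya" {
  name                = "my-viya-aks"        # hypothetical cluster name
  resource_group_name = "my-viya-rg"         # hypothetical resource group
}

output "aks_network_profile" {
  # Expect to see network_plugin = "azure" along with the overlay pod CIDR.
  value = data.azurerm_kubernetes_cluster.viya.network_profile
}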

 

Coda

 

SAS Viya exposes very few service endpoints outside the cluster (usually just the NGINX ingress controller and, optionally, the CAS binary port), but it relies on numerous pods communicating with each other inside the cluster. Combine that with replication for high availability of services and scaling up to meet load demands in production environments, and the number of necessary IP addresses could surpass the available range in the virtual network.

 

Kubenet and the Azure CNI Overlay guard against IP address exhaustion in the VNET by assigning pods a private IP address range and using network address translation when they communicate with services outside the cluster. This is especially helpful at sites that use Azure NetApp Files for persistent storage.

 

The Azure CNI Overlay is a useful extension of the standard Azure CNI model for networking in Azure Kubernetes Service. It blends the benefits of a small IP address range in the VNET (as seen with kubenet) with more powerful functionality, like network policy enforcement, offered by the standard Azure CNI.

 

Given the wide range of potential deployment and usage scenarios with Viya, it's great that sites can choose from different networking models in AKS to get the job done.

 

 

Find more articles from SAS Global Enablement and Learning here.
