
SAS Viya on AWS - using an ALB in place of an NLB


For SAS Viya on AWS, a Network Load Balancer (NLB) is the preferred choice due to its efficiency, ease of configuration, and full compatibility with SAS Viya features, solutions, and products that may not use http(s) for client communication. However, there are certain customer scenarios where an Application Load Balancer (ALB) is desired or required. Typical reasons include:

  • Web Application Firewall front-end - only compatible with ALB
  • ALB is the standard or approved deployment method for http applications
    • NLB is perhaps viewed as an anti-pattern and simply not allowed
  • Pre-established load balancer logging and monitoring requires an ALB

In this article, I will discuss a working configuration option for completely replacing the AWS NLB with an ALB in situations where it becomes necessary.

 

High-Level Overview

When an NLB is used, the resource can be provisioned by simply adding the service.beta.kubernetes.io/aws-load-balancer-type: nlb annotation to the ingress-nginx configuration (controller.service.annotations). When this annotation is specified, the Kubernetes AWS cloud controller manages the lifecycle of the NLB. However, this controller cannot create or update an ALB; only NLBs and (deprecated) Classic Load Balancers are supported. As a result, we will need to modify our ingress-nginx configuration and utilize a more feature-rich load balancer controller to achieve our desired configuration.
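
For reference, when an NLB is used, the relevant portion of the ingress-nginx helm values looks something like this (shown only for contrast with the NodePort approach used in this article):

controller:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb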

 

At a high level, a working solution is to deploy ingress-nginx with a NodePort service (controller.service.type: NodePort) rather than type: LoadBalancer, and provision an ALB with the proper http/https listeners and forwarders. Finally, we can bind the ALB target group to the ingress-nginx service using a TargetGroupBinding custom resource, which is available via the aws-load-balancer-controller (a more feature-rich AWS load balancer controller).

 

In my example, I will share the configuration and deployment of ingress-nginx using helm, the creation and configuration of the AWS resources using the aws cli, and the creation of our TargetGroupBinding using kubectl. I'll also share how to ensure TLS is enabled end-to-end.

 

The specific steps necessary are as follows:

  1. helm - install ingress-nginx (exposed as a NodePort)
  2. helm - install aws-load-balancer-controller
  3. aws cli - create a Security Group for the ALB (add inbound 80/443 rules for each access CIDR)
  4. aws cli - update the worker node security group to allow ingress traffic from the ALB security group
    • Note: failing to do this step will prevent traffic from the ALB to the nodes
  5. aws cli - create and configure the ALB
    1. Attach the ALB Security Group
    2. Increase the idle timeout to 300 sec (SAS recommendation)
    3. Enable http preserve host header (so the request gets properly forwarded to nginx)
    4. Create a target group for https
    5. Create listeners:
      • https – forwards to the https target group (requires an associated ingress cert), with the health check path updated to /healthz
      • http – permanent redirect (301) to https
  6. kubectl - bind the ingress-nginx service to the https target group via a TargetGroupBinding custom resource
  7. Deploy SAS Viya - with full-stack TLS enabled

Once everything is configured correctly, the ALB resource map should look similar to this (note: this example shows the ingress-nginx controller deployed as a single pod; for higher availability, see the section below on configuring ingress-nginx):

[Image: alb-resource-map.png - ALB resource map]


Configure ingress-nginx

When using an NLB, we'd typically configure the ingress-nginx controller service as type: LoadBalancer; however, that won't work when an ALB is used. We'll instead configure the controller service as type: NodePort. Your ingress-nginx helm values.yaml should look something like this:

controller:
  service:
    externalTrafficPolicy: Local
    sessionAffinity: None
    type: NodePort
  config:
    allow-snippet-annotations: "true"
    use-forwarded-headers: "false"
    hsts-max-age: "63072000"
    hide-headers: Server,X-Powered-By
    large-client-header-buffers: 4 32k
    annotation-value-word-blocklist: load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},\
  tcp: {}
  udp: {}
  lifecycle:
    preStop:
      exec:
        command: [/bin/sh, -c, sleep 5; /usr/local/nginx/sbin/nginx -c /etc/nginx/nginx.conf -s quit; while pgrep -x nginx; do sleep 1; done]
  terminationGracePeriodSeconds: 600
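
With the values file saved, ingress-nginx can be installed with helm. Here is a minimal sketch, assuming the standard ingress-nginx chart repository and the values.yaml shown above:

# add the ingress-nginx chart repo, then install with our values
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  -f values.yaml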

Optionally, if you need higher availability of the ingress-nginx controller, this can be achieved in a number of ways (see the sketch after this list):

  • Deploy the controller as a daemonset
  • Increase the replica count
    • Define anti-affinity rules to keep replicas from scheduling to the same node
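
For example, here is a minimal sketch of the replica approach as helm values; the replica count of 3 is illustrative, and the label selector assumes the chart's standard controller labels:

controller:
  replicaCount: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # keep controller replicas from scheduling to the same node
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
              app.kubernetes.io/component: controller
          topologyKey: kubernetes.io/hostname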

Install aws-load-balancer-controller

There are multiple ways to accomplish this step; here is an install guide directly from AWS: Install AWS Load Balancer Controller with Helm
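
As one example, here is a minimal sketch using helm; the cluster name and the pre-created service account (with its IAM role, created per the AWS guide) are assumptions:

# add the eks chart repo
helm repo add eks https://aws.github.io/eks-charts
helm repo update

# assumes the IAM-role-backed service account already exists per the AWS guide
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller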

 

Create a Security Group for the ALB 

There are multiple ways to accomplish this step, too many to list. Ultimately, create the security group and corresponding rules in alignment with your infrastructure provisioning process. The important thing is that this security group will handle ingress traffic at the ALB, so ensure that your ingress rules allow http(s) (80/443) from the source CIDR ranges that you want to allow inbound. This SG will be attached to the ALB when it is created in a later step. Here is an example of creating the ALB SG using the aws cli (note: I used the tfstate, an output of the viya4-iac-aws project, to grab the allowlist CIDR ranges):

# create alb-sg
aws ec2 create-security-group \
  --group-name ${PREFIX}-alb-sg \
  --description "security group for $PREFIX-alb" \
  --vpc-id $VPC_ID \
  --output text \
  --query 'GroupId'

# create ingress rules within the ALB SG for each CIDR in the allowlist from the IAC tfstate
num_cidrs=$(jq -r '.resources[] | select(.type == "aws_security_group_rule") | select(.name=="vms") | .instances[0].attributes.cidr_blocks | length' infrastructure/terraform.tfstate)

for ((pos=0; pos<num_cidrs; pos++)); do
  cidr=$(jq -r '.resources[] | select(.type == "aws_security_group_rule") | select(.name=="vms") | .instances[0].attributes.cidr_blocks['$pos']' infrastructure/terraform.tfstate)

  # create ingress rules
  aws ec2 authorize-security-group-ingress \
    --group-id $ALB_SG_ID \
    --protocol tcp \
    --port 80 \
    --cidr "$cidr" \
    --output text \
    --no-cli-pager

  aws ec2 authorize-security-group-ingress \
    --group-id $ALB_SG_ID \
    --protocol tcp \
    --port 443 \
    --cidr "$cidr" \
    --output text \
    --no-cli-pager
done

 

Update the worker nodes' Security Group - allow ingress from the ALB SG as the source group

In most cases the EKS worker nodes will have a security group attached; if so, add an ingress rule to allow traffic sourced from the ALB security group. As a reminder, failing to do this step will prevent traffic from the ALB to the nodes. Here is an example of adding the ingress rule using the aws cli:

aws ec2 authorize-security-group-ingress \
    --group-id $WORKERS_SG_ID \
    --protocol all \
    --source-group $ALB_SG_ID
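
If allowing all protocols from the ALB SG is broader than your standards permit, a narrower sketch is to allow only tcp on the default Kubernetes NodePort range (30000-32767), which is where the NodePort service listens:

# narrower alternative: allow only the default NodePort range from the ALB SG
aws ec2 authorize-security-group-ingress \
    --group-id $WORKERS_SG_ID \
    --protocol tcp \
    --port 30000-32767 \
    --source-group $ALB_SG_ID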

 

Create and configure the ALB

Like the previous steps, this one can be accomplished in a few different ways. Ultimately, create and configure the ALB in alignment with your infrastructure provisioning process. The important thing here is that the following items get properly configured on the ALB:

 

  1. Create the Application Load Balancer
    • Associate the ALB with your "ingress" subnet(s) - the subnet(s) where inbound traffic is expected
      • In a typical public-facing deployment (i.e., the SAS Viya VPC has an internet gateway directly attached), this would be your "public" subnet(s)
      • In a typical private deployment, this would be your subnet(s) that are directly connected to things like a transit gateway
    • Attach ALB Security Group that was created in a previous step
  2. Increase timeout to 300 sec (SAS recommendation)
  3. Enable http preserve host header
    • This ensures that ingress-nginx gets properly forwarded requests
  4. Create a target group for https
  5. Create listeners:
    • https – forwards to the https target group, with the health check path updated to /healthz
      • note: https listeners require an SSL certificate. Unlike an NLB, https traffic terminates at the ALB. If you also want TLS in transit between the ALB and the SAS Viya ingresses, it is perfectly acceptable to use the same certificate for both the ALB and the ingresses. Also note that the ALB does not validate backend TLS certificates.
    • http – permanent redirect (301) to https

Here is an example of how these steps can be accomplished using the aws cli:

# create ALB
aws elbv2 create-load-balancer \
  --name $ALB_NAME \
  --type application \
  --subnets "$PUB_SUB_0" "$PUB_SUB_1" \
  --security-groups "$ALB_SG_ID"
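
# note: the modify commands below assume $ALB_ARN; one way to capture it
# is to look the ALB up by name after creation
ALB_ARN=$(aws elbv2 describe-load-balancers \
  --names $ALB_NAME \
  --query 'LoadBalancers[0].LoadBalancerArn' \
  --output text)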

# update idle timeout to 300 sec - SAS recommendation
aws elbv2 modify-load-balancer-attributes \
  --region $AWS_REGION \
  --load-balancer-arn $ALB_ARN \
  --attributes Key=idle_timeout.timeout_seconds,Value=300

# turn preserve host header on
aws elbv2 modify-load-balancer-attributes \
  --region $AWS_REGION \
  --load-balancer-arn $ALB_ARN \
  --attributes Key=routing.http.preserve_host_header.enabled,Value=true

# create target groups and listeners
aws elbv2 create-target-group \
  --name $HTTPS_TG_NAME \
  --port 443 \
  --protocol HTTPS \
  --target-type instance \
  --vpc-id $VPC_ID 
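
# note: the commands below assume $HTTPS_TG_ARN; one way to capture it
# is to look the target group up by name after creation
HTTPS_TG_ARN=$(aws elbv2 describe-target-groups \
  --names $HTTPS_TG_NAME \
  --query 'TargetGroups[0].TargetGroupArn' \
  --output text)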

# update health check path to /healthz and use https
aws elbv2 modify-target-group \
  --target-group-arn $HTTPS_TG_ARN \
  --health-check-path "/healthz" \
  --health-check-protocol HTTPS

# create https listener
# must supply SSL certificate for https listeners...
aws elbv2 create-listener \
  --load-balancer-arn $ALB_ARN \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=$HTTPS_SSL_CERT \
  --default-actions \
   '[{"Type": "forward", 
   "ForwardConfig": 
     {"TargetGroups": [
     {"TargetGroupArn": "'${HTTPS_TG_ARN}'", "Weight": 100}
     ]
   }
   }]'

# create permanent http redirect rule at the ALB
aws elbv2 create-listener \
  --load-balancer-arn $ALB_ARN \
  --protocol HTTP \
  --port 80 \
  --default-actions \
   '[{"Type": "redirect", 
   "RedirectConfig": 
     {"Protocol": "HTTPS",
     "Port" : "443",
     "StatusCode": "HTTP_301"
     }
   }]'

 

Create the TargetGroupBinding custom resource

Assuming you've followed along and everything has been configured up to this point, it is time to connect the ALB and the ingress-nginx controller. We do this using a custom resource called a TargetGroupBinding, which is supported by the aws-load-balancer-controller. Here is an example of how to configure the TargetGroupBinding:

# contents of target-group-binding.yaml

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: ingress-nginx-target-group-binding
  namespace: ingress-nginx
spec:
  targetType: instance
  serviceRef:
    name: ingress-nginx-controller
    port: 443
  targetGroupARN: $HTTPS_TG_ARN

# apply to the cluster
kubectl apply -f target-group-binding.yaml
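
Note that kubectl does not expand shell variables inside the manifest, so $HTTPS_TG_ARN must be replaced with the actual target group ARN. One option (a sketch, assuming $HTTPS_TG_ARN is exported in your shell) is to substitute it with envsubst:

# render the manifest with the real ARN, then apply
envsubst < target-group-binding.yaml | kubectl apply -f -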

 

To confirm everything is configured correctly up to this point, you can review the ALB resource map in the AWS console. You should see healthy target(s) registered to the https target group (note: the number of targets will depend on your ingress-nginx config).
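
Alternatively, you can check target health with the aws cli:

# list registered targets and their health state
aws elbv2 describe-target-health \
  --target-group-arn $HTTPS_TG_ARN \
  --query 'TargetHealthDescriptions[].[Target.Id,TargetHealth.State]' \
  --output table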

 

Deploy SAS Viya

I won't provide all the details of the deployment here, as that is out of scope for this blog, but I want to point out a few items:

  1. You will need to determine where you want TLS in transit to terminate. Remember that with an ALB, TLS first terminates at the ALB. This is different from an NLB, which passes TLS through to the ingresses (assuming TLS is configured at the ingresses).
    • As a reminder, you can use the same TLS certificate for both the ALB and the SAS Viya ingresses for either full-stack or front-door TLS configurations (the AWS ALB does not validate backend certificates).
  2. SAS Viya products, features, or solutions that require external client access via a port other than http(s) will not be accessible via the ALB.
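
As a quick post-deployment sanity check, you can inspect the certificate presented by the ALB's https listener (viya.example.com is a hypothetical DNS name pointing at the ALB):

# show the subject, issuer, and validity dates of the certificate served by the ALB
openssl s_client -connect viya.example.com:443 -servername viya.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates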

 
