For the SCIM “push” model to work, the Identity Provider must be able to reach your SAS Viya environment. This post highlights the ways in which we can allow Azure Active Directory access to a SAS Viya environment running on the three main public cloud providers: Azure, GCP and AWS. The intention here is not to cover the entire SCIM provisioning process; our documentation covers that, and more details can be found in the following series of posts by @StuartRogers:
SCIM and SAS Viya
SAS Viya SCIM Configuration
SAS Viya Azure AD SCIM
Azure
As you would expect, configuring SCIM provisioning from Azure Active Directory to a SAS Viya deployment running inside Azure is the simplest of the three configurations. In your Network Security Group you can simply allow inbound traffic from the "AzureActiveDirectory" Service Tag. One of the main benefits of this, which we will see later, is that it is a dynamic way of allowing Azure Active Directory traffic into our Load Balancer: if the IP ranges used by Azure Active Directory change, we are unaffected, since the AzureActiveDirectory Service Tag will seamlessly reflect those changes.
When installing your ingress controller you can easily configure this by adding the following annotation to the Nginx Load Balancer service: service.beta.kubernetes.io/azure-allowed-service-tags="AzureActiveDirectory"
For example, installing Nginx via a Helm chart would look something like this:
helm install viya4 \
--version 3.36.0 \
--namespace ingress-nginx \
--create-namespace \
--set controller.service.loadBalancerSourceRanges={"<my-network-cidr>,<nat-ip>/32,<etc.>"} \
--set-string controller.service.annotations."service\.beta\.kubernetes\.io/azure-allowed-service-tags"="AzureActiveDirectory" \
ingress-nginx/ingress-nginx
This command will:
install a specific version of the ingress-nginx Helm chart, since without this it would install the latest version of Nginx, which is not currently supported,
add the loadBalancerSourceRanges spec attribute to the Kubernetes Load Balancer Service, meaning the IP ranges listed here will automatically be added to the Network Security Group associated with the Load Balancer and
add the azure-allowed-service-tags Annotation to the Kubernetes Load Balancer Service. The effect of this is that the AzureActiveDirectory Service Tag will be added to the Inbound rules of the Network Security Group associated with the Load Balancer, for a nice dynamic configuration that will require little maintenance.
The result is a Network Security Group rule allowing access for the AzureActiveDirectory Service Tag.
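If you prefer the command line to the portal, the rule can be confirmed with the Azure CLI. A minimal sketch, where <my-resource-group> and <my-nsg> are placeholders for the resource group and Network Security Group associated with your Load Balancer:
# List inbound NSG rules whose source is the AzureActiveDirectory Service Tag.
az network nsg rule list \
  --resource-group <my-resource-group> \
  --nsg-name <my-nsg> \
  --query "[?sourceAddressPrefix=='AzureActiveDirectory'].{name:name, access:access, ports:destinationPortRange}" \
  --output table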
GCP
Outside of Azure we need to start looking at the specific IPs used by the Azure Active Directory service, since we cannot simply tell GCP to allow "AzureActiveDirectory"; it has no knowledge of that Service Tag. Thankfully, Microsoft provides a JSON file listing every IP range used by the AzureActiveDirectory Service Tag (as well as every other Service Tag within Azure). It is available at: https://www.microsoft.com/en-us/download/details.aspx?id=56519
The first thing to know about this file is that it is updated almost weekly. So, to keep our security rules current, it will likely be necessary to automate a check that our inbound rules remain aligned with the latest file from Microsoft.
When it comes to installing Nginx, it now means that we need to provide a far greater number of IP ranges in the loadBalancerSourceRanges spec, to account for every possible Azure Active Directory source IP. The way in which I achieve this is as follows:
url=$(curl -s https://www.microsoft.com/en-us/download/confirmation.aspx?id=56519 | grep -o '<a .*href=.*>' | sed -e 's/<a /\n<a /g' | sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d' | grep ServiceTags | head -1)
helm install viya4 \
--version 3.36.0 \
--namespace ingress-nginx \
--create-namespace \
--set controller.service.loadBalancerSourceRanges={"<my-network-cidr>,<nat-ip>/32,$(curl -s ${url} | jq -r '.values[] | select(.id == "AzureActiveDirectory") | .properties.addressPrefixes[]' | sed '/::/d' | sed -z 's/\n/,/g' | sed 's/,*$//g'),<etc.>"} \
ingress-nginx/ingress-nginx
Since the JSON file is regularly updated, I needed some means of ensuring I was always pulling the latest available version (ideally without having to browse for it manually):
the first command parses some HTML output to capture the latest URL and
with that URL captured, the loadBalancerSourceRanges spec attribute is updated to query the JSON file from Microsoft, using jq to grab the address ranges for the AzureActiveDirectory Service Tag and pass them as a comma-separated list to the VPC Firewall.
The outcome is that GCP will add an inbound rule covering every possible Azure Active Directory source address.
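Given how often the file changes, a scheduled refresh keeps the firewall aligned with it. A minimal sketch, assuming ${url} has been captured as above and that k8s-fw-my-lb is a placeholder for the firewall rule name GCP generated for the Load Balancer:
# Rebuild the firewall rule's source ranges from the latest Service Tags JSON.
ranges=$(curl -s ${url} | jq -r '.values[] | select(.id == "AzureActiveDirectory") | .properties.addressPrefixes[]' | sed '/::/d' | paste -sd, -)
gcloud compute firewall-rules update k8s-fw-my-lb --source-ranges="<my-network-cidr>,<nat-ip>/32,${ranges}"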
AWS
In theory, AWS should work in the same way as GCP; in practice, it doesn't.
AWS enforces a limit on the number of inbound/outbound rules that you can have per Security Group. The default limit is 60 inbound and 60 outbound, for a total of 120 rules per Security Group. At the time of writing, the AzureActiveDirectory Service Tag has 110 different IP ranges, which is not ideal given that limit.
The limit can be increased, but for the same approach used with GCP to work in AWS, it would need to be increased significantly. The reason is that when you specify an IP range in the loadBalancerSourceRanges attribute, AWS creates three inbound rules for it: an HTTP, an HTTPS and a Custom ICMP rule. That means we would need to increase the limit to at least 330 rules per Security Group to properly accommodate Azure Active Directory.
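If you want to check how many ranges the Service Tag contains on any given day, you can count them straight from the JSON file. This assumes ${url} holds the Service Tags URL, captured the same way as in the GCP section:
# Count the IPv4 prefixes currently published for the AzureActiveDirectory tag.
curl -s ${url} | jq -r '.values[] | select(.id == "AzureActiveDirectory") | .properties.addressPrefixes[]' | sed '/::/d' | wc -l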
Since I’m only interested in inbound traffic from Azure Active Directory via HTTPS, I decided to take a different approach:
vpcname=myvpc
vpcid=$(aws ec2 describe-vpcs | jq -r --arg vpcname "$vpcname" '.Vpcs[] | select(.Tags[]? | select(.Value==$vpcname)) | .VpcId')
groupid=$(aws ec2 create-security-group --group-name ukits-azuread --description "Rules to allow SCIM provisioning from Azure AD" --vpc-id $vpcid | jq -r '.GroupId')
url=$(curl -s https://www.microsoft.com/en-us/download/confirmation.aspx?id=56519 | grep -o '<a .*href=.*>' | sed -e 's/<a /\n<a /g' | sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d' | grep ServiceTags | head -1)
for iprange in $(curl -s ${url} | jq -r '.values[] | select(.id == "AzureActiveDirectory") | .properties.addressPrefixes[]' | sed '/::/d')
do
aws ec2 authorize-security-group-ingress --group-id $groupid --protocol tcp --port 443 --cidr $iprange
done
helm install viya4 \
--version 3.36.0 \
--namespace ingress-nginx \
--create-namespace \
--set controller.service.loadBalancerSourceRanges={"<my-network-cidr>,<natip>/32"} \
--set-string controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-extra-security-groups"="$groupid" \
ingress-nginx/ingress-nginx
Using this approach I:
query the ID of my VPC,
create a new Security Group that will be dedicated to ensuring I can receive inbound traffic from Azure Active Directory,
iterate through each IP range in the aforementioned JSON file, adding each range to the Security Group for HTTPS only, and
lastly, with the Security Group in place and containing my desired IP ranges, I create my ingress controller. This time I include another new annotation, aws-load-balancer-extra-security-groups, which takes my new Security Group and associates it with the ingress Load Balancer.
The result of the above was that I had 110 inbound rules in my Security Group, which is of course more than the default limit of 60. However, prior to running the code I increased the rule limit to 200 for simplicity. Without the limit increase I would have needed to distribute the rules across two or more Security Groups and then add each of those to my aws-load-balancer-extra-security-groups annotation. It would have worked, but it would have added some additional complexity.
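For reference, the limit increase itself can be requested through the Service Quotas API rather than the console. A sketch; the quota code shown was the one for "Inbound or outbound rules per security group" at the time of writing, so confirm it with the lookup first:
# Look up the quota code for the Security Group rules limit.
aws service-quotas list-service-quotas --service-code vpc \
  --query "Quotas[?contains(QuotaName, 'rules per security group')]"
# Request an increase to 200 rules per Security Group.
aws service-quotas request-service-quota-increase --service-code vpc \
  --quota-code L-0EA8095F --desired-value 200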
On examining the Load Balancer we can see the additional Security Group association, and that Security Group contains all of the Azure Active Directory source IPs.
Despite the additional work needed, the benefit of this approach is that it makes staying up-to-date fairly straightforward. Since the Security Group is dedicated to my SCIM ambitions, I could easily script a weekly job to clear out all of its inbound rules and rebuild them from the latest JSON file.
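That weekly job could be as simple as the following sketch, assuming $groupid and $url are set as in the earlier snippets:
# Flush every existing inbound rule from the dedicated Security Group.
perms=$(aws ec2 describe-security-groups --group-ids $groupid --query 'SecurityGroups[0].IpPermissions' --output json)
if [ "$perms" != "[]" ]; then
  aws ec2 revoke-security-group-ingress --group-id $groupid --ip-permissions "$perms"
fi
# Rebuild the rules from the latest Service Tags JSON, HTTPS only.
for iprange in $(curl -s ${url} | jq -r '.values[] | select(.id == "AzureActiveDirectory") | .properties.addressPrefixes[]' | sed '/::/d')
do
  aws ec2 authorize-security-group-ingress --group-id $groupid --protocol tcp --port 443 --cidr $iprange
done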
Conclusion
In this post we have explored some options for allowing Azure Active Directory access to SAS Viya when public access is denied. For any SAS Viya deployment not on Azure, ongoing maintenance will likely be needed to ensure the IP ranges allowed through the firewall remain up-to-date, but using some of the code snippets provided here it should not be an overly complex exercise.