
SAS Viya Quickstart on AWS: How to deploy when a proxy is in place

Started ‎08-28-2020 by
Modified ‎08-28-2020 by



The quickstart is a way to stand up a Viya environment within AWS quickly. This is true when you follow the path that the quickstart lays down for you, but when you decide to take a left or right and go off the beaten path, you will soon realize that you need to modify the quickstart. Luckily, all the code is available on GitHub, meaning that you can modify it to suit your needs.




By default the quickstart creates an infrastructure for you that is based on best practices created by AWS solution architects. You have the option to deploy it into a newly created Virtual Private Cloud (VPC), or you can deploy it into an existing one. Either way, once you deploy the quickstart you will end up with the following infrastructure.





SAS Viya quickstart: architecture


This infrastructure is fine for most use cases, but what if your organization has hardened its AWS environment? The quickstart -- in its current shape -- might not work for you. Let's assume that your organization has the following requirements in place:

  • Not allowed to deploy into a public subnet
  • All internet traffic from your instances needs to be routed through a proxy

As you can see in the picture above, by default the quickstart deploys one instance (the Ansible controller) into a public subnet and two instances into a private subnet. And instead of routing internet traffic from your instances through a proxy, it uses a NAT gateway.


Does that mean that you won't be able to use the quickstart to quickly stand up a Viya environment? Well, in its current shape, the answer to that question would be YES. However, as mentioned previously, the code used by the quickstart is open source and available through GitHub. That means that anybody can modify the code to suit their needs, which I have already done for you!


In the next part of this article, I will walk you through the steps to use my modified quickstart code to deploy Viya into a hardened environment.


Getting started


Most organizations will already have a VPC and proxy in place. If you already have a proxy running, you can skip ahead to step 3. For those of you who would like to try out my modified quickstart but don't have a proxy, I have included instructions on how to set up the VPC and create a proxy server.

  1. Creating a VPC
  2. Deploying a proxy server
  3. Deploying SAS Viya


Creating a VPC


The first thing we need to create is a VPC. The modified quickstart that I have created assumes that the VPC already exists. For this purpose we will create a VPC with:

  • A public subnet
  • A private subnet

The public subnet has access to the internet. This is the subnet where we will deploy our proxy. Within the private subnet, we will deploy all three of the instances that are created by the quickstart.


Follow these instructions to create a VPC using the VPC wizard that AWS provides. This is an easy way to create the network topology we are after. However, the wizard also deploys a NAT gateway and sets up routing to it for your private subnet. We don't want to use the NAT gateway, so we are going to:

  • Remove the route from the private subnet to this NAT gateway
  • Remove the NAT gateway
  • Remove the public IP
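For those who prefer the command line, the three cleanup steps above can also be sketched with the AWS CLI. The resource IDs below are placeholders (assumptions); the snippet only builds and prints the commands so you can review them before running anything against your account:

```shell
# Placeholder resource IDs -- replace with the ones from your VPC.
ROUTE_TABLE_ID="rtb-EXAMPLE"
NAT_GATEWAY_ID="nat-EXAMPLE"
EIP_ALLOCATION_ID="eipalloc-EXAMPLE"

# Build the AWS CLI equivalents of the three steps and print them for
# review; nothing is executed against your AWS account.
CMDS="aws ec2 delete-route --route-table-id $ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0
aws ec2 delete-nat-gateway --nat-gateway-id $NAT_GATEWAY_ID
aws ec2 release-address --allocation-id $EIP_ALLOCATION_ID"

printf '%s\n' "$CMDS"
```

Note that the route the wizard creates for the private subnet is the default route (destination 0.0.0.0/0) pointing at the NAT gateway, which is why that is the one being deleted here.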


Removing the route


To remove the route from the route table, go to Services --> VPC. Select the option Route Tables from the menu bar and you will see the following screen.




Select the correct route table and remove the route whose target is the NAT gateway.



Remove the NAT gateway


To remove the NAT gateway, go to Services --> VPC. Select the option NAT Gateways from the menu bar and you will see the following screen.



Select Actions --> Delete NAT gateway


Remove the public IP


When the NAT gateway is created, a public IP is automatically allocated for you. After deleting the NAT gateway, the public IP will still remain, and an Elastic IP that isn't associated with anything still incurs a charge. As we are no longer using it, we can delete it. Go to Services --> EC2. Select Elastic IPs. Select the IP that is no longer associated with anything and delete it by selecting Release Elastic IP.


Deploying a proxy


Once the VPC has been created, we can create a proxy server and deploy it into the public subnet. You will first need to create an instance. Follow these instructions on how to create an instance within AWS. Make sure you use the following parameters when deploying the instance:

  • OS: Ubuntu 18
  • Instance Type: t2.micro
  • Disk size: 8GB

During the creation of your instance, you can also add a rule to the security group that allows you to SSH into it. Now that the instance has been created, we can move on to the next step. To allow incoming connections from our private subnet to our proxy server, we need to do two things:

  1. Add the CIDR block of our private subnet to the configuration of the proxy server
  2. Add that same CIDR block to the security group associated with the proxy instance

So we need to figure out what the CIDR block for our private subnet is. To do this, go to Services --> VPC. Select Subnets.





Make a note of the CIDR for the private subnet; it might differ for your VPC.


Adding the CIDR to the security group


To enable all of the instances within your private subnet to access the internet through your proxy, we need to whitelist the IP addresses used within that subnet. This is done by adding a rule to the security group attached to the instance on which we are going to install the proxy server.


Go to Services --> EC2. Select Security Groups. Make sure you select the group associated with the instance where the proxy server will be installed. Click Edit and define the rule as shown here in the screenshot.





Make sure the CIDR matches yours
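The same rule can also be expressed as an AWS CLI call. The group ID and CIDR below are placeholders, and port 8888 is tinyproxy's default listening port; as before, this sketch only prints the command for review:

```shell
# Placeholders -- substitute your security group ID and private subnet CIDR.
SECURITY_GROUP_ID="sg-EXAMPLE"
PRIVATE_SUBNET_CIDR="10.0.1.0/24"
PROXY_PORT=8888   # tinyproxy's default listening port

# Build the ingress-rule command and print it for review.
CMD="aws ec2 authorize-security-group-ingress \
  --group-id $SECURITY_GROUP_ID \
  --protocol tcp --port $PROXY_PORT \
  --cidr $PRIVATE_SUBNET_CIDR"

echo "$CMD"
```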


Adding the CIDR to the configuration of our proxy server


After the instance is up and running, log on to it using an SSH client. Once logged on, you are ready to install the proxy server:


# update packages
sudo apt update
sudo apt -y upgrade

# install tinyproxy
sudo apt -y install tinyproxy

# edit the configuration
sudo vim /etc/tinyproxy/tinyproxy.conf


The last command opens a text editor on the tinyproxy configuration. To allow the instances deployed within our private subnet to use the proxy we have created, we need to allow incoming connections from that subnet. We have already done so on the AWS side by using a security group. However, we also need to modify the configuration of tinyproxy as shown here in this screenshot:
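For reference, the relevant lines of /etc/tinyproxy/tinyproxy.conf look roughly like this; the CIDR 10.0.1.0/24 is an example for my private subnet, and 8888 is tinyproxy's default port:

```
# /etc/tinyproxy/tinyproxy.conf (fragment)
Port 8888

# Allow connections from localhost and from the private subnet
Allow 127.0.0.1
Allow 10.0.1.0/24
```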





Modify the CIDR block to match yours. Save the file, and for our changes to take effect, restart the tinyproxy service:


sudo service tinyproxy restart


Make sure that the proxy is up and running again. Now that the proxy has been restarted, we can deploy the quickstart.


Deploying the quickstart


Remember those modifications I made to the quickstart? We are finally going to put them to use. The quickstart uses a CloudFormation template to create the underlying infrastructure on which SAS Viya will be installed. My modifications are to those templates. I have made the following modifications:

  • Added environment variables to the OS to allow YUM and CURL to use the proxy to access the internet
  • Added environment variables to Ansible, to allow commands within the playbooks to access the internet
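Concretely, the environment variables in question are the standard proxy variables, which both yum and curl honor. A minimal sketch, assuming the proxy from the previous section sits at 10.0.0.10:8888 (an example address, not the quickstart's actual value):

```shell
# Standard proxy variables honored by yum, curl, and most CLI tools.
# The proxy address is an example -- use your own.
export http_proxy="http://10.0.0.10:8888"
export https_proxy="http://10.0.0.10:8888"
# Bypass the proxy for local addresses and the EC2 instance metadata
# service (a common precaution when proxying EC2 instances).
export no_proxy="localhost,127.0.0.1,169.254.169.254"

echo "$https_proxy"   # prints http://10.0.0.10:8888
```

Ansible can forward such variables to tasks via its `environment:` keyword, which is how playbook commands are given internet access through the proxy.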

Attached to this blog you will find two files:

  1. aws-proxy-quickstart.yaml
  2. sas-viya.casworker.template.yaml

Both of these files need to be uploaded to AWS. The easiest route is to create an S3 bucket and upload the files into it. We will need this bucket in a later step of the quickstart anyway, to store our SAS Viya order data.




I'm assuming that you already know how to create an S3 bucket, but if you don't please use these instructions to get you going. After creating the bucket, you can upload the two files I just mentioned. See this link for more information on how to upload files.
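If you prefer the AWS CLI over the console, the bucket creation and upload steps can be sketched as follows. The bucket name is a placeholder (S3 bucket names must be globally unique), and the snippet only prints the commands for review:

```shell
# Placeholder bucket name -- replace with your own, globally unique name.
BUCKET="my-viya-quickstart-bucket"

# Build (for review) the commands to create the bucket and upload the
# two template files attached to this article.
CMDS="aws s3 mb s3://$BUCKET
aws s3 cp aws-proxy-quickstart.yaml s3://$BUCKET/
aws s3 cp sas-viya.casworker.template.yaml s3://$BUCKET/"

printf '%s\n' "$CMDS"
```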



Once the files are uploaded to the bucket, please read these instructions. This link contains information about the prerequisites of the quickstart, including instructions on how to upload your SAS Viya order data and how to create a mirror.


Note: at this moment it is MANDATORY that you create a mirror. My modifications to the quickstart have only been tested with a mirror.


Launching the quickstart


To launch the quickstart we will use the CloudFormation template saved in your S3 bucket. To open this template, go to Services --> CloudFormation. This will bring up the CloudFormation GUI. Make sure that you have all the prerequisites in place. Be sure you've:

  • Created an SSH key
  • Uploaded a mirror of your Viya repository to an AWS S3 bucket
  • Uploaded your SAS Viya order data

Once you are ready, select the Create Stack with new resources option.





This brings up a screen that allows us to select which CloudFormation template we want to use. Conveniently, it allows us to select one that has been uploaded to S3.





Make sure that you replace the URL highlighted here by the red box with the one you are using. Click Next. This brings up the next menu, where you can configure the parameters of the quickstart.





I will only highlight the ones that are relevant to the modifications I have made to the quickstart. To read more about the other parameters, please refer to this guide.


The ones that are relevant for my modifications to the quickstart are:

  • Permitted IP Range for Application Access
  • Permitted IP Range for Deployment Administrator
  • Mirror of SAS Viya Deployment Data
  • Location of your proxy
  • Your bucket that contains the CAS worker CloudFormation template

Let me briefly review these with you:


  • Permitted IP Range for Application Access: a range of IP addresses that are allowed to connect to the web applications. Set this to a CIDR block from which your clients get their IP addresses, so that they can access the Viya web applications.
  • Permitted IP Range for Deployment Administrator: a range of IP addresses that are allowed to connect to the Ansible controller. Set this to a CIDR block from which your admins will connect to this machine.
  • Mirror of SAS Viya Deployment Data: points to the bucket that contains our mirror, in the format s3://bucketname.
  • Location of your proxy: the address of your proxy server, in the format http(s)://
  • Your bucket that contains the CAS worker CloudFormation template: the S3 bucket where you uploaded the CloudFormation templates, in the format s3://bucketname.


After providing all of the required parameters, you are ready to deploy Viya. If all went well, you should end up with three or more instances:

  • one Ansible controller
  • one services machine
  • one or more CAS machines




Thanks to some modifications to the quickstart, we are now able to deploy SAS Viya into a private subnet and configure our deployment to use a proxy to access the internet. I'm interested in hearing about your experiences with the quickstart, and I'd love to hear what modifications you have made! Let me know in the comments!

