
Hackin’SAS (the good way)

Started ‎11-04-2021 by
Modified ‎11-04-2021 by

Earlier this year we had the exciting opportunity to contribute to one of SAS’ largest online events (and no, we’re not talking about the SAS Global Forum here …).

 

Unlike previous years, when Hackathons were regional events, 2021 saw the first SAS Global Hackathon, where participants from all over the globe joined in teams to tackle “solutions to big business and humanitarian challenges, all while experimenting with the latest analytics and cloud software”, as stated in the announcement.

 

To give you a rough idea of the truly impressive scale of this event: we saw more than 1,100 registered participants from almost 80 different countries, of whom more than 750 described themselves as data scientists. We asked the participants to organize themselves into a total of 100 teams, each working on a dedicated use-case over a four-week period. The teams chose their use-cases from a wide range of ideas, and many of them worked on some of the world’s most critical issues such as cancer research, sustainable nutrition or energy utilization and CO2 emission control. For more information, we’d suggest spending a few minutes on this page, which lists the Hackathon winners and their use-cases.

 

Not one-size-fits-all

 

Without doubt, hosting an event of this size can be an organizational and logistical challenge. All accepted teams were eligible to work on dedicated SAS Viya environments running in Microsoft’s Azure cloud for the duration of the Hackathon, and we were tasked with provisioning this infrastructure. Some basic parameters were clear right from the start: since this was a global event, the infrastructure needed to be globally distributed as well, in order to be as close to the teams as possible. Hence, running these Viya environments in the cloud was an obvious choice. This diagram shows the Azure data centers we used for deploying the SAS Viya environments.

 

hblog1.png

 

Secondly, due to the diversity of use-cases, which often demanded the latest state-of-the-art analytical capabilities, we knew that we had to deploy the latest and most feature-rich release of SAS Viya, a cloud-native software platform running on a Kubernetes infrastructure. We decided that each team would have its own Kubernetes cluster so that we would not risk encountering any “noisy neighbour” effects.

 

Finally, it became clear after reviewing the submitted use-case proposals that a one-size-fits-all approach would not be the best choice. The requirements were just too different to cover them all in a single “template”. For example, some teams needed to work on large amounts of data while others didn’t. Moreover, not all teams were of the same size, and looking at the functional requirements we had requests for almost anything you could think of: computer vision capabilities, AI and machine learning algorithms, optimization and forecasting, streaming analytics, non-SAS programming languages (Python, R) and much more.

 

For these reasons we decided to offer two different types of SAS Viya environments with quite distinct characteristics which we called “Platform” and “Platform Plus”. The “Platform” environments were designed as self-service offerings for smaller teams with lower requirements in terms of data while the “Platform Plus” environments were more powerful and centrally maintained by us. In the next sections we’re going to describe the technical details for both types of environment, but be sure to also check out this nice blog authored by Frederik about the design factors which were imperative for our work.

 

The SAS Viya “Platform” environments

 

The “Platform” environments were designed for teams with 2-3 concurrent users working with less than 20 GB of data (largest table). From a technical perspective they were quite unusual, because we deployed the SAS Viya platform stack on a single-node Kubernetes cluster, which in turn allowed us to provision everything on a single virtual machine. Of course, these VMs needed to be powerful enough to host all components of a SAS Viya environment, including the CAS in-memory server. The Standard_E32s_v4 machine type seemed to be a good fit for this, as it comes with 32 vCPUs and 256 GB of RAM. The machines in Azure’s Esv4 series run on the Intel® Xeon® Platinum 8272CL (Cascade Lake) with clock speeds of up to 3.4 GHz.
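To see why 256 GB of RAM comfortably covers this profile, here is a rough back-of-the-envelope check. Note that the multipliers below (a factor for CAS working copies, a fixed reservation for the rest of the stack) are our own illustrative assumptions, not official SAS sizing guidance:

```python
# Rough sizing sanity check for a single-node SAS Viya VM.
# The multipliers are illustrative assumptions, not SAS sizing guidance.

def fits_on_vm(largest_table_gb: float, concurrent_users: int,
               vm_ram_gb: int = 256, cas_copy_factor: float = 2.0,
               platform_overhead_gb: float = 64.0) -> bool:
    """Check whether the expected in-memory footprint fits on the VM.

    cas_copy_factor accounts for the table held in CAS plus working copies
    created during processing; platform_overhead_gb reserves memory for the
    rest of the Viya stack and the operating system.
    """
    per_user_gb = largest_table_gb * cas_copy_factor
    needed_gb = platform_overhead_gb + per_user_gb * concurrent_users
    return needed_gb <= vm_ram_gb

# A 20 GB table with 3 concurrent users fits easily into 256 GB of RAM:
print(fits_on_vm(20, 3))  # True
```

With these assumptions, 3 users each working a 20 GB table need roughly 64 + 3 × 40 = 184 GB, leaving headroom on a 256 GB machine.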

 

hblog2.png

 

Being cloud-native, SAS Viya requires a Kubernetes infrastructure as its “operating system”. We chose the open-source upstream release of Kubernetes, which is rather easy to deploy even on a single machine where the same node takes over both roles of master and worker. This is certainly not a best practice, as you lose elasticity and fault tolerance on the infrastructure level, but in this specific case it did fit the bill.

After preparing the base image we used the Azure Shared Image Gallery service to replicate the image to multiple Azure data centers in different regions (US, EMEA, APAC), because we wanted the VM instances to run close to the teams' home locations to reduce network latencies.

 

How did we make the environments available to the teams? Given the large number of teams using the “Platform” type of environment (almost 50), we needed an automated approach, but at the same time we wanted to give teams full control over their environment’s lifecycle. For that reason we developed a self-service Launcher web application which served three purposes:

 

  • to create, start, stop and terminate the team’s virtual machine
  • to manage the IP address whitelist (for security reasons each instance was protected by a network security group)
  • to let teams set up a schedule for stopping their environment automatically (e.g. overnight or over the weekend) in order to save costs and energy
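The third point, the auto-stop schedule, boils down to a simple time-window check. A minimal sketch of that logic, assuming a nightly stop window plus an optional weekend shutdown (the parameter names and the specific windows are illustrative, not the Launcher's actual implementation):

```python
from datetime import datetime, time

# Sketch of an auto-stop decision, as a Launcher-style scheduler might make it.
# Field names and the overnight window are illustrative assumptions.

def should_stop(now: datetime, stop_at: time, start_at: time,
                stop_on_weekends: bool = True) -> bool:
    """Return True if the VM should be stopped at `now`.

    The VM sleeps during the nightly window [stop_at, start_at) and,
    optionally, for the whole weekend.
    """
    if stop_on_weekends and now.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return True
    t = now.time()
    if stop_at <= start_at:          # window does not cross midnight
        return stop_at <= t < start_at
    return t >= stop_at or t < start_at  # window wraps past midnight

# Stop from 22:00 to 06:00 on weekdays, all day on weekends:
print(should_stop(datetime(2021, 11, 3, 23, 30), time(22), time(6)))  # True
print(should_stop(datetime(2021, 11, 3, 9, 0), time(22), time(6)))    # False
```

A small job evaluating this check every few minutes and calling the cloud API to deallocate or restart the VM is all the scheduling feature needs.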

 

hblog3.png

 

One important piece of the puzzle is still missing: how did teams make data available to their environments, and where did we store that data? Of course, they could use the built-in capabilities for uploading data through the visual interfaces, but this option has limits in terms of accepted file sizes. Storing the data on the virtual machine could also be risky, because we allowed teams to terminate and re-create their environments (in which case the data would have been lost).

 

Picture1.png

 

The solution to this problem was to attach external cloud storage to each machine, so each team had access to its own file share which remained available throughout the whole Hackathon. Finally, we asked the teams to install the Azure Storage Explorer tool on their local laptops to access their shares. This provided a convenient way to upload larger amounts of data, which would then be available for processing by SAS Viya.

 

The SAS Viya “Platform Plus” environments

 

Up to this point we have described how the smaller "Platform" environments were automatically created, stopped and started. We took the automation further by automating the deployment of the larger "Platform Plus" environments as well. As mentioned before, these larger environments were meant to support teams with more users and more data. For that reason we deployed SAS Viya on Azure Kubernetes Service (AKS) clusters. These clusters consisted of multiple node pools, each with enough nodes and sufficient hardware resources.

 

hblog5.png

 

By making use of Kubernetes taints and labels we applied the recommended SAS workload placement, which makes it possible to distribute dedicated SAS workloads across the different nodes. The infrastructure setup and the deployment of Viya were done using the SAS IaC and SAS viya4-deployment scripts. To make life easy we automated the execution of those scripts with Azure DevOps pipelines. The technical details of how that was done can be found in this SAS Communities article, or check this video if you want to see a deployment in action.
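The taint-and-label mechanism can be illustrated with a small sketch: a pod lands on a node pool only if its node selector matches the pool's label and it tolerates the pool's taint. The pool names and label keys below are simplified placeholders, not the exact scheme from the SAS workload placement documentation:

```python
# Simplified sketch of how taints and labels steer SAS workload classes onto
# dedicated node pools. Pool names and keys are illustrative assumptions.

NODE_POOLS = {
    "cas":      {"label": "cas",      "taint": "cas"},
    "compute":  {"label": "compute",  "taint": "compute"},
    "stateful": {"label": "stateful", "taint": "stateful"},
}

def eligible_pools(pod_node_selector, pod_tolerations):
    """A pod may land on pools whose label matches its node selector AND
    whose taint it tolerates -- the combination Kubernetes evaluates."""
    return [name for name, pool in NODE_POOLS.items()
            if pool["label"] == pod_node_selector
            and pool["taint"] in pod_tolerations]

# A CAS pod selects and tolerates only the CAS pool:
print(eligible_pools("cas", {"cas"}))  # ['cas']
# Without the matching toleration it cannot be scheduled there at all:
print(eligible_pools("cas", set()))    # []
```

The taint keeps unrelated workloads off the CAS nodes, while the label (via a node selector) keeps CAS off everything else, so each workload class gets its dedicated hardware.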

 

Since the Hackathon we have kept the installation pipelines up to date so that they use the latest IaC and viya4-deployment scripts. That way we can always install the latest releases and cadences of Viya. Most importantly, the Azure DevOps pipelines continue to prove their value: they help teams set up analytical “sprints” and demonstrations, and support conferences, events, students and many other uses.

A run of the Azure DevOps pipeline shows all green checks when the deployment completed successfully:

 

hblog6.png

 

The Azure DevOps pipeline consists of four stages (initialize, iac_setup, baseline and iac_destroy). Each stage runs a different set of tasks.
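The behaviour of such a staged pipeline is easy to model: stages run in order, and a failing stage short-circuits everything after it. The stage names come from our pipeline, but the tiny runner below is an illustrative sketch, not Azure DevOps code:

```python
# Sketch of staged pipeline sequencing: stages run in order, and a failure
# short-circuits the rest. The runner is illustrative, not Azure DevOps code.

def run_pipeline(stages, tasks):
    """Run callables stage by stage; stop at the first failing stage."""
    results = {}
    for stage in stages:
        ok = all(task() for task in tasks.get(stage, []))
        results[stage] = "succeeded" if ok else "failed"
        if not ok:
            break  # later stages never run, mirroring the DevOps UI
    return results

stages = ["initialize", "iac_setup", "baseline", "iac_destroy"]
tasks = {
    "initialize":  [lambda: True],
    "iac_setup":   [lambda: True],
    "baseline":    [lambda: False],  # simulate a failing deployment step
    "iac_destroy": [lambda: True],
}
print(run_pipeline(stages, tasks))
# {'initialize': 'succeeded', 'iac_setup': 'succeeded', 'baseline': 'failed'}
```

This mirrors what you see in the screenshots: green checks for completed stages, and skipped stages after the first failure.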

 

hblog7.png  

 

During or after a pipeline run it’s easy to verify that the different tasks executed as expected:

 

hblog8.png

 

Wrapping up

 

Providing the technical infrastructure for the SAS Global Hackathon 2021 certainly was a challenging task, but it was also fun and a great learning experience in using cloud technologies “at scale”. Even more importantly, the feedback we received from the participants was very positive. In fact, the event was so successful that it raised the appetite for more, so be on the lookout for more Hackathons coming up in 2022.

 

Hoping to see you there!

 

 

