
Deployment Automation series #3: Common Pitfalls

Started ‎12-07-2023 by
Modified ‎12-07-2023 by

In parts one and two of this blog series, we discussed the “Why” and “How” of getting started with deployment automation. This third installment focuses on some of the common pitfalls you may encounter as you create your own automations for deploying SAS environments.

 

Perfect is the enemy of Good

Creating a pipeline that works in all circumstances and encompasses every step of a deployment is a complicated task, and trying to build the perfect pipeline from the get-go is a plan destined to fail. Much like the move from waterfall projects to more agile projects, building your pipeline step by step is a better way to get started.

 

The previous blog post showed the starting points you can use when creating a pipeline. Using these resources will kickstart your pipeline creation, so start out by deploying your infrastructure in a minimum viable configuration.


You may have several internal guidelines or SAS Viya design and implementation decisions that you have to follow, but these should be implemented in a step-wise approach. The SAS IaC repositories contain sample files you can use to deploy a minimal set of infrastructure. See for instance the sample file for the IaC scripts for Microsoft Azure.
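As an illustration, a pared-down Terraform variables file for the Azure IaC scripts might look like the sketch below. The variable names follow the sample files in the sassoftware/viya4-iac-azure repository, but treat them as assumptions and verify against the samples themselves before use.

```terraform
# Illustrative minimal terraform.tfvars for the SAS viya4-iac-azure scripts.
# Variable names are based on the repository's sample files; verify against
# those samples before use.
prefix   = "viya-sandbox"
location = "eastus"

# Keep the node pool layout minimal to start; add specialized pools later.
default_nodepool_min_nodes = 1
default_nodepool_max_nodes = 3
```

Once this deploys cleanly, layer in your design decisions one variable block at a time rather than all at once.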

 

The same principle applies to deploying software onto the infrastructure. Whether it is supporting software like the NGINX ingress controller or the SAS software itself, again start out with the default configuration. Once you’ve got the initial deployment of the software out of the way, iteratively applying changes makes for a much more pleasant experience. Like the IaC repositories, the SAS documentation contains an initial kustomization file you can use to deploy the default configuration of the SAS software.
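For reference, that initial kustomization.yaml has roughly the shape sketched below. This is a heavily abridged illustration, not the authoritative file: the version in the SAS Viya platform deployment documentation lists many more resources and components, and the entries and hostname here are placeholders.

```yaml
# Abridged sketch of an initial kustomization.yaml; the authoritative version
# in the SAS Viya platform deployment documentation contains more entries.
namespace: viya

resources:
  - sas-bases/base
  - sas-bases/overlays/network/networking.k8s.io   # ingress resources

configMapGenerator:
  - name: ingress-input
    behavior: merge
    literals:
      - INGRESS_HOST=viya.example.com   # replace with your ingress host
```

Starting from this default shape and adding overlays one at a time keeps each change small and easy to roll back.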

 

In short, start simple and improve over time. Redeploying software or infrastructure no longer has to be a time-consuming process; you’ve got automation for that.

 

Dependencies

So, you’ve followed all the advice dispensed so far and built a fantastic pipeline that can deploy your infrastructure and software automatically according to all the requirements and specifications you were given. Time to move on to the next project, right?


Not quite. You see, all these fantastic automations have brought in a ton of interdependencies.

When any of these technologies need to be updated, it may trigger an update to many other components. Even if you want to remain completely static, you may be forced by outside influences to perform updates. Whether these are security vulnerabilities that need to be addressed or deprecations instigated by your cloud provider, it is up to you to address these changes.

 

Your approach should be similar to the initial development of your automation: avoid making many changes at the same time, as that greatly increases the risk that something will fail.

 

Perform regular maintenance on all these components to stay on top of the changing world around you. It is far less stressful to update a component at your own discretion, on your own timeline, than to perform multiple updates with your back against the wall and a deprecation looming over your head.

 

Short but sweet

We would like to end this series of blog posts with a few short and sweet pieces of advice:

  • As already stated in this blog, start small and grow your pipeline over time.
  • When using Terraform to deploy your infrastructure, make backups of your state file; it is a nightmare to reconstruct. (Yes, we’ve been there.)
  • If you need to have differences in your environments, first develop a common pipeline definition and save its configuration in Git. Next, fork the repository for every environment you need. This way you can make generic changes in the common definition and make specific changes in the forked repositories.
  • You can also use a fork to test out patches to your deployment pipeline before backporting them into your common pipeline definition. After changes have been backported, they can be brought into the other pipelines by rebasing them.
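The state-file advice above can be reduced to a habit of taking a timestamped copy before any risky change. The sketch below demonstrates it against a dummy state file in a temporary directory; the paths are illustrative, and with a remote backend you would use `terraform state pull` to obtain the state instead of copying a local file.

```shell
set -eu
# Sketch: back up the Terraform state file before any risky operation.
# A dummy state file in a temporary directory stands in for a real project.
workdir=$(mktemp -d)
echo '{"version": 4}' > "$workdir/terraform.tfstate"   # dummy state for this demo

# Copy the state to a timestamped backup before touching the infrastructure.
mkdir -p "$workdir/state-backups"
stamp=$(date +%Y%m%d-%H%M%S)
cp "$workdir/terraform.tfstate" "$workdir/state-backups/terraform.tfstate.$stamp"
```

In a real pipeline this copy step would run automatically right before `terraform apply`.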
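The fork-and-rebase workflow from the last two bullets can be sketched with throwaway local repositories, as below. Repository names and file contents are illustrative; in practice the "fork" would live on your Git server rather than in a local clone.

```shell
set -eu
# Sketch of the fork-and-rebase workflow: a common pipeline definition plus
# a per-environment fork that picks up generic changes by rebasing.
tmp=$(mktemp -d)
cd "$tmp"

# The common pipeline definition lives in its own repository.
git init -q common
cd common
git config user.email demo@example.com
git config user.name demo
echo "stage: deploy-infrastructure" > pipeline.yaml
git add pipeline.yaml
git commit -qm "common pipeline definition"
cd ..

# "Fork" the common repository for a specific environment (here: dev).
git clone -q common dev-fork
cd dev-fork
git config user.email demo@example.com
git config user.name demo
echo "replicas: 1" > dev-overrides.yaml
git add dev-overrides.yaml
git commit -qm "dev-specific configuration"

# Later, a generic change lands in the common definition...
cd ../common
echo "stage: deploy-software" >> pipeline.yaml
git commit -aqm "generic change in common definition"

# ...and the fork brings it in by rebasing its own commits on top of it.
cd ../dev-fork
git pull --rebase -q
```

After the rebase, the fork contains both the generic change and its environment-specific commit, with no merge commits cluttering the history.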

 

Conclusion

In this blog series about deployment automation we have provided you with information about why and how to get started. In this last post of the series, we offered guidelines to help you avoid some common pitfalls. The next step is yours: get started on your automation journey.

 

 

Articles in this deployment automation series

Part 1: Deployment Automation - Why? 

Part 2: How do I start with SAS Viya Deployment Automation?

Part 3: What are common pitfalls when starting with deployment automation?

 

 

