In a previous article, I explained the process of creating a customized web application/page to incorporate Generative AI into a SAS Visual Analytics report. I have also written other articles that discuss various aspects of web development. From time to time, I receive inquiries about the deployment process for integrating the application into SAS Viya. In this article, I will provide a detailed guide on how to create a Docker image using the application developed in my previous article.
After developing an application, whether you have utilized a development framework like ViteJS or simply created a few HTML pages with JavaScript and CSS files, the next step is to deploy the application and make it accessible within your SAS Viya environment. There are several approaches you can take to achieve this:
For the first option, the process is relatively straightforward: you copy your web application into the web server's directory structure, and your page becomes accessible. This is likely the simplest option, but you must consider the web server's domain name. If it matches the SAS Viya domain, there is nothing to worry about, as the web server and SAS Viya will trust each other. If the domains differ, additional configuration is required to establish trust between them. This is done in SAS Viya by configuring CORS (Cross-Origin Resource Sharing) and CSRF (Cross-Site Request Forgery) as outlined in the following articles:
Sharing for SAS Viya for REST API’s and web developments
All about CORS and CSRF for developing web applications with the SAS Visual Analytics SDK
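As a quick illustration of what that configuration involves, the cross-origin and cross-site settings live in SAS Environment Manager under the sas.commons.web.security configuration definitions. The sketch below uses placeholder values (the domain is hypothetical, and the exact fields and syntax for your release are covered in the articles above):

```yaml
# Illustrative only -- set via SAS Environment Manager > Configuration.
# Replace apps.example.com with the domain hosting your web application.
sas.commons.web.security.cors:
  allowedOrigins: https://apps.example.com
  allowedMethods: GET,POST,PUT,DELETE,OPTIONS
  allowedHeaders: "*"
sas.commons.web.security.csrf:
  allowedUris: https://apps\.example\.com/.*
```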
For the second option, you do not need your own web server. Instead, you can use the infrastructure offered by GitHub, GitLab, or any other web hosting company. The advantage of this method is the ability to store your code in a repository and automate the deployment process. The article below explains how to set up this process for GitHub: Storing web pages on GitHub for consumption inside the SAS Visual Analytics Data-Driven Content obje.... As mentioned in that article, it is also important to configure SAS Viya to trust the GitHub domain.
The third option involves deploying the web application/pages using the same cloud infrastructure as SAS Viya. In this scenario, you will need to create a Docker image and deploy it on the preferred cloud provider. Fortunately, the seemingly complex process is actually straightforward, as we will explore in the upcoming sections.
As an example, I will take the project from my previous article. There are several reasons why I chose this project:
ViteJS is a great library that simplifies the development process. Once you have finished developing your project, you need to build the application. The build process generates a distribution of the application consisting of static files, which can then be deployed on a web server for consumption. You can validate the build output locally if you wish, but I will not cover that in this article. Building a ViteJS application is as easy as running the following command:
npm run build
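The "build" script behind this command is defined in the project's package.json. In a standard Vite scaffold (which is what the sketch below assumes; your project may add options), the scripts section looks like this:

```json
{
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview"
  }
}
```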
Executing this command creates a "dist" folder in the project, and the contents of this folder can be copied to a web server. While you can build the project from within your editor, it is recommended to include the build as part of your CI/CD (Continuous Integration/Continuous Deployment) pipeline. This automation not only saves time but also reduces the size of your repository: only your application's source code should be stored on GitHub, for example, while generated folders like "node_modules" and "dist" should not be committed, as they can be recreated by running the appropriate commands. Here is the content of the repository seen from my editor:
And here is the repository on GitHub:
The "dist" and "node_modules" folders are not saved on GitHub. Consequently, the first step during the Docker image construction is to install the node modules and generate the "dist" folder; the second step copies the contents of "dist" into the web server. These tasks are repetitive, and our aim is to automate them as part of the Docker image creation. This is precisely the purpose of the "Dockerfile".
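Keeping these generated folders out of the repository is handled by a .gitignore file; a minimal version for this kind of project might contain:

```text
node_modules/
dist/
```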
The first step, building the "dist" folder, takes place between lines 1 and 8 of the Dockerfile. Lines 10 through 13 create the final Docker image that contains the web server and the web application. To elaborate on these lines:
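Since the Dockerfile itself appears in the article only as a screenshot, here is a rough two-stage sketch matching that description. The base images, Node version, and the VITE_-prefixed variable name are my assumptions, not necessarily what the original project uses:

```dockerfile
# Stage 1: install dependencies and build the "dist" folder
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
ARG GEMINI_API_KEY
# Assumption: the key is exposed to the Vite build as a VITE_-prefixed variable
ENV VITE_GEMINI_API_KEY=$GEMINI_API_KEY
COPY . .
RUN npm run build

# Stage 2: copy the static files into an NGINX web server image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

The two-stage approach keeps the final image small: the Node toolchain and "node_modules" live only in the build stage and are discarded once "dist" has been copied out.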
By using the Dockerfile in the repository, you can execute the following command to build the Docker image for our web server.
docker build . -t ddc_genai:v1.0 --build-arg GEMINI_API_KEY=$GEMINI_API_KEY
This command creates an image based on the instructions in the Dockerfile, tags it as ddc_genai:v1.0, and passes the API key as a build argument. The key is sourced from an environment variable, meaning the machine where you run Docker must have an environment variable named GEMINI_API_KEY containing the key used to access the GenAI provider.
Bonus: If you have Docker installed on your system, you can leverage it to quickly deploy the application. Once you have built the Docker image, simply run the provided command to start a container and access the newly deployed application.
docker run -d --name ddc_genai -p 3000:80 ddc_genai:v1.0
In this command, the -d flag runs the container in the background, --name assigns a name to the container, and -p 3000:80 maps port 3000 on the host machine to port 80 inside the container, where the web server is listening. The application is then reachable on port 3000 of the host.
Now that we have a procedure for generating the Docker image, we can upload it to a registry such as Docker Hub or GitHub Packages. This article outlines the steps for Docker Hub, but the same approach applies if you want to push the image to your own private registry (such as Harbor). Keep in mind that the API key is baked into the image at build time: making the image public on Docker Hub exposes it to all users, and you may incur charges for Gemini usage. This demonstration is therefore purely illustrative and should not be implemented as is.
The workflow is triggered by every push to the "main" branch of the repository and runs all commands on the latest Ubuntu runner. First, the repository is checked out so the workflow has access to the code. Next, the workflow authenticates against Docker Hub using the provided username and password. Finally, the build command is executed and the image is pushed to Docker Hub with the required build argument. Please note that lines 18, 19, and 26 reference secrets (secrets.xxx), which are securely stored within the GitHub repository and are never revealed in logs. These secrets are used for sensitive information like usernames, passwords, or API keys, and can be defined in the repository under Settings > Secrets and variables > Actions:
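The workflow file itself is shown as a screenshot in the article; a sketch approximating the steps just described could look like the following. The secret names and action versions are my assumptions, and the line numbering will not match the original exactly:

```yaml
# Sketch of a GitHub Actions workflow matching the description above.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repository
        uses: actions/checkout@v4
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push the image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/ddc_genai:v1.0
          build-args: |
            GEMINI_API_KEY=${{ secrets.GEMINI_API_KEY }}
```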
Once you have included the yml file in your repository, located under .github/workflows/, the workflow will be activated whenever there is a push to the "main" branch. For further details on GitHub Actions, please refer to the documentation.
As demonstrated in this article, the procedure for constructing a Docker image from an existing project is simple, and you can streamline it using the CI/CD tools provided by your Git service provider. Once the image is ready, your Kubernetes administrator can deploy it in the Kubernetes cluster like any other application. It is their responsibility to determine the most suitable configuration to adhere to existing security standards. You can use the same namespace as SAS Viya or create a separate namespace for your application. Deploying it behind the same Ingress controller, under the same domain, is the easiest choice, as it eliminates the need to configure CORS and CSRF. If you deploy under a different domain, make sure to configure SAS Viya accordingly.
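As an illustration of what such a deployment might look like (all names, the namespace, and the host below are hypothetical; your Kubernetes administrator will adapt them to local standards):

```yaml
# Illustrative manifests only -- adjust image, namespace, and host.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ddc-genai
  namespace: sas-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ddc-genai
  template:
    metadata:
      labels:
        app: ddc-genai
    spec:
      containers:
        - name: ddc-genai
          image: myrepo/ddc_genai:v1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ddc-genai
  namespace: sas-apps
spec:
  selector:
    app: ddc-genai
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ddc-genai
  namespace: sas-apps
spec:
  ingressClassName: nginx
  rules:
    - host: viya.example.com
      http:
        paths:
          - path: /ddc-genai
            pathType: Prefix
            backend:
              service:
                name: ddc-genai
                port:
                  number: 80
```

Using the SAS Viya host in the Ingress rule, as sketched here, is what keeps the application on the same origin and avoids the CORS and CSRF configuration.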
Now, you are equipped with the knowledge to automate the image building process. You have the flexibility to generate the image whenever needed by pushing code to the "main" branch of your repository. You should not need to modify this process unless you intend to introduce additional parameters to the build process. If permitted by the administrator, you can even automate the application deployment with each push, although this may impact the production environment. I strongly advise incorporating more checks in the deployment process to verify the security of your application. This topic goes beyond the scope of this article as it is contingent on how your administrator enforces security in your deployments. Other articles related to deploying web applications are:
Deploy a custom web application in the cloud for Data-Driven Content object in SAS Viya 4
Deploy DDC Implementation Files in SAS Content Server via SAS Viya GUIs
Other articles related to web developments:
An approach to SAS Portal in Viya
Develop web applications series: Options for extracting data