
Build a Docker image for your custom web application using GitHub Actions


In a previous article, I explained the process of creating a customized web application/page to incorporate Generative AI into a SAS Visual Analytics report. I have also written other articles that discuss various aspects of web development. From time to time, I receive inquiries about the deployment process for integrating the application into SAS Viya. In this article, I will provide a detailed guide on how to create a Docker image using the application developed in my previous article.

 

Deploying custom web applications in SAS Viya

 

After developing an application, whether you have utilized a development framework like ViteJS or simply created a few HTML pages with JavaScript and CSS files, the next step is to deploy the application and make it accessible within your SAS Viya environment. There are several approaches you can take to achieve this:

 

  1. Hosting the files on a standalone web server alongside other web applications/pages.
  2. If you don't have access to a web server, you can choose to host your files on GitHub Pages or a specific web hosting company.
  3. Alternatively, if a web server is not available, you can deploy your application in the cloud using the same infrastructure as SAS Viya.

For the first option, the process is relatively straightforward. You can simply copy your web application into the web server's structure, and your page will be accessible. This is likely the simplest option. However, it is important to consider the domain name of the web server. If the domain matches that of SAS Viya, there is no need to worry as the web server and SAS Viya will trust each other. However, if the domains are different, additional configuration is required to establish trust between the two domains. This can be achieved in SAS Viya by configuring CORS (Cross-Origin Resource Sharing) and CSRF (Cross-Site Request Forgery) as outlined in the following articles:

 

Sharing for SAS Viya for REST API’s and web developments

 

All about CORS and CSRF for developing web applications with the SAS Visual Analytics SDK

 

For the second choice, there is no requirement to possess your own web server. Instead, you can utilize the infrastructure offered by GitHub, GitLab, or any other web hosting companies. The advantage of this method is the ability to store your code in a repository and automate the deployment process. The article below explains how to set up this process for GitHub: Storing web pages on GitHub for consumption inside the SAS Visual Analytics Data-Driven Content obje.... It is also important to configure SAS Viya to trust the GitHub domain as mentioned in the article.

 

The third option involves deploying the web application/pages using the same cloud infrastructure as SAS Viya. In this scenario, you will need to create a Docker image and deploy it on the preferred cloud provider. Fortunately, the seemingly complex process is actually straightforward, as we will explore in the upcoming sections.

 

Building a Docker image

 

As an example, I will take the project from my previous article. There are several reasons why I chose this project:

 

  1. A colleague asked me for guidance on deploying it in SAS Viya.
  2. The project is based on ViteJS and cannot be deployed as it is.
  3. It contains an API key that needs to be stored and used for building the application.

 

ViteJS is a great library that simplifies the development process. Once you have finished developing your project, you need to build the application. The build process generates a distribution of the application consisting of static files, which can then be deployed on a web server for consumption. You can also validate the build locally, but I will not cover that in this article. Building a ViteJS application is as easy as running the following command:

 

npm run build
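The build script invoked by this command is defined in the project's package.json. A Vite project scaffolded with create-vite typically includes scripts like these (shown here for context; your project's scripts section may differ):

```json
{
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview"
  }
}
```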

 

Executing this command will create a "dist" folder in the project, and the contents of this folder can be copied to a web server. While you can build the project within your editor, it is recommended to include the build process in your CI/CD (Continuous Integration/Continuous Deployment) pipeline. This automation not only saves time but also reduces the size of your repository. Only the source code of your application should be stored in GitHub; generated folders like "node_modules" and "dist" should not be committed, as they can be recreated by running the appropriate commands. Here is the content of the repository seen from my editor:
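To keep those generated folders out of the repository, a minimal .gitignore at the project root could contain:

```
# Generated artifacts; recreated by npm install and npm run build
node_modules/
dist/
```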

 

xab_1_DeployDocker_gitCode.png


 

And on GitHub

 

xab_2_DeployDocker_gitRepo.png

 

The "dist" and the "node_modules" are not saved on GitHub. Consequently, during the Docker image construction, the initial action will involve installing the node modules and generating the "dist" folder. Subsequently, the content of the "dist" folder will be copied into the web server as the second step of the process. It is understandable that these tasks can be repetitive, and our aim is to automate them as part of the Docker image creation process. This is precisely the purpose of the "Dockerfile".

 

xab_3_DeployDocker_Dockerfile.png

 

The initial step in constructing the "dist" folder takes place between lines 1 and 8. Between lines 10 and 13, the creation of the Docker image that includes a web server and the web application occurs. To elaborate on these lines:

  1. Utilize a "builder" container based on a Linux Alpine image with NodeJS already installed.
  2. Specify the directory that will contain all the files in the app folder.
  3. Define an argument that will be passed on the command line during the creation of the Docker image.
  4. Store the command line argument in an environment variable inside the image.
  5. Copy the package.json file from our repository to the app folder. This file contains the information needed to install the node modules and the commands specific to this project.
  6. Execute npm install to install the node modules, which creates the "node_modules" folder.
  7. Copy the contents of the repository into the working directory.
  8. Execute npm run build to generate the "dist" folder.
  9. (Line 9 of the Dockerfile is blank, separating the two stages.)
  10. Utilize a different Docker image based on Linux Alpine with the Nginx web server already installed.
  11. Copy the content of the "dist" folder from the builder image to the location where Nginx stores the files it serves.
  12. Document the port that is exposed by the Nginx web server.
  13. Start the Nginx server.
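Based on the 13 steps above, the Dockerfile in the screenshot might look like the following sketch. The base image tags and the Nginx html path are assumptions (they are the conventional defaults for the official node and nginx Alpine images), so adjust them to match your environment:

```dockerfile
FROM node:alpine AS builder
WORKDIR /app
ARG GEMINI_API_KEY
ENV GEMINI_API_KEY=$GEMINI_API_KEY
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Note how the multi-stage build keeps the final image small: the NodeJS toolchain and "node_modules" live only in the builder stage, and only the static "dist" output is copied into the Nginx image.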

 

By using the Dockerfile in the repository, you can execute the following command to build the Docker image for our web server.

 

docker build . -t ddc_genai:v1.0 --build-arg GEMINI_API_KEY=$GEMINI_API_KEY

 

This docker command will create an image based on the content of the Dockerfile, tag it as ddc_genai:v1.0, and pass the API key as a build argument. The API key is sourced from an environment variable, which means the machine where you run Docker must have an environment variable named GEMINI_API_KEY containing the key used to access the GenAI provider.


 

Bonus: If you have Docker installed on your system, you can leverage it to quickly deploy the application. Once you have built the Docker image, simply run the provided command to start a container and access the newly deployed application.

 

docker run -d --name ddc_genai -p 3000:80 ddc_genai:v1.0

 

In this command, we run the container in detached mode, give it the name ddc_genai, and map port 3000 on the host machine to port 80 inside the container, where Nginx is listening. The application is then reachable on port 3000 of the host.


 

Publish the image to Docker Hub

 

Now that we have established a procedure for generating the Docker image, we can upload it to a registry such as Docker Hub or GitHub Packages. This article outlines the steps for uploading to Docker Hub. Although the instructions are tailored for Docker Hub, the same guidelines apply if you wish to upload the image to your own private registry (such as Harbor). Keep in mind that because the build process embeds the API key, making the image public on Docker Hub will make it available to all users, and you may incur charges for Gemini usage. Therefore, this demonstration is purely for illustrative purposes and should not be implemented as is.

 

xab_4_DeployDocker_DockerPush.png

 

The workflow is triggered by every push to the "main" branch of the repository. It uses the latest version of Ubuntu to run all commands. First, the repository is checked out so that the workflow has access to the code. Next, authentication is performed against Docker Hub using the provided username and password. Finally, the build command is executed, and the image is pushed to Docker Hub with the required build argument. Please note that lines 18, 19, and 26 reference secrets.xxx variables, which are securely stored within the GitHub repository and are never revealed in logs. These variables are used for sensitive information like usernames, passwords, or API keys. They can be defined in the repository like this:

 

xab_5_DeployDocker_secrets.png
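Putting the pieces together, the workflow file might look like the following sketch. The secret names, action versions, and image name are assumptions (so the line numbers will not match the screenshot exactly); substitute the values defined in your own repository:

```yaml
name: Build and push Docker image

on:
  push:
    branches: ["main"]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repository
        uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push the image
        run: |
          docker build . -t ${{ secrets.DOCKERHUB_USERNAME }}/ddc_genai:v1.0 \
            --build-arg GEMINI_API_KEY=${{ secrets.GEMINI_API_KEY }}
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/ddc_genai:v1.0
```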

 

Once you have included the yml file in your repository, in the .github/workflows/ folder, the workflow will be activated whenever there is a push to the "main" branch of the repository. For further details on GitHub Actions, please refer to the documentation.

 

Conclusion

 

As demonstrated in this article, the procedure for constructing a Docker image from an existing project is simple. You have the option to streamline this process using CI/CD methods provided by your Git service provider. Once the image is ready, your Kubernetes administrator can deploy it in the Kubernetes cluster like any other application. It is their responsibility to determine the most suitable configurations to adhere to the existing security standards. You can utilize the same namespace as SAS Viya or establish a separate namespace for your application. Opting to deploy it under the same Ingress controller is the easiest choice as it eliminates the need to configure CORS and CSRF. However, if you opt for deployment under a different domain, ensure to configure SAS Viya correctly.

 

Now, you are equipped with the knowledge to automate the image building process. You have the flexibility to generate the image whenever needed by pushing code to the "main" branch of your repository. You should not need to modify this process unless you intend to introduce additional parameters to the build process. If permitted by the administrator, you can even automate the application deployment with each push, although this may impact the production environment. I strongly advise incorporating more checks in the deployment process to verify the security of your application. This topic goes beyond the scope of this article as it is contingent on how your administrator enforces security in your deployments. Other articles related to deploying web applications are:

 

Deploy a custom web application in the cloud for Data-Driven Content object in SAS Viya 4

 

Storing web pages on GitHub for consumption inside the SAS Visual Analytics Data-Driven Content obje...

 

Deploy DDC Implementation Files in SAS Content Server via SAS Viya GUIs

 

Other articles related to web developments:

 

An approach to SAS Portal in Viya

 

Develop web applications series: Options for extracting data

