BojanBelovic, SAS Employee

CI360 agents are Java programs/executables that enable and facilitate CI360 integration with external systems. They are generally lightweight and can run anywhere, as long as they have connectivity to CI360 and to any internal or third-party systems they need to reach.

In many cases, although not all, agents can be stateless and require few resources. This makes them good candidates for containerization. When deployed and run as containers, agents are easier to deploy, manage, and scale, and can leverage existing container management and deployment platforms and infrastructure.

In this post, we look at how to deploy CI360 custom agents in containers, either in a simple Docker deployment or into a Kubernetes environment.

 

This was written a while ago by two of us (Rob Sneath and Bojan Belovic) as part of an internal Customer Intelligence knowledge base, but we believe it's useful for the broader Customer Intelligence 360 community.

 

Where examples were necessary, we used our CI360 CAS agent. Any references to the CAS (SAS Cloud Analytic Services) streaming agent are for illustration purposes only; this approach is applicable to any custom CI360 agent.

 

Prerequisites

In our discussion of deploying agents as containers, we will assume the basic prerequisite is in place: a runnable agent, packaged with all of its dependencies and necessary files. Several agents developed within the SAS Customer Intelligence practice can be found on GitHub with Dockerfiles included, which makes this process easier.

 

A complete agent distribution generally contains:

  • Main agent JAR file(s)
  • Dependencies/libraries
  • Start script, stop script, and/or known command lines for starting the agent (used for the Docker entry point)
  • A command line to verify the agent is running (health check)

It is assumed that all of the above are in place and that the agent can already be executed standalone.

 

Standalone CAS Agent Example

Since we will be using our CAS agent as an example of an agent container, here's what it looks like and how it runs standalone:

 

-rw-r--r-- 1 bobelo     570 Mar 26  2021 Dockerfile
-rw-r--r-- 1 bobelo   10695 Dec  8 14:53 README.md
-rw-r--r-- 1 bobelo     769 Aug 11 12:20 agent.config
-rw-r--r-- 1 bobelo   36604 Dec  8 16:53 ci360-cas-agent-21.11.1.jar
drwxr-xr-x 1 bobelo       0 Dec  8 16:53 dependency/
drwxr-xr-x 1 bobelo       0 Sep 22 11:36 k8/
-rw-r--r-- 1 bobelo    3586 Jan 26  2021 logback.xml
drwxr-xr-x 1 bobelo       0 Aug 12 07:06 logs/
-rw-r--r-- 1 bobelo     117 Mar  9  2020 run_agent.cmd
-rwxr-xr-x 1 bobelo     321 Apr 17  2020 run_agent.sh*
-rw-r--r-- 1 bobelo      44 Mar 23  2020 status.sh
-rwxr-xr-x 1 bobelo     190 Feb 23  2021 stop_agent.sh*

 

The run_agent.sh script contains the java executable line that starts the agent:

 

 

java -Dlogback.configurationFile=logback.xml -DconfigFile=agent.config -jar ci360-cas-agent-21.11.1.jar

 

 

Building Docker Image

Some of our agents include a Dockerfile as part of the distribution or Git repository. In that case, the Dockerfile does not need to be created. The provided Dockerfile should be sufficient without modification for most use cases, but may have to be adjusted in some situations.

 

If the Dockerfile needs to be created for an agent that doesn’t have one (or if deploying a brand-new custom agent into a container), here are some basics about creating Dockerfiles:

 

Sample Dockerfile

 

 

FROM openjdk:8-jre-alpine
LABEL version="${project.version}"
LABEL company="SAS Institute"
# build arguments
ARG VERSION=${project.version}
# copy app into image
COPY . /opt/ci360-cas-agent/
# set working dir
WORKDIR /opt/ci360-cas-agent
# set HEALTHCHECK
HEALTHCHECK CMD ps -ef | grep java | grep ci360-cas-agent
# run application with this command line
ENTRYPOINT ["java", "-Dlogback.configurationFile=logback.xml", "-DconfigFile=agent.config", "-Djavax.net.ssl.trustStore=viya4_trustedcerts.jks", "-Xms32m", "-Xmx2048m", "-jar", "ci360-cas-agent-${project.version}.jar"]

 

 

Dockerfile instructions explained:

 

FROM - The FROM instruction sets the base image from which our new image is built. Every Dockerfile must start with a FROM instruction.

 

COPY - The COPY instruction copies new files or directories from the source (first argument) and adds them to the filesystem of the container at the destination path (second argument).

 

WORKDIR - The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile. If we omit the WORKDIR instruction, we simply have to specify the full path for all files referenced in ENTRYPOINT.

 

ENTRYPOINT - An ENTRYPOINT allows you to configure a container that runs as an executable. The command line specified as ENTRYPOINT is executed when the container is started.

 

HEALTHCHECK - The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working. When a container has a health check specified, it has a health status in addition to its normal status. This status is initially "starting". Whenever a health check passes, it becomes "healthy" (whatever state it was previously in). After a certain number of consecutive failures, it becomes "unhealthy".
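The health check in our sample Dockerfile uses Docker's defaults. It can be tuned with optional HEALTHCHECK flags; as a sketch (the interval, timeout, and retry values below are illustrative choices, not from the agent's actual Dockerfile):

```dockerfile
# Check every 30s, allow 5s per check, mark unhealthy after 3 consecutive failures
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD ps -ef | grep java | grep ci360-cas-agent || exit 1
```

The trailing `|| exit 1` makes the failure case explicit, since Docker treats any non-zero exit code from the check command as a failed probe.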

 

Building the image

 

From the top directory of the agent distribution (where the distribution archive has been unzipped), run the following command:

 

docker build -t ci360-cas-agent .

 

This reads the Dockerfile in the current directory and builds the image in the local Docker repository with the specified name.

 

Publishing Docker Image

Once the image has been built, it can optionally be pushed/published to your chosen container registry. In the example below, we push the image to a private AWS container registry:

 

Tag the image: 

 

docker tag ci360-cas-agent:latest xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/ci360-cas-agent:latest

 

 

Push the image: 

 

docker push xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/ci360-cas-agent:latest

 

 

Once pushed to a registry, an image can easily be used by others to deploy and run agent containers without the need to build the image themselves or even download agent distribution files.

 

Running Docker Container

Since we can run multiple containers based on the image we built, it is recommended that the agent configuration file be maintained separately, on the host machine.

 

In our CAS agent example, all agent files, including the agent.config file, are located in /opt/ci360-cas-agent within the image. That means we can mount an external file to that location (/opt/ci360-cas-agent/agent.config) to override the config packaged with the agent. The agent's entrypoint parameters point to this file, so the agent will read its configuration from that location.

 

Once the agent image has been created, we can run the container using:

 

docker run -d --mount type=bind,source=/opt/install/cas/cas_agent_prd1.config,target=/opt/ci360-cas-agent/agent.config --name ci360-cas-prd1 ci360-cas-agent:latest

 

 

The --mount option in the command line above maps a file on the host filesystem (/opt/install/cas/cas_agent_prd1.config) to the agent.config file within the container. We also give the container an explicit name.

 

Since our goal is to run multiple instances, we can provide different config files (if the configuration actually differs between instances) and container-specific names. For example:

 

docker run -d --mount type=bind,source=/opt/install/cas/cas_agent_prd2.config,target=/opt/ci360-cas-agent/agent.config --name ci360-cas-prd2 ci360-cas-agent:latest

docker run -d --mount type=bind,source=/opt/install/cas/cas_agent_demo1.config,target=/opt/ci360-cas-agent/agent.config --name ci360-cas-demo1 ci360-cas-agent:latest

 

 

Here we launch two new containers, named ci360-cas-prd2 and ci360-cas-demo1, using two local files, cas_agent_prd2.config and cas_agent_demo1.config, as their respective configuration files.

 

[Image: multiple_containers.jpg]

 

Of course, we can also run multiple containers that use the same configuration file, where multiple instances simply provide horizontal scaling and failover capabilities. This will be a common scenario for a single agent with always-on and high-volume requirements.
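One convenient way to manage several identical instances is a Compose file. This is only a sketch under our own assumptions (the service name and restart policy are illustrative, and the config path reuses the earlier docker run example):

```yaml
# docker-compose.yml (illustrative): run scaled copies of the agent image,
# all sharing one host-side configuration file
services:
  cas-agent:
    image: ci360-cas-agent:latest
    volumes:
      - /opt/install/cas/cas_agent_prd1.config:/opt/ci360-cas-agent/agent.config
    restart: unless-stopped
```

Starting it with `docker compose up -d --scale cas-agent=3` would run three containers from the same image and configuration, giving the horizontal scaling described above without naming each container by hand.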

 

[Image: multiple_containers_single_tenant.jpg]

 

Deployment Steps for Kubernetes
To deploy the agent to a Kubernetes environment, we will assume that the agent image has been built, that it can be deployed and executed standalone in a Docker environment (e.g., on a desktop), and that it has been pushed to a container registry.

 

Create Kubernetes Artifacts
A manifest YAML file needs to be created for deployment to Kubernetes. A sample YAML file (ci360-cas-agent.yaml) is provided in its entirety below, and can also be found in the SAS CI GitHub repository:

https://github.com/sassoftware/ci360-extensions/blob/main/code/ci360-cas-agent/k8/ci360-cas-agent.ya... 

 

Take a copy of the ci360-cas-agent.yaml manifest from the k8 directory of the agent, as this will be used to create the K8s artifacts. The YAML file contains definitions for creating a PersistentVolumeClaim, ConfigMap, Deployment, and a HorizontalPodAutoscaler. Part of the YAML file is the configuration map, which in our example contains CI360 connection information, including the CI360 gateway URL and credentials, as well as CAS details.


You will generally need to edit this YAML file to provide your ci360.gatewayHost, ci360.tenantID, ci360.clientSecret, cas.username, and cas.password values, which can be found in the ConfigMap section:

 

ci360.gatewayHost=extapigwservice-demo.cidemo.sas.com
ci360.tenantID=0bf01xxxxxxxa3a2cacaa
ci360.clientSecret=MTAxOTIxxxxxxxxxxxxxxxxxxxN2ZpaWpp
cas.username=sasdemo
cas.password=xxxxxxxx
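For orientation, properties like these typically sit inside the manifest's ConfigMap roughly as follows. This is a sketch: the ConfigMap name and data key here are assumptions, so check the actual ci360-cas-agent.yaml for the exact structure:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ci360-cas-agent-config   # assumed name; see the real manifest
data:
  agent.config: |
    ci360.gatewayHost=extapigwservice-demo.cidemo.sas.com
    ci360.tenantID=0bf01xxxxxxxa3a2cacaa
    ci360.clientSecret=MTAxOTIxxxxxxxxxxxxxxxxxxxN2ZpaWpp
    cas.username=sasdemo
    cas.password=xxxxxxxx
```

Keeping the properties in a ConfigMap means they can be edited and re-applied without rebuilding the agent image.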

 

 

In this example, you will also need to update the Deployment section to provide details of your "containers - image" and "imagePullSecrets - name" (if deploying from a private container registry):

 

imagePullSecrets:
- name: aws-pull-v2
containers:
- image: xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/ci360-cas-agent:latest

You should now be ready to apply this manifest to your K8s cluster:

 

 

kubectl apply -f ./ci360-cas-agent.yaml --namespace <YOUR NAMESPACE>

 

 

To facilitate mapping of the config file in the YAML file, and to show an example where a change to the Dockerfile is needed, we slightly modified the ENTRYPOINT line in the Dockerfile before building and pushing the agent image:

 

Old:

ENTRYPOINT ["java", "-Dlogback.configurationFile=logback.xml", "-DconfigFile=agent.config", "-Djavax.net.ssl.trustStore=viya4_trustedcerts.jks", "-Xms32m", "-Xmx2048m", "-jar", "ci360-cas-agent-21.09.1.jar"]

 

New:

ENTRYPOINT ["java", "-Dlogback.configurationFile=/ci360-cas-agent-config/logback.xml", "-DconfigFile=/ci360-cas-agent-config/agent.config", "-Djavax.net.ssl.trustStore=viya4_trustedcerts.jks", "-Xms32m", "-Xmx2048m", "-jar", "ci360-cas-agent-21.09.1.jar"]

All we are doing is changing the location within the image that the agent will use to read its logging configuration and agent configuration files, and we will use those same locations within our YAML file.
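To make those paths resolve inside the pod, the Deployment mounts the configuration map at /ci360-cas-agent-config. A minimal sketch of the relevant fragment (the volume and ConfigMap names are assumptions; see the actual manifest in the repository):

```yaml
# Illustrative Deployment fragment: mount the agent's ConfigMap where the
# modified ENTRYPOINT expects its files
spec:
  containers:
  - name: ci360-cas-agent
    image: xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/ci360-cas-agent:latest
    volumeMounts:
    - name: agent-config
      mountPath: /ci360-cas-agent-config
  volumes:
  - name: agent-config
    configMap:
      name: ci360-cas-agent-config   # assumed ConfigMap name
```

Each key in the ConfigMap (for example agent.config or logback.xml) appears as a file under the mount path, which is exactly where the modified ENTRYPOINT looks for them.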

 

 

 

 

 
