When creating a Decision Flow in Intelligent Decisioning, you can test the decision logic within Intelligent Decisioning to make sure the decision works as desired. You can also publish the Decision Flow to MAS and validate its behaviour running in MAS. But it is useful, and often necessary, to get an understanding of how MAS will cope with the Decision Flow in production. What happens when many users call the Decision Flow at the same time? Can MAS cope with it, especially if MAS is serving several decisions, rule sets or analytical models simultaneously? You need to be able to run a load test and measure the performance to see what impact the Decision Flow has on your MAS environment. Load testing allows you to predict how a Decision Flow will perform before it is moved into production.
There are many load-testing tools out there. Today we are going to look at Locust, which is open source, Python based and easy to use.
The idea of Locust is that a swarm of locusts (users) will attack your web services. The user (locust) behaviour is defined in a Python script and the attack is monitored in real time via a web UI. This allows you to identify bottlenecks and helps you configure the environment appropriately.
Set Up Locust
Locust runs on Linux and Windows and its installation is straightforward. The current version requires Python 3.6 or later. Once we have installed Python, we can install Locust by running:
pip3 install locust
If we use Anaconda Python, we can install Locust by running:
conda install -c conda-forge locust
When we have installed Locust, we can check that the installation works by running:
locust --help
This will give us output that looks like this:
Load Test
When we have successfully installed Locust, we can run load tests on Decision Flows (or any other modules) in MAS. The MAS environment where we run the load test needs to have the same setup as the production environment (or at least a very similar one); otherwise the test results won't be very meaningful. Assume we have created a Decision Flow, tested it to ensure the logic works as desired, published it to MAS and done a test run there to make sure it executes in the published environment. We can then prepare a simple Python script to run a load test on the Decision Flow.
Locust Script
For our test we are going to use Users and Tasks in Locust:
A User is a virtual user that performs one or more Tasks.
A Task is a unit of work a user performs. In our case, that is calling a Decision Flow.
For the test scenario we are going to write a test script where a user logs on to Viya to get an access_token and then calls a Decision Flow repeatedly. For this, we write a locust file, which is a regular Python script, to set up the test scenario. From locust we import HttpUser, task and between, as these are needed in the test script:
from locust import HttpUser, task, between
To define the user, we write a class that inherits from class HttpUser, which allows us to make web calls.
class MAS_User(HttpUser):
Within the class MAS_User we define the work the user is doing as functions.
First, the user needs to get an access_token from Viya. As we only need to do this once, we use the default function on_start() that comes with class HttpUser. The function on_start() is called when a user is created and before any Task is executed. Within on_start() we call the Viya web services to receive the access_token, which is then used for authorization when calling the Decision Flow.
def on_start(self):
To create a Task for a user we create a Python function within the class MAS_User and give it a meaningful name. To make the function a Task, we add the @task decorator on top of the function.
@task(1)
def approve_contract(self):
@task takes an optional weight argument that can be used to specify the task’s execution ratio. For example, if we have two tasks, @task(3) and @task(6), then @task(6) will have twice the chance of being picked as @task(3).
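As an illustration, a user class could weight two hypothetical tasks like this (the task names below are just placeholders, not part of our test script):
# check_eligibility has weight 3, approve_contract has weight 6,
# so approve_contract is picked twice as often
@task(3)
def check_eligibility(self):
    pass

@task(6)
def approve_contract(self):
    pass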
To call a web service from class MAS_User we use the request functions provided by its parent class HttpUser. This set of functions is a slightly extended version of the requests.Session class from the python-requests library and mostly works exactly the same.
To call a Decision Flow, we use this code:
url= "/microanalyticScore/modules/approve_contract/steps/execute"
response= self.client.post(url, headers=self.headers, data= payload, name=mas_module)
We can also set wait_time on class MAS_User to make the user wait between a minimum time and a maximum time after each task is executed. To assign a time interval to wait_time we use the Locust function between(). Below we set a time interval to wait between 1.5 and 3.5 seconds after each task.
wait_time= between(1.5, 3.5)
When we have written the test script, we save it under the name locustfile.py, as this is the default file name Locust looks for. We can use a different name, but then we have to point Locust at the file when starting the locust server, as shown below.
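For example, assuming we had saved the script under the hypothetical name mas_loadtest.py, we would start Locust with the -f option:
locust -f mas_loadtest.py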
Finally, the locustfile.py file could look like this for calling a Decision Flow in MAS:
import base64
import json
from locust import HttpUser, task, tag, between
user= 'sasdemo'
password= '********'
clientId= 'mas.client'
clientSecret= '********'
class MAS_User(HttpUser):
    headers = {'Authorization': '', 'Content-Type': 'application/json'}
    wait_time = between(5.5, 10.0)

    def on_start(self):
        # Build the Basic authorization header from the client id and secret
        Authorization = clientId + ':' + clientSecret
        Authorization64 = base64.b64encode(bytes(Authorization, 'utf-8'))
        # Get an access_token from Viya (Locust prepends the host, so the URL is relative)
        url = "/SASLogon/oauth/token?grant_type=password&username=%s&password=%s" % (user, password)
        response = self.client.get(url, headers={'Content-Type': 'application/json', 'Authorization': 'Basic ' + Authorization64.decode('ascii')}, name='Get access token')
        # Store the access_token for all subsequent Decision Flow calls
        self.headers['Authorization'] = 'Bearer ' + response.json()['access_token']

    @task(1)
    def approve_contract(self):
        mas_module = 'approve_contract'
        mas_input = {
            "inputs": [
                {
                    "name": "contract_id_",
                    "type": "decimal",
                    "value": 10007
                }
            ]
        }
        # Call the execute step of the published Decision Flow in MAS
        url = "/microanalyticScore/modules/%s/steps/execute" % mas_module.lower()
        payload = json.dumps(mas_input)
        response = self.client.post(url, headers=self.headers, data=payload, name=mas_module)
Run Load Test
When we have written and saved locustfile.py, we can run the load test and monitor the results in Locust’s web UI. To start the locust server, we open the Linux command line in the directory where we saved locustfile.py and run locust:
locust
Locust starts and prints some output to show that it is running.
We then open the Locust web UI to start the load test. In a browser, we call the server where Locust is running on port 8089:
http://<locust-server>:8089/
A UI comes up where we set the parameters for the load test:
The total number of users that will attack our MAS server.
The spawn rate - the number of new users per second added to the test until the total number is reached.
The IP address where MAS is located. If Locust and MAS are running on the same Linux server, we can use “localhost” or “127.0.0.1”.
By clicking on Start swarming we start the test by launching simulated users to call our Decision Flow. As the web services are called, we can monitor their behaviour in the Locust UI.
On the Statistics tab in the UI we get information like:
Requests — Total number of requests
Fails — Number of requests that failed
Median — Response time in ms for the 50th percentile
90%ile — Response time in ms for the 90th percentile
Average — Average response time in ms
Min — Fastest response time in ms
Max — Slowest response time in ms
Average size — Average size in bytes of a response
Current RPS — Current requests per second
Current Failures/s — Number of failures per second
This gives us an understanding of how the web services are performing.
On the Charts tab we get charts for:
Total Requests per Second
Response Time
Number of Users
We can also download the data via the Download Data tab.
There are various ways of monitoring the performance of our Decision Flow (and other MAS modules, if we have added them to the test scenario in locustfile.py) to identify bottlenecks and to help us configure our environment appropriately.
Command Line
There is also a command line interface for Locust from where you can configure and start a test and write the output to CSV files. To get a list of command line options, run:
locust -h
… or see the Locust documentation. This makes Locust a compelling tool to use in CI/CD pipelines, which I’m going to talk about in an upcoming blog.
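As a sketch, a headless run that simulates 50 users, spawns 5 new users per second, runs for 10 minutes against a local MAS server and writes the statistics to CSV files could look like this (the numbers, host and the "results" file prefix are just example values):
locust --headless -u 50 -r 5 --run-time 10m --host http://localhost --csv results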
Conclusion
If you have some Python skills, Locust is a convenient tool with a small footprint for running load tests on web services. You have seen a basic example of how to run a load test on MAS using Locust. There are, of course, other, more complex ways to write test scripts. Just have a look at the Locust documentation to see different ways of writing test scripts and to learn more about what you can do with Locust.