The vision: A SAS app to enhance your safari park experience
SAS and open source can help you detect and learn about animals
Imagine driving around a safari park. You find a beautiful, interesting animal that you would like to know more about. Where does it come from? What does it do? What is it, even? Animals are walking around freely, zoo information signs are nowhere to be found, and since you are in your own car, there is no ranger to ask.
The solution is an application that we call the SAS AI Ranger. Created during the AI Hackathon, an internal competition held during the SAS Global Customer Advisory Academy, it allows you to take a picture of an animal and learn about it on the go. Our app allows you to enjoy safari animals that are walking around freely without missing an opportunity to learn. On top of that, it provides the safari park with valuable information on which animals their customers like and want to learn more about. This will help them improve their content for the customers.
Through the SAS Global Customer Advisory Academy, young professionals get to spend 10 weeks at SAS headquarters in Cary, NC, to learn about the latest SAS innovations from the company's brightest minds. During one of those weeks, a hackathon introduced us to the breadth and depth of the leading capabilities that SAS offers in the field of artificial intelligence. After three days of classes, it was time for us associates to get our hands dirty and start a day and a half of hacking in groups of four or five people. The result had to be a full-fledged demonstration of an end-to-end AI application showing how the power of SAS can create value. Our result: the SAS AI Ranger. In the next section, we’ll show you how it works.
A computer vision use case
Optimization, natural language processing, computer vision, anomaly detection, network analysis, recommendation engines: all of these AI use cases can be built with SAS capabilities. During the hackathon, our team chose to develop a computer vision application powered by both SAS and open source technologies.
Since we had very limited time to build an end-to-end app, we parallelized our work in two ways. On one hand, we trained a YOLO object detection model via transfer learning inside SAS Cloud Analytic Services (CAS), using DLPy and SWAT, SAS's open source Python libraries. On the other hand, we simultaneously developed the front and back ends of a web app. Using HTML5, CSS and Bootstrap, we built a responsive website that lets users upload their own images and detect animals in them. On the back end, the web server was written in Python using the Flask framework.
Once the model was trained and the web app built, it was time to deploy and start scoring the model. To achieve this, we exported the model as an astore file, and we deployed it inside the web server. Then, we connected our Flask web app to CAS, once again using SWAT, to perform instantaneous scoring of images! Now, it’s all set: SAS AI Ranger is detecting animals and helping users acquire insights about them.
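To illustrate that last step, here is a minimal, hypothetical sketch of how the back end might turn raw CAS scoring output into a response for the web front end. The row layout, field names and threshold are our assumptions for illustration, not the actual hackathon code.

```python
# Hypothetical glue between CAS scoring output and the web front end.
# Row layout (label, confidence, YOLO-style box coordinates) is assumed.

def format_detections(rows, threshold=0.2):
    """Turn raw detection rows (class label, confidence, box center x/y and
    width/height relative to the image) into a JSON-ready list, keeping
    only confident detections."""
    detections = []
    for label, prob, x, y, w, h in rows:
        if prob < threshold:
            continue  # drop low-confidence boxes before they reach the UI
        detections.append({
            'animal': label,
            'confidence': round(prob, 3),
            'box': {'x': x, 'y': y, 'w': w, 'h': h},
        })
    # Most confident detections first, so the app shows the best match on top
    return sorted(detections, key=lambda d: d['confidence'], reverse=True)

# Example with made-up scoring rows
rows = [('Zebra', 0.91, 0.5, 0.4, 0.3, 0.6),
        ('Giraffe', 0.08, 0.1, 0.2, 0.1, 0.3)]
print(format_detections(rows))
```

In a Flask route, a function like this would sit between the SWAT scoring call and the JSON response sent back to the browser.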
SAS AI Ranger detects different animals and shows you the interaction between them
From a business perspective, SAS AI Ranger is not only capable of educating children (and adults) about wildlife. Enhancing the customer experience in a drive-through safari may also increase customer satisfaction by enabling a personalized visit, and it may give the park a deeper understanding of what customers like. The bottom line: this adds value for safari parks if they capitalize on these customer insights. Now, we’ll move to a different point of view and show part of our code.
How we coded the SAS AI Ranger
In this section, we describe how SAS AI Ranger looks under the hood. To train our model, we use pictures from the COCO dataset. First, we need to create a training set of pictures that are relevant for our use case. We also need an annotation file that uses bounding boxes to describe where the objects in each picture are.
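COCO annotations store each box as a top-left corner plus a width and height in pixels, while the model definition below uses coord_type='yolo', i.e. normalized center coordinates and sizes. A small sketch of that conversion (the function name and layout are ours, not from the original notebook):

```python
# Hedged sketch: converting a COCO-style bounding box (top-left x/y plus
# width/height, in pixels) into normalized YOLO coordinates (box center
# x/y and size, as fractions of the image dimensions).

def coco_to_yolo(bbox, img_width, img_height):
    x_min, y_min, box_w, box_h = bbox
    x_center = (x_min + box_w / 2) / img_width
    y_center = (y_min + box_h / 2) / img_height
    return (x_center, y_center, box_w / img_width, box_h / img_height)

# A 100x200 pixel box with its top-left corner at (150, 100)
# inside a 640x480 image:
print(coco_to_yolo((150, 100, 100, 200), 640, 480))
# center and size come back as fractions of the image dimensions
```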
After creating the set of pictures we want to use, we train the model on SAS Viya from a Jupyter notebook with DLPy and SWAT.
First, we import the necessary libraries. Then, we open a session to connect to SAS Viya.
# Import packages
import swat
from dlpy import *
import numpy as np
import pandas as pd
# Establish connection to the CAS server
cas_host = 'localhost'
cas_port = 5570
cas_username = 'username'
cas_password = 'password'
cas_connection = swat.CAS(cas_host, cas_port, cas_username, cas_password)
# Load the action sets we need
cas_connection.loadactionset('astore')
cas_connection.loadactionset('image')
cas_connection.loadactionset('deepLearn')
After everything is set up, we use DLPy to train and deploy the model.
# Train the YOLO model using transfer learning
cas_connection.table.addcaslib(activeonadd=False,
                               datasource={'srctype': 'path'},
                               name='caslib_with_training_data',
                               path='path_to_caslib',
                               subdirectories=True)
cas_connection.table.loadtable(casout={'name': 'tiny_yolo_v2', 'replace': True},
                               caslib='caslib_with_training_data',
                               path='caslib_name.sashdat')
yolo_model = Tiny_YoloV2(cas_connection,
                         n_classes=3,
                         grid_number=13,
                         height=416, width=416,
                         predictions_per_grid=5,
                         anchors=yolo_anchors,
                         max_boxes=15,
                         coord_type='yolo',
                         max_label_per_image=15,
                         class_scale=1.0,
                         coord_scale=1.0,
                         prediction_not_a_object_scale=1,
                         object_scale=5,
                         detection_threshold=0.2,
                         iou_threshold=0.05)
yolo_model.fit(data=tbl_new,
               optimizer=optimizer,
               data_specs=data_specs,
               n_threads=1,
               record_seed=13309,
               force_equal_padding=True,
               gpu=gpu)
# Model deployment
yolo_model.deploy(path='path_to_deploy', output_format='astore')
Now we can use the deployed model in our web app as shown before!
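As an aside, a bit of back-of-the-envelope arithmetic gives a feel for what the grid parameters in the model definition above imply, assuming the standard YOLOv2 output layout (this is illustrative math, not code from the app):

```python
# Illustrative arithmetic for the Tiny YOLOv2 settings used above,
# assuming the standard YOLOv2 output layout.

grid_number = 13          # the 416x416 input is divided into a 13x13 grid
predictions_per_grid = 5  # each grid cell predicts 5 anchor boxes
n_classes = 3             # number of animal classes we trained on

# Every grid cell proposes one box per anchor:
candidate_boxes = grid_number * grid_number * predictions_per_grid
print(candidate_boxes)  # 845 candidate boxes per image

# Each box carries 4 coordinates, 1 objectness score and 3 class scores,
# so the final layer outputs this many values per grid cell:
values_per_cell = predictions_per_grid * (4 + 1 + n_classes)
print(values_per_cell)  # 40
```

Of those 845 candidates, detection_threshold=0.2 discards the low-confidence boxes, and iou_threshold controls how heavily overlapping boxes are suppressed.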
We won! And we learned a ton
After we presented our solution to the judges and our fellow associates, the feedback was very positive: so positive that we were awarded first place in the hackathon! Honored as we were to win, we think the most valuable thing we took away from this hackathon was the experience: working with a nicely integrated, easy-to-use combination of open source and SAS, leveraging the power of SAS's execution engine, collaborating in a multidisciplinary team, and applying our knowledge with an approach that always keeps value for the customer in mind. All in all, it was a very insightful week with a thought-provoking application of AI.
Also, it was a great chance to discover one of the many useful applications of computer vision. Here we presented one very specific use case, but computer vision can also be applied in other major industries, such as energy production, for example to recognize which factors need to be adjusted to increase the efficiency of solar farms. The possibilities are countless, so we encourage you to keep exploring!
LEARN MORE
Now that you have finished reading this article, go ahead and read more about how SAS used computer vision to help boost clean energy production as one of many #Data4Good projects.
Authors
Collaborators on the SAS AI Ranger and this article were: @AntonieBerkel, @BusraOzaydin, @IvanGutierrez, @JohannesPretsch and @GonzaloUlla