
Image classification, model training and streaming data using Rubik's cubes


Imagine you are responsible for manufacturing quality at your company. Wouldn't it be great to know what percentage of product contains flaws? Maybe replace human oversight with automation or collect data to perform a root cause analysis. SAS Analytics for IoT is a perfect solution to solve this or any Computer Vision (CV) problem. This comprehensive AI-embedded solution provides so many capabilities because it integrates a number of key SAS products (see the figure below).

 

[Image: SAS Analytics for IoT]

Here we are going to simulate our manufacturing process using a Rubik's cube. Our cube represents a part which is being produced, and it is up to us to determine whether it was manufactured correctly.  This will be accomplished using an image classification computer vision model.  This article will cover the following topics: 

  • Gathering and classifying training data
  • Training CV models using the SAS Deep Learning Python (DLPy) package
  • Operationalizing our models using streaming analytics and ESPPy

Please refer to the GitHub repository iot-image-classification-rubiks-cubes for more information and examples.

 

Before we get started, please take a look at the following video, which gives an overview of the process:

 

First off, let's talk Computer Vision

CV techniques provide the ability to acquire, process, and analyze incoming images. This analysis produces numerical results in the form of predictions based on the classes we define. In this example, we need to create one CV model which will be used to tell whether a Rubik's cube is solved or unsolved. We will also add a third class to represent any image where the cube isn't properly presented to the camera. We don't want to score images where no cube is present or where the camera can only see one side of the cube. Having an invalid class keeps the model from forcing such frames into the solved or unsolved categories, which makes the predictions more accurate. Since we are using image classification without object detection, our cubes must be correctly positioned within the camera frame in order to get accurate results. Here is an example of a correctly positioned cube:

[Image: Looking Good]

Ideally, three sides of the cube should be visible to the camera, and you would need two cameras to know for sure that the cube was correctly manufactured. However, for this example I am considering two visible sides valid, and simply holding the cube in my hand as I wait for the image to be scored.

 

Creating a training dataset

The first step in the process is creating an awesome training dataset. When building your dataset remember these axioms:

  • A Computer Vision model is only as good as the dataset it was trained on
  • If you can't see it in the picture, the computer won't be able to tell either
  • If possible, collect images from the production environment

It is recommended that no fewer than 1,000 images be used for each class you intend to train, and more images are always better. These images are also subdivided into two datasets: the first is for training and the second is for testing. Therefore, an additional 200 images for testing is also recommended. Image size should be no less than 224 by 224 pixels. You may need to add your own images to the training dataset; for example, I was told that Rubik's cubes in Brazil have different colors. For my training, I collected 1,000 images for each of the three classes: good, bad, and empty. Each class of images needs to be separated into its own subdirectory.
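If you have a CAS session available, loading a directory tree like this is a one-liner with DLPy. A minimal sketch follows; the host, port, and /data/rubiks path are placeholders for your own environment, and load_files derives each image's class label from the name of its subdirectory:

    from swat import CAS
    from dlpy.images import ImageTable

    conn = CAS('cas-server.example.com', 5570)   # hypothetical host and port

    # load_files labels each image from its subdirectory name, so
    # /data/rubiks/good, /data/rubiks/bad, and /data/rubiks/empty
    # become the three class labels automatically
    images = ImageTable.load_files(conn, path='/data/rubiks')

    images.label_freq                            # expect roughly 1,000 images per class
    images.resize(width=224, height=224)         # enforce the minimum input size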

 

Next, we train our models

Before we build an application that uses analytical models to predict outcomes, we need to train them. The training process for CV involves classifying images and separating them into datasets that can then be fed into a deep learning architecture such as ResNet50, VGG16, or Darknet. This stage of the process is completed using the SAS Deep Learning Python (DLPy) package, which provides high-level Python APIs to the deep learning methods in SAS Visual Data Mining and Machine Learning (VDMML).

[Image: Training Flow]

 

The previous diagram illustrates the flow for training a deep learning analytical model. Starting from the left, we see that a Jupyter notebook is used to issue DLPy and SWAT commands which communicate with SAS Viya. For large jobs, Viya can farm processing out to worker nodes and take advantage of parallel processing; GPU support is also available. Deep learning algorithms are then invoked as each image is processed, ultimately producing a portable analytics file called an .astore file. A typical training exercise contains these steps (a DLPy sketch follows the list):

  • Setup libraries and launch CAS
  • Load and explore the training data
  • Prepare the data for modeling
  • Specify the model architecture, configure model parameters and import pre-trained weights
  • Fit the image classification model
  • Evaluate the newly created image classification model
  • Visualize model results
  • Save model as .astore for deployment
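Here is a minimal sketch of those steps in DLPy. The table names, learning rate, epoch count, and weights file are illustrative assumptions rather than the exact settings used for the Rubik's cube model:

    from swat import CAS
    from dlpy.images import ImageTable
    from dlpy.applications import ResNet50_Caffe

    conn = CAS('cas-server.example.com', 5570)   # hypothetical host and port

    # Load and prepare the training data
    images = ImageTable.load_files(conn, path='/data/rubiks')
    images.resize(width=224, height=224)
    train, test = images.two_way_split(test_rate=20)

    # Specify the architecture and import pre-trained weights
    model = ResNet50_Caffe(conn, model_table='rubiks_resnet50',
                           n_classes=3, width=224, height=224,
                           random_flip='H',
                           pre_trained_weights=True,
                           pre_trained_weights_file='/data/ResNet-50-model.caffemodel.h5',
                           include_top=False)

    # Fit, evaluate, and save the model for deployment
    model.fit(data=train, mini_batch_size=8, max_epochs=20, lr=0.001)
    model.evaluate(test)
    model.deploy(path='/data/models', output_format='astore')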

For more information about the DLPy high-level Python APIs, which allow you to build deep learning models, please see the examples in the iot-image-classification-rubiks-cubes GitHub repository.

Now use streaming analytics

Streaming analytics is defined as the ability to constantly calculate statistical analytics on an incoming stream of data. In our case, that stream of data is the images coming from the camera. SAS Event Stream Processing (ESP), which is part of the SAS Analytics for IoT solution, provides the ability to deploy our newly trained analytical model, in the form of an .astore file, at the edge. With ESP you can ingest, filter, and transform your data in-stream.

[Image: Streaming Flow]

 

Here we can see the flow of information through the system and highlight some key points:

  • ESPPy adds flexibility. In this diagram, ESPPy is used to remotely connect to ESP from a camera which is not directly attached to the ESP server. Here we are using OpenCV to collect and transform each image before it is sent to ESP for processing.
  • ESP is cloud enabled. Depending on your production environment, ESP can be placed at the edge or in the cloud.
  • ESP includes a powerful graphical development platform. Using ESP Studio, models such as these may be created without any coding.
  • ESP operationalizes analytics. Expertly trained models are now available wherever they are needed: the assembly line, the factory floor, anywhere CV is used.

As seen here, the ESP model which creates our detector is quite simple. All that is needed is a source window to inject images, two windows to load the CV model, and a scoring window. Once the model is loaded and running, a Jupyter notebook can be used to visualize the outputs from the imageScoring ESP window.

[Image: ESP Project]
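To give a feel for the camera side, here is a rough sketch of how a remote client might grab a frame with OpenCV, encode it, and publish it into the project's source window with ESPPy. The server address and the project and window names are assumptions; match them to your own project definition, and note that the exact event schema depends on how the source window is declared:

    import base64
    import cv2
    import esppy

    esp = esppy.ESP('http://esp-server.example.com:9900')   # hypothetical server
    project = esp.get_project('rubiks')                     # hypothetical project name
    source = project.windows['imageSource']                 # hypothetical source window

    pub = source.create_publisher(format='csv', opcode='insert')

    cap = cv2.VideoCapture(0)                        # default camera
    ok, frame = cap.read()
    if ok:
        frame = cv2.resize(frame, (224, 224))        # match the model's input size
        _, jpg = cv2.imencode('.jpg', frame)
        img_b64 = base64.b64encode(jpg.tobytes()).decode('ascii')
        pub.send('i,n,1,{}\n'.format(img_b64))       # event id plus encoded image
    cap.release()
    pub.close()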

Using ESPPy with Jupyter notebook

With ESPPy in a Jupyter notebook, we can display ESP project information and subscribe to the imageScoring window to see the results. Let's take a look at the ESPPyRubiksControl Jupyter notebook, which is located in the streaming directory. Using this notebook you can:

  • Connect to ESP Server
  • Load Project
  • Subscribe to Event Streams
  • Visualize Data from Streams

After the notebook is running in Jupyter, connecting to an ESP server is a simple one-line command. Once you have a connection to the server, you can query it for information about projects (running and stopped) and the server itself. You can also create, load, start, stop, and delete projects. Projects can be loaded from file paths, Python file objects, or URLs. Using project.to_graph(schema=True), we can graphically view the currently active project.

[Image: Project to Graph]
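Those four notebook tasks map onto just a handful of ESPPy calls. A minimal sketch, again with placeholder host and file names:

    import esppy

    esp = esppy.ESP('http://esp-server.example.com:9900')   # connect to the ESP server

    esp.get_projects()                         # inspect running and stopped projects
    project = esp.load_project('rubiks.xml')   # load from a path, file object, or URL

    project.to_graph(schema=True)              # render the active project graphically

    # Subscribe to the scoring window; once subscribed it behaves
    # much like a pandas DataFrame
    scored = project.windows['imageScoring']
    scored.subscribe()
    scored.head()                              # most recent scored events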

Jupyter notebook and ESPPy can also be used to show streaming statistics. For example, here is a bar chart which shows the recent Rubik's cube results:

[Image: Streaming Results]

The combination of ESP, ESPPy, Python and Jupyter notebook is quite powerful.  As you can see, SAS Analytics for IoT provides you with all the tools you’ll need to quickly go from concept to production. Although this example consumed image data, you can use any type of data. This comprehensive solution also provides tools to maintain and govern model development, as well as everything you need to visualize and cleanse new data sources. Let’s see where your imagination takes you! I’d love to hear how you put SAS Analytics for IoT to work for your company.
