The universe of customer experiences, digital analytics, personalization and decisioning is massive. At times, it can seem as complicated and vast as the galaxy itself. With so many intricate subjects under this umbrella, you can lose direction, wander aimlessly, or feel a misleading sense of success or failure. And when you lose vision, your appetite for predictive clarity grows.
From tactical to strategic, there are categories of analysis that range from foundational to advanced for every brand. Consider the different altitudes at which various metrics and key performance indicators are consumed by organizational role:

- Analysts and managers
- Vice presidents and C-suite executives
Altitude defines the scope and significance of decisions. It also influences the frequency at which data is received and the associated depth of insights. For example, managers typically make tactical decisions that might impact tens of thousands of dollars. At the other end of the spectrum, vice presidents shoulder responsibility for sweeping strategic decisions that impact tens of millions of dollars.
When offered delicious data, everyone wants more.
In my encounters with marketing professionals across industries, I frequently hear that more data leads to better results. Just look at the martech space over the last five years. The rise of DMPs and CDPs has been hard to ignore, yet somehow topics like predictive analytics and decision management are treated as optional.
If more data simply equaled smarter decisions, there would be peace on earth. According to SAS COO and CTO Oliver Schabenberger:
“Data without analytics is value not yet realized.”
We can do so much better. A core part of our job, as individual contributors or leaders within the analytics practice, is to ensure that prescriptive insights reach the right individual at the appropriate time. Machine learning and artificial intelligence are all the rage right now, and for each of the metrics listed above, there could be a model to help. I recently wrote about how users of SAS Customer Intelligence 360 can efficiently perform end-to-end machine learning projects. For readers seeking a primer, please check it out.
What happens when a brand develops an inventory of models ready for action?
Today’s machine learning techniques allow analysts to train and create more models faster than ever. But as efficiency increases, authoring models remains only one aspect of the analytical lifecycle that brands need to consider. As the number of models grows to support more business objectives, so does the requirement to manage these assets as valuable competitive differentiators.
Model management is not a one-time activity, but an essential business process. Models must be well developed and validated to demonstrate that they are working as expected. Outcome analysis is also necessary to:
Other aspects include cataloging and tracking this growing inventory of analytical assets, while providing support for the governance of these models using version control through repeatable and traceable workflows.
Here’s an example using multiple models constructed to address propensity targeting on sas.com. Using structured views of web visitor data captured by SAS Customer Intelligence 360, users can build machine learning pipelines in Model Studio within SAS Visual Data Mining and Machine Learning.
In the figure above, the pipeline project uses seven different algorithmic approaches to identify the one that maximizes scoring precision and minimizes error:
In other words, predictive accuracy. For those of you interested in technical documentation of these modeling procedures, please go here.
Figure 4 highlights the menu of options available as selection criteria for the champion model. In this example, you would select the misclassification event rate because you want to maximize accuracy in predicting who is likely to convert, helping your marketing team achieve higher returns.
This results in an auto-tuned gradient boosting model outperforming all other challengers. The pipeline comparison dashboard shown below provides a deep set of model interpretability visualizations, diagnostics and scoring logic.
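SAS Model Studio performs this champion selection automatically within the pipeline. Purely as an illustration of the underlying idea (not the SAS implementation), here is a minimal sketch in Python with scikit-learn and synthetic data: fit several challenger algorithms, compute each one's misclassification event rate on holdout data, and pick the model with the lowest rate as champion.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for web-visitor features (the real project uses
# structured views of SAS Customer Intelligence 360 data).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

challengers = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Misclassification event rate = 1 - accuracy on holdout data.
scores = {name: 1 - model.fit(X_train, y_train).score(X_val, y_val)
          for name, model in challengers.items()}

# The challenger with the lowest misclassification rate becomes champion.
champion = min(scores, key=scores.get)
print(champion, scores[champion])
```

Auto-tuning in Model Studio goes further, searching each algorithm's hyperparameter space before the comparison; the sketch above only shows the selection step.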
Demonstrating how brands manage actionable analytical assets
Now that you have an analysis selected worthy of addressing your marketing team’s business problem, you can begin managing the models for deployment. Options include:
Users can store models in a common model repository and organize them within projects and folders. A project consists of the models, attributes, tests and other resources that you use to:
All model development and maintenance personnel, including data modelers, validation testers, scoring officers and analysts, can use and benefit from these features. We begin by opening the gradient boosting model in Figure 7 to highlight how users can add, review and customize the model’s input and output variables.
The project’s metadata includes information such as the name of the project, the type of model function (classification, clustering, prediction, forecasting, etc.), the project owner, the project location and the variables that are used by project processes. Want to deploy model scoring code in SAS, CAS, R, Python or another language within the model’s properties?
Figure 8 below highlights your options.
Model validation processes can vary over time. One thing that is consistent is that every step of the validation process needs to be logged. For example:
As displayed in Figure 9, users can assess and compare across models. When comparing models, the model comparison output includes model properties, user-defined properties, variables, fit statistics, and plots for the models.
Before a new champion model is published for production deployment, an organization might want to test the model for operational errors. This type of pre-deployment check is especially important when the model will be deployed in real-time scoring use cases. The purpose of a test is to run the model's score code and produce results that can be used for scoring accuracy and performance analysis.
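A pre-deployment check like this can be pictured as a smoke test: run the score code against a handful of sample inputs and verify it returns sane results before anything goes live. The sketch below is a hypothetical illustration (the `score` function and its feature names are invented stand-ins, not the actual champion model's code).

```python
def score(visitor):
    # Hypothetical stand-in for the champion model's score code:
    # a toy logistic-style rule mapping visitor features to a
    # conversion propensity in [0, 1]. Illustration only.
    z = 0.8 * visitor["page_views"] + 1.5 * visitor["form_starts"] - 3.0
    return 1.0 / (1.0 + 2.718281828 ** -z)

def smoke_test(score_fn, sample_rows, threshold=0.5):
    """Run the score code on sample inputs and verify it behaves sanely
    before production deployment: no runtime errors, propensities in
    range, and a yes/no classification attached to each row."""
    results = []
    for row in sample_rows:
        p = score_fn(row)
        assert 0.0 <= p <= 1.0, f"propensity out of range: {p}"
        results.append({"propensity": p,
                        "predicted": "yes" if p >= threshold else "no"})
    return results

sample = [{"page_views": 1, "form_starts": 0},   # low engagement
          {"page_views": 6, "form_starts": 1}]   # high engagement
out = smoke_test(score, sample)
print(out)
```

In practice the same idea applies regardless of the scoring language: feed known inputs through the exact artifact you intend to deploy, and fail fast if the outputs are malformed.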
Figure 10 showcases a snapshot of the champion model’s scoring logic successfully assigning propensities to sas.com web visitors, as well as predicted classifications of yes/no on whether the prospect is likely to meet the defined conversion goal.
When a champion model is ready for production scoring, users set the model as the champion. The project version that contains the champion model becomes the champion version for the project. Users can leverage challenger models to test the strength of champion models over time.
To ensure that a champion model in a production environment is performing efficiently, users can collect performance data that has been created by the model at intervals that are determined by your brand. Performance data is used to assess model prediction accuracy. For example, users might want to assess performance weekly, monthly or quarterly. Monitoring can be performed on champion and challenger models, and as data trends change over time, the champion model can be improved by:
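The monitoring idea can be sketched simply: at each interval, compare the champion's predictions against observed outcomes and raise an alert when the error rate drifts beyond a tolerance above its validation baseline. This is an illustrative sketch with toy data, not SAS Model Manager's monitoring implementation.

```python
def misclassification_rate(predictions, actuals):
    """Share of predictions that did not match the observed outcome."""
    return sum(p != a for p, a in zip(predictions, actuals)) / len(actuals)

def monitor(batches, baseline_error, tolerance=0.05):
    """Flag any scoring interval (e.g. a week, month or quarter) where
    the champion's error rate drifts more than `tolerance` above its
    validation baseline -- a signal to retrain or promote a challenger."""
    alerts = []
    for period, (preds, actuals) in batches.items():
        err = misclassification_rate(preds, actuals)
        if err > baseline_error + tolerance:
            alerts.append((period, err))
    return alerts

# Toy monthly batches of (predicted, actual) conversion labels.
batches = {
    "2023-01": (["yes", "no", "no", "yes"], ["yes", "no", "no", "yes"]),  # 0% error
    "2023-02": (["yes", "no", "yes", "yes"], ["no", "yes", "no", "no"]),  # 100% error
}
alerts = monitor(batches, baseline_error=0.10)
print(alerts)  # the drifting month is flagged
```

The choice of interval and tolerance is a business decision: tighter tolerances catch drift sooner but trigger more false alarms on noisy, low-volume periods.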
Users can publish models so that they can be used by other applications for tasks such as predictive scoring. Models can be published to destinations defined for CAS, Hadoop, SAS Micro Analytic Service and defined databases.
Specific to analytically charged digital marketing, the SAS Micro Analytic Service is a powerful mechanism. For example, it can be called as a web service with a REST interface by SAS and other client applications. Envision a scenario where a visitor clicks on your website or mobile app, meets an event definition, and a machine learning model runs to provide a fresh propensity score to personalize the next page of that digital experience. The REST interface (known as the SAS micro analytic score service) provides easy integration with client applications, and adds persistence and clustering for scalability and high availability. For more technical details on using the SAS micro analytic score service, check this out.
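To give a feel for the shape of such a call, here is a hedged Python sketch. The endpoint URL, module name and input variable names below are hypothetical placeholders; consult your own SAS Micro Analytic Service configuration and its REST documentation for the real values. The sketch only assembles and prints the request body, with the actual HTTP call shown as a comment.

```python
import json

# Hypothetical endpoint and module names -- replace with the values
# from your own SAS Micro Analytic Service deployment.
BASE_URL = "https://your-sas-server/microanalyticScore"
MODULE = "gb_propensity"

def build_score_request(visitor_inputs):
    """Assemble a JSON body for a scoring call: one {name, value}
    pair per model input variable."""
    return {"inputs": [{"name": k, "value": v}
                       for k, v in visitor_inputs.items()]}

# Hypothetical visitor features captured from the digital experience.
body = build_score_request({"page_views": 6, "session_duration": 320})
print(json.dumps(body))

# The actual call would look roughly like this (requires the
# third-party `requests` package and an access token):
# import requests
# r = requests.post(f"{BASE_URL}/modules/{MODULE}/steps/score",
#                   json=body,
#                   headers={"Authorization": "Bearer <token>"})
# propensity = r.json()["outputs"][0]["value"]
```

The key design point is latency: because the model is already published to the service, the round trip is just a lightweight JSON exchange, fast enough to personalize the very next page of the visitor's session.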
To bring this to life, let’s demonstrate how it works.
The figure above provides a non-technical method to show how SAS Customer Intelligence 360 can call the champion gradient boosting model to run. Input parameter values can be inserted to simulate different scenarios. For this example, I will provide values for:
These values represent visitor behaviors to sas.com that can be used for scoring.
Figure 13 shares what occurs after the model is called with those specific input values. For a visitor to sas.com with those specific data points, the model predicts this visitor will convert with a probability of 83 percent. I can run other simulations to assess how other visitors will behave, as well as confirm that my model successfully produces actionable scores when called.
SAS Customer Intelligence 360 enables brands to use first-party data to make better decisions with predictive analytics and machine learning, in conjunction with business rules, across a hub of channel touch points. As your journey into analytical marketing use cases progresses, don't let your modeling intellectual property go under-exploited. It's competitive differentiation waiting to be deployed.