
Machine Learning, Automated Visualization & Interpretability with SAS Customer Intelligence 360


As machine learning takes its place in numerous advances within the marketing ecosystem, the interpretability of these modernized algorithmic approaches grows in importance. Emerging machine learning applications for business-to-consumer (B2C) use cases include:

 

  • Customer journey optimization
  • Acquisition marketing (or lead scoring)
  • Upsell and cross-sell propensity models
  • Pricing optimization
  • Traffic and demand forecasting
  • Retention (or decreasing churn)
  • Ad targeting

 

These uses should sound familiar to any data-driven marketer. However, machine learning grabs the baton from classical statistical analysis by increasing accuracy, context and precision. A wide variety of business problems can incrementally benefit from algorithms like random forests, gradient boosting or support vector machines. When it comes to influencing stakeholders, marketing analysts often emphasize the prediction accuracy of their models rather than an understanding of how those predictions are actually made.
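To make that incremental benefit concrete, here is a minimal sketch (using scikit-learn in Python purely for illustration, not SAS tooling) that fits a classical logistic regression and a gradient boosting ensemble on the same synthetic propensity problem and compares their holdout accuracy; the data and any lift shown are stand-ins, and real results will vary by problem:

```python
# Minimal sketch: classical model vs. gradient boosting ensemble on a
# synthetic stand-in for customer conversion data (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```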

 

For example, do you really care why individuals click on a display media ad? As long as the clicks keep coming, some marketers are satisfied because key performance indicators are trending positively. Give me the algorithm that maximizes media performance and spare me the details. Black box, white box, it doesn’t matter. I’ve got things to do.

 

However, others genuinely care about both analytical precision and explanatory insights that reveal why some tactics work better than others. If you have a conversion goal on your website, then identifying individuals who have higher propensities to meet that objective is part of the recipe, but understanding the drivers of that behavior could inform:

 

  • Look-alike segmentation to acquire higher quality customers
  • Testing strategies for optimizing customer interactions
  • Customer journey analytics and attribution measurement for conversion goal insights

 

Interpretability of machine learning models is a multifaceted and evolving topic. Some models are easy to understand and are commonly referred to as white box (transparent) models. They give us the opportunity to explain a model’s mechanisms and predictions in understandable terms, removing otherwise unanswerable “why this” or “why that” questions from the conversation.
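As a simple illustration of a white box model, the sketch below (scikit-learn assumed; the feature names are hypothetical marketing inputs, not fields from any real system) fits a shallow decision tree and prints the rule paths behind every prediction, each of which can be read aloud in a meeting:

```python
# Minimal sketch of a "white box" model: a shallow decision tree whose
# decision rules can be printed and explained in plain terms.
# Feature names are hypothetical; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["visits_last_30d", "pages_per_visit",
                 "email_opens", "days_since_last_visit"]
X, y = make_classification(n_samples=2000, n_features=4,
                           n_informative=3, n_redundant=0, random_state=1)

tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)
# Every prediction traces back to a human-readable rule path.
print(export_text(tree, feature_names=feature_names))
```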

 

 

Figure 1: Decisions Made By Machines

 

Imagine a scenario where analysts can tell a data story about how changing the strategic levers (inputs) will affect the predicted outcome, as well as provide the justifications. It’s a beautiful outcome when technical and non-technical audiences can walk away with a clear understanding of a refinement in marketing strategy at the end of a meeting.
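One way to put numbers behind that data story is a partial dependence sweep: hold the other inputs as observed and watch how the average predicted outcome moves as one lever changes. The sketch below assumes a recent scikit-learn and synthetic data, with feature 0 standing in for an arbitrary strategic lever:

```python
# Minimal sketch: partial dependence shows how moving one input (a
# "strategic lever") shifts the average predicted outcome. Assumes
# scikit-learn >= 1.3 (the result key was "values" in older releases);
# the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

X, y = make_classification(n_samples=2000, n_features=6,
                           n_informative=3, random_state=3)
model = GradientBoostingClassifier(random_state=3).fit(X, y)

# method="brute" averages predict_proba over the data, so the output
# reads as a conversion probability rather than a raw model score.
pd_result = partial_dependence(model, X, features=[0],
                               grid_resolution=10, method="brute")
for lever, propensity in zip(pd_result["grid_values"][0],
                             pd_result["average"][0]):
    print(f"lever value {lever:+.2f} -> avg predicted propensity {propensity:.3f}")
```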

 

However, with recent advances in machine learning and artificial intelligence, models have become very complex, including deep neural networks and ensembles of different models. We refer to these as black box models. Unfortunately, the complexity that gives black box models their extraordinary predictive abilities also makes them challenging to understand and trust. They generally don’t provide a clear explanation of why they made a certain prediction: they give us an actionable probability, yet make it hard to determine how they arrived at that score.
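Techniques like permutation importance can pry that black box open a little. The sketch below (scikit-learn assumed, synthetic data) trains a random forest, then shuffles each input on held-out data and measures how much accuracy degrades; large drops flag the inputs the model actually leans on:

```python
# Minimal sketch: a black box ensemble probed with permutation
# importance to rank which inputs drive its predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10,
                           n_informative=4, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=2)

forest = RandomForestClassifier(n_estimators=200, random_state=2)
forest.fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy:
# the bigger the drop, the more the model depends on that input.
result = permutation_importance(forest, X_test, y_test,
                                n_repeats=10, random_state=2)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance = {result.importances_mean[i]:.3f}")
```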

 

Brands experimenting with machine learning are questioning whether they can trust the models, and whether fair decisions can be made with them.

 

If an analyst cannot figure out what a model learned from the data, and which data points influence the outcome more than others, how can they tell a practical story to the broader business and recommend taking action? I don’t know the sort of presentations you give, but if I’m encouraging a senior leader to alter their direction, I want them to be able to explain to their leadership team why specific outcomes end positively or negatively. Shrugging one’s shoulders and saying “I don’t know why we made or lost an additional $5 million” just feels dangerous.

 

What happens if the algorithm learns the wrong thing? What happens if a model is not ready for deployment within your channel touchpoint technology? There is a risk of misrepresentation, oversimplification or overfitting. That’s why you need to be careful when using these models, or the promise of consumer hyper-personalization may never be fulfilled.
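A basic guardrail against overfitting is to compare training performance with cross-validated performance before anything reaches a channel touchpoint. A minimal sketch, again assuming scikit-learn and synthetic data:

```python
# Minimal sketch: a train/validation gap check to flag overfitting
# before a model is deployed (synthetic data; thresholds are judgment calls).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=2000, n_features=15,
                           n_informative=5, random_state=4)
model = RandomForestClassifier(random_state=4)

scores = cross_validate(model, X, y, cv=5, scoring="roc_auc",
                        return_train_score=True)
gap = scores["train_score"].mean() - scores["test_score"].mean()
print(f"train AUC: {scores['train_score'].mean():.3f}")
print(f"cv AUC:    {scores['test_score'].mean():.3f}")
# A large gap suggests the model memorized the training data.
print(f"gap:       {gap:.3f}")
```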

 

SAS’s vision is to help marketers be effective through analytic techniques. Consumer preferences are hard to predict. By using SAS’s deep library of algorithms within SAS Customer Intelligence 360, machine learning can be embraced, rather than resisted, to create relevancy through data-driven personalization.

 

With that said, I invite you to view a video and technology demonstration that will address the following topics:

 

  1. What is SAS Visual Data Mining & Machine Learning?
  2. What types of features are available to users for sharing insights, as well as model-building assumptions and methodologies?
  3. How does SAS help users explain model output to business users?

 

 

Learn more about how the SAS platform can be applied to marketing data management here.
