
Glass Box Neural Networks

Started ‎06-10-2022 by
Modified ‎06-10-2022 by
Views 1,760

Neural network models are typically described as “black boxes” because their inner workings are not easy to understand. We propose that, since a neural network model that accurately predicts its target variable is a good representation of the training data, the output of the model may be recast as a target variable and subjected to standard regression algorithms to “explain” it as a response variable. Thus, the “black box” of the internal mechanism is transformed into a “glass box” that facilitates understanding of the underlying model.
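The recasting described above can be sketched in a few lines. This is a minimal illustration of the idea, not the article's implementation: the data, model sizes, and the choice of a linear surrogate are all assumptions made for the example.

```python
# Sketch of the "glass box" idea: fit a neural network ("black box"), then
# recast its predictions as a target variable and explain them with a
# standard, interpretable regression. Data and settings are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

# 1. Fit the neural network to the original target.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# 2. Recast the network's output as a new target variable...
y_hat = net.predict(X)

# 3. ...and fit a standard regression to "explain" it as a response.
surrogate = LinearRegression().fit(X, y_hat)
print(surrogate.coef_)  # coefficients summarise the network's behaviour
```

If the surrogate fits the network's output well, its coefficients give a readable summary of what the network learned; a poor surrogate fit signals that the network's behaviour is more complex than the surrogate's form allows.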

Comments

Hello,

 

I often use that technique as well. Works well!

Since Viya, I also use:

  • Partial Dependence Plots (PD-plots)
  • Individual Conditional Expectation Plots (ICE-plots)
  • Local Interpretable Model-agnostic Explanations (LIME) visualisations
  • Shapley values (shapleyExplainer action)

Much info on the above can be found by Googling (include the SAS keyword when you search!)

 

Koen
