
Glass Box Neural Networks

Started 06-10-2022
Modified 06-10-2022

Neural network models are typically described as “black boxes” because their inner workings are not easy to understand. We propose that, since a neural network model that accurately predicts its target variable is a good representation of the training data, the output of the model may be recast as a target variable and subjected to standard regression algorithms to “explain” it as a response variable. Thus, the “black box” of the internal mechanism is transformed into a “glass box” that facilitates understanding of the underlying model.
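The idea above can be sketched in a few lines. This is a minimal open-source illustration using scikit-learn on synthetic data (not the article's SAS implementation): fit a neural network, recast its predictions as a new target, and explain that target with a standard, interpretable regression model.

```python
# Glass-box sketch: explain a neural network's output with a linear surrogate.
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

# Synthetic training data standing in for the real problem.
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)

# 1. Fit the "black box" neural network.
nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
nn.fit(X, y)

# 2. Recast the network's output as a new target variable.
y_surrogate = nn.predict(X)

# 3. Explain it with a standard regression algorithm ("glass box").
glass_box = LinearRegression().fit(X, y_surrogate)
print("surrogate coefficients:", glass_box.coef_)
print("surrogate R^2 vs. network output:", glass_box.score(X, y_surrogate))
```

The surrogate's R² against the network's own output tells you how faithfully the simple model mirrors the network; its coefficients then give an interpretable summary of what the network learned.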

Comments

Hello,


I often use that technique as well. It works well!

Since Viya, I also use:

  • Partial Dependence Plots (PD-plots)
  • Individual Conditional Expectation Plots (ICE-plots)
  • Local Interpretable Model-agnostic Explanations (LIME) visualisations
  • Shapley values (shapleyExplainer action)
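For readers without Viya, the first two techniques in the list have open-source equivalents. Here is a sketch (my illustration, not the SAS actions) of computing partial dependence and per-sample ICE curves with scikit-learn; the `shap` and `lime` packages cover the other two.

```python
# Partial dependence (PD) and ICE curves for a neural network, via scikit-learn.
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# kind="both" returns the averaged PD curve and the individual ICE lines
# for feature 0 over an automatically chosen grid of its values.
pd_result = partial_dependence(nn, X, features=[0], kind="both")

print("PD curve shape:", pd_result["average"].shape)      # (n_outputs, n_grid)
print("ICE lines shape:", pd_result["individual"].shape)  # (n_outputs, n_samples, n_grid)
```

The PD curve shows the feature's average effect on the prediction; the ICE lines reveal whether that effect is uniform across observations or hides interactions.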

You can find plenty of information on the above by searching online (include the "SAS" keyword in your search!).


Koen
