
Glass Box Neural Networks

Started ‎06-10-2022
Modified ‎06-10-2022

Neural network models are typically described as “black boxes” because their inner workings are not easy to understand. We propose that, since a neural network model that accurately predicts its target variable is a good representation of the training data, the output of the model may be recast as a target variable and subjected to standard regression algorithms to “explain” it as a response variable. Thus, the “black box” of the internal mechanism is transformed into a “glass box” that facilitates understanding of the underlying model.
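The recipe in the abstract can be sketched in a few lines. This is a minimal illustration in Python with scikit-learn rather than SAS, and the toy dataset, network architecture, and choice of linear regression as the "standard regression algorithm" are all assumptions made for the sake of a runnable example:

```python
# Minimal sketch of the "glass box" idea (scikit-learn stand-in for the
# article's SAS workflow; dataset and model settings are assumptions).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Step 1: fit the "black box" neural network on the training data.
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                  random_state=0).fit(X, y)

# Step 2: recast the network's own predictions as a new target variable.
y_hat = nn.predict(X)

# Step 3: "explain" that target with a standard regression algorithm.
surrogate = LinearRegression().fit(X, y_hat)

# The surrogate's coefficients summarize the network's overall behavior,
# and its R^2 against y_hat measures how faithful the glass box is.
print(surrogate.coef_)
print(surrogate.score(X, y_hat))
```

Because the surrogate is fit to the network's predictions rather than to the raw target, its R² tells you how well the interpretable model reproduces the black box, not how well it predicts the data.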

Comments

Hello,

 

I often use that technique as well. It works well!

Since moving to SAS Viya, I also use:

  • Partial Dependence Plots (PD-plots)
  • Individual Conditional Expectation Plots (ICE-plots)
  • Local Interpretable Model-agnostic Explanations (LIME) visualisations
  • Shapley values (shapleyExplainer action)

Much information on the above can be found by searching online (include the SAS keyword in your search!).

 

Koen

Version history
Last update: ‎06-10-2022 03:26 PM
