
Glass Box Neural Networks

Started 06-10-2022
Modified 06-10-2022
Views 376

Neural network models are typically described as “black boxes” because their inner workings are not easy to understand. We propose that, since a neural network model that accurately predicts its target variable is a good representation of the training data, the output of the model may be recast as a target variable and subjected to standard regression algorithms to “explain” it as a response variable. Thus, the “black box” of the internal mechanism is transformed into a “glass box” that facilitates understanding of the underlying model.
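The idea described above is essentially a global surrogate model: fit the neural network, then treat its predictions as a new target variable for an interpretable regression. A minimal sketch of this, assuming scikit-learn and illustrative synthetic data (the article itself does not specify a toolkit or dataset):

```python
# "Glass box" sketch: fit a neural network, then recast its output as
# the target of a standard linear regression (a global surrogate).
# The data, model sizes, and library choice are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

# Step 1: fit the "black box" neural network on the original target.
nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
nn.fit(X, y)

# Step 2: recast the network's output as a response variable and
# "explain" it with a regression whose coefficients are interpretable.
surrogate = LinearRegression().fit(X, nn.predict(X))
print(surrogate.coef_)  # each coefficient summarizes one input's effect
```

How well the surrogate's coefficients describe the network can be checked with the surrogate's R² against the network's predictions; a low fit means the network has learned interactions or nonlinearities the simple regression cannot represent.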

Comments

Hello,

 

I often use that technique as well. It works well!

Since SAS Viya, I also use:

  • Partial Dependence Plots (PD-plots)
  • Individual Conditional Expectation Plots (ICE-plots)
  • Local Interpretable Model-agnostic Explanations (LIME) visualisations
  • Shapley values (shapleyExplainer action)

Much more information on the above can be found online (include the SAS keyword when you are searching!).
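To make the first item on the list above concrete, here is a hand-rolled partial dependence calculation: sweep one feature over a grid while holding the others at their observed values, and average the model's predictions. The model and data are illustrative assumptions; in practice SAS Viya or `sklearn.inspection.partial_dependence` computes this for you.

```python
# Minimal partial dependence sketch (illustrative model and data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + 0.2 * X[:, 1]

model = GradientBoostingRegressor(random_state=1).fit(X, y)

# Partial dependence of feature 0: fix it at each grid value for every
# row, predict, and average over the dataset.
grid = np.linspace(-2, 2, 25)
pd_values = []
for v in grid:
    Xv = X.copy()
    Xv[:, 0] = v  # hold feature 0 at the grid value for all rows
    pd_values.append(model.predict(Xv).mean())
# Plotting pd_values against grid gives the PD-plot; computing the
# per-row curves instead of the mean gives the ICE-plot.
```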

 

Koen


