
Glass Box Neural Networks

Started: 06-10-2022
Modified: 06-10-2022
Views: 1,567

Neural network models are typically described as “black boxes” because their inner workings are not easy to understand. We propose that, since a neural network model that accurately predicts its target variable is a good representation of the training data, the output of the model may be recast as a target variable and subjected to standard regression algorithms to “explain” it as a response variable. Thus, the “black box” of the internal mechanism is transformed into a “glass box” that facilitates understanding of the underlying model.
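The idea above can be sketched in a few lines. The article is about SAS, so the scikit-learn names below are illustrative stand-ins: fit a neural network, score it on the training data, then fit an interpretable model (here a shallow decision tree) to the network's own predictions.

```python
# Minimal "glass box" sketch, assuming a scikit-learn-style workflow.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

# 1. Fit the "black box" neural network to the original target.
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# 2. Recast the network's output as a new target variable.
y_hat = nn.predict(X)

# 3. "Explain" that output with a standard, interpretable regression model.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y_hat)

# The tree's splits now approximately describe how the network maps inputs
# to outputs; its R^2 against y_hat measures the surrogate's fidelity.
fidelity = surrogate.score(X, y_hat)
```

A high fidelity score means the simple surrogate is a faithful description of the network; a low score means the network's behavior is too complex for that surrogate and a richer interpretable model may be needed.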

Comments

Hello,

 

I often use that technique as well. It works well!

Since SAS Viya, I also use:

  • Partial Dependence Plots (PD-plots)
  • Individual Conditional Expectation Plots (ICE-plots)
  • Local Interpretable Model-agnostic Explanations (LIME) visualisations
  • Shapley values (shapleyExplainer action)
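For readers who want to see what the first two items in this list compute, here is a hedged sketch in Python with scikit-learn (the comment refers to SAS Viya's built-in versions, so these function names are stand-ins, not the SAS API): a partial dependence curve is the model's output averaged over the data as one feature varies, and ICE curves are the same thing per observation, before averaging.

```python
# Illustrative PD and ICE computation, assuming a scikit-learn model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# Partial dependence: average prediction as feature 0 sweeps a grid,
# with the other features held at their observed values.
pd_res = partial_dependence(model, X, features=[0], kind="average")
pd_curve = pd_res["average"][0]        # one averaged curve over the grid

# ICE: one curve per observation instead of the average.
ice_res = partial_dependence(model, X, features=[0], kind="individual")
ice_curves = ice_res["individual"][0]  # shape: (n_samples, n_grid_points)
```

By construction, averaging the ICE curves over observations recovers the partial dependence curve, which is why the two plots are usually read together.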

Much information on the above can be found by searching online (include the SAS keyword in your search!).

 

Koen

