
AI black-box models can be highly accurate, but they generally lack interpretability. That can be a showstopper in regulated industries such as banking, insurance, and health care, and the lack of interpretability may even prohibit you from using your best models. Enter SAS Viya, stage left. SAS Viya provides a number of enormously helpful interpretability techniques, allowing you to get the most accurate results from complex AI models while also enabling you to interpret those results.

Out-of-the-box interpretability techniques provided with SAS Viya include variable importance plots and rankings, partial dependence plots, local interpretable model-agnostic explanations (LIME), individual conditional expectation (ICE) plots, and Shapley values. This post focuses on Shapley values. For information on the other interpretability methods available out of the box in SAS Model Studio, see my earlier post.

SHAP (SHapley Additive exPlanation) is a useful machine learning interpretation technique rooted in game theory. (SHAP also wins my award for world’s most contrived acronym...I mean really...quite a stretch to get that P in there.) The mathematician and game theorist Lloyd Shapley introduced Shapley values in the 1950s.

Shapley methods provide local interpretability. Recall from my earlier post that global interpretability methods explain results for all of the data, whereas local interpretability techniques explain machine learning results for individual observations (or sometimes groups of observations). Shapley values let you learn how much each input contributes to the model’s prediction for an individual instance. The individual instance may be a loan applicant, a credit card transaction, a patient, a website interaction, et cetera.

Unlike some of the interpretability methods I’ve described in other blogs (LIME, I'm talking to you), Shapley values are not based directly on a local regression model. Instead, Shapley methods calculate input contributions by averaging across all permutations of the inputs in the model. This helps address potential bias associated with collinearity among inputs.
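To make the permutation idea concrete, here is a minimal pure-Python sketch of exact Shapley values for a toy two-input model. The model, background values, and query instance below are all assumed for illustration; this is the textbook computation, not the SAS Viya implementation:

```python
from itertools import permutations

# Toy two-input model -- assumed for illustration only
def f(x1, x2):
    return 2 * x1 + 3 * x2 + x1 * x2

background = {"x1": 0.0, "x2": 0.0}   # reference ("input absent") values
query = {"x1": 1.0, "x2": 2.0}        # the instance to explain
features = ["x1", "x2"]

def value(coalition):
    """Model output with coalition features at query values, the rest at background."""
    x = {k: (query[k] if k in coalition else background[k]) for k in features}
    return f(x["x1"], x["x2"])

# Average each feature's marginal contribution over every ordering of the inputs
shapley = {k: 0.0 for k in features}
perms = list(permutations(features))
for order in perms:
    coalition = set()
    for feat in order:
        before = value(coalition)
        coalition.add(feat)
        shapley[feat] += (value(coalition) - before) / len(perms)

print(shapley)  # {'x1': 3.0, 'x2': 7.0}; the values sum to f(query) - f(background)
```

Because every permutation is visited, the interaction term is split evenly between the two inputs, and the Shapley values account for the full difference between the query's prediction and the background prediction. The number of permutations grows factorially with the number of inputs, which is exactly why the approximation methods below exist.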

Calculating exact Shapley values can eat up a lot of compute and memory resources. Because of this, a number of methods for computing approximations to Shapley values have come into vogue. Two of these approximation methods are offered via SAS Viya coding:

- Kernel SHAP (available with the linearExplainer action)
- HyperSHAP (available with the shapleyExplainer action)

Both the linearExplainer action and the shapleyExplainer action are in the explainModel action set.

If you prefer the user-friendly SAS Model Studio GUI, HyperSHAP is the method available to you in the current version (SAS Viya 4). If you are using the older version of SAS Model Studio on SAS Viya 3.5, Kernel SHAP is the method available on that version.

A third method, TreeSHAP, is also now available via SAS Viya coding for random forest models and gradient boosting models only. TreeSHAP calculates exact Shapley values. It is not available in SAS Model Studio at this time.

Calculating exact Shapley values is too computationally expensive to be feasible for most models. The Kernel SHAP method is an approximation technique that is considerably less computationally taxing.

The Kernel SHAP method implementation in SAS Viya is based on the steps presented in Lundberg and Lee (2017). Specifically, the SAS Viya action linearExplainer with preset = “KERNELSHAP” uses the Kernel SHAP method to estimate Shapley values as follows:

- Gathers variable statistics and distributions from the background data set by modeling each variable separately
- Generates synthetic data based on the distributions of the input data set
- Weights the synthetic data by how close each row is to the observation of interest; specifically, converts all inputs to binary, with a value of one if the synthetic data row is close to the observation of interest and zero otherwise
- Scores the synthetic data using the machine learning model you want to explain (e.g., gradient boosting, neural network)
- Runs weighted linear regressions on the synthetic data and calculates the coefficients
- Calculates the baseline/intercept using a weighted linear regression that minimizes the weighted squared residuals over all of the generated data

See the SAS Documentation for an example.
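The steps above can also be sketched end to end for a tiny case. The pure-Python sketch below uses an assumed two-input toy model and enumerates every coalition instead of sampling synthetic data (the real linearExplainer implementation samples and scales differently); it applies the SHAP kernel weights and solves the weighted least-squares problem for the intercept and the two Shapley values:

```python
from itertools import product
from math import comb

M = 2  # number of inputs in this toy sketch (assumed)

# Assumed toy model to explain; linearExplainer would score your real model instead
def f(x1, x2):
    return 2 * x1 + 3 * x2 + x1 * x2

background = [0.0, 0.0]   # reference values
query = [1.0, 2.0]        # the instance to explain

def v(z):
    """Score with features flagged 1 in z set to the query, others to background."""
    x = [q if zi else b for zi, q, b in zip(z, query, background)]
    return f(*x)

# Build the weighted regression problem over all binary coalition vectors
rows, targets, weights = [], [], []
for z in product([0, 1], repeat=M):
    s = sum(z)
    if s in (0, M):
        w = 1e6  # near-infinite weight enforces the two Kernel SHAP constraints
    else:
        w = (M - 1) / (comb(M, s) * s * (M - s))  # the SHAP kernel weight
    rows.append([1.0] + list(z))  # intercept column plus the coalition vector
    targets.append(v(z))
    weights.append(w)

# Weighted least squares via the normal equations (A^T W A) beta = (A^T W y)
n = M + 1
A = [[sum(w * r[i] * r[j] for r, w in zip(rows, weights)) for j in range(n)]
     for i in range(n)]
b = [sum(w * r[i] * t for r, t, w in zip(rows, targets, weights)) for i in range(n)]

# Tiny Gaussian elimination (no pivoting; the matrix here is positive definite)
for i in range(n):
    for j in range(i + 1, n):
        m = A[j][i] / A[i][i]
        A[j] = [aj - m * ai for aj, ai in zip(A[j], A[i])]
        b[j] -= m * b[i]
beta = [0.0] * n
for i in reversed(range(n)):
    beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]

print(beta)  # approximately [0, 3, 7]: intercept, phi_x1, phi_x2
```

Since every coalition is enumerated here, the regression recovers the exact Shapley values from the permutation definition; in practice Kernel SHAP samples only a subset of coalitions, which is where the approximation (and the savings) come from.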

The HyperSHAP method—a SAS-proprietary Shapley value approximation method—is implemented by the shapleyExplainer action. The good news is that HyperSHAP is more accurate than Kernel SHAP. The bad news is that it can be more greedy for compute power.

HyperSHAP is an approximation method that estimates the conditional expectation values without fitting a regression model. HyperSHAP is more efficient than the Kernel SHAP method because it computes expected differences for only some of the input subset combinations rather than for every combination. And although the intercept value that it calculates may not be the exact average of the original data set, it will likely be closer than the Kernel SHAP method gives you. Another advantage of the HyperSHAP algorithm is that it adjusts the intercept so that all Shapley values and the intercept add up to the final prediction of the query!

The HyperSHAP method is available either via coding or via the current version of the SAS Model Studio interface. Coders use the shapleyExplainer action to estimate Shapley values with HyperSHAP, and they have the added flexibility to trade off accuracy against computational cost by adjusting the depth hyperparameter, which controls which input subsets may be used to approximate the Shapley values. Generally you will start by setting depth = 1, then try depth = 2, and so on. As you increase depth, the approximation improves and more variable-interaction information is likely to be captured; on the downside, the computation time and memory needed to run the algorithm increase.
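The exact subsets HyperSHAP evaluates are internal to the shapleyExplainer action, but simply counting the input subsets up to a given size (assuming, for illustration, a model with 20 inputs) shows why raising depth costs more:

```python
from math import comb

p = 20  # assumed number of model inputs, for illustration
for depth in (1, 2, 3):
    # subsets of the p inputs with size <= depth
    n_subsets = sum(comb(p, k) for k in range(depth + 1))
    print(depth, n_subsets)
# prints: 1 21 / 2 211 / 3 1351
```

The count grows rapidly with depth, which is why starting at depth = 1 and increasing only as needed is the sensible default.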

If you prefer a user-friendly GUI to coding, you can use SAS Model Studio. In SAS Model Studio on SAS Viya 4, HyperSHAP is the method used behind the scenes, but you cannot change the depth. Some users are interested in knowing the intercept value. To determine it, simply subtract all the Shapley values from the actual prediction for the instance. Recall that HyperSHAP adjusts the intercept so that all the Shapley values and the intercept add up to the predicted value!
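Because HyperSHAP makes the intercept and the Shapley values sum to the prediction, recovering the intercept is a single subtraction. With hypothetical numbers (neither the prediction nor the Shapley values below come from a real model):

```python
# Hypothetical prediction and Shapley values, for illustration only
prediction = 231.4
shapley = {"Weight": 1.3, "Smoking": -0.6, "Systolic": -4.0}

# HyperSHAP guarantees intercept + sum(Shapley values) == prediction,
# so the intercept falls out by subtraction
intercept = prediction - sum(shapley.values())
print(round(intercept, 1))  # 234.7
```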

When computing the HyperSHAP values, SAS Viya:

- Enables coders to specify the depth of the approximation
- Generates a copy of the training data set where the inputs are set equal to the inputs for the instance under consideration for some of the subsets of the inputs
- Scores the new observations
- Averages the prediction for each data set copy
- Computes a weighted aggregation of the average predictions

Like the other Shapley values, TreeSHAP values enable you to understand the contribution of each input to the prediction for an individual instance. The TreeSHAP algorithm was introduced by Lundberg et al. (2020). This method is less computer intensive than Kernel SHAP or HyperSHAP, but unlike those methods it is not model-agnostic. As the name implies, TreeSHAP works only for tree-based models.

TreeSHAP computes exact Shapley values. It can also be used to extend local interpretations to capture input interactions and to interpret global model structure based on local interpretations.

Coders can calculate TreeSHAP values using the current version of SAS Viya for gradient boosting or random forest models with at least two trees and a maximum of two splits per node. This capability became available in Stable version 2023.04 (and LTS 2023.09). TreeSHAP values can be calculated using PROC ASTORE by setting TREESHAP = 1 in the SETOPTION statement. The DESCRIBE statement enables you to see the input and target that each generated TreeSHAP value explains.

TreeSHAP modifies the Shapley computational algorithm to track the number of subsets that flow into each node of a tree. A big advantage of the TreeSHAP method is that it reduces the computational complexity of the calculations.

A number of coding examples are provided in the SAS Viya Platform Programming Documentation.

In the current version of SAS Model Studio, HyperSHAP is the method available to you. Once you have built your pipeline in SAS Model Studio, select a model node (e.g., the gradient boosting node).

Open the options pane. Scroll down to **Post-training Properties** and expand **Model Interpretability, Local Interpretability**. Select **HyperSHAP**.

By default SAS Model Studio will select five local instances (individual observations) at random. Rerun the gradient boosting node and open the results. Under the **Summary** tab, you will now see a **Model Interpretability** button. Select the **Model Interpretability** button and select any of the five instances to see the Shapley values graphed.

In this example we are predicting cholesterol levels from weight, smoking, sex, systolic and diastolic blood pressure, and metropolitan relative weight. For Local Instance 1001441, weight increased the predicted cholesterol by more than 1 unit, while systolic blood pressure reduced it by 4 units.

If you want to select specific observations, you may specify (type in) up to five at a time.

**SAS Resources**

- Güneş, Funda, Ricky Tharrington, Ralph Abbey, and Xin Hunt. 2020. How to Explain Your Black-Box Mode...
- SAS Documentation:
- Xin Hunt’s response to Intuitive Way to Interpret Intercept Value of Shapley Values Output

**Original Papers**

- Lundberg, Scott and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions.
- Lundberg, Scott, et al. 2020. From local explanations to global understanding with explainable AI fo...
- Shapley, Lloyd S. 1952. A Value for n-Person Games
- Strumbelj, Erik and Igor Kononenko. 2010. An Efficient Explanation of Individual Classifications usi...

**Shapley Values Illustrated**
