
What Is SHAP (SHapley Additive exPlanations)?

Definition

SHAP (SHapley Additive exPlanations) is an explainability method based on cooperative game theory that assigns each input feature a contribution value (Shapley value) for a specific prediction, providing a mathematically principled way to understand how each feature influences the model's output.

How SHAP (SHapley Additive exPlanations) Works

SHAP borrows the concept of Shapley values from cooperative game theory, where the goal is to fairly distribute a reward among players according to their contributions. In SHAP, the 'players' are input features and the 'reward' is the model's prediction. A feature's Shapley value is its average marginal contribution to the prediction, taken over all possible subsets of the other features. Because evaluating every subset is exponentially expensive, practical implementations use sampling-based approximations (KernelSHAP, which is model-agnostic) or fast exact algorithms for specific model classes (TreeSHAP for tree ensembles). The result is both local explanations (why this specific prediction was made) and global explanations (which features matter most overall). SHAP also provides several visualization tools, including force plots, summary plots, and dependence plots, that make model behavior interpretable for non-technical stakeholders. Its explanations are theoretically grounded and consistent: the feature contributions for a prediction sum exactly to the difference between that prediction and the model's baseline (average) output.
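The exact computation described above can be sketched in a few lines of stdlib Python. This is a toy illustration, not the `shap` library: the `shapley_values` helper below is a hypothetical name, and the brute-force loop over all feature subsets is only feasible for a handful of features. Missing features are simulated by substituting baseline values, one common convention.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for the prediction f(x), relative to f(baseline).

    v(S) evaluates the model with features in subset S taken from x and
    the rest taken from the baseline. Each feature's Shapley value is a
    weighted average of its marginal contribution over all subsets.
    """
    n = len(x)

    def v(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Classic Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v(set(s) | {i}) - v(set(s)))
        phis.append(phi)
    return phis

# Toy linear model: for f(x) = w . x with no interactions, feature i's
# Shapley value reduces to w[i] * (x[i] - baseline[i]), easy to check by hand.
w = [0.5, -0.2, 0.1]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
x = [2.0, 1.0, 4.0]
base = [0.0, 0.0, 0.0]
phis = shapley_values(f, x, base)
print(phis)  # each value is approximately w[i] * (x[i] - base[i])
```

Note the consistency property in action: `sum(phis)` equals `f(x) - f(base)`, the gap between this prediction and the baseline output, up to floating-point error.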

Real-World Examples

1. A SHAP force plot showing that a patient's high blood pressure (+0.3) and age (+0.2) pushed the heart disease prediction higher, while exercise (-0.15) pushed it lower.

2. A data scientist using SHAP summary plots to explain to executives which features drive their customer churn model.

3. A bank using SHAP values to provide legally required explanations for why a loan application was denied.
