Shapley values for XAI: the good, the bad and the ugly

In this talk, Anes will ask questions such as: What is the true significance of Shapley values as feature importance measures? How can Shapley Residuals help us quantify their limits? How can we better understand global feature contributions with additive importance measures? Join us as we bring game theory and machine learning together in today's session.
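For a concrete handle on the core idea before the talk, here is a minimal sketch that computes exact Shapley values for a toy cooperative game by enumerating every coalition. The names `shapley_values` and `value_fn` are illustrative, not taken from any of the referenced papers' code, and the brute-force enumeration is only feasible for a handful of features.

```python
import itertools
import math

def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating every coalition.

    value_fn maps a frozenset of feature names to a payoff (for a model
    explanation this would be, e.g., the prediction with the remaining
    features marginalized out). Exponential in len(features), so only
    suitable for very small feature sets.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for size in range(n):
            for coalition in itertools.combinations(others, size):
                s = frozenset(coalition)
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = (math.factorial(size) * math.factorial(n - size - 1)
                     / math.factorial(n))
                total += w * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy game: payoff is 1 as soon as x1 or x2 is present; x3 is irrelevant.
def value_fn(coalition):
    return 1.0 if coalition & {"x1", "x2"} else 0.0

print(shapley_values(value_fn, ["x1", "x2", "x3"]))
# {'x1': 0.5, 'x2': 0.5, 'x3': 0.0}
```

Note how the two redundant features split the credit equally while the irrelevant one gets zero. This kind of behaviour under feature redundancy and correlation is exactly where the "bad and ugly" aspects discussed in the referenced papers begin.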
References
[Kum20P] I. E. Kumar, S. Venkatasubramanian, C. Scheidegger, and S. Friedler. Problems with Shapley-value-based explanations as feature importance measures. 2020.

[Kum21S] I. E. Kumar, C. Scheidegger, S. Venkatasubramanian, and S. Friedler. Shapley Residuals: Quantifying the limits of the Shapley value for explanations. 2021.

[Mer19E] L. Merrick and A. Taly. The Explanation Game: Explaining Machine Learning Models Using Shapley Values. 2019.

[Cov20U] I. Covert, S. Lundberg, and S.-I. Lee. Understanding Global Feature Contributions With Additive Importance Measures. 2020.