In this talk we delve into the lively debate over the alleged trade-off between accuracy and interpretability. We discuss the literature on the statistical implications of enforcing interpretability as an optimization constraint, and the case for choosing interpretable models over black-box ones. Join us in exploring simple yet powerful models that can help you understand your data better, as we ask whether they are worth adopting and what new problems interpretability brings to the table.
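To give a concrete flavor of what "simple yet powerful" can mean, here is a hand-written sketch of an interpretable risk-scoring model in the spirit of supersparse linear integer models [Ust13S]. Everything here is invented for illustration: the feature names, integer point values, and threshold are made up, whereas the cited papers learn them by solving an optimization problem over data.

```python
# Illustrative sketch only: a hand-crafted integer scoring system in the
# style of SLIM [Ust13S]. Weights, features, and threshold are invented
# for demonstration, not learned from data as in the actual method.

def risk_score(patient: dict) -> int:
    """Sum small integer points per risk factor; the whole model fits on a card."""
    points = 0
    points += 2 if patient["age"] >= 60 else 0
    points += 1 if patient["smoker"] else 0
    points += 3 if patient["prior_event"] else 0
    return points

def predict(patient: dict, threshold: int = 3) -> int:
    """Predict high risk (1) when the score meets the threshold."""
    return int(risk_score(patient) >= threshold)

print(predict({"age": 65, "smoker": False, "prior_event": True}))  # prints 1
print(predict({"age": 30, "smoker": True, "prior_event": False}))  # prints 0
```

The appeal of such models is that every prediction can be audited by mental arithmetic, which is exactly the property the interpretability literature above argues for in high-stakes settings.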

The debate on the accuracy-interpretability trade-off

# References

[Ang18L] Learning Certifiably Optimal Rule Lists for Categorical Data

[Bel22I] It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy

[Bro23T] Toward a taxonomy of trust for probabilistic machine learning

[Che18I] An Interpretable Model with Globally Consistent Explanations for Credit Risk

[Che19T] This Looks Like That: Deep Learning for Interpretable Image Recognition

[Che23U] Understanding and Exploring the Whole Set of Good Sparse Generalized Additive Models

[Dzi20E] Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability

[Gel13B] Bayesian Data Analysis, Third Edition

[Hof21T] This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks

[Hu19O] Optimal Sparse Decision Trees

[Jac21H] How machine-learning recommendations influence clinician treatment selections: The example of antidepressant selection

[Lin20G] Generalized and Scalable Optimal Sparse Decision Trees

[Mct22F] Fast Sparse Decision Tree Optimization via Reference Ensembles

[Pas22O] Overreliance on AI: Literature review

[Pou21M] Manipulating and Measuring Model Interpretability

[Rib20A] Beyond Accuracy: Behavioral Testing of NLP Models with CheckList

[Rud19Sa] The Secrets of Machine Learning: Ten Things You Wish You Had Known Earlier to Be More Effective at Data Analysis

[Rud19Sb] Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

[Sem22E] On the Existence of Simpler Machine Learning Models

[Ust13S] Supersparse linear integer models for predictive scoring systems

[Ust19L] Learning Optimized Risk Scores

[Xin22E] Exploring the Whole Rashomon Set of Sparse Decision Trees

[Yu20O] Optimal Decision Lists using SAT

[Zha23O] Optimal Sparse Regression Trees