Prototype-based approaches aim to train intrinsically interpretable models that are nevertheless as powerful as typical black-box neural networks. We introduce the main ideas behind this concept by explaining the original Prototypical Part Network (ProtoPNet) and the more recent Neural Prototype Tree (ProtoTree) model, which combines prototypical learning with decision trees. We then discuss limitations of these approaches, highlighting the need to enhance visual prototypes with quantitative textual information in order to better understand what a prototype represents. Furthermore, we present two experiments that expose problems of prototype-based approaches caused by the semantic gap between the image input space and the latent space. Finally, we outline directions for future work towards more interpretable models and a first benchmark for interpretability.
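To make the "this looks like that" mechanism concrete, the following is a minimal PyTorch sketch of a ProtoPNet-style prototype layer: it compares every learned prototype with every spatial patch of the backbone's latent feature map via squared L2 distance and converts the smallest distance into a similarity score with the log-activation used in ProtoPNet. The class name, shapes, and hyperparameters (PrototypeLayer, num_prototypes=10, depth=128) are illustrative assumptions rather than the original configuration; in ProtoPNet the resulting similarity scores feed a linear classification layer, while ProtoTree uses analogous prototype similarities to route samples through a soft decision tree.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    """Sketch of a ProtoPNet-style prototype layer (illustrative shapes)."""

    def __init__(self, num_prototypes=10, depth=128, eps=1e-4):
        super().__init__()
        # Each prototype is a 1x1 latent patch with `depth` channels.
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, depth, 1, 1))
        self.eps = eps

    def forward(self, z):
        # z: latent feature map from a CNN backbone, shape (B, depth, H, W).
        # Squared L2 distance to every patch, expanded as ||z||^2 - 2 z.p + ||p||^2
        # and computed with convolutions for efficiency.
        z_sq = F.conv2d(z ** 2, torch.ones_like(self.prototypes))
        zp = F.conv2d(z, self.prototypes)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        dist = F.relu(z_sq - 2 * zp + p_sq)          # (B, num_prototypes, H, W)
        # Minimum distance over all spatial patches (max similarity).
        min_dist = -F.max_pool2d(-dist, kernel_size=dist.shape[2:]).flatten(1)
        # Log-activation: small distance -> large similarity score.
        return torch.log((min_dist + 1) / (min_dist + self.eps))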
Latent space prototype interpretability: Strengths and shortcomings
References
[Che19T] Chen et al., 2019. This Looks Like That: Deep Learning for Interpretable Image Recognition.
[Hof21T] Hoffmann et al., 2021. This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks.
[Nau21T] Nauta et al., 2021. This Looks Like That, Because... Explaining Prototypes for Interpretable Image Recognition.