Prototype-based approaches aim to train intrinsically interpretable models that are nevertheless as powerful as typical black-box neural networks. We introduce the main ideas behind this concept by explaining the original Prototypical Part Network (ProtoPNet) and the more recent Neural Prototype Tree (ProtoTree), which combines prototypical learning with decision trees. We then discuss limitations of these approaches, underlining the need to enhance visual prototypes with quantitative textual information so that users can better understand what a prototype represents. Furthermore, we present two experiments that expose problems of prototype-based approaches caused by the semantic gap between the image input space and the latent space. Finally, we outline directions for future work towards more interpretable models and a first benchmark for interpretability.
This Looks Like That: Deep Learning for Interpretable Image Recognition,
When faced with challenging image classification tasks, we often explain our reasoning by dissecting the image and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture, the prototypical part network (ProtoPNet), that reasons in a similar way: the …
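The reasoning described above can be sketched numerically: each learned prototype is compared against every spatial patch of an image's latent feature map, the best match per prototype becomes a similarity score, and a linear layer turns those scores into class evidence. The sketch below is a minimal, self-contained illustration of this idea (not the authors' implementation); the sizes, the random features, and the names `prototype_scores` and `class_weights` are assumptions for illustration, while the similarity function follows ProtoPNet's log-ratio form.

```python
import numpy as np

# Minimal sketch of ProtoPNet-style scoring (illustrative, not the
# authors' code): every prototype is compared to every spatial patch
# of a latent feature map; the maximum similarity per prototype feeds
# a linear layer producing class logits.

rng = np.random.default_rng(0)

H, W, D = 7, 7, 128   # latent feature map: H x W patches of dimension D
P, C = 10, 2          # P prototypes, C classes (hypothetical sizes)

feature_map = rng.normal(size=(H, W, D))
prototypes = rng.normal(size=(P, D))
class_weights = rng.normal(size=(C, P))  # hypothetical learned weights

def prototype_scores(fmap, protos, eps=1e-4):
    """Max similarity of each prototype over all spatial patches.

    Uses ProtoPNet's log((d2 + 1) / (d2 + eps)) similarity, which is
    large when the squared distance d2 between a patch and a prototype
    is small.
    """
    patches = fmap.reshape(-1, fmap.shape[-1])           # (H*W, D)
    # squared L2 distance between every patch and every prototype
    d2 = ((patches[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    sim = np.log((d2 + 1.0) / (d2 + eps))                # (H*W, P)
    return sim.max(axis=0)                               # (P,)

scores = prototype_scores(feature_map, prototypes)  # one score per prototype
logits = class_weights @ scores                     # evidence per class
print(logits.shape)
```

The `argmax` patch location for each prototype (omitted here for brevity) is what gives the model its "this looks like that" explanation: it points at the image region most similar to each prototype.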
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks,
Deep neural networks that yield human interpretable decisions by architectural design have lately become an increasingly popular alternative to post hoc interpretation of traditional black-box models. Among these networks, the arguably most widespread approach is so-called prototype learning, where similarities to learned latent prototypes serve as the basis of classifying an unseen data point. In …
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition,
Image recognition with prototypes is considered an interpretable alternative for black box deep learning models. Classification depends on the extent to which a test image “looks like” a prototype. However, perceptual similarity for humans can be different from the similarity learned by the classification model. Hence, only visualising prototypes can be insufficient for a user to understand what a …