Our block seminars
Concept Activation Vectors
In the last seminar of our XAI series, Iván Rodríguez from appliedAI talks about Concept Activation Vectors (CAVs). CAVs go beyond feature attribution and bring a quantitative approach to concept-based testing. He will discuss how this tool interprets a neural network’s internal state in terms of human-friendly concepts. Abstract: In this XAI seminar, we’ll start by diving into the Testing with Concept Activation Vectors (TCAV) method, which helps us gauge how much a model’s prediction is influenced by a user-defined concept.
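As a taste of the mechanics, here is a minimal sketch of the core TCAV computation, assuming the reader extracts layer activations and class-logit gradients from their own network (`concept_acts`, `random_acts` and `class_grads` are illustrative placeholders): a CAV is the normal of a linear boundary separating concept activations from random activations, and the TCAV score is the fraction of class examples whose logit increases along that direction.

```python
# Minimal sketch of a CAV and a TCAV score; inputs are assumed to be
# activations/gradients extracted at a chosen layer of the user's network.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Fit a linear classifier separating concept from random activations;
    the CAV is the (normalized) normal of its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

def tcav_score(class_grads, cav):
    """Fraction of class examples whose class logit increases along the CAV,
    i.e. whose directional derivative w.r.t. the layer is positive."""
    return float(np.mean(class_grads @ cav > 0))
```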
An information-theoretic perspective on model interpretation
In the ninth seminar of our XAI series, Kristof Schröder, Senior Research Engineer at appliedAI, will discuss how maximizing mutual information between selected features and the response variable can aid model interpretation, offering an information-theoretic perspective on AI models. Abstract: Providing explainability in a model-agnostic way is a challenging task.
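As a minimal, hedged illustration of the underlying quantity (not the talk’s actual method), the snippet below ranks individual features by a nonparametric estimate of their mutual information with the response, using scikit-learn’s estimator:

```python
# A minimal sketch: rank features by estimated mutual information
# with the response variable (per-feature, not joint selection).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)  # one MI estimate per feature
top = np.argsort(mi)[::-1][:5]                  # five most informative features
print(top, mi[top])
```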
Effects of XAI on perception, trust and acceptance
This talk delves into the influence of Explainable Artificial Intelligence (XAI) on human cognition, trust, and acceptance of AI-driven systems. Through a review of empirical studies, this presentation illuminates how the provision of intelligible explanations shapes individuals’ perception of AI-generated outputs. By synthesizing findings from diverse contexts, we uncover critical insights into the mechanisms underlying the cognitive processing of explanations, shedding light on the factors that modulate trust and acceptance levels.
Latent space prototype interpretability: Strengths and shortcomings
Prototype-based approaches aim to train intrinsically interpretable models that are nevertheless as powerful as typical black-box neural networks. We introduce the main ideas behind this concept by explaining the original Prototypical Part Network (ProtoPNet) and the more recent Neural Prototype Tree (ProtoTree) model, which combines prototypical learning with decision trees. We then discuss some limitations of these approaches, underlining the need to enhance visual prototypes with quantitative textual information in order to better understand what a prototype represents.
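To make the idea concrete, here is a toy NumPy sketch of the ProtoPNet-style similarity computation: each learned prototype is compared against every spatial patch of a convolutional feature map, and its activation is the similarity of the closest patch. (The actual models are trained end to end in a deep learning framework; the function names here are illustrative.)

```python
# Toy sketch of prototype activations à la ProtoPNet.
import numpy as np

def prototype_activations(feature_map, prototypes, eps=1e-4):
    """feature_map: (H, W, D) conv features; prototypes: (P, D) learned vectors.
    Returns one similarity score per prototype (max-pooled over patches)."""
    H, W, D = feature_map.shape
    patches = feature_map.reshape(-1, D)                           # (H*W, D)
    d2 = ((patches[:, None, :] - prototypes[None]) ** 2).sum(-1)   # (H*W, P)
    sim = np.log((d2 + 1.0) / (d2 + eps))   # ProtoPNet's log-ratio similarity
    return sim.max(axis=0)                  # best-matching patch per prototype
```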
Influence Diagnostics Under Self-Concordance
In our sixth seminar, we have the pleasure of receiving Jillian Fisher from the Statistics Department of the University of Washington, who will be presenting her recent work accepted at AISTATS 2023, “Influence Diagnostics under Self-concordance”. Abstract: Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and AI applications.
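For intuition, the classical first-order influence of upweighting a training point z on a test loss is -∇ℓ(z_test)ᵀ H⁻¹ ∇ℓ(z), where H is the Hessian of the training loss. The sketch below computes this in closed form for ridge regression, where the Hessian is explicit; the paper’s setting (self-concordant losses) is more general, and the names here are illustrative.

```python
# Minimal sketch of first-order influence diagnostics for ridge regression.
import numpy as np

def influence_on_test_loss(X, y, x_test, y_test, lam=1e-2):
    """Approximate change in test loss from upweighting each training point:
    I_i = -grad_test^T H^{-1} grad_i."""
    n, d = X.shape
    theta = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)  # ridge fit
    H = (X.T @ X) / n + lam * np.eye(d)                          # loss Hessian
    grads = -(y - X @ theta)[:, None] * X                        # per-point gradients
    g_test = -(y_test - x_test @ theta) * x_test                 # test-loss gradient
    return -grads @ np.linalg.solve(H, g_test)
```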
Manifold Restricted Interventional Shapley Values
For the fifth installment in this series we are happy to host Muhammad Faaiz Taufiq from Oxford University. Faaiz will introduce his recent work presented at AISTATS 2023: ManifoldShap, a novel method for computing Shapley values that effectively circumvents the limitations of both off-manifold and on-manifold methods. Abstract: Shapley values are model-agnostic methods for explaining model predictions.
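As background, here is a hedged Monte Carlo sketch of plain interventional Shapley value estimation via random feature orderings; ManifoldShap additionally restricts evaluations to the data manifold, which this naive version deliberately does not do. `model` (any batch-callable predictor) and `background` (reference data for imputing absent features) are placeholders.

```python
# Naive permutation-sampling estimator of interventional Shapley values.
import numpy as np

def shapley_values(model, x, background, n_perm=200, seed=0):
    """Estimate per-feature Shapley values for one input x, imputing
    'absent' features from a background dataset (interventional)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()  # empty coalition
        prev = model(z[None])[0]
        for j in order:                    # add x's features one at a time
            z[j] = x[j]
            cur = model(z[None])[0]
            phi[j] += cur - prev           # marginal contribution of feature j
            prev = cur
    return phi / n_perm
```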
Influence functions and Data Pruning: from theory to non-convergence
Today’s session brings Influence Functions under the spotlight: the theory, non-convergence issues, and uses for data pruning. Fabio will uncover the fragile nature of influence functions in deep learning, helping us understand what neural networks memorize and exploring the possibility of beating power-law scaling of model performance with dataset size.
Shapley values for XAI: the good, the bad and the ugly
In this talk, Anes will ask questions like: What is the true significance of Shapley values as feature importance measures? How can Shapley Residuals help us quantify their limits? How can we better understand global feature contributions with additive importance measures? Join us as we merge game theory with machine learning in today’s session.
The debate on the accuracy-interpretability tradeoff
In this talk we delve into the gripping debate on the alleged trade-off between accuracy and interpretability. We’ll discuss the literature on the implications of enforcing interpretability as an optimization constraint, and navigate the contentious choice between interpretable and black-box models. Join us in exploring simple yet powerful models which you can use to understand your data better, while we ask whether they are worth adopting and what new problems interpretability brings to the table.
Introduction to Explainable AI
Safety and reliability concerns are major obstacles to the adoption of AI in practice. In addition, European regulation will make explaining the decisions of models a requirement for so-called high-risk AI applications. Explainable AI (XAI) is an emerging field that tries to tackle these challenges by providing better insights into the decision-making process of machine learning models.
Sampling-Free Epistemic Uncertainty Through Approximated Variance Propagation
Noise injection methods such as dropout are popular ways of implicitly capturing epistemic uncertainty in neural networks. Usually, noise injection is applied during both training and inference. In the training phase, the network learns to reduce the variance introduced by noise injection within the data distribution. Running inference several times on the same input with noise injection enabled makes it possible to estimate the remaining uncertainty, which will be mostly epistemic.
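For contrast with the talk’s sampling-free approach, here is a minimal PyTorch sketch of the sampling-based baseline it approximates (MC dropout): dropout is kept active at inference time, and the variance across repeated stochastic forward passes estimates the epistemic uncertainty. The toy model is an assumption for illustration.

```python
# Minimal sketch of sampling-based epistemic uncertainty via MC dropout.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                      nn.Dropout(p=0.2), nn.Linear(64, 1))

def mc_dropout_predict(model, x, n_samples=50):
    """Run repeated stochastic forward passes with dropout active and
    return the predictive mean and variance across samples."""
    model.train()  # keep dropout enabled at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)

mean, var = mc_dropout_predict(model, torch.randn(8, 16))
```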
Uncertainty quantification with conformal prediction