Effects of XAI on perception, trust and acceptance

This talk examines how Explainable Artificial Intelligence (XAI) affects human perception, trust, and acceptance of AI-driven systems. Through a review of empirical studies, the presentation shows how intelligible explanations shape individuals’ perception of AI-generated outputs. By synthesizing findings from diverse contexts, it identifies the mechanisms underlying the cognitive processing of explanations and the factors that modulate trust and acceptance. The talk aims to prompt a broader conversation on designing XAI systems that not only perform well but also empower users through comprehensible, trust-building explanations.
References
[Buc20P]
Buçinca, Z., Lin, P., Gajos, K. Z., & Glassman, E. L. (2020). Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems. Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20).
[Her23I]
Herm, L.-V. (2023). Impact of explainable AI on cognitive load: Insights from an empirical study. Proceedings of the European Conference on Information Systems (ECIS 2023).
[Lei23E]
Leichtmann, B., Humer, C., Hinterreiter, A., Streit, M., & Mara, M. (2023). Effects of Explainable Artificial Intelligence on trust and human behavior in a high-risk decision task. Computers in Human Behavior, 139, 107539.
[Shi21E]
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551.
[Van21E]
van der Waa, J., Nieuwburg, E., Cremers, A., & Neerincx, M. (2021). Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence, 291, 103404.