Safety and reliability concerns are major obstacles to the adoption of AI in practice. In addition, upcoming European regulation will make explaining model decisions a requirement for so-called high-risk AI applications. Explainable AI (XAI) is an emerging field that aims to tackle these challenges by providing better insights into the decision-making process of machine learning models.
In this introductory talk to our seminar series on Explainable AI, we discuss the general motivation and goals of XAI, as well as a taxonomy of XAI methods. Finally, we give a short overview of the companion workshop “Methods and Issues in Explainable AI” and the content of the upcoming talks.