Decisions made by Artificial Intelligence (AI) systems often outperform those made by humans. Nevertheless, as several studies have shown, these decisions can still be unfair. Research efforts are therefore directed toward creating fair decision-making models (or algorithms). This is a challenging task for several reasons, including, but not limited to, detecting the source of bias in the data-generating process, defining the applicable notion of fairness (outcome or treatment), determining how bias is encoded in the data (directly, indirectly or multi-dimensionally), and making sure that decision-making algorithms do not create, propagate or amplify existing biases. Guided by these challenges, Carmen and Lisa will present their work on preventing discrimination in decision-making algorithms, applicable in pattern classification settings. Lisa’s line of research focuses on measuring explicit and implicit bias using two mathematical theories that have not been applied to this problem before: fuzzy-rough set theory and fuzzy cognitive maps. She will compare the two novel measures to existing ones and discuss their advantages and limitations. Carmen’s expertise lies in interrogating black-box models to discover their internal logic. Her method, LUCID, uses gradient-based inverse design to generate the ideal input of a model. This approach provides insight into how the model reaches certain decisions, e.g., which features play a crucial role in its decision-making process.
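To make the outcome notion of fairness mentioned above concrete, a common group-fairness metric is demographic parity: both groups should receive positive decisions at the same rate. A minimal sketch follows; the function name and toy data are illustrative, not taken from the speakers' work.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0 means both groups receive positive decisions at the same
    rate (statistical parity); larger values indicate outcome disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: group 0 receives 3/4 positive decisions, group 1 only 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # 0.5
```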
LUCID: Exposing Algorithmic Bias through Inverse Design
AI systems can create, propagate, support, and automate bias in decision-making processes. To mitigate biased decisions, we need both to understand the origin of the bias and to define what it means for an algorithm to make fair decisions. Most group fairness notions assess a model's equality of outcome by computing statistical metrics on the outputs. We argue that these output metrics encounter …
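The core idea behind gradient-based inverse design can be sketched on a toy logistic model: hold the model fixed and ascend the gradient of its output with respect to the input, so that the input converges toward what the model considers ideal. This is an illustrative sketch under simplified assumptions, not the LUCID implementation; the function name is my own.

```python
import numpy as np

def inverse_design(weights, bias, n_steps=200, lr=0.1):
    """Gradient-ascent sketch of inverse design on a logistic model:
    starting from a neutral input, repeatedly nudge the input in the
    direction that raises the positive-class probability. The features
    the ascent pushes hardest reveal what drives the model's decisions.
    """
    weights = np.asarray(weights, dtype=float)
    x = np.zeros_like(weights)  # neutral starting input
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-(weights @ x + bias)))  # model output
        x += lr * p * (1.0 - p) * weights                # ascend dp/dx
    return x

# Toy model: feature 0 drives positive decisions, feature 1 is ignored.
ideal = inverse_design([2.0, 0.0], bias=-1.0)
print(ideal)  # feature 0 grows large, feature 1 stays at 0
```

If the "ideal" input turns out to encode a protected attribute (directly or through proxies), that is evidence of bias in the model's internal logic.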
A fuzzy-rough uncertainty measure to discover bias encoded explicitly or implicitly in features of structured pattern classification datasets
The need to measure bias encoded in tabular data that are used to solve pattern recognition problems is widely recognized by academia, legislators and enterprises alike. In previous work, we proposed a bias quantification measure, called fuzzy-rough uncertainty, which relies on fuzzy-rough set theory. The intuition is that protected features should not change the fuzzy-rough boundary …
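The fuzzy-rough boundary region mentioned above can be illustrated with a small sketch: a Gaussian fuzzy similarity relation, a lower approximation via the Kleene-Dienes implicator, and an upper approximation via the minimum t-norm. This is a simplified illustration of the underlying theory, assuming particular connectives and a Gaussian similarity; the function name is my own, not the paper's.

```python
import numpy as np

def fuzzy_rough_boundary(X, y, sigma=1.0):
    """Per-instance fuzzy-rough boundary membership (illustrative).

    R is a Gaussian fuzzy similarity relation between instances. The
    lower approximation of an instance's own class uses the implicator
    max(1 - R, C); the upper approximation uses the t-norm min(R, C).
    Boundary = upper - lower: how ambiguously an instance sits between
    classes. Comparing boundary regions with and without a protected
    feature is the intuition behind the fuzzy-rough uncertainty measure.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    R = np.exp(-d2 / (2 * sigma ** 2))             # fuzzy similarity
    C = (y[None, :] == y[:, None]).astype(float)   # same-class indicator
    lower = np.min(np.maximum(1 - R, C), axis=1)   # inf of implicator
    upper = np.max(np.minimum(R, C), axis=1)       # sup of t-norm
    return upper - lower

# Well-separated classes give boundaries near 0; overlapping classes
# give boundaries near 1.
print(fuzzy_rough_boundary([[0.0], [0.1], [5.0], [5.1]], [0, 0, 1, 1]))
```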
Modeling implicit bias with fuzzy cognitive maps
This paper presents a Fuzzy Cognitive Map model to quantify implicit bias in structured datasets where features can be numeric or discrete. In our proposal, problem features are mapped to neural concepts that are initially activated by experts when running what-if simulations, whereas weights connecting the neural concepts represent absolute correlation/association patterns between features. In …
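The simulation dynamics of a Fuzzy Cognitive Map can be sketched with a standard sigmoid update rule: each concept's activation at the next step is a squashed weighted sum of the others, and a what-if simulation clamps the concept an expert activates. The weight matrix below is a hypothetical toy map, not one from the paper.

```python
import numpy as np

def fcm_simulate(W, a0, clamp=None, steps=50):
    """Iterate a Fuzzy Cognitive Map: a(t+1) = sigmoid(W.T @ a(t)),
    where W[i, j] is the causal weight from concept i to concept j.
    Concepts listed in `clamp` are held fixed, mimicking an expert
    activating a feature concept in a what-if simulation.
    """
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-(W.T @ a)))
        for i, v in (clamp or {}).items():
            a[i] = v
    return a

# Hypothetical 3-concept map: concept 0 (a protected feature) influences
# the decision concept 2 both directly and through proxy concept 1.
W = np.array([[0.0, 0.9, 0.4],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])
on  = fcm_simulate(W, [1.0, 0.0, 0.0], clamp={0: 1.0})
off = fcm_simulate(W, [0.0, 0.0, 0.0], clamp={0: 0.0})
print(on[2] - off[2])  # a positive gap signals an implicit-bias pathway
```

Comparing the decision concept's steady-state activation with the protected concept switched on versus off is one way such a map can surface implicit bias.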