Preventing bias through decision-making algorithms

Decisions made by Artificial Intelligence (AI) systems often outperform those made by humans. Nevertheless, as several studies have shown, these decisions can still be unfair. Research efforts are therefore increasingly directed toward creating fair decision-making models (or algorithms). This is a challenging task for several reasons, including, but not limited to, detecting the source of bias in the data-generating process, defining the applicable notion of fairness (outcome or treatment), determining how bias is encoded in the data (directly, indirectly or multi-dimensionally), and ensuring that decision-making algorithms do not create, propagate or amplify existing biases. Guided by these challenges, Carmen and Lisa will present their work on preventing discrimination through decision-making algorithms, which applies to pattern classification settings. Lisa’s line of research focuses on measuring explicit and implicit bias using two mathematical theories that have not previously been applied to this problem: fuzzy-rough set theory and fuzzy cognitive maps. She will compare the two novel measures to existing ones and discuss their advantages and limitations. Carmen’s expertise lies in interrogating black-box models to uncover their internal logic. Her method, called LUCID, uses gradient-based inverse design to generate a model’s ideal input. This approach provides insight into how the model reaches its decisions, e.g., which features play a crucial role in the model’s decision-making process.
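To give a flavour of gradient-based inverse design, the sketch below freezes a trained classifier and optimises the input itself so that the model's confidence in the favourable outcome is maximised; the resulting "canonical input" hints at which features drive that decision. This is only an illustrative assumption of the general technique, not the authors' LUCID implementation: the toy network, feature names and hyperparameters are made up for the example.

```python
# Minimal sketch of gradient-based inverse design (illustrative, not LUCID itself):
# keep the model's weights fixed and run gradient ascent on the input.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_features = 5
feature_names = ["income", "age", "education", "gender", "zip_code"]  # hypothetical

# Stand-in for a trained black-box classifier (binary: 1 = favourable outcome).
model = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze weights: only the input is optimised

# Free input variable, initialised at the (standardised) feature means.
x = torch.zeros(1, n_features, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logit = model(x)
    loss = -logit.mean()  # gradient ascent on the favourable-class logit
    loss.backward()
    optimizer.step()

# Inspect the canonical input: extreme feature values suggest what the model
# treats as the "ideal" candidate for the favourable outcome.
canonical = x.detach().squeeze()
for name, value in zip(feature_names, canonical.tolist()):
    print(f"{name:>10}: {value:+.2f}")
```

In practice one would repeat this from many random starting points and compare the distribution of canonical inputs across sensitive features; a systematic skew (e.g., in the hypothetical gender feature) would point to encoded bias.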
References
[Maz22L] LUCID: Exposing Algorithmic Bias through Inverse Design
[Nap22F] A fuzzy-rough uncertainty measure to discover bias encoded explicitly or implicitly in features of structured pattern classification datasets
[Nap22M] Modeling implicit bias with fuzzy cognitive maps