We review work on the operational integrity of ML models, focusing on their predictability and stability under varied and unpredictable deployment conditions. We examine mechanisms for detecting anomalous inputs and behaviors that can compromise functionality and security, illustrated by the sketch below, and survey rigorous verification methods for establishing the trustworthiness of ML systems.
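As a concrete instance of such detection mechanisms, the following is a minimal sketch of one widely used baseline: flagging inputs whose maximum softmax probability falls below a threshold, a cheap proxy for out-of-distribution inputs (Hendrycks and Gimpel, 2017). The function names, threshold, and logits here are illustrative assumptions, not a reference to any specific system covered in this review.

```python
# Minimal sketch (illustrative, not a reference implementation):
# flag inputs the classifier is least confident about, so they can be
# routed to fallback handling (human review, a safe default, etc.).
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def flag_anomalies(logits: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Return a boolean mask over inputs whose top-class probability
    falls below `threshold` (threshold chosen here for illustration)."""
    confidences = softmax(logits).max(axis=-1)
    return confidences < threshold

if __name__ == "__main__":
    # Hypothetical logits: the first row is a confident prediction;
    # the second is nearly uniform, as an off-distribution input might be.
    logits = np.array([[6.0, 0.5, 0.2],
                       [0.4, 0.5, 0.45]])
    print(flag_anomalies(logits))  # [False  True]
```

Simple confidence thresholds of this kind are only a baseline; the literature reviewed here includes substantially stronger detectors, but the example conveys the basic operational pattern of monitoring a deployed model for inputs it was not trained to handle.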
This area is critical given the increasing integration of ML models into safety- and security-sensitive applications, where malfunction or compromise can have significant, sometimes irreversible, consequences. Our focus is driven by the need to understand and address the vulnerabilities and uncertainties inherent in developing and deploying ML systems.