Classifier calibration
For many applications of probabilistic classifiers it is important that the predicted confidence vectors reflect true probabilities; a classifier with this property is said to be calibrated. Recently it has been shown that common models fail to satisfy this property, which makes reliable methods for measuring and improving calibration important tools. Unfortunately, obtaining such methods is far from trivial, especially for problems with many classes. In this series we review, investigate and develop methods to address the issue.
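To make "measuring calibration" concrete, the sketch below computes the binned expected calibration error (ECE), one standard calibration metric: predictions are grouped into confidence bins, and the metric averages the gap between mean confidence and empirical accuracy in each bin. The function name, the uniform binning scheme and the NumPy implementation are illustrative choices of ours, not code from the series.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned expected calibration error (ECE).

    confidences: predicted top-label probabilities in [0, 1]
    correct:     boolean array, True where the top prediction was right
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Assign each prediction to one confidence bin.
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        # Gap between average confidence and empirical accuracy in the bin,
        # weighted by the fraction of samples falling into it.
        gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
        ece += in_bin.mean() * gap
    return ece

# Example: a perfectly calibrated classifier would have ECE close to 0.
conf = np.array([0.9, 0.8, 0.75, 0.6])
hits = np.array([True, True, False, True])
print(expected_calibration_error(conf, hits))
```

For a calibrated classifier, predictions made with confidence p are correct about a fraction p of the time, so every per-bin gap (and hence the ECE) is small.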