What is meant by calibration quality? How can the calibration quality of an algorithm be assessed from its output?
A calibration curve, also called a reliability curve or reliability diagram, is the standard way to assess the calibration quality of a classifier’s predictions. To create a calibration curve, the predicted scores are first binned into discrete intervals, such as deciles; if there are enough observations, more intervals tend to produce better plots. Within each bin, the average predicted probability of the observations in that bin is plotted on the x-axis, and the observed proportion of positive labels in that bin is plotted on the y-axis.
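As a minimal sketch of this procedure (assuming scikit-learn is available, and using synthetic placeholder arrays `y_true` and `y_prob` to stand in for the labels and a fitted classifier's predicted probabilities), the binned statistics behind a calibration curve can be computed with `sklearn.calibration.calibration_curve`:

```python
import numpy as np
from sklearn.calibration import calibration_curve

# Synthetic, illustrative inputs: true binary labels and predicted
# probabilities from some already-fitted classifier (placeholder names).
rng = np.random.default_rng(0)
y_prob = rng.uniform(size=1000)                          # predicted P(y = 1)
y_true = (rng.uniform(size=1000) < y_prob).astype(int)   # labels consistent with the scores

# Bin the predictions into deciles and compute, per bin, the mean
# predicted probability and the observed fraction of positive labels.
prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=10, strategy="uniform")

for p_hat, p_obs in zip(prob_pred, prob_true):
    print(f"mean predicted: {p_hat:.2f}   observed positive rate: {p_obs:.2f}")
```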

A perfectly calibrated classifier is represented by the diagonal line y = x (slope of 1), meaning that within each bin the observed proportion of positive labels equals the average predicted probability. If the average predicted probability trends higher than the observed proportions, the classifier is overestimating the actual probability of success; if the observed proportions trend higher than the average predictions, the classifier is underestimating the success probability. For example, a classifier might overestimate the actual success probability in the lower deciles and underestimate it in the upper deciles.
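One possible way to read this off (a sketch assuming matplotlib is available and reusing the `prob_pred`/`prob_true` arrays from the snippet above) is to plot the reliability curve against the ideal diagonal and flag, per bin, whether the classifier over- or underestimates:

```python
import matplotlib.pyplot as plt

# Plot the reliability curve against the ideal diagonal y = x.
plt.plot([0, 1], [0, 1], linestyle="--", label="perfectly calibrated")
plt.plot(prob_pred, prob_true, marker="o", label="classifier")
plt.xlabel("Mean predicted probability (per bin)")
plt.ylabel("Observed fraction of positives (per bin)")
plt.legend()
plt.show()

# Points below the diagonal (observed rate < mean prediction) indicate
# overestimation; points above the diagonal indicate underestimation.
for p_hat, p_obs in zip(prob_pred, prob_true):
    direction = "overestimates" if p_hat > p_obs else "underestimates"
    print(f"bin at {p_hat:.2f}: classifier {direction} (observed {p_obs:.2f})")
```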
