
AIML.com

Machine Learning Resources

What are the options for calibrating the probabilities output by a classifier that does not produce natural probabilities?


The two most common calibration approaches are:

(a) Platt scaling

(b) Isotonic regression

At a high level, Platt scaling fits a logistic regression to the classifier's raw outputs: the inputs are the uncalibrated scores (or probabilities) and the targets are the true class labels, so the fitted sigmoid maps raw scores to calibrated probabilities. Isotonic regression takes a similar approach but fits a piecewise-constant, non-decreasing function to the raw outputs instead of a sigmoid. Platt scaling tends to work better when the raw probabilities are pulled away from 0 and 1, as commonly happens with ensemble-based methods like Random Forest and GBM. On the other hand, isotonic regression provides better calibration for algorithms like Naive Bayes that push many probabilities to the extremes; being non-parametric, it is more flexible but requires more data to avoid overfitting. In either case, the calibration function should be fit on a held-out set rather than the data used to train the classifier.
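The two approaches can be sketched in pure Python. This is a minimal illustration, not a production implementation: the function names, the simple gradient-descent fit for Platt scaling, and the pool-adjacent-violators loop for isotonic regression are written from scratch here for clarity.

```python
import math

def platt_scaling(scores, labels, lr=0.1, epochs=2000):
    """Platt scaling: fit sigmoid(a*s + b) to (raw score, label) pairs
    by gradient descent on the logistic loss."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            grad_a += (p - y) * s / n
            grad_b += (p - y) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return lambda s: 1.0 / (1.0 + math.exp(-(a * s + b)))

def isotonic_fit(scores, labels):
    """Isotonic regression via pool adjacent violators: merge neighboring
    blocks until the fitted step function is non-decreasing in the score."""
    pairs = sorted(zip(scores, labels))
    merged = []  # blocks of [mean label, weight, starting score]
    for s, y in pairs:
        merged.append([float(y), 1, s])
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            v2, w2, _ = merged.pop()
            v1, w1, x1 = merged.pop()
            merged.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, x1])
    def predict(s):
        p = merged[0][0]
        for v, _, x in merged:
            if s >= x:
                p = v
        return p
    return predict
```

In practice you would not hand-roll these: scikit-learn's `CalibratedClassifierCV` wraps any classifier and applies either method via `method="sigmoid"` (Platt) or `method="isotonic"`, with cross-validation to keep the calibration data separate from the training data.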

