AIML.com


What are the options for calibrating the probabilities output by a classifier that does not produce well-calibrated probabilities natively?

The two most common calibration approaches are:

(a) Platt scaling

(b) Isotonic regression

At a high level, Platt scaling fits a logistic regression to the classifier's raw outputs: the inputs are the raw scores (or uncalibrated probabilities) and the targets are the true class labels, so the fitted sigmoid maps raw scores to calibrated probabilities. Isotonic regression follows a similar recipe but fits a piecewise-constant, non-decreasing function instead of a sigmoid, which makes it more flexible but also more prone to overfitting when the calibration set is small. Platt scaling tends to work better when the raw probabilities are pushed away from 0 and 1, as is typical of ensemble-based methods like Random Forest and GBM, because that distortion is roughly sigmoid-shaped. Isotonic regression, on the other hand, usually provides better calibration for algorithms like Naive Bayes that produce many probabilities at the extremes.
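Both approaches are available in scikit-learn through `CalibratedClassifierCV`, where `method="sigmoid"` corresponds to Platt scaling and `method="isotonic"` to isotonic regression. A minimal sketch, using a synthetic dataset and a Naive Bayes base classifier purely for illustration:

```python
# Calibrating a Naive Bayes classifier two ways: Platt scaling
# ("sigmoid") and isotonic regression. Dataset, split sizes, and the
# choice of GaussianNB are illustrative assumptions, not a recipe.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Uncalibrated baseline: Naive Bayes often yields extreme probabilities
nb = GaussianNB().fit(X_train, y_train)
p_raw = nb.predict_proba(X_test)[:, 1]

# Platt scaling: fits a logistic (sigmoid) curve to the raw outputs,
# using cross-validation so the calibrator is fit on held-out folds
platt = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=5)
platt.fit(X_train, y_train)
p_platt = platt.predict_proba(X_test)[:, 1]

# Isotonic regression: fits a piecewise-constant non-decreasing map
iso = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
iso.fit(X_train, y_train)
p_iso = iso.predict_proba(X_test)[:, 1]

# Brier score (lower is better) summarizes calibration + accuracy
print("Brier raw:     ", brier_score_loss(y_test, p_raw))
print("Brier sigmoid: ", brier_score_loss(y_test, p_platt))
print("Brier isotonic:", brier_score_loss(y_test, p_iso))
```

The cross-validated fitting inside `CalibratedClassifierCV` matters: fitting the calibrator on the same data the base model was trained on would reuse its overconfident in-sample predictions and defeat the purpose.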
