The two most common calibration approaches are:
(a) Platt scaling
(b) Isotonic regression
At a high level, Platt scaling fits a logistic regression to the model's original predictions: the input is the array of raw predicted probabilities and the target is the true class labels. Isotonic regression follows the same setup but fits a piecewise-constant, non-decreasing function to the raw predictions instead of a logistic curve. Platt scaling tends to work better when the raw probabilities are not concentrated near 0 or 1, as is typical of ensemble-based methods like Random Forest and GBM, whose averaged votes rarely produce extreme values. Isotonic regression, on the other hand, tends to give better calibration for algorithms like Naive Bayes that push many probabilities to the extremes.
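A minimal sketch of both calibrators, assuming a binary problem and a Naive Bayes base model (the synthetic dataset and model choice here are illustrative, not prescribed by anything above). Note that the calibrator is fit on a held-out set the base model never saw, so it learns to correct the model's miscalibration rather than memorizing its training-set scores:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, random_state=0)

# Base model whose raw probabilities we want to calibrate.
model = GaussianNB().fit(X_train, y_train)
raw = model.predict_proba(X_cal)[:, 1]  # raw scores for the positive class

# Platt scaling: logistic regression with the raw scores as the single
# input feature and the true labels as the target.
platt = LogisticRegression().fit(raw.reshape(-1, 1), y_cal)
calibrated_platt = platt.predict_proba(raw.reshape(-1, 1))[:, 1]

# Isotonic regression: a piecewise-constant, non-decreasing mapping
# from raw scores to calibrated probabilities.
iso = IsotonicRegression(out_of_bounds="clip").fit(raw, y_cal)
calibrated_iso = iso.predict(raw)
```

In practice you rarely wire this up by hand: scikit-learn's `CalibratedClassifierCV` wraps both approaches (`method="sigmoid"` for Platt scaling, `method="isotonic"` for isotonic regression) and handles the cross-validated train/calibration split internally.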