
What is Gradient Boosting (GBM)? Describe how the Gradient Boosting algorithm works


Related Questions:
– What are the key hyperparameters for a GBM model?

– What are the advantages and disadvantages of a GBM model?
– What does Gradient in Gradient Boosted Trees refer to?
– How is Gradient Boosting different from Random Forest?

GBM is an ensemble-based supervised machine learning algorithm suitable for both regression and classification problems. The algorithm gets its name from the concept of Boosting, an iterative process that combines multiple weak learners to form a single strong learner. In this algorithm, a series of decision trees is trained sequentially, each one trying to correct the errors of the previous tree. While bagging methods like Random Forest primarily seek to reduce prediction variance (overfitting), boosting techniques also reduce bias, thus theoretically creating a highly accurate model that is able to generalize well to unseen data.

The basic idea of Gradient Boosting is to optimize a loss function by minimizing the residual error at each stage of the algorithm. The residual error is the difference between the true value and the predicted value of the target variable. The Gradient Boosting algorithm works as follows:
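As a quick numeric illustration of residual errors before the step-by-step walkthrough, the snippet below uses a few hypothetical values (the numbers are made up purely for illustration):

```python
import numpy as np

# Hypothetical true target values and predictions from the first model
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([4.0, 4.5, 6.0, 8.0])

# Residual errors (true minus predicted); these become the targets
# that the next decision tree in the sequence is trained to predict
residuals = y_true - y_pred
print(residuals)  # [-1.   0.5  1.   1. ]
```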

GBM for Regression

  1. Initialize the model: Start with a simple model, typically a constant prediction (such as the mean of the target) or a decision tree with a small number of nodes.
  2. Compute the residual errors: Compute the difference between the true values and the predicted values of the target variable. These errors are used as the target variable for the next decision tree.
    r1 = y − F1(x)   (y is the actual observation, F1(x) is the prediction from the first decision tree)
    r2 = r1 − h2(x)   (h2(x) is the prediction from the second decision tree, which tries to predict the residual r1 rather than fitting on y)
    r3 = r2 − h3(x)   (the third decision tree predicts the residual of the second decision tree, and so on)
  3. Fit a decision tree: Fit a decision tree (weak learner) to the residual errors. The decision tree is trained to predict the residual errors from the input features.
  4. Update the model: Update the model by adding the newly trained decision tree to it. Each tree's contribution is scaled by a learning rate parameter η, which is generally chosen to be very small (e.g., in the range 0.001 to 0.01). Using the learning rate η to shrink each tree's contribution is a regularization technique in gradient boosting known as shrinkage. Shrinkage leads to significant improvements in the model's generalization ability compared with a model without shrinkage (i.e., η = 1).
    For m = 1 to M iterations:

    Fm(x) = Fm−1(x) + η · γm · hm(x)

    where,
    Fm(x) represents the gradient boosting model after m iterations,
    η is the learning rate (shrinkage parameter),
    hm(x) is the decision tree fit on the residuals at iteration m,
    m indexes the decision trees (m = 1, …, M),
    and γm is the loss-function optimization parameter (step size), given by:

    γm = argmin over γ of Σi L(yi, Fm−1(xi) + γ · hm(xi))

    Common loss functions used for optimization in Gradient Boosting are Mean Squared Error for regression problems and Cross-Entropy (log loss) for classification problems.
  5. Repeat: Repeat steps 2 to 4 until the desired level of accuracy is achieved or until the number of decision trees reaches a specified limit.
  6. Final model: The final model is a weighted sum of all of the individual trees. Small trees, combined with a low learning rate, allow the model to learn slowly and avoid overfitting, leading to a better-performing model. (A from-scratch sketch of this procedure is shown after the figure below.)
Figure: The Gradient Boosting process
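The steps above can be summarized in a short from-scratch sketch. This is a minimal illustration only, assuming squared-error loss and using scikit-learn's DecisionTreeRegressor as the weak learner; the function and parameter names (gbm_fit, gbm_predict, n_trees, learning_rate, max_depth) are made up for readability and are not part of any library API.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbm_fit(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
    """Minimal gradient boosting for regression with squared-error loss."""
    # Step 1: initialize the model with a constant prediction (mean of y)
    f0 = float(np.mean(y))
    pred = np.full(len(y), f0)
    trees = []
    for m in range(n_trees):
        # Step 2: compute residual errors (for squared-error loss, the
        # residual y - F(x) is exactly the negative gradient of the loss)
        residuals = y - pred
        # Step 3: fit a small decision tree to the residuals
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        # Step 4: update the model, shrinking the new tree's contribution
        # by the learning rate (shrinkage)
        pred = pred + learning_rate * tree.predict(X)
        trees.append(tree)
    # Steps 5-6: after M iterations, the model is the initial constant plus
    # the shrunken sum of all individual trees
    return f0, trees

def gbm_predict(X, f0, trees, learning_rate=0.1):
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred = pred + learning_rate * tree.predict(X)
    return pred
```

With squared-error loss, fitting each tree to the plain residuals is equivalent to taking a gradient-descent step in function space, which is where the "gradient" in Gradient Boosting comes from.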

Gradient Boosting has several hyperparameters that can be tuned to improve the performance of the model. Some of the important hyperparameters are the number of decision trees, the maximum depth of each tree, the learning rate, and the loss function. The learning rate controls the contribution of each decision tree to the final prediction. A smaller learning rate results in a more stable model (at the cost of requiring more trees), while a larger learning rate can lead to overfitting. In GBM, there is no bootstrapping (or resampling) of the training data while building individual trees, as each tree is fit to the residuals computed on all of the available data.
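These hyperparameters map directly onto the arguments of scikit-learn's GradientBoostingRegressor; the example below is only illustrative, and the specific values are not tuning recommendations:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression data purely for demonstration
X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

gbm = GradientBoostingRegressor(
    n_estimators=500,       # number of decision trees
    max_depth=3,            # maximum depth of each tree
    learning_rate=0.05,     # shrinkage parameter (eta)
    loss="squared_error",   # loss function to optimize
    random_state=42,
)
gbm.fit(X_train, y_train)
print("R^2 on held-out data:", gbm.score(X_test, y_test))
```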

Gradient Boosting is a powerful machine learning algorithm that can achieve high accuracy on a wide range of tasks. It is particularly useful for complex and high-dimensional data, where other algorithms may struggle to perform well. By combining multiple weak learners, Gradient Boosting can create a strong model that can generalize well to new data.

XGBoost, which stands for Extreme Gradient Boosting, is a modern open-source implementation of Gradient Boosting Machine. XGBoost has become increasingly popular for its scalability, accuracy, and speed.
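As a brief illustration (the parameter values are chosen arbitrarily), XGBoost exposes a scikit-learn-compatible interface, so a boosted classifier can be trained in a few lines:

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic classification data purely for demonstration
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = xgb.XGBClassifier(
    n_estimators=300,
    learning_rate=0.05,
    max_depth=4,
    eval_metric="logloss",
)
clf.fit(X_train, y_train)
print("Accuracy on held-out data:", clf.score(X_test, y_test))
```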
