This blog post introduces how gradient boosting works in machine learning.

While there are many mechanisms, such as pruning, to prevent trees from overfitting, decision trees still tend to struggle with high-dimensional and complex data. This happens because the trees inevitably need to become more complex in order to model such data accurately without underfitting.
Bagging & Boosting
How do we maintain the simplicity of the model while ensuring it can fit well on high-dimensional and complex data? One solution to this problem is bagging, where we build many simple trees and combine their results to make a prediction. The most intuitive way to do this is by taking random samples and random features to train many trees, and then making predictions based on a majority vote. This is called Random Forest, which is a surprisingly effective technique (I won’t cover it here).
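As a quick illustration of bagging in practice, below is a minimal sketch using scikit-learn's RandomForestClassifier; the synthetic dataset and hyperparameters are placeholders chosen only for illustration.

```python
# Minimal Random Forest sketch: many trees trained on bootstrap samples
# and random feature subsets, combined by majority vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
forest.fit(X_train, y_train)
print("Test accuracy:", forest.score(X_test, y_test))
```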

Another method of combining small trees is to create a chain of small trees, where each tree tries to correct the errors made by the previous ones. This technique is called boosting, and it turns out to be quite effective in learning highly complex data. In this article, I would like to dive deep into boosting by looking at an example known as Gradient Boost.
Gradient Boost
The algorithm for gradient boost can be defined as follows:
Input: Data $\{(x_i, y_i)\}_{i=1}^{n}$ and a differentiable loss function $L(y_i, F(x))$.
Step 1: Initialize model with a constant value:
$$F_0(x) = \underset{\gamma}{\operatorname{argmin}} \sum_{i=1}^{n} L(y_i, \gamma)$$
Step 2: for $m = 1$ to $M$:
- Compute $r_{im} = -\left[\dfrac{\partial L(y_i, F(x_i))}{\partial F(x_i)}\right]_{F(x) = F_{m-1}(x)}$ for $i = 1, \dots, n$.
- Fit a tree to the $r_{im}$ values and create terminal regions $R_{jm}$ for $j = 1, \dots, J_m$.
- For $j = 1, \dots, J_m$, compute $\gamma_{jm} = \underset{\gamma}{\operatorname{argmin}} \sum_{x_i \in R_{jm}} L(y_i, F_{m-1}(x_i) + \gamma)$.
- Update $F_m(x) = F_{m-1}(x) + \nu \sum_{j=1}^{J_m} \gamma_{jm} \, \mathbb{1}(x \in R_{jm})$.
In this context, $x_i$ and $y_i$ represent the explanatory variables and the predictor variable for each data point $i$. We start with a constant initial prediction $F_0(x)$ and generate $M$ trees that fit to the residuals of the previous prediction $F_{m-1}(x)$. (The mechanism for fitting a tree is covered in a previous article on decision trees.)
The trees are chained together by adding the product of the output values $\gamma_{jm}$ of each tree and the learning rate $\nu$. Typically, $M$ is set to around 100, $\nu$ is set to a value between 0 and 1, and the trees are constructed with 8 to 32 leaf nodes. It has been empirically observed that gradient boosting works best when many small trees are combined in this manner.
You might notice how the algorithm is similar to gradient descent, in that we gradually update the model by taking the gradient/residual between the true and predicted values and multiplying it by the learning rate. It turns out that this mechanism also works well with trees.
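To make the loop structure concrete, here is a minimal Python sketch of the generic algorithm. The two callables, negative_gradient and leaf_value, are placeholder names introduced here purely for illustration (they are not from any library); they supply the loss-specific pieces, namely the pseudo-residuals and the per-leaf output value.

```python
# A sketch of the generic gradient boosting loop, mirroring the steps above.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, initial_value, negative_gradient, leaf_value,
                   n_trees=100, learning_rate=0.1, max_leaf_nodes=16):
    F = np.full(len(y), initial_value, dtype=float)      # Step 1: constant F_0(x)
    ensemble = []
    for m in range(n_trees):
        r = negative_gradient(y, F)                      # pseudo-residuals r_im
        tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes).fit(X, r)
        leaves = tree.apply(X)                           # terminal region of each sample
        gamma = {j: leaf_value(r[leaves == j], F[leaves == j])   # gamma_jm per region
                 for j in np.unique(leaves)}
        F += learning_rate * np.array([gamma[j] for j in leaves])  # update F_m(x)
        ensemble.append((tree, gamma))
    return ensemble
```

For squared-error regression, negative_gradient would simply return the residuals and leaf_value their mean; the next sections make these choices explicit.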
Regression
Let’s see how gradient boost works in regression. While the algorithm may seem complex, it is surprisingly simple to apply it to regression problems. First, let’s set the loss function to be half of the squared residuals:
$$L(y_i, F(x)) = \frac{1}{2}\left(y_i - F(x)\right)^2$$
We use half of the squared residuals because the constant 2 will cancel out when taking the derivatives to find the $\gamma$ that minimizes the loss. Now, let’s move to step 1, where we set the initial prediction $F_0(x)$ such that it satisfies the following:
$$F_0(x) = \underset{\gamma}{\operatorname{argmin}} \sum_{i=1}^{n} \frac{1}{2}\left(y_i - \gamma\right)^2$$
We can take the derivative of the above with respect to $\gamma$, set it equal to 0, and solve the equation to get the $\gamma$ that minimizes the sum of losses:
$$\frac{\partial}{\partial \gamma} \sum_{i=1}^{n} \frac{1}{2}\left(y_i - \gamma\right)^2 = -\sum_{i=1}^{n} \left(y_i - \gamma\right) = 0 \implies \gamma = \frac{1}{n}\sum_{i=1}^{n} y_i$$
Hence, the initial prediction is just the average of the predictor variable:
$$F_0(x) = \frac{1}{n}\sum_{i=1}^{n} y_i = \bar{y}$$
Now, we are ready to move to step 2. First, we need to compute the residuals $r_{im}$:
$$r_{im} = -\left[\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)}\right]_{F(x) = F_{m-1}(x)} = y_i - F_{m-1}(x_i)$$
We can fit a new tree to the residuals computed above. Then, for each leaf node, we compute $\gamma_{jm}$ using the following:
$$\gamma_{jm} = \underset{\gamma}{\operatorname{argmin}} \sum_{x_i \in R_{jm}} \frac{1}{2}\left(y_i - F_{m-1}(x_i) - \gamma\right)^2$$
We can take the derivative of the sum of losses with respect to $\gamma$ to find the $\gamma$ that leads to the least sum of losses:
$$\frac{\partial}{\partial \gamma} \sum_{x_i \in R_{jm}} \frac{1}{2}\left(y_i - F_{m-1}(x_i) - \gamma\right)^2 = -\sum_{x_i \in R_{jm}} \left(y_i - F_{m-1}(x_i) - \gamma\right) = 0 \implies \gamma_{jm} = \frac{1}{|R_{jm}|}\sum_{x_i \in R_{jm}} r_{im}$$
From the above, we can see that $\gamma_{jm}$ is always the average residual in $R_{jm}$. We can use these $\gamma_{jm}$s as the output values of the leaf nodes in each tree. Then, the result from the new tree is multiplied by the learning rate $\nu$ to create a new prediction $F_m(x)$:
$$F_m(x) = F_{m-1}(x) + \nu \sum_{j=1}^{J_m} \gamma_{jm} \, \mathbb{1}(x \in R_{jm})$$
Thus, the steps can be rewritten as follows:
Input: Data $\{(x_i, y_i)\}_{i=1}^{n}$ and a differentiable loss function, half of the squared residuals.
Step 1: Initialize model with a constant value $F_0(x)$, the average value of $y$.
Step 2: for $m = 1$ to $M$:
- Compute the pseudo-residuals $r_{im} = y_i - F_{m-1}(x_i)$ for $i = 1, \dots, n$.
- Fit a tree to the $r_{im}$ values and create terminal regions $R_{jm}$ for $j = 1, \dots, J_m$.
- For $j = 1, \dots, J_m$, compute $\gamma_{jm}$ as the average residual in $R_{jm}$.
- Update $F_m(x) = F_{m-1}(x) + \nu \sum_{j=1}^{J_m} \gamma_{jm} \, \mathbb{1}(x \in R_{jm})$.
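Putting these steps together, here is a small from-scratch regression sketch on synthetic data (the data and hyperparameters are arbitrary placeholders). Note that a squared-error DecisionTreeRegressor already stores the mean residual of each leaf as its prediction, so no extra per-leaf computation is needed here.

```python
# Gradient boosting for regression, following the simplified steps above.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)   # toy regression data

n_trees, learning_rate = 100, 0.1
F0 = y.mean()                       # Step 1: initial prediction is the mean of y
F = np.full(len(y), F0)
trees = []
for m in range(n_trees):
    r = y - F                       # pseudo-residuals for half squared error
    tree = DecisionTreeRegressor(max_leaf_nodes=16).fit(X, r)
    F += learning_rate * tree.predict(X)   # leaf outputs are the mean residuals
    trees.append(tree)

def predict(X_new):
    return F0 + learning_rate * sum(tree.predict(X_new) for tree in trees)

print("Training MSE:", np.mean((y - predict(X)) ** 2))
```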
Binary Classification
The strategy for gradient boost in binary classification is similar to that in regression, but there are a few important details to note. Just like in logistic regression, the tree will fit to the log-odds by minimizing the negative log-likelihood:
$$L(y_i, p_i) = -\left[y_i \log(p_i) + (1 - y_i)\log(1 - p_i)\right]$$
Let’s express the negative log-likelihood with respect to $F(x_i) = \log\left(\frac{p_i}{1 - p_i}\right)$, the predicted log-odds, instead of $p_i$:
$$L(y_i, F(x_i)) = -y_i F(x_i) + \log\left(1 + e^{F(x_i)}\right)$$
For step 1, we need to take the derivative of the loss function with respect to $F(x_i)$:
$$\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} = -y_i + \frac{e^{F(x_i)}}{1 + e^{F(x_i)}} = -y_i + p_i$$
Here, $p_i$ is the probability associated with the log-odds $F(x_i)$. Then, we want to minimize the sum of losses as follows:
$$\frac{\partial}{\partial \gamma} \sum_{i=1}^{n} L(y_i, \gamma) = \sum_{i=1}^{n} \left(-y_i + p\right) = 0 \implies p = \frac{1}{n}\sum_{i=1}^{n} y_i, \quad \text{where } p = \frac{e^{\gamma}}{1 + e^{\gamma}}$$
We can see that the probability $p$ that achieves the minimum sum of losses is given by the proportion of data points with class 1. Let’s compute the log-odds $F_0(x)$ from $p$ using the logit function:
$$F_0(x) = \log\left(\frac{p}{1 - p}\right) = \log\left(\frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} (1 - y_i)}\right)$$
The above reveals that the log-odds of the class being 1 is the initial prediction of log-odds, $F_0(x)$, that minimizes the negative log-likelihood loss. For step 2, we need to calculate the pseudo-residuals $r_{im}$:
$$r_{im} = -\left[\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)}\right]_{F(x) = F_{m-1}(x)} = y_i - p_{m-1,i}$$
We can see that we need to fit the tree to the pseudo-residuals between the class $y_i$ and the predicted probability $p_{m-1,i}$ from the previous prediction. Then, we need to compute the output values $\gamma_{jm}$ to update our prediction. The output value needs to be the value that minimizes the sum of losses:
$$\gamma_{jm} = \underset{\gamma}{\operatorname{argmin}} \sum_{x_i \in R_{jm}} \left[-y_i\left(F_{m-1}(x_i) + \gamma\right) + \log\left(1 + e^{F_{m-1}(x_i) + \gamma}\right)\right]$$
However, taking the derivative of the above and solving for $\gamma$ would be extremely hard, so we approximate each loss term with its second-order Taylor polynomial around $F_{m-1}(x_i)$:
$$L(y_i, F_{m-1}(x_i) + \gamma) \approx L(y_i, F_{m-1}(x_i)) + \frac{\partial L(y_i, F_{m-1}(x_i))}{\partial F}\gamma + \frac{1}{2}\frac{\partial^2 L(y_i, F_{m-1}(x_i))}{\partial F^2}\gamma^2$$
(The reason why the second-order Taylor polynomial is a good approximation is beyond the scope of this article. You just need to know that the approximation works by matching the intercept, first-order derivative, and second-order derivative (value, slope, concavity). If you want to learn more about it, I recommend checking out Taylor series | Chapter 11, Essence of calculus from 3Blue1Brown.)
Taking the derivative of the approximation with respect to $\gamma$, setting it equal to zero, and solving for $\gamma$ (for a single observation) gives:
$$\frac{\partial L(y_i, F_{m-1}(x_i))}{\partial F} + \frac{\partial^2 L(y_i, F_{m-1}(x_i))}{\partial F^2}\gamma = 0 \implies \gamma = \frac{y_i - p_{m-1,i}}{\dfrac{\partial^2 L(y_i, F_{m-1}(x_i))}{\partial F^2}}$$
We notice that the numerator is the residual we fitted using the tree. We just need to divide that by the derivative of $-y_i + \frac{e^F}{1 + e^F}$ with respect to $F$. To do so, we can express the last term as a product and use the product rule as follows:
$$\frac{\partial}{\partial F}\left(\frac{e^F}{1 + e^F}\right) = \frac{\partial}{\partial F}\left(e^F \left(1 + e^F\right)^{-1}\right) = \frac{e^F}{1 + e^F} - \left(\frac{e^F}{1 + e^F}\right)^2 = p(1 - p)$$
The denominator turns out to be the product of the previously predicted probability $p_{m-1,i}$ and $1 - p_{m-1,i}$. You can carry out more math and find that when taking the derivative of the approximated sum over a region, we get the sum of residuals divided by the sum of the products $p_{m-1,i}(1 - p_{m-1,i})$. Thus, the value of $\gamma_{jm}$ that minimizes the sum of losses for the region $R_{jm}$ is:
$$\gamma_{jm} = \frac{\sum_{x_i \in R_{jm}} \left(y_i - p_{m-1,i}\right)}{\sum_{x_i \in R_{jm}} p_{m-1,i}\left(1 - p_{m-1,i}\right)}$$
We can compute the above output value for each region and update the prediction $F_m(x)$ with the learning rate $\nu$:
$$F_m(x) = F_{m-1}(x) + \nu \sum_{j=1}^{J_m} \gamma_{jm} \, \mathbb{1}(x \in R_{jm})$$
From the above, the steps can be rewritten as follows:
Input: Data $\{(x_i, y_i)\}_{i=1}^{n}$ and a differentiable loss function, the negative log-likelihood.
Step 1: Initialize model with a constant value $F_0(x)$, the log-odds of $y$ being 1.
Step 2: for $m = 1$ to $M$:
- Compute the pseudo-residuals $r_{im} = y_i - p_{m-1,i}$ for $i = 1, \dots, n$.
- Fit a tree to the $r_{im}$ values and create terminal regions $R_{jm}$ for $j = 1, \dots, J_m$.
- For $j = 1, \dots, J_m$, compute $\gamma_{jm}$ as the sum of residuals divided by the sum of $p_{m-1,i}(1 - p_{m-1,i})$ in $R_{jm}$.
- Update $F_m(x) = F_{m-1}(x) + \nu \sum_{j=1}^{J_m} \gamma_{jm} \, \mathbb{1}(x \in R_{jm})$.
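Here is the corresponding from-scratch sketch for binary classification, again on placeholder data. Because the leaf output values are no longer plain averages, each leaf's value is computed explicitly as the sum of residuals divided by the sum of $p(1 - p)$ within that leaf.

```python
# Gradient boosting for binary classification, following the steps above.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # toy binary labels

n_trees, learning_rate = 100, 0.1
F = np.full(len(y), np.log(y.mean() / (1 - y.mean())))   # Step 1: initial log-odds
ensemble = []
for m in range(n_trees):
    p = sigmoid(F)                                 # probabilities from current log-odds
    r = y - p                                      # pseudo-residuals
    tree = DecisionTreeRegressor(max_leaf_nodes=16).fit(X, r)
    leaves = tree.apply(X)
    gamma = {j: r[leaves == j].sum() / (p[leaves == j] * (1 - p[leaves == j])).sum()
             for j in np.unique(leaves)}           # gamma_jm = sum(r) / sum(p(1-p))
    F += learning_rate * np.array([gamma[j] for j in leaves])   # update log-odds
    ensemble.append((tree, gamma))

print("Training accuracy:", ((sigmoid(F) > 0.5) == y).mean())
```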
The logic behind gradient boosting for multi-class classification is similar to what we saw above. We use the softmax function instead of the logistic function to map the log-odds to probabilities and fit the trees to the log-odds using the same methods. As the article is getting quite lengthy, I will leave multi-class classification and the code implementation to you. You can use GradientBoostingRegressor and GradientBoostingClassifier from the sklearn library to apply the above techniques.
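For reference, a minimal usage sketch of the scikit-learn estimators might look like the following; the datasets and hyperparameters are placeholders.

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Regression
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_leaf_nodes=16)
reg.fit(X_train, y_train)
print("R^2:", reg.score(X_test, y_test))

# Binary classification
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_leaf_nodes=16)
clf.fit(X_train, y_train)
print("Accuracy:", clf.score(X_test, y_test))
```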
Pseudo-Residual for Binary Classification
When we computed the derivative of the approximation of the loss function with respect to $\gamma$, we arrived at the residuals divided by the product of the previously predicted probability and 1 minus the previously predicted probability. It makes sense mathematically, but how does it impact the output values computed from the pseudo-residuals? To understand the impact, let’s plot the function $p(1 - p)$.
As you can see from the plot of $p(1 - p)$, it has a nice curve that results in 0 when $p = 0$ or $p = 1$, and has a maximum at $p = 0.5$. Since it is in the denominator, the output values will be higher when the previous prediction was close to extreme values like 0 or 1, and relatively smaller when the previous prediction was around 0.5. This aligns with how we want the model to learn from the data: we want it to adjust quickly when the previous prediction was extreme but on the wrong side, while being cautious about moving toward extreme predictions when the current prediction is around 0.5, in order to avoid overfitting. Thus, it makes sense to divide the residual by $p(1 - p)$ conceptually as well.
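If you want to reproduce the curve yourself, a quick matplotlib sketch is enough:

```python
import numpy as np
import matplotlib.pyplot as plt

p = np.linspace(0, 1, 200)
plt.plot(p, p * (1 - p))                 # denominator of the output value
plt.xlabel("previously predicted probability p")
plt.ylabel("p(1 - p)")
plt.show()
```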
Resources
- StatQuest with Josh Starmer. 2019. Gradient Boost Part 2 (of 4): Regression Details. YouTube.
- StatQuest with Josh Starmer. 2019. Gradient Boost Part 4 (of 4): Classification Details. YouTube.