Decision Tree Review

A decision tree can be used to predict either categorical or continuous variables. The tree is built by recursively splitting the data into successively purer subsets. The purity of a node measures how “homogeneous” the node is with respect to the target variable. These splits are made based on some condition:

  • Measurements of purity - Gini, entropy, misclassification error rate

  • \(\chi^2\) tests (CHAID)

Most trees use binary splits (only two branches are allowed each time a variable is chosen), but some algorithms allow more. Every possible split is evaluated with one of the metrics above and the best split is chosen. This process is repeated until no more splits meet the criteria.
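
As a quick illustration of one purity measure, here is a small R sketch (with hypothetical class counts chosen just for the example) that computes the Gini index for a node before and after a candidate split:

Code
# Gini impurity of a node: sum over classes of p * (1 - p)
gini <- function(y) {
  p <- prop.table(table(y))
  sum(p * (1 - p))
}

# Hypothetical parent node with 7 zeros and 3 ones
parent <- c(rep(0, 7), rep(1, 3))
gini(parent)                                # 0.42

# Weighted impurity after a candidate split into two pure child nodes
left  <- rep(0, 7)
right <- rep(1, 3)
(7 * gini(left) + 3 * gini(right)) / 10     # 0 - a perfectly pure split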

Decision Tree Example

Decision trees are great because they are very interpretable as you work down through the tree. However, if you are willing to forgo some of the interpretability, these trees can be ensembled to make better predictions.

This section will detail two common ensembling techniques - bagging and boosting - and the algorithms that derive from them.

Bagging

Before understanding the concept of bagging, we need to know what a bootstrap sample is. A bootstrap sample is a random sample of your data, drawn with replacement, that is the same size as your original data set. By random chance, some of the observations will not be sampled; these observations are called out-of-bag observations. Three example bootstrap samples of 10 observations labelled 1 through 10 are the following:

Bootstrap Sample Examples

Mathematically, a bootstrap sample will contain approximately 63% of the unique observations in the data set (the probability that any one observation is never drawn is roughly \(e^{-1} \approx 0.37\)). The sample size of the bootstrap sample is the same as the original data set; some observations are just repeated. Bootstrap samples are used in simulation to help develop confidence intervals for complicated forecasting techniques. Bootstrap samples are also used to ensemble models built on different training data sets - called bagging.
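
A quick simulation makes the 63% figure concrete. This is a minimal R sketch; the sample size of 10,000 is arbitrary:

Code
set.seed(12345)
n <- 10000
# Draw one bootstrap sample: n row indices sampled with replacement
boot_idx <- sample(1:n, size = n, replace = TRUE)

# Proportion of unique observations that made it into the bootstrap sample
length(unique(boot_idx)) / n      # roughly 0.632

# Proportion of out-of-bag observations (never drawn)
mean(!(1:n %in% boot_idx))        # roughly 0.368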

Bootstrap aggregating (bagging) takes k bootstrap samples and creates a model using each bootstrap sample as its training data set. This builds k different models, which we then ensemble together.

Let’s work through an example to see the potential value. The following 10 observations have the values of X and Y shown in the following table:

  X:  1  2  3  4  5  6  7  8  9  10
  Y:  1  1  1  0  0  0  0  1  1  1

We will build a decision stump (a decision tree with only one split). From building a decision stump, the best accuracy is obtained by making the split at either 3.5 or 7.5. Either of these splits would lead to an accuracy of predicting Y at 70%. For example, let’s use the split at 3.5. That would mean we think everything above 3.5 is a 0 and everything below 3.5 is a 1. We would get observations 1, 2, and 3 all correct. However, since everything above 3.5 is considered a 0, we would only get observations 4, 5, 6, and 7 correct on that half - missing observations 8, 9, and 10.

To try and make this prediction better we will do the following:

  1. Take 10 bootstrap samples
  2. Build a decision stump for each
  3. Aggregate these rules into a voting ensemble
  4. Test the performance of the voting ensemble on the whole dataset

The following are the 10 bootstrap samples with their optimal splits. Remember, these bootstrap samples will not contain all of the observations in the original data set.

Some of these samples contain only one value of the target variable and so the predictions are the same for that bootstrap sample. However, we will use the optimal cut-offs from each of those samples to predict 1’s and 0’s for the original data set as shown in the table below:

Summary of Bootstrap Sample Predictions on Training Data

Let’s take a cut-off of 0.25 on the averaged 1 and 0 predictions from the bootstrap samples. That would mean anything above 0.25 in the average row is predicted as a 1, while everything below is predicted as a 0. Based on those predictions (the Pred. row above) we get a perfect prediction compared to the true values of Y from our original data set.

In summary, bagging improves generalization error for models with high variance (simple tree-based models, for example). If the base classifier is stable (not suffering from high variance), then bagging can actually make predictions worse! Bagging does not focus on particular observations in the training data set (unlike boosting, which is covered below).
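
Before moving on, here is a minimal R sketch of the bagging-by-hand exercise above. It assumes the rpart package for the decision stumps and the X/Y table from earlier; the 0.25 cut-off matches the one used in the text, though the exact ensemble accuracy will depend on the random seed:

Code
library(rpart)

set.seed(12345)
# Toy data from the example: X = 1 through 10, Y = 1 1 1 0 0 0 0 1 1 1
toy <- data.frame(X = 1:10, Y = factor(c(1, 1, 1, 0, 0, 0, 0, 1, 1, 1)))

k <- 10
preds <- matrix(NA, nrow = nrow(toy), ncol = k)

for (i in 1:k) {
  # 1. draw a bootstrap sample (same size as the data, with replacement)
  boot <- toy[sample(nrow(toy), replace = TRUE), ]
  # 2. fit a decision stump (one split only)
  stump <- rpart(Y ~ X, data = boot,
                 control = rpart.control(maxdepth = 1, minsplit = 2, cp = 0))
  # 3. score the stump on the ORIGINAL data
  preds[, i] <- as.numeric(as.character(predict(stump, toy, type = "class")))
}

# 4. aggregate the votes and apply the 0.25 cut-off
avg  <- rowMeans(preds)
pred <- as.numeric(avg > 0.25)
mean(pred == as.numeric(as.character(toy$Y)))   # ensemble accuracy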

Random Forests

Random forests are ensembles of decision trees (similar to the bagging example from above). Ensembles of decision trees work best when they find different patterns in the data. However, bagging tends to create trees that pick up the same pattern.

Random forests get around this correlation between trees by not only using bootstrapped samples, but also using random subsets of the variables for each split and growing unpruned decision trees in each ensemble. For these unpruned trees, a classification tree is grown all the way until each leaf contains a single observation, while a regression tree continues until 5 or fewer observations are left per leaf. The results from all of these trees are ensembled together into one voting system. There are many choices of parameters to tune:

  1. Number of trees in the ensemble
  2. Number of variables for each split
  3. Depth of the tree (defaults to unpruned)

Building a Random Forest

Let’s see random forests in each of our softwares!
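
As one illustration in R, here is a hedged sketch using the randomForest package. It assumes the training.df data frame and Sale_Price target used later in this section; the number of trees here is arbitrary:

Code
library(randomForest)

set.seed(12345)
# Random forest on the Ames training data (Sale_Price as the target)
rf.ames <- randomForest(Sale_Price ~ ., data = training.df,
                        ntree = 250, importance = TRUE)

rf.ames                            # out-of-bag error summary
varImpPlot(rf.ames, n.var = 10)    # top 10 variables by importance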

Tuning a Random Forest

There are a few things we can tune with a random forest. One is the number of trees used to build the model. Another is the number of variables considered for each split - called mtry. By default, \(mtry = \sqrt{p}\) for classification (and \(p/3\) for regression in the randomForest package), with \(p\) being the total number of variables. We can use cross-validation to tune these values.

Let’s see how to do this in each of our softwares!
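
As one R illustration, tuneRF() from the randomForest package searches over mtry using the out-of-bag error (a convenient stand-in for a full cross-validated search, which could also be done with a package like caret). The training.df and Sale_Price names are again assumed:

Code
set.seed(12345)
# Search over mtry, judging each value by the out-of-bag error
tuneRF(x = subset(training.df, select = -Sale_Price),
       y = training.df$Sale_Price,
       ntreeTry   = 250,
       stepFactor = 1.5,   # multiply/divide mtry by this factor at each step
       improve    = 0.01,  # keep searching while OOB error improves by at least 1%
       plot = TRUE)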

Another thing to tune would be which variables to include in the random forest. By default, random forests use all the variables since they are averaged across all the trees used to build the model. There are a couple of ways to perform variable selection for random forests:

  • Many permutations of including/excluding variables

  • Compare variables to random variable

The first approach is rather straightforward but time consuming because you have to build models over and over again, taking a variable out each time. The second approach is much easier to try: we create a completely random variable and put it in the model, then look at the variable importance of all the variables. The variables that rank below the random variable should be considered for removal.

Let’s see this in each of our softwares!
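
As a sketch of the second approach in R (assuming the same training.df and randomForest setup as above), we add a purely random column and compare the permutation importance of every variable against it:

Code
set.seed(12345)
# Add a completely random predictor to the training data
training.rand <- training.df
training.rand$random <- rnorm(nrow(training.rand))

rf.rand <- randomForest(Sale_Price ~ ., data = training.rand,
                        ntree = 250, importance = TRUE)

# Variables ranked below the random column are candidates for removal
importance(rf.rand, type = 1)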

“Interpretability”

Most machine learning models are not interpretable in the classical sense - as one predictor variable increases, the target variable tends to ___. This is because the relationships are not linear. The relationships are more complicated than a linear relationship, so the interpretations are as well. Similar to GAMs, we can get a general idea of an overall pattern for a predictor variable compared to the target variable - partial dependence plots.

These plots will be covered in much more detail in the final section of this code deck under Model Agnostic Interpretability. However, let’s see this in action in R from the options in the randomForest and pdp packages.

Code
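# Partial dependence of predicted Sale_Price on Garage_Area (randomForest::partialPlot)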
partialPlot(rf.ames, training.df, Garage_Area)

This nonlinear and complex relationship between Garage_Area and Sale_Price is similar to the plot we saw earlier with the GAMs. This shouldn’t be too surprising. Both sets of algorithms are trying to relate these two variables together, just in different ways.

Summary

In summary, random forest models are good models to use for prediction, but explanation becomes more difficult and complex. Some of the advantages of using random forests:

  • Computationally fast (handles thousands of variables)

  • Trees trained simultaneously

  • Accurate classification model

  • Variable importance available

  • Missing data is OK

There are some disadvantages though:

  • “Interpretability” is a little different than the classical sense

  • There are parameters that need to be tuned to make the model the best

Boosting

Boosting is similar to bagging in the sense that we draw samples of observations from a data set with replacement. However, unlike bagging, observations are not all equally likely to be sampled. Boosting assigns a weight to each training observation and uses the weights as a sampling distribution - observations with higher weights are more likely to be drawn in the next sample. These weights are changed adaptively in each round: the weights for observations that are harder to classify are increased for the next sample, raising the chance that they are selected. An example of the difference in weights for a boosted sample as compared to a bagging sample is the following:

With bagging we are only trying to create variability between the models through training data set variation, and the models in the ensemble are built independently of one another (they can be built simultaneously). In boosting, however, the observations with higher sampling probability are the ones that were harder to predict accurately - we want to improve the predictions from the model sequentially.
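
A small R sketch shows how these weights act as a sampling distribution; the three-fold up-weighting of the misclassified observations is purely illustrative:

Code
set.seed(12345)
n <- 10

# Round 1: equal weights, so this behaves like an ordinary bootstrap sample
w1 <- rep(1 / n, n)
sample(1:n, size = n, replace = TRUE, prob = w1)

# Round 2: suppose observations 1-3 were misclassified, so they are up-weighted
w2 <- c(rep(3, 3), rep(1, 7))
w2 <- w2 / sum(w2)                 # normalize into a sampling distribution
sample(1:n, size = n, replace = TRUE, prob = w2)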

Let’s use the same example we used in bagging - the 10 observations of X and Y from the table above. As before, the best decision stump (a decision tree with only one split) makes its split at either 3.5 or 7.5, and either of these splits predicts Y with an accuracy of 70%.

To try and make this prediction better we will do the following:

  1. Take 3 bootstrap samples

  2. Build a decision stump for each

  3. At each step we will adjust the weights based on the previous step’s errors

The following are the first 3 bootstrap samples with their optimal splits. Remember, these bootstrap samples will not contain all of the observations in the original data set.

Boosting Samples with Weights

The first sample has the same weight for each observation. In that bootstrap sample the best split occurs at 7.5. However, when predicting the original observations, we would incorrectly predict observations 1, 2, and 3. Therefore, in the second sample we will overweight observations 1, 2, and 3 to help us predict those observations better (correct for the previous mistakes). In the second bootstrap sample we have observations 1, 2, and 3 over-represented by design. However, this leads to incorrect predictions in observations 4 through 7. This will lead to us over-weighting those observations as seen in the third row of the weights table. Observations 8, 9, and 10 have always been correctly predicted so their weight is the smallest, while the other observations have higher weights.

The original boosted sample ensemble was called AdaBoost. Unlike bagging, boosted ensembles usually weight the votes of each classifier by a function of their accuracy. If the classifier gets the higher weighted observations wrong, it has a higher error rate. More accurate classifiers get a higher weight in the prediction. In simple terms, more accurate guesses are more important. We will not detail this algorithm here as there are more common advancements that have come since.

Gradient Boosting

Some more recent algorithms are moving away from the direct boosting approach applied to bootstrap samples. However, the main idea of these algorithms is still rooted in the notion of finding where you made mistakes and trying to improve on those mistakes.

One of the original algorithms to go with this approach is the gradient boosted machine (GBM). The idea behind the GBM is to build a simple model to predict the target (much like a decision tree or even decision stump):

\[ y = f_1(x) + \varepsilon_1 \]

That simple model has an error of \(\varepsilon_1\). The next step is to try to predict that error with another simple model:

\[ \varepsilon_1 = f_2(x) + \varepsilon_2 \]

This model has an error of \(\varepsilon_2\). We can continue to add model after model, each one predicting the error (residuals) from the previous round:

\[ y = f_1(x) + f_2(x) + \cdots + f_k(x) + \varepsilon_k \]

We could continue this process until the error is extremely small, but that would lead to over-fitting. To protect against over-fitting, gradient boosting builds in regularization through parameters to tune. One of those parameters is \(\eta\), the weight applied to the error models:

\[ y = f_1(x) + \eta \times f_2(x) + \eta \times f_3(x) + \cdots + \eta \times f_k(x) + \varepsilon_k \]

Smaller values of \(\eta\) lead to less over-fitting. Other things to tune include the number of trees used in the prediction (more trees typically lead to more over-fitting) and other regularization parameters added over the years - \(\lambda\) (an \(L_2\) penalty on the leaf weights), \(\gamma\), etc.

Gradient boosting as defined above yields an additive ensemble model. There is no voting or averaging of individual models as the models are built sequentially, not simultaneously. The predictions from these models are added together to get the final prediction. One of the big keys to gradient boosting is using weak learners. Weak learners are simple models. Shallow decision / regression trees are the best. Each of these models would make poor predictions on their own, but the additive fashion of the ensemble provides really good predictions.
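
To make the additive, residual-fitting idea concrete, here is a minimal R sketch of gradient boosting by hand with a squared error loss, using shallow rpart trees as the weak learners on simulated data:

Code
library(rpart)

set.seed(12345)
# Simulated data just to illustrate fitting the residuals sequentially
n <- 200
x <- runif(n, 0, 10)
y <- sin(x) + rnorm(n, sd = 0.3)

eta  <- 0.3                  # learning rate (weight applied to each error model)
k    <- 50                   # number of weak learners
pred <- rep(mean(y), n)      # start from a constant prediction

for (i in 1:k) {
  res <- y - pred            # current errors (residuals)
  # weak learner: a shallow regression tree fit to the residuals
  fit  <- rpart(res ~ x, data = data.frame(x = x, res = res),
                control = rpart.control(maxdepth = 2, cp = 0))
  pred <- pred + eta * predict(fit, data.frame(x = x))
}

mean((y - pred)^2)           # training error shrinks as more trees are added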

These models are optimized with respect to some loss function. For example, linear regression minimizes the sum of squared errors - its loss function. The sum of squared errors across all predictions aggregates into a single number - the loss of the whole model. Gradient boosting is much the same, but it is not limited to the sum of squared errors as its loss function.

How does the gradient boosting model find this optimal loss function value? It uses gradient descent. Gradient descent is a method that iteratively updates parameters in order to minimize a loss function (the sum of squared errors for example) by moving in the direction of the steepest descent as seen in the figure below:

Gradient Descent on Loss Function

Gradient descent minimizes the loss function by taking iteratively smaller steps until it finds the minimum. The step size is controlled by the learning rate and is updated at each step to help with the minimization. Without the learning rate, we might take steps that are too big and miss the minimum, or too small and take a long time to optimize.
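
A minimal R sketch of gradient descent on a simple convex loss shows the role of the learning rate:

Code
# Gradient descent on f(b) = (b - 3)^2, whose gradient is 2 * (b - 3)
grad <- function(b) 2 * (b - 3)

b  <- 10       # starting value
lr <- 0.1      # learning rate (controls the step size)
for (i in 1:100) {
  b <- b - lr * grad(b)   # step in the direction of steepest descent
}
b                         # converges toward the minimizer at b = 3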

Not all functions are convex - some have local minimums or plateaus that make finding the global minimum difficult. Stochastic gradient descent attempts to solve this problem by randomly sampling a fraction of the training observations for each tree in the ensemble - not all trees contain all of the observations. This makes the algorithm faster and more reliable, but it may not always find the true overall minimum.

Grid search for these algorithms is very time consuming because of the time it takes to build these models, since they are built sequentially. To speed up this process, we typically set the tuning parameters to their defaults and optimize the number of trees in the GBM. We then fix the tree count and start to tune other parameters. Then we go back and iteratively tune back and forth until we find an optimal solution.

There are many different adaptations to the gradient boosting approach:

  • XGBoost

  • LightGBM

  • CatBoost

  • etc.

XGBoost

Extreme gradient boosting (XGBoost) is one of the most popular versions of these algorithms. XGBoost has some great advantages over the traditional GBM:

  1. Additional regularization parameters to prevent over-fitting (at the cost of more things to tune).
  2. Settings to stop model assessment when adding more trees isn’t helpful (early stopping).
  3. Supports parallel processing within the construction of each tree, even though the trees themselves are still added sequentially.
  4. Variety of loss functions.
  5. Allows generalized linear models as well as tree-based models (all still weak learners though).
  6. Implemented in R, Python, Julia, Scala, Java, C++.

Let’s see how to build this in each of our softwares!
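
As one R illustration (a sketch of one reasonable workflow, not the only one), xgb.cv() can pick the number of rounds before the final fit. It assumes the same train_x matrix and train_y vector used in the code later in this section, and uses reg:squarederror to avoid the deprecation warning shown in that output:

Code
library(xgboost)

set.seed(12345)
# Cross-validate to find a reasonable number of boosting rounds
cv.ames <- xgb.cv(data = train_x, label = train_y,
                  nrounds = 100, nfold = 10,
                  eta = 0.25, max_depth = 5,
                  objective = "reg:squarederror",
                  early_stopping_rounds = 10, verbose = 0)

cv.ames$best_iteration    # rounds with the lowest cross-validated error

# Refit the final model with that number of rounds
xgb.ames <- xgboost(data = train_x, label = train_y,
                    nrounds = cv.ames$best_iteration,
                    eta = 0.25, max_depth = 5,
                    objective = "reg:squarederror", verbose = 0)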

Variable Importance and Selection

XGBoost provides three built-in measures of variable importance:

  1. Gain - the improvement in accuracy (reduction in loss) brought by a variable’s splits; the closest equivalent to random forest variable importance
  2. Coverage - measures the relative number of observations influenced by the variable
  3. Frequency - percentage of splits in the whole ensemble that use this variable

Let’s see this in each of our softwares!
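
In R, a hedged sketch using the xgboost helper functions (assuming the xgb.ames model and train_x matrix from this section):

Code
# Built-in variable importance (Gain, Cover, Frequency) from the fitted model
var.imp <- xgb.importance(feature_names = colnames(train_x), model = xgb.ames)
head(var.imp, 10)

# Plot the top variables by Gain
xgb.plot.importance(var.imp, top_n = 10)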

Another thing to tune would be which variables to include in the XGBoost model. By default, XGBoost uses all the variables since they are aggregated across all the trees used to build the model. There are a couple of ways to perform variable selection for XGBoost:

  • Many permutations of including/excluding variables

  • Compare variables to random variable

The first approach is rather straightforward but time consuming because you have to build models over and over again, taking a variable out each time. The second approach is much easier to try: we create a completely random variable and put it in the model, then look at the variable importance of all the variables. The variables that rank below the random variable should be considered for removal.

Let’s see this in each of our softwares!
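
A sketch of the random-variable check in R, reusing the xgboost settings from the code later in this section (with reg:squarederror in place of the deprecated reg:linear); the random column is purely illustrative:

Code
set.seed(12345)
# Add a purely random column to the training matrix and refit
train_rand <- cbind(train_x, random = rnorm(nrow(train_x)))

xgb.rand <- xgboost(data = train_rand, label = train_y,
                    nrounds = 24, eta = 0.25, max_depth = 5,
                    objective = "reg:squarederror", verbose = 0)

# Variables ranked below the "random" feature are candidates for removal
xgb.importance(feature_names = colnames(train_rand), model = xgb.rand)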

“Interpretability”

Most machine learning models are not interpretable in the classical sense - as one predictor variable increases, the target variable tends to ___. This is because the relationships are not linear. The relationships are more complicated than a linear relationship, so the interpretations are as well. Similar to random forests we can get a general idea of an overall pattern for a predictor variable compared to a target variable - partial dependence plots.

These plots will be covered in much more detail in the final section of this code deck under Model Agnostic Interpretability. However, let’s see this in action in R from the options in the xgboost and pdp packages.

Code
library(pdp)

xgb.ames <- xgboost(data = train_x, label = train_y, subsample = 1, nrounds = 24, eta = 0.25, max_depth = 5, objective = "reg:linear")
[21:07:02] WARNING: src/objective/regression_obj.cu:213: reg:linear is now deprecated in favor of reg:squarederror.
[1] train-rmse:150443.019681 
[2] train-rmse:115267.504962 
[3] train-rmse:89055.111389 
[4] train-rmse:69561.730180 
[5] train-rmse:55220.286080 
[6] train-rmse:44740.341203 
[7] train-rmse:36980.683350 
[8] train-rmse:31451.655598 
[9] train-rmse:27650.915529 
[10]    train-rmse:24945.269999 
[11]    train-rmse:23211.562386 
[12]    train-rmse:22012.817481 
[13]    train-rmse:21221.348708 
[14]    train-rmse:20602.110276 
[15]    train-rmse:20145.482316 
[16]    train-rmse:19789.451818 
[17]    train-rmse:19549.927138 
[18]    train-rmse:19182.487244 
[19]    train-rmse:19050.917338 
[20]    train-rmse:18579.468210 
[21]    train-rmse:18479.830197 
[22]    train-rmse:18164.219167 
[23]    train-rmse:18008.414679 
[24]    train-rmse:17887.039967 
Code
pdp::partial(xgb.ames, pred.var = "Garage_Area", 
        plot = TRUE, rug = TRUE, alpha = 0.1, plot.engine = "lattice", 
        train = train_x)

This nonlinear and complex relationship between Garage_Area and Sale_Price is similar to the plot we saw earlier with the GAMs and random forests. This shouldn’t be too surprising. All of these algorithms are trying to relate these two variables together, just in different ways.

Summary

In summary, XGBoost models are good models to use for prediction, but explanation becomes more difficult and complex. Some of the advantages of using XGBoost models:

  • Very accurate

  • Tend to outperform random forests when properly trained and tuned

  • Variable importance provided

There are some disadvantages though:

  • “Interpretability” is a little different than the classical sense

  • Computationally slower than random forests due to sequentially building trees

  • More tuning parameters than random forests

  • Harder to optimize

  • More sensitive to tuning of parameters