Decision Tree Review

A decision tree can be used to predict either categorical or continuous variables. The tree is built by recursively splitting the data into successively purer subsets. The purity of a node measures how “homogeneous” the node is with respect to the target variable. These splits are chosen based on some criterion:

  • Measurements of purity - Gini, entropy, misclassification error rate

  • \(\chi^2\) tests (CHAID)

Most trees use binary splits (only two branches each time a variable is split), but some algorithms allow multi-way splits. Every possible split is evaluated with one of the metrics above and the best split is chosen. This process is repeated until no more splits meet the splitting criteria.
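As a quick illustration of the first type of criterion (a standard formula, not one specific to these notes), the Gini index of a node whose two classes have proportions \(p\) and \(1 - p\) is

\[ Gini = 1 - p^2 - (1 - p)^2 \]

A pure node (\(p = 0\) or \(p = 1\)) has a Gini index of 0, while a node with \(p = 0.7\) has \(1 - 0.49 - 0.09 = 0.42\). A candidate split is scored by the weighted average of its child nodes’ Gini indices, and the split with the largest drop in impurity is chosen.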

Decision Tree Example

Decision trees are great because they are very interpretable as you work down through the tree. However, if you are willing to forgo some of the interpretability, these trees can be ensembled to make better predictions.

This section will detail two common ensembling techniques - boosting and bagging - and the algorithms that derive from them.

Bagging

Before understanding the concept of bagging, we need to know what a bootstrap sample is. A bootstrap sample is a random sample of your data, taken with replacement, that is the same size as your original data set. By random chance, some of the observations will not be sampled; these observations are called out-of-bag observations. Three example bootstrap samples of 10 observations labelled 1 through 10 are listed below:

Bootstrap Sample Examples

Mathematically, a bootstrap sample will contain approximately 63% of the unique observations in the data set. The sample size of the bootstrap sample is the same as the original data set; some observations are simply repeated. Bootstrap samples are used in simulation to help develop confidence intervals for complicated forecasting techniques. Bootstrap samples are also used to ensemble models built on different training data sets - called bagging.
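The roughly 63% figure comes from a short calculation: the chance that any particular observation is never drawn in \(n\) draws with replacement is

\[ \left(1 - \frac{1}{n}\right)^n \approx e^{-1} \approx 0.368 \]

so about 63.2% of the original observations appear in a given bootstrap sample, and the remaining 36.8% or so are the out-of-bag observations.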

Bootstrap aggregating (bagging) is where you take k bootstrap samples and create a model using each of the bootstrap samples as the training data set. This will build k different models. We will ensemble these k different models together.

Let’s work through an example to see the potential value. The following 10 observations have the values of X and Y shown in the following table:

We will build a decision stump (a decision tree with only one split). The best accuracy from a single stump is obtained by splitting at either 3.5 or 7.5; either split predicts Y with 70% accuracy. For example, let’s use the split at 3.5. That means we predict everything below 3.5 as a 1 and everything above 3.5 as a 0. We would get observations 1, 2, and 3 correct. However, since everything above 3.5 is predicted as a 0, we would only get observations 4, 5, 6, and 7 correct on that side - missing observations 8, 9, and 10.

To try to make this prediction better we will do the following (a rough code sketch follows the list):

  1. Take 10 bootstrap samples
  2. Build a decision stump for each
  3. Aggregate these rules into a voting ensemble
  4. Test the performance of the voting ensemble on the whole dataset
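A hedged sketch of these steps in R: the values of Y are reconstructed from the walkthrough above, rpart is used as the stump builder, and the 0.25 cut-off matches the one discussed later. Treat it as an illustration rather than the exact code behind the tables that follow.

Code
library(rpart)

set.seed(1234)
x <- 1:10
y <- c(1, 1, 1, 0, 0, 0, 0, 1, 1, 1)   # target values implied by the walkthrough above

# build one decision stump per bootstrap sample and save its predictions on the full data
preds <- matrix(NA, nrow = 10, ncol = 10)
for (b in 1:10) {
  idx <- sample(1:10, size = 10, replace = TRUE)       # bootstrap sample (with replacement)
  stump <- rpart(y ~ x, data = data.frame(x = x[idx], y = y[idx]),
                 control = rpart.control(maxdepth = 1, minsplit = 2, minbucket = 1, cp = 0))
  preds[b, ] <- as.numeric(predict(stump, data.frame(x = x)) > 0.5)   # 1/0 prediction from the stump
}

# aggregate the stumps into a voting ensemble and score it on the whole data set
avg  <- colMeans(preds)
vote <- ifelse(avg > 0.25, 1, 0)
mean(vote == y)                          # accuracy of the ensemble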

The following are the 10 bootstrap samples with their optimal splits. Remember, these bootstrap samples will not contain all of the observations in the original data set.

Some of these samples contain only one value of the target variable and so the predictions are the same for that bootstrap sample. However, we will use the optimal cut-offs from each of those samples to predict 1’s and 0’s for the original data set as shown in the table below:

Summary of Bootstrap Sample Predictions on Training Data

The table above has one row for each of the predictions from the 10 bootstrap samples. We can average these predictions of 1’s and 0’s together for each observation to get the average row near the bottom.

Let’s apply a cut-off of 0.25 to the averaged predictions from the bootstrap samples. That means anything above 0.25 in the average row is predicted as a 1, while everything below is predicted as a 0. Based on those predictions (the Pred. row above) we get a perfect prediction compared to the true values of Y from our original data set.

In summary, bagging improves generalization error for models with high variance (simple tree-based models, for example). If the base classifier is stable (not suffering from high variance), then bagging can actually make predictions worse! Bagging does not focus on particular observations in the training data set (unlike boosting, which is covered below).

Random Forests

Random forests are ensembles of decision trees (similar to the bagging example from above). Ensembles of decision trees work best when they find different patterns in the data. However, bagging tends to create trees that pick up the same pattern.

Random forests get around this correlation between trees by not only using bootstrap samples, but also using random subsets of variables for each split and growing unpruned decision trees. For these unpruned trees, with a classification target the tree is grown until each leaf is pure (potentially a single observation per leaf). With a regression target the unpruned trees continue until about 5 observations are left per leaf. The results from all of these trees are ensembled together into one voting system. There are many choices of parameters to tune:

  1. Number of trees in the ensemble
  2. Number of variables for each split
  3. Depth of the tree (defaults to unpruned)

Building Random Forest

Let’s see random forests in each of our softwares!
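As one hedged example in R (the object names rf.ames, training.df, and Sale_Price mirror the objects used later in this section; the ntree value is just an illustrative choice, not the course’s exact settings):

Code
library(randomForest)

set.seed(12345)
# random forest on the Ames training data; importance = TRUE stores variable importance
rf.ames <- randomForest(Sale_Price ~ ., data = training.df, ntree = 500, importance = TRUE)
rf.ames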

Tuning Random Forest

There are a few things we can tune with a random forest. One is the number of trees used to build the model. Another is the number of variables considered at each split - called mtry. By default, \(mtry = \sqrt{p}\), with \(p\) being the total number of variables (for regression forests, some implementations default to \(p/3\) instead). We can use cross-validation to tune these values.

Let’s see how to do this in each of our softwares!
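One hedged way to do this in R is with the caret package, searching over a small grid of mtry values with 10-fold cross-validation (the grid, fold count, and tree count are illustrative choices):

Code
library(caret)

set.seed(12345)
tune_grid <- expand.grid(mtry = c(2, 4, 6, 8, 10))   # candidate numbers of variables per split

rf_cv <- train(Sale_Price ~ ., data = training.df,
               method = "rf",
               tuneGrid = tune_grid,
               ntree = 250,
               trControl = trainControl(method = "cv", number = 10))
rf_cv$bestTune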

Another thing to tune would be which variables to include in the random forest. By default, random forests use all the variables, since each variable’s effect is averaged across all the trees used to build the model. There are a couple of ways to perform variable selection for random forests:

  • Many permutations of including/excluding variables

  • Compare variables to random variable

The first approach is rather straightforward but time consuming because you have to build models over and over again, taking a variable out each time. The second approach is much easier. In the second approach we create a completely random variable and put it in the model, then look at the variable importance of all the variables. Variables whose importance falls below that of the random variable should be considered for removal.

Let’s see this in each of our softwares!
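As one hedged sketch of the second approach in R (again reusing training.df and Sale_Price; the column name random is an arbitrary choice):

Code
library(randomForest)

set.seed(12345)
training.r <- training.df
training.r$random <- rnorm(nrow(training.r))   # completely random variable

rf.random <- randomForest(Sale_Price ~ ., data = training.r, ntree = 500, importance = TRUE)

# variables whose importance falls below the random variable are candidates for removal
importance(rf.random)
varImpPlot(rf.random)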

“Interpretability”

Most machine learning models are not interpretable in the classical sense - as one predictor variable increases, the target variable tends to ___. This is because the relationships are not linear. The relationships are more complicated than a linear relationship, so the interpretations are as well. Similar to GAM’s we can get a general idea of an overall pattern for a predictor variable compared to a target variable - partial dependence plots.

These plots will be covered in much more detail in the final section of this code deck under Model Agnostic Interpretability. However, let’s see this in action in R from the options in the randomForest and pdp packages.

Code
library(randomForest)

# partial dependence of the predicted Sale_Price on Garage_Area from the fitted random forest
partialPlot(rf.ames, training.df, Garage_Area)
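The pdp package mentioned above produces a similar plot; a hedged equivalent call would look something like:

Code
# partial dependence via the pdp package instead of randomForest::partialPlot
pdp::partial(rf.ames, pred.var = "Garage_Area", plot = TRUE, train = training.df)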

This nonlinear and complex relationship between Garage_Area and Sale_Price is similar to the plot we saw earlier with the GAMs. This shouldn’t be too surprising. Both sets of algorithms are trying to relate these two variables together, just in different ways.

Summary

In summary, random forest models are good models to use for prediction, but explanation becomes more difficult and complex. Some of the advantages of using random forests:

  • Computationally fast (handles thousands of variables)

  • Trees trained simultaneously

  • Accurate classification model

  • Variable importance available

  • Missing data is OK

There are some disadvantages though:

  • “Interpretability” is a little different than the classical sense

  • There are parameters that need to be tuned to make the model the best

Boosting

Boosting is similar to bagging in the sense that we draw a sample of observations from the data set with replacement. However, unlike bagging, observations are not sampled with equal probability. Boosting assigns a weight to each training observation and uses those weights as the sampling distribution, so observations with higher weights are more likely to be drawn in the next sample. The weights are changed adaptively in each round: observations that are harder to classify get higher weights for the next sample, increasing their chance of being selected so the next model has a better chance of predicting them correctly. An example of the difference in weights for a boosted sample compared to a bagging sample is the following:

With bagging we are only trying to create variability in the models by varying the training data, and the models in the ensemble can be built simultaneously. In boosting, however, observations that were harder to predict accurately get a higher sampling probability, and the predictions are improved sequentially.

Let’s use the same example we used in bagging. The following 10 observations have the values of X and Y shown in the following table:

We again build a decision stump (a decision tree with only one split). As before, the best splits are at either 3.5 or 7.5, each predicting Y with 70% accuracy; with the split at 3.5 we correctly predict observations 1 through 7 but miss observations 8, 9, and 10.

To try and make this prediction better we will do the following:

  1. Take 3 bootstrap samples

  2. Build a decision stump for each

  3. At each step we will adjust the weights based on the previous step’s errors

The following are the 3 bootstrap samples with their optimal splits. Remember, these bootstrap samples will not contain all of the observations in the original data set.

Boosting Samples with Weights

The first sample has the same weight for each observation. In that bootstrap sample the best split occurs at 7.5. However, when predicting the original observations, we would incorrectly predict observations 1, 2, and 3. Therefore, in the second sample we will overweight observations 1, 2, and 3 to help us predict those observations better (correct for the previous mistakes). In the second bootstrap sample we have observations 1, 2, and 3 over-represented by design. However, this leads to incorrect predictions in observations 4 through 7. This will lead to us over-weighting those observations as seen in the third row of the weights table. Observations 8, 9, and 10 have always been correctly predicted so their weight is the smallest, while the other observations have higher weights.

The original boosting ensemble of this kind was called AdaBoost. Unlike bagging, boosted ensembles usually weight the vote of each classifier by a function of its accuracy: a classifier that gets the higher-weighted observations wrong has a higher error rate, and more accurate classifiers get a higher weight in the final prediction. In simple terms, more accurate guesses count for more. We will not detail this algorithm here as more common advancements have come along since.
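For reference, one common formulation of the AdaBoost weighting (stated from the standard algorithm rather than from these notes) gives classifier \(m\), with weighted error rate \(err_m\), the vote weight

\[ \alpha_m = \frac{1}{2} \ln\left(\frac{1 - err_m}{err_m}\right) \]

and then multiplies each misclassified observation’s weight by \(e^{\alpha_m}\) before renormalizing the weights to sum to one. The final prediction is the sign of \(\sum_m \alpha_m G_m(x)\), so more accurate classifiers contribute more. Some references drop the factor of \(\frac{1}{2}\).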

Gradient Boosting

Some more recent algorithms are moving away from the direct boosting approach applied to bootstrap samples. However, the main idea of these algorithms is still rooted in the notion of finding where you made mistakes and trying to improve on those mistakes.

One of the original algorithms to go with this approach is the gradient boosted machine (GBM). The idea behind the GBM is to build a simple model to predict the target (much like a decision tree or even decision stump):

\[ y = f_1(x) + \varepsilon_1 \]

That simple model has an error of \(\varepsilon_1\). The next step is to try to predict that error with another simple model:

\[ \varepsilon_1 = f_2(x) + \varepsilon_2 \]

This model has an error of \(\varepsilon_2\). We can continue to add model after model, each one predicting the error (residuals) from the previous round:

\[ y = f_1(x) + f_2(x) + \cdots + f_k(x) + \varepsilon_k \]

We could continue this process until the error is extremely small, but that would lead to over-fitting. To protect against over-fitting, gradient boosting builds in regularization through parameters we tune. One of those parameters is \(\eta\), the weight (learning rate) applied to the error models:

\[ y = f_1(x) + \eta \times f_2(x) + \eta \times f_3(x) + \cdots + \eta \times f_k(x) + \varepsilon_k \]

Smaller values of \(\eta\) lead to less over-fitting. Other things to tune include the number of trees to use in the prediction (more trees typically lead to more over-fitting), and other parameters added over the years - \(\lambda\), \(\gamma\), \(L_2\), etc.

Gradient boosting as defined above yields an additive ensemble model. There is no voting or averaging of individual models as the models are built sequentially, not simultaneously. The predictions from these models are added together to get the final prediction. One of the big keys to gradient boosting is using weak learners. Weak learners are simple models. Shallow decision / regression trees are the best. Each of these models would make poor predictions on their own, but the additive fashion of the ensemble provides really good predictions.
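A hedged, from-scratch sketch of this additive fitting in R, using shallow rpart trees as the weak learners (X is assumed to be a data frame of predictors and y a numeric target; the 100 rounds, depth of 2, and \(\eta = 0.1\) are illustrative choices):

Code
library(rpart)

eta     <- 0.1                            # learning rate applied to each new tree
n_trees <- 100
pred    <- rep(mean(y), length(y))        # start from a constant prediction
models  <- vector("list", n_trees)

for (k in 1:n_trees) {
  resid <- y - pred                                            # residuals from the current ensemble
  models[[k]] <- rpart(resid ~ ., data = data.frame(X, resid = resid),
                       control = rpart.control(maxdepth = 2))  # weak learner fit to the residuals
  pred <- pred + eta * predict(models[[k]], data.frame(X))     # add the scaled correction
}

# final prediction = constant + eta * (tree 1) + eta * (tree 2) + ...
mean((y - pred)^2)                        # training error after boosting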

These models are optimized to some form of a loss function. For example, linear regression looks at minimizing the sum of squared errors - its loss function. The sum of squared errors from many predictions would aggregate into a single number - the loss of the whole model. Gradient boosting is much the same, but not limited to sum of squared errors for the loss function.

How does the gradient boosting model find this optimal loss function value? It uses gradient descent. Gradient descent is a method that iteratively updates parameters in order to minimize a loss function (the sum of squared errors for example) by moving in the direction of the steepest descent as seen in the figure below:

Gradient Descent on Loss Function

Gradient descent minimizes the loss function by taking iterative steps (typically getting smaller as it approaches the minimum) until it finds the minimum. The size of each step is controlled by the learning rate; without it, we might take steps that are too big and overshoot the minimum, or steps that are too small and take a long time to converge.
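In symbols, each step moves the parameters \(\theta\) against the gradient of the loss \(L\), scaled by the learning rate \(\eta\):

\[ \theta_{t+1} = \theta_t - \eta \, \nabla L(\theta_t) \]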

Not all loss functions are convex; some have local minima or plateaus that make finding the global minimum difficult. Stochastic gradient descent attempts to address this by randomly sampling a fraction of the training observations for each tree in the ensemble, so not all trees see all of the observations. This makes the algorithm faster and more robust, but it may not always find the true overall minimum.

Grid search for these algorithms is very time consuming because of the time it takes to build these models, since the trees are built sequentially. To speed up the process, we typically set the tuning parameters to their defaults and optimize the number of trees in the GBM first. We then fix the tree count and start to tune the other parameters, iterating back and forth until we find a good solution.

There are many different adaptations to the gradient boosting approach:

  • XGBoost

  • LightGBM

  • CatBoost

  • etc.

XGBoost

Extreme gradient boosting (XGBoost) is one of the most popular versions of these algorithms. XGBoost has some great advantages over the traditional GBM:

  1. Additional regularization parameters to prevent over-fitting (at the cost of more things to tune).
  2. Settings to stop model assessment early when adding more trees isn’t helping.
  3. Supports parallel processing (within the construction of each tree), although the trees themselves are still added sequentially.
  4. Variety of loss functions.
  5. Allows generalized linear models as well as tree-based models (all still weak learners though).
  6. Implemented in R, Python, Julia, Scala, Java, C++.

Let’s see how to build this in each of our softwares!
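As one hedged illustration of the early-stopping idea in point 2 above, using R (train_x and train_y are the matrices used later in this section; the parameter values, 5 folds, and 10-round patience are illustrative):

Code
library(xgboost)

set.seed(12345)
# cross-validated search for the number of rounds, stopping once 10 rounds pass with no improvement
xgb_cv <- xgb.cv(data = train_x, label = train_y,
                 nrounds = 100, nfold = 5,
                 eta = 0.25, max_depth = 5, subsample = 1,
                 objective = "reg:squarederror",
                 early_stopping_rounds = 10, verbose = 0)
xgb_cv$best_iteration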

Variable Importance and Selection

XGBoost provides three built-in measures of variable importance:

  1. Gain - the improvement in the loss function contributed by a variable’s splits (analogous to the impurity-based importance in random forests)
  2. Coverage - measures the relative number of observations influenced by the variable
  3. Frequency - percentage of splits in the whole ensemble that use this variable

Let’s see this in each of our softwares!
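In R, these measures can be pulled from a fitted model with xgb.importance (a hedged example using the xgb.ames model built below):

Code
# table of Gain, Cover, and Frequency for each variable, plus a quick plot
var_imp <- xgb.importance(feature_names = colnames(train_x), model = xgb.ames)
var_imp
xgb.plot.importance(var_imp)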

Another thing to tune would be which variables to include in the XGBoost model. By default, XGBoost uses all the variables since their effects are aggregated across all the trees used to build the model. There are a couple of ways to perform variable selection for XGBoost:

  • Many permutations of including/excluding variables

  • Compare variables to random variable

The first approach is rather straightforward but time consuming because you have to build models over and over again, taking a variable out each time. The second approach is much easier. In the second approach we create a completely random variable and put it in the model, then look at the variable importance of all the variables. Variables whose importance falls below that of the random variable should be considered for removal.

Let’s see this in each of our softwares!

“Interpretability”

Most machine learning models are not interpretable in the classical sense - as one predictor variable increases, the target variable tends to ___. This is because the relationships are not linear. The relationships are more complicated than a linear relationship, so the interpretations are as well. Similar to random forests we can get a general idea of an overall pattern for a predictor variable compared to a target variable - partial dependence plots.

These plots will be covered in much more detail in the final section of this code deck under Model Agnostic Interpretability. However, let’s see this in action in R from the options in the xgboost and pdp packages.

Code
library(pdp)
library(xgboost)

# gradient boosted model on the Ames training data
# (the reg:linear objective is deprecated in favor of reg:squarederror, hence the warning below)
xgb.ames <- xgboost(data = train_x, label = train_y, subsample = 1, nrounds = 24, eta = 0.25, max_depth = 5, objective = "reg:linear")
[12:20:52] WARNING: src/objective/regression_obj.cu:213: reg:linear is now deprecated in favor of reg:squarederror.
[1] train-rmse:150443.019681 
[2] train-rmse:115267.504962 
[3] train-rmse:89055.111389 
[4] train-rmse:69561.730180 
[5] train-rmse:55220.286080 
[6] train-rmse:44740.341203 
[7] train-rmse:36980.683350 
[8] train-rmse:31451.655598 
[9] train-rmse:27650.915529 
[10]    train-rmse:24945.269999 
[11]    train-rmse:23211.562386 
[12]    train-rmse:22012.817481 
[13]    train-rmse:21221.348708 
[14]    train-rmse:20602.110276 
[15]    train-rmse:20145.482316 
[16]    train-rmse:19789.451818 
[17]    train-rmse:19549.927138 
[18]    train-rmse:19182.487244 
[19]    train-rmse:19050.917338 
[20]    train-rmse:18579.468210 
[21]    train-rmse:18479.830197 
[22]    train-rmse:18164.219167 
[23]    train-rmse:18008.414679 
[24]    train-rmse:17887.039967 
Code
pdp::partial(xgb.ames, pred.var = "Garage_Area", 
        plot = TRUE, rug = TRUE, alpha = 0.1, plot.engine = "lattice", 
        train = train_x)

This nonlinear and complex relationship between Garage_Area and Sale_Price is similar to the plot we saw earlier with the GAMs and random forests. This shouldn’t be too surprising. All of these algorithms are trying to relate these two variables together, just in different ways.

Summary

In summary, XGBoost models are good models to use for prediction, but explanation becomes more difficult and complex. Some of the advantages of using XGBoost models:

  • Very accurate

  • Tend to outperform random forests when properly trained and tuned.

  • Variable importance provided

There are some disadvantages though:

  • “Interpretability” is a little different than the classical sense

  • Computationally slower than random forests due to sequentially building trees

  • More tuning parameters than random forests

  • Harder to optimize

  • More sensitive to tuning of parameters

Explainable Boosting Machine (EBM)

A newer algorithm has come onto the scene that tries to have the predictive power seen in XGBoost models but maintain the kind of interpretability that GAM models have. This model is called the explainable boosting machine (EBM).

We covered GAMs in the previous section, but as a reminder, GAMs provide a general framework for adding non-linear functions together instead of the typical linear structure. The structure of a GAM is the following:

\[ y = \beta_0 + f_1(x_1) + \cdots + f_p(x_p) + \varepsilon \]

The \(f_i(x_i)\) functions are complex, nonlinear functions of the predictor variables. GAMs add these complex, yet individual, functions together. This allows many complex relationships to be modeled, potentially predicting your target variable better.

EBMs use machine learning algorithms (like random forests and boosted trees) to build these individual pieces, \(f_i(x_i)\), before adding them together in the GAM. To do this, they build out the same GBM structure but only use one variable at a time. Recall that the idea behind the GBM is to build a simple model to predict the target (much like a decision tree or even a decision stump):

\[ y = g_1(x_1) + \varepsilon_1 \]

However, in the EBM that simple model is built only off of one variable, say \(x_1\). That simple model has an error of \(\varepsilon_1\). The next step is to try to predict that error with another simple model only using one other variable, say \(x_2\):

\[ \varepsilon_1 = g_2(x_2) + \varepsilon_2 \]

This model has an error of \(\varepsilon_2\). We can continue to add model after model, each one predicting the error (residuals) from the previous round, but only ever using one variable:

\[ y = g_1(x_1) + g_2(x_2) + \cdots + g_k(x_k) + \varepsilon_k \]

We will continue this process until we use all of the variables. We will then repeat this process in a round robin format, but still looking at the residuals from the previous set of models:

\[ y = g_1(x_1) + g_2(x_2) + \cdots + g_k(x_k) + \varepsilon_k \]

\[ \varepsilon_k = g_{1^*}(x_1) + g_{2^*}(x_2) + \cdots + g_{k^*}(x_k) + \varepsilon_{k^*} \]

\[ \varepsilon_{k^{*}} = g_{1^{**}}(x_1) + g_{2^{**}}(x_2) + \cdots + g_{k^{**}}(x_k) + \varepsilon_{k^{**}} \]

\[ \vdots \]

This will be repeated in a round-robin format for thousands of iterations, similar to a random forest approach, where the large number of iterations helps find all of the signal. We apply a small learning rate to each of the subsequent models (each tree’s contribution to the running residual is scaled by a small number) so that the order of the variables does not matter. For each variable, we then aggregate all of the small models containing only that variable - \(g_1(x_1)\), \(g_{1^*}(x_1)\), \(g_{1^{**}}(x_1)\), etc. for \(x_1\) - to form our idea of the overall relationship between \(x_1\) and \(y\), our \(f_1(x_1)\) for the GAM:

\[ y = \beta_0 + f_1(x_1) + \cdots + f_p(x_p) + \varepsilon \]

We will then repeat this process for all of the variables in the model. Essentially, we are developing each of the pieces of the GAM (the individual variable relationships) and then adding them together to form our overall model.

This allows for individually developed relationships for each variable, which makes the interpretability more similar to that of traditional GAMs, but with the predictive power of the more advanced machine learning boosting algorithms.
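A very rough, from-scratch sketch of this round-robin idea in R, using one-variable rpart stumps as the small models (this illustrates the cycling only, not the interpret library’s actual implementation; X is assumed to be a data frame of predictors and y a numeric target):

Code
library(rpart)

eta      <- 0.01                          # small learning rate so variable order matters little
n_rounds <- 1000
pred     <- rep(mean(y), length(y))
vars     <- names(X)

# f[[v]] accumulates variable v's contribution across all rounds - its f_j(x_j) in the GAM
f <- setNames(lapply(vars, function(v) rep(0, length(y))), vars)

for (r in 1:n_rounds) {
  for (v in vars) {                                            # round-robin over the variables
    resid <- y - pred                                          # residuals from everything so far
    stump <- rpart(resid ~ x, data = data.frame(x = X[[v]], resid = resid),
                   control = rpart.control(maxdepth = 1))
    step  <- eta * predict(stump, data.frame(x = X[[v]]))
    f[[v]] <- f[[v]] + step                                    # build up this variable's shape
    pred   <- pred + step
  }
}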

Let’s see this in each of our softwares!

“Interpretability”

Now we have an EBM model built. We can use some of the functionality of that model to try and interpret the variables. Remember the GAM structure to the EBM as described above. This allows for individually developed relationships for variables which makes the interpretability more similar to that of the traditional GAM’s, but with the predictive power of the more advanced machine learning boosting algorithms.

Let’s see this in each of our softwares!

Extra Pieces with Python

Variable Selection

Another thing to tune would be which variables to include in the EBM model. By default, EBM models use all the variables since their effects are aggregated across all the trees used to build the model. There are a couple of ways to perform variable selection for EBM models:

  • Many permutations of including/excluding variables

  • Compare variables to random variable

The first approach is rather straightforward but time consuming because you have to build models over and over again, taking a variable out each time. The second approach is much easier. In the second approach we create a completely random variable and put it in the model, then look at the variable importance of all the variables. Variables whose importance falls below that of the random variable should be considered for removal.

To create a random variable in Python we will first create a copy of the training DataFrame called X_train_r. We will then add a variable called random with the random.normal function from numpy. We then build an EBM model in the same way we did previously and check the variable importance.

Code
import numpy as np

X_train_r = X_train.copy()  # copy so the random column is not added to the original training data

X_train_r['random'] = np.random.normal(0, 1, 2051)  # 2051 = number of training rows

Instead of plotting the variable importance for the variables we will just print out a list of the importance for each variable using the term_importances method and the term_names_ attribute.

Code
from interpret.glassbox import ExplainableBoostingRegressor  # import assumed from earlier in the notes

ebm_model_r = ExplainableBoostingRegressor(interactions = 3)
ebm_model_r.fit(X_train_r, y_train)
ExplainableBoostingRegressor(interactions=3)
Code
importances = ebm_model_r.term_importances()
names = ebm_model_r.term_names_

for (term_name, importance) in zip(names, importances):
    print(f"Term {term_name} importance: {importance}")
Term Bedroom_AbvGr importance: 4901.041451211291
Term Year_Built importance: 22997.063116381727
Term Mo_Sold importance: 1329.0039429978642
Term Lot_Area importance: 4280.288354408339
Term First_Flr_SF importance: 15515.769369838674
Term Second_Flr_SF importance: 5233.633411745021
Term Full_Bath importance: 1715.0167905473766
Term Half_Bath importance: 4084.5149261854303
Term Fireplaces importance: 8253.832541813987
Term Garage_Area importance: 7714.547082485783
Term Gr_Liv_Area importance: 13999.799248736503
Term TotRms_AbvGrd importance: 1593.4741727639848
Term Street_Grvl importance: 80.83097345516985
Term Street_Pave importance: 80.19763190781755
Term Central_Air_N importance: 1300.637422295222
Term Central_Air_Y importance: 1324.8608978376571
Term random importance: 1221.323581228066
Term Year_Built & First_Flr_SF importance: 4643.669492471417
Term Year_Built & Garage_Area importance: 2360.574748076047
Term Year_Built & Gr_Liv_Area importance: 2664.716971831796

Based on the list above there are 2 variables (Street_Grvl and Street_Pave) that fall below the variable importance of the random variable and could be considered for removal from the model.

Local Interpretability

Another valuable piece of output from the Python functionality of EBM models is the ability to perform local interpretability as well as the global interpretability we saw earlier.

Code
ebm_local = ebm_model.explain_local(X_train[:1], y_train[:1])

plotly_fig = ebm_local.visualize(0)

plotly_fig.show()

The above plot breaks down the individual prediction from the model into how each variable impacts prediction. More details on these kinds of calculations are listed in the Model Agnostic Interpretability section of the notes.

Summary

In summary, EBM models are good models to use for prediction and keep interpretability similar to that of GAM models. Some of the advantages of using EBM models:

  • Early results show they are powerful at predicting (almost to the level of XGBoost).

  • Interpretation available due to GAM nature (individually estimated relationships)

There are some disadvantages though:

  • Computationally slower than random forests due to sequentially building trees

  • More tuning parameters than random forests

  • Limited implementations across all softwares