12 Combining models to maximize results: Ensemble learning
This chapter covers
- What ensemble learning is
- Joining several weak classifiers to form a strong classifier
- Using bagging to combine several classifiers trained in a random way and obtain better results
- Using boosting to combine classifiers in a smarter way
- Using AdaBoost to combine several decision trees of depth 1
Having learned many interesting and useful machine learning classifiers, a natural question to ask is "Is there a way to combine them?" Thankfully, the answer is yes! In this chapter we learn several ways to build stronger classifiers by combining weaker ones. The two methods we cover are bagging and boosting. In a nutshell, bagging consists of training several classifiers on random subsets of the data and combining their predictions, for example by voting. Boosting, on the other hand, builds the classifiers in a smarter way: each new model is trained strategically to focus on the previous models' mistakes. One of the most popular boosting algorithms is AdaBoost (adaptive boosting), which we study at the end of the chapter.
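As a quick preview of where the chapter is headed, here is a minimal sketch using scikit-learn (an assumption; the chapter itself develops these ideas from first principles). By default, `BaggingClassifier` trains decision trees on random bootstrap samples of the data and combines their votes, while `AdaBoostClassifier` combines decision trees of depth 1, each trained to focus on the examples the previous trees got wrong.

```python
# A minimal sketch of bagging and boosting, assuming scikit-learn is installed.
# The dataset is a hypothetical synthetic stand-in for the chapter's examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: 50 decision trees, each trained on a random bootstrap
# sample of the data; predictions are combined by voting
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# AdaBoost: 50 depth-1 decision trees built sequentially, each one
# weighted toward the examples the previous trees misclassified
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, model in [("bagging", bagging), ("AdaBoost", boosting)]:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```

Both ensembles typically outperform any single one of their base classifiers, which is exactly the effect this chapter explains.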