In the previous two chapters, we saw two approaches to constructing sequential ensembles. In chapter 4, we introduced an ensemble method called adaptive boosting (AdaBoost), which uses example weights to identify and emphasize misclassified examples. In chapter 5, we introduced another ensemble method called gradient boosting, which uses gradients (residuals) of the loss function to identify the examples with the largest errors. The fundamental intuition behind both of these boosting methods is the same: at every iteration, target the worst-behaving examples, that is, the ones the current ensemble gets most wrong, to improve classification.
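As a quick refresher, the contrast between the two strategies shows up in how each method updates its notion of the "hard" examples after an iteration. The minimal sketch below uses the standard AdaBoost weight-update rule and the squared-error residual from chapters 4 and 5; the toy labels, the weak learner's vote `alpha`, and the current predictions `F` are illustrative values, not results from the book's examples:

```python
import numpy as np

# Toy labels and one weak learner's predictions, in {-1, +1}
y_true = np.array([1, -1, 1, 1, -1])
y_pred = np.array([1, 1, -1, 1, -1])

# --- AdaBoost: reweight examples; misclassified ones gain weight ---
weights = np.full(len(y_true), 1 / len(y_true))  # start uniform
alpha = 0.5                                      # weak learner's vote (illustrative value)
weights *= np.exp(-alpha * y_true * y_pred)      # misclassified (y * pred = -1) get exp(+alpha)
weights /= weights.sum()                         # renormalize to a distribution

# --- Gradient boosting: compute residuals; large errors dominate ---
F = np.array([0.8, 0.6, -0.4, 0.9, -0.7])        # current ensemble predictions (illustrative)
residuals = y_true - F                           # negative gradient of the squared-error loss

print("AdaBoost weights:", np.round(weights, 3))
print("GB residuals:    ", np.round(residuals, 3))
```

Running this, the two misclassified examples end up with the largest AdaBoost weights, and the same examples produce the largest-magnitude residuals: two different mechanisms encoding the same intuition.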