The ensembling strategies we’ve seen thus far have been parallel ensembles. These include homogeneous ensembles such as bagging and random forests (where the same base-learning algorithm is used to train every base estimator) and heterogeneous ensembles such as stacking (where different base-learning algorithms are used to train the base estimators).
Now, we’ll explore a new family of ensemble methods: sequential ensembles. Unlike parallel ensembles, which exploit the independence among base estimators, sequential ensembles exploit the dependence among base estimators. More specifically, during learning, a sequential ensemble trains each new base estimator so that it minimizes the mistakes made by the base estimator trained in the previous step.
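To make this idea concrete, here is a minimal sketch (assuming a regression task and squared-error residuals as the notion of “mistakes”; the function names `sequential_ensemble` and `predict`, and all parameter values, are illustrative, not a specific algorithm from this chapter):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sequential_ensemble(X, y, n_estimators=10, learning_rate=0.1):
    """Train base estimators one at a time; each new estimator is fit
    to the residual errors (the mistakes) left by its predecessors."""
    estimators = []
    residual = y.astype(float)   # initially, the entire target is "unexplained"
    for _ in range(n_estimators):
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, residual)    # new estimator targets the previous mistakes
        estimators.append(tree)
        # shrink the remaining error by this estimator's (damped) contribution
        residual -= learning_rate * tree.predict(X)
    return estimators

def predict(estimators, X, learning_rate=0.1):
    """Combine the sequentially trained estimators into one prediction."""
    return learning_rate * sum(est.predict(X) for est in estimators)

# Usage sketch on synthetic data
from sklearn.datasets import make_regression
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
ensemble = sequential_ensemble(X, y)
y_pred = predict(ensemble, X)
```

The key contrast with parallel ensembles is visible in the loop: each estimator’s training target depends on what the earlier estimators got wrong, so the base estimators cannot be trained independently or in parallel.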