In chapter 1, we introduced ensemble learning and created our first rudimentary ensemble. To recap, an ensemble method relies on the notion of the “wisdom of the crowd”: the combined answer of many models is often better than any single individual answer. We begin our journey into ensemble learning methods in earnest with parallel ensemble methods, because they are conceptually the easiest to understand and implement.
Parallel ensemble methods, as the name suggests, train each component base estimator independently of the others, which means they can be trained in parallel. As we’ll see, parallel ensembles can be further divided into homogeneous and heterogeneous parallel ensembles, depending on whether they combine models produced by a single learning algorithm or by several different ones.
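This distinction can be previewed concretely. The sketch below uses scikit-learn purely for illustration (the library, data set, and parameter choices here are this sketch’s assumptions, not part of the text): a homogeneous parallel ensemble combines many models trained by the same learning algorithm, while a heterogeneous one combines models from different learning algorithms.

```python
# Illustrative sketch (assumes scikit-learn): homogeneous vs.
# heterogeneous parallel ensembles, each of whose base estimators
# is trained independently of the others.
from sklearn.datasets import make_moons
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=500, noise=0.25, random_state=42)
Xtrn, Xtst, ytrn, ytst = train_test_split(X, y, random_state=42)

# Homogeneous: 20 base estimators, all produced by the SAME learning
# algorithm (decision trees, BaggingClassifier's default).
homogeneous = BaggingClassifier(n_estimators=20, random_state=42)
homogeneous.fit(Xtrn, ytrn)

# Heterogeneous: base estimators produced by DIFFERENT learning
# algorithms, combined here by majority vote.
heterogeneous = VotingClassifier(estimators=[
    ('dt', DecisionTreeClassifier(random_state=42)),
    ('lr', LogisticRegression()),
    ('nb', GaussianNB()),
])
heterogeneous.fit(Xtrn, ytrn)

print('homogeneous test accuracy:  ', homogeneous.score(Xtst, ytst))
print('heterogeneous test accuracy:', heterogeneous.score(Xtst, ytst))
```

In both cases, each base estimator’s training run never looks at the others, which is exactly what makes these ensembles parallelizable.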