Part 2 Essential ensemble methods

This part of the book will introduce several “essential” ensemble methods. In each chapter you’ll learn how to (1) implement a basic version of an ensemble method from scratch to gain an under-the-hood understanding; (2) visualize, step-by-step, how learning actually takes place; and (3) use sophisticated, off-the-shelf implementations to ultimately get the best out of your models.

Chapters 2 and 3 cover different types of well-known parallel ensemble methods, including bagging, random forests, stacking, and their variants. Chapter 4 introduces a fundamental sequential ensembling technique called boosting, along with its best-known implementation, AdaBoost (and its variants).
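To preview the distinction between these two families, here is a minimal sketch using scikit-learn's off-the-shelf implementations. The dataset and hyperparameter choices below are illustrative, not taken from the book:

```python
# Illustrative sketch: a parallel ensemble (bagging) vs. a sequential
# ensemble (AdaBoost), using scikit-learn's off-the-shelf classes.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Parallel: each base estimator is trained independently on a
# bootstrap sample of the training data.
bagging = BaggingClassifier(n_estimators=50, random_state=42)
bagging.fit(X_train, y_train)

# Sequential: estimators are trained one after another, with more
# weight placed on examples earlier estimators misclassified.
adaboost = AdaBoostClassifier(n_estimators=50, random_state=42)
adaboost.fit(X_train, y_train)

print(bagging.score(X_test, y_test), adaboost.score(X_test, y_test))
```

Both ensembles expose the same `fit`/`predict` interface; the difference lies entirely in how their base estimators are trained, which is the subject of chapters 2 through 4.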

Chapters 5 and 6 are all about gradient boosting, the ensembling technique that is all the rage at the time of this writing and is widely considered the state of the art. Chapter 5 covers the fundamentals and inner workings of gradient boosting. You’ll also learn how to get started with LightGBM, a powerful gradient-boosting framework with which you can build scalable and effective gradient-boosting applications. Chapter 6 covers an important variant of gradient boosting called Newton boosting. You’ll also learn how to get started with XGBoost, another well-known and powerful gradient-boosting framework.
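As a taste of the inner workings covered in chapter 5, here is a from-scratch sketch of gradient boosting for squared loss: each new tree is fit to the residuals of the current ensemble, which are the negative gradient of the loss. The synthetic data, learning rate, and tree depth are arbitrary choices for illustration, not values from the book:

```python
# From-scratch sketch of gradient boosting with squared loss.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())  # initialize with the mean prediction
trees = []
for _ in range(100):
    residuals = y - prediction          # negative gradient of squared loss
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)  # small step toward the target
    trees.append(tree)

print(np.mean((y - prediction) ** 2))   # training MSE shrinks as trees are added
```

Frameworks such as LightGBM and XGBoost implement far more efficient and robust versions of this same loop, which is why the book pairs the fundamentals with those off-the-shelf tools.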