front matter


preface

Once upon a time, I was a graduate student, adrift and rudderless in an ocean of unfulfilling research directions and uncertain futures. Then I stumbled upon a remarkable article titled “Support Vector Machines: Hype or Hallelujah?” This being the early 2000s, support vector machines (SVMs) were, of course, the preeminent machine-learning technique of the time.

In the article, the authors (one of whom would later become my PhD advisor) took a rather reductionist approach to explaining the considerably complex topic of SVMs, interleaving intuition and geometry with theory and application. The article made a powerful impression on me, at once igniting a lifelong fascination with machine learning and an obsession with understanding how such methods work under the hood. Indeed, the title of the first chapter pays homage to the article that had so profound an influence on my life.

Much like SVMs then, ensemble methods are widely considered a preeminent machine-learning technique today. But what many people don’t realize is that some ensemble method or other has been considered state of the art for decades: bagging in the 1990s, random forests and boosting in the 2000s, gradient boosting in the 2010s, and XGBoost in the 2020s. In the ever-changing world of machine-learning models, ensemble methods, it seems, are indeed worth the hype.

acknowledgments

about this book

Who should read this book

How this book is organized: A road map

About the code

liveBook discussion forum

about the author

about the cover illustration