1 Introduction to Bayesian optimization

This chapter covers

  • What motivates Bayesian optimization and how it works
  • Real-life examples of Bayesian optimization problems
  • A toy example of Bayesian optimization in action

You’ve made a wonderful choice in reading this book, and I’m excited for your upcoming journey! At a high level, Bayesian optimization is an optimization technique that applies when the function we’re trying to optimize (or, more generally, any process that produces an output when given an input) is a black box and is expensive to evaluate in terms of time, money, or other resources. This setup encompasses many important tasks, including hyperparameter tuning, which we define shortly. Using Bayesian optimization, we can accelerate this search procedure and locate the optimum of the function as quickly as possible.
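To make this concrete, here is a minimal sketch of the kind of loop this book develops: a surrogate model (a simple Gaussian process with an RBF kernel) is fit to the observations so far, and a decision policy (here, the upper confidence bound) picks where to evaluate the expensive function next. Everything in this snippet is illustrative: the objective `expensive_black_box`, the kernel length scale, and the policy parameter `beta` are all hypothetical stand-ins, not the book's implementation, and real applications would use a dedicated library rather than hand-rolled linear algebra.

```python
import math

def expensive_black_box(x):
    # Hypothetical objective standing in for, e.g., a costly model-training run.
    # In a real problem we could only query it, never see this formula.
    return math.sin(3 * x) + 0.5 * x

def rbf(a, b, length=0.5):
    # Radial basis function (squared exponential) kernel.
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small GP linear systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    # Posterior mean and variance of a zero-mean GP at query point xq.
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    k = [rbf(a, xq) for a in xs]
    alpha = solve(K, ys)                       # K^-1 y
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)                            # K^-1 k
    var = max(rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v)), 1e-12)
    return mean, var

def bayesopt(n_iters=10, beta=2.0):
    xs = [0.0, 2.0]                            # two initial (expensive) evaluations
    ys = [expensive_black_box(x) for x in xs]
    grid = [i / 100 * 2.0 for i in range(101)] # candidate points in [0, 2]
    for _ in range(n_iters):
        # Upper confidence bound policy: mean + beta * stddev, maximized on the grid.
        def ucb(x):
            m, v = gp_posterior(xs, ys, x)
            return m + beta * math.sqrt(v)
        x_next = max(grid, key=ucb)
        xs.append(x_next)
        ys.append(expensive_black_box(x_next)) # the only expensive step
    best_y, best_x = max(zip(ys, xs))
    return best_y, best_x
```

The key property to notice is that the loop spends its (limited) budget of function evaluations deliberately: the surrogate's uncertainty drives exploration of unvisited regions, while its mean drives exploitation near promising observations.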

While Bayesian optimization has enjoyed enduring interest from the machine learning (ML) research community, it’s not as commonly used or talked about in practice as other ML topics. But why? Some might say Bayesian optimization has a steep learning curve: to use it in an application, one needs to understand calculus and probability and have considerable ML experience. The goal of this book is to dispel the idea that Bayesian optimization is difficult to use and to show that the technology is more intuitive and accessible than one would think.

1.1 Finding the optimum of an expensive black box function

1.1.1 Hyperparameter tuning as an example of an expensive black box optimization problem

1.1.2 The problem of expensive black box optimization

1.1.3 Other real-world examples of expensive black box optimization problems

1.2 Introducing Bayesian optimization

1.2.1 Modeling with a Gaussian process

1.2.2 Making decisions with a BayesOpt policy

1.2.3 Combining the GP and the optimization policy to form the optimization loop

1.2.4 BayesOpt in action

1.3 What will you learn in this book?

Summary
