
7 Maximizing throughput with batch optimization


This chapter covers

  • Making function evaluations in batches
  • Extending Bayesian optimization to the batch setting
  • Optimizing hard-to-compute acquisition scores

The Bayesian optimization (BayesOpt) loop we have been working with thus far takes in one query at a time and returns that query's function evaluation before the next query is made. This loop suits settings where function evaluations can only be made sequentially. However, many real-world black-box optimization scenarios allow the user to evaluate the objective function in batches. For example, when tuning the hyperparameters of a machine learning model, we can try out different hyperparameter combinations in parallel if we have access to multiple processing units or computers, instead of running individual combinations one by one. By taking advantage of all the resources available to us, we can increase the number of experiments we conduct and maximize throughput during the function-evaluation step of the BayesOpt loop.
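To make the contrast concrete, here is a minimal sketch, not code from this chapter, of a sequential loop versus a batch evaluation step that farms a batch of queries out to parallel workers. The objective function evaluate and the hand-picked candidate points are hypothetical stand-ins; in real BayesOpt, the queries would be chosen by a policy, which is the subject of the rest of this chapter.

# A minimal sketch, assuming a hypothetical expensive objective `evaluate`
# and hand-picked candidate points; a real BayesOpt policy would choose
# the queries, as covered later in this chapter.
from concurrent.futures import ProcessPoolExecutor


def evaluate(x: float) -> float:
    # Stand-in for an expensive black-box objective,
    # e.g., training a model with hyperparameter x.
    return -((x - 2.0) ** 2)


if __name__ == "__main__":
    candidates = [0.5, 1.0, 1.5, 2.0]

    # Sequential loop: each evaluation must finish before the next starts.
    sequential_results = [evaluate(x) for x in candidates]

    # Batch evaluation: run the whole batch in parallel, one worker per
    # processing unit, and collect all the results at once.
    with ProcessPoolExecutor(max_workers=4) as pool:
        batch_results = list(pool.map(evaluate, candidates))

With four workers, the batch of four evaluations takes roughly as long as a single one; that is the throughput gain. The catch, taken up in section 7.1.2, is that a policy built to propose one query at a time cannot simply be asked for four.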

7.1 Making multiple function evaluations simultaneously

7.1.1 Making use of all available resources in parallel

7.1.2 Why can’t we use regular Bayesian optimization policies in the batch setting?

7.2 Computing the improvement and the upper confidence bound of a batch of points

7.2.1 Extending optimization heuristics to the batch setting

7.2.2 Implementing batch improvement and upper confidence bound policies

7.3 Exercise 1: Extending Thompson sampling to the batch setting via resampling

7.4 Computing the value of a batch of points using information theory

7.4.1 Finding the most informative batch of points with cyclic refinement

7.4.2 Implementing batch entropy search with BoTorch

7.5 Summary

7.6 Exercise 2: Optimizing airplane designs