The BayesOpt loop we have learned about covers a wide range of optimization problems. However, real-life scenarios often don’t follow this highly idealized model. What if you can run multiple function evaluations at the same time, as is common in hyperparameter tuning applications where multiple GPUs are available? What if you have multiple competing objectives you’d like to optimize? This part presents some of the most common optimization scenarios you might encounter in the real world and discusses how to extend BayesOpt to these settings.
To increase throughput, many settings allow experiments to run in parallel. Chapter 7 introduces the batch BayesOpt framework, in which function evaluations are made in batches. We see how to extend the decision-making policies from part 2 to this setting while ensuring we take full advantage of the parallelism of the system.
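To make the idea of a batch concrete, the following is a minimal sketch, not the book’s own code, assuming the BoTorch library and a toy two-dimensional objective: the q parameter asks the acquisition function to propose four points to be evaluated in parallel.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.optim import optimize_acqf

# Toy data: 10 random observations of an illustrative 2D objective
train_x = torch.rand(10, 2, dtype=torch.double)
train_y = -(train_x - 0.5).pow(2).sum(dim=-1, keepdim=True)

# Fit a GP surrogate to the observations
model = SingleTaskGP(train_x, train_y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# Batch Expected Improvement: scores a set of q points jointly
acqf = qExpectedImprovement(model=model, best_f=train_y.max())

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
batch, _ = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=4,              # four evaluations to run in parallel
    num_restarts=10,
    raw_samples=100,
)
print(batch)          # a 4 x 2 tensor of suggested inputs
```

The key difference from the sequential loop is that the acquisition function scores the four candidates jointly, rather than picking one point at a time, which is what lets the policy spread the batch across the search space.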
In safety-critical use cases, we cannot explore the search space freely, as some function evaluations may have detrimental effects. This motivates the setting in which there are constraints on how the function in question should behave, and these constraints need to be factored into the design of optimization policies. Chapter 8 deals with this setting, called constrained optimization, and develops the machinery necessary to apply BayesOpt under such constraints.
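As an illustrative preview rather than the chapter’s own implementation, here is a minimal sketch of constrained BayesOpt, assuming BoTorch’s analytic ConstrainedExpectedImprovement, with a toy objective and a toy constraint, each modeled by its own GP; the feasibility threshold of 0 is an arbitrary choice for the example.

```python
import torch
from botorch.models import SingleTaskGP, ModelListGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.analytic import ConstrainedExpectedImprovement
from botorch.optim import optimize_acqf

train_x = torch.rand(10, 2, dtype=torch.double)
obj_y = -(train_x - 0.5).pow(2).sum(dim=-1, keepdim=True)  # toy objective
con_y = train_x.sum(dim=-1, keepdim=True) - 1.0            # toy constraint

# One GP per output: index 0 models the objective, index 1 the constraint
models = []
for y in (obj_y, con_y):
    gp = SingleTaskGP(train_x, y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
    models.append(gp)
model = ModelListGP(*models)

# A point is feasible when the constraint output is at most 0 (arbitrary threshold)
feasible = (con_y <= 0).squeeze(-1)
best_feasible_obj = obj_y[feasible].max() if feasible.any() else obj_y.min()

# Expected improvement weighted by the probability of satisfying the constraint
acqf = ConstrainedExpectedImprovement(
    model=model,
    best_f=best_feasible_obj,
    objective_index=0,
    constraints={1: [None, 0.0]},  # output 1 must lie below 0
)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, _ = optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=100)
print(candidate)
```

The acquisition score here discounts a candidate’s expected improvement by the modeled probability that the candidate violates the constraint, so the policy steers away from regions the constraint GP deems unsafe.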