Gaussian processes (GPs), outside of the context of BayesOpt, are a powerful class of ML models in their own right. While the main topic of this book is BayesOpt, it would be a missed opportunity not to give GPs more attention. This part shows us how to extend GPs and make them more practical in various ML tasks, while retaining their most valuable feature: the quantification of uncertainty in the predictions.
In chapter 12, we learn how to accelerate the training of GPs and scale them to large data sets. This chapter helps us address one of the biggest disadvantages of GPs: their high training cost, which grows cubically with the size of the training set.
Chapter 13 shows how to take the flexibility of GPs to another level by combining them with neural networks. This combination offers the best of both worlds: the ability of neural networks to approximate any function and the quantification of uncertainty by GPs. This chapter is also where we can truly appreciate having a streamlined software ecosystem in PyTorch, GPyTorch, and BoTorch, which makes working with neural networks and GPs together seamless.