
9 Balancing utility and cost with multi-fidelity optimization

 

This chapter covers

  • The problem of multi-fidelity optimization with variable cost
  • Training a Gaussian process on data from multiple sources
  • Implementing a cost-aware multi-fidelity Bayesian optimization policy (the last two points are previewed in the sketches after this list)
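
As a quick preview of the second point in the list above, here is a minimal sketch of one way to train a Gaussian process on observations from multiple sources, assuming GPyTorch is installed. The class name MultiFidelityGPModel, the toy training data, and the choice of a product of RBF kernels over the design variable and a fidelity column are illustrative assumptions, not the exact model this chapter builds in section 9.2.

import torch
import gpytorch


class MultiFidelityGPModel(gpytorch.models.ExactGP):
    """Illustrative multi-fidelity GP: the last input column encodes the fidelity."""

    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # Product of a kernel over the design variable (column 0) and a kernel
        # over the fidelity indicator (column 1), so predictions at different
        # fidelities stay correlated and cheap data informs expensive predictions.
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(active_dims=[0])
            * gpytorch.kernels.RBFKernel(active_dims=[1])
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


# Toy data set: three cheap low-fidelity queries (fidelity 0.0) and two
# expensive high-fidelity queries (fidelity 1.0) of a made-up objective.
train_x = torch.tensor([[0.1, 0.0], [0.5, 0.0], [0.9, 0.0], [0.3, 1.0], [0.7, 1.0]])
train_y = torch.sin(6 * train_x[:, 0]) + 0.3 * (1 - train_x[:, 1])

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = MultiFidelityGPModel(train_x, train_y, likelihood)

# Standard exact-GP training loop: maximize the marginal log likelihood.
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(100):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

Appending the fidelity as an extra input column is the design choice this sketch relies on: a single GP then shares information across fidelities, which is what makes cheap, low-fidelity queries informative about the expensive objective.

The third point, a cost-aware policy, comes down to weighing how much a query is expected to help against how much it costs. The chapter develops this as the amount of information gained per dollar spent; as a simplified, hypothetical stand-in, the sketch below divides expected improvement by the cost of the fidelity each candidate would be queried at. The helper name expected_improvement_per_cost and the cost values are assumptions for illustration only.

from torch.distributions import Normal


def expected_improvement_per_cost(model, likelihood, candidates, best_f, cost_per_fidelity):
    # Cost-weighted acquisition score: expected improvement divided by the
    # cost of the fidelity stored in each candidate's last column.
    model.eval()
    likelihood.eval()
    with torch.no_grad():
        post = likelihood(model(candidates))
        mean, std = post.mean, post.variance.clamp_min(1e-9).sqrt()
    z = (mean - best_f) / std
    normal = Normal(torch.zeros_like(z), torch.ones_like(z))
    ei = (mean - best_f) * normal.cdf(z) + std * normal.log_prob(z).exp()
    costs = cost_per_fidelity[candidates[:, -1].long()]
    return ei / costs


# Example: the same location at both fidelities; the low fidelity costs 1 unit,
# the high fidelity costs 10, so its acquisition value is discounted tenfold.
candidates = torch.tensor([[0.4, 0.0], [0.4, 1.0]])
scores = expected_improvement_per_cost(
    model, likelihood, candidates, best_f=train_y.max(),
    cost_per_fidelity=torch.tensor([1.0, 10.0]),
)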

Consider the following questions:

  • Should you trust the online reviews saying that the newest season of your favorite TV show isn’t as good as the previous ones and quit watching, or should you spend your next few weekends watching it to find out for yourself whether you like the new season?
  • Seeing that her neural network doesn’t perform well after a few training epochs, should a machine learning engineer cut her losses and switch to a different model, or should she keep training in the hope of achieving better performance?
  • When a physicist wants to understand a physical phenomenon, can she gain sufficient insight from a computer simulation, or are real, physical experiments necessary?

9.1 Using low-fidelity approximations to study an expensive phenomenon

9.2 Multi-fidelity modeling with Gaussian processes

9.2.1 Formatting a multi-fidelity data set

9.2.2 Training a multi-fidelity Gaussian process

9.3 Balancing information and cost in multi-fidelity optimization

9.3.1 Modeling the costs of querying different fidelities

9.3.2 Optimizing the amount of information per dollar to guide optimization

9.4 Measuring performance in multi-fidelity optimization

9.5 Summary

9.6 Exercise 1: Visualizing average performance in multi-fidelity optimization

9.7 Exercise 2: Multi-fidelity optimization with multiple low-fidelity approximations