9 Balancing utility and cost with multifidelity optimization


This chapter covers

  • The problem of multifidelity optimization with variable cost
  • Training a GP on data from multiple sources
  • Implementing a cost-aware multifidelity BayesOpt policy

Consider the following questions:

  • Should you trust the online reviews saying that the newest season of your favorite TV show isn’t as good as the previous ones and you should quit watching the show, or should you spend your next few weekends watching it to find out for yourself whether you will like the new season?
  • After seeing that their neural network model doesn’t perform well after being trained for a few epochs, should an ML engineer cut their losses and switch to a different model, or should they keep training for more epochs in the hope of achieving better performance?
  • When a physicist wants to understand a physical phenomenon, can they use a computer simulation to gain insights, or are real, physical experiments necessary to study the phenomenon?

These questions are similar in that each asks the person in question to choose between two possible actions for answering a question they care about. On one hand, they can take an action with a relatively low cost, but the answer generated from that action may be corrupted by noise and, therefore, not necessarily reliable. On the other hand, they can opt for the action with a higher cost, which will help them arrive at a more definitive conclusion.
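To make this trade-off concrete before we formalize it, here is a minimal sketch, not the chapter's actual code, of the two kinds of actions. The functions `expensive_truth` and `cheap_approximation` and the cost values are hypothetical stand-ins: the low-fidelity query is cheap but noisy and biased, while the high-fidelity query is accurate but costs much more, so a fixed budget forces us to spend cheap queries broadly and reserve expensive ones for confirmation.

```python
import random

def expensive_truth(x):
    # High-fidelity objective: accurate but costly to evaluate.
    return -(x - 2.0) ** 2

def cheap_approximation(x, noise=0.5):
    # Low-fidelity objective: a noisy, biased stand-in for the truth.
    return -(x - 2.0) ** 2 + 0.3 + random.gauss(0.0, noise)

# Hypothetical relative costs of querying each fidelity.
COSTS = {"low": 1.0, "high": 10.0}

def query(x, fidelity):
    """Evaluate x at the chosen fidelity; return the value and cost incurred."""
    value = expensive_truth(x) if fidelity == "high" else cheap_approximation(x)
    return value, COSTS[fidelity]

# A tiny budget-constrained loop: explore with cheap queries, then
# spend one expensive query to confirm the most promising candidate.
random.seed(0)
budget, spent = 30.0, 0.0
best_x, best_cheap = None, float("-inf")
while spent + COSTS["low"] <= budget - COSTS["high"]:
    x = random.uniform(0.0, 4.0)
    value, cost = query(x, "low")
    spent += cost
    if value > best_cheap:
        best_cheap, best_x = value, x
confirmed, cost = query(best_x, "high")  # one high-fidelity confirmation
spent += cost
print(f"spent {spent:.1f} of {budget:.1f}, best x = {best_x:.2f}")
```

The rest of the chapter replaces this ad hoc spend-cheap-then-confirm heuristic with a principled approach: a GP that models all fidelities jointly, and a policy that weighs the information each query provides against its cost.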

9.1 Using low-fidelity approximations to study expensive phenomena

9.2 Multifidelity modeling with GPs

9.2.1 Formatting a multifidelity dataset

9.2.2 Training a multifidelity GP

9.3 Balancing information and cost in multifidelity optimization

9.3.1 Modeling the costs of querying different fidelities

9.3.2 Optimizing the amount of information per dollar to guide optimization

9.4 Measuring performance in multifidelity optimization

9.5 Exercise 1: Visualizing average performance in multifidelity optimization

9.6 Exercise 2: Multifidelity optimization with multiple low-fidelity approximations

Summary