7 Experimentation in action: Moving from prototype to MVP


This chapter covers

  • Techniques for hyperparameter tuning and the benefits of automated approaches
  • Execution options for improving the performance of hyperparameter optimization

In the preceding chapter, we explored the scenario of testing and evaluating potential solutions to a business problem focused on forecasting passengers at airports. We arrived at a decision on the model to use for the implementation (Holt-Winters exponential smoothing) but performed only a modicum of model tuning during the rapid prototyping phases.

Moving from experimental prototyping to MVP development is challenging. It requires a complete cognitive shift that is at odds with the work done up to this point. We’re no longer thinking about how to solve a problem and get a good result. Instead, we’re thinking about how to build a solution that solves the problem well and is robust enough that it isn’t constantly breaking. We need to shift focus to monitoring, automated tuning, scalability, and cost. We’re moving from science-focused work to the realm of engineering.

The first priority when moving from prototype to MVP is ensuring that a solution is tuned correctly. See the sidebar later in this chapter for additional details on why it’s so critical to tune models and why the seemingly optional settings in modeling APIs are important to test. The short sketch that follows previews what that tuning looks like in practice.
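To make this concrete, here is a minimal sketch, assuming Hyperopt and statsmodels are installed, of the style of automated tuning this chapter builds toward: using Hyperopt’s TPE search to tune the three smoothing parameters of a Holt-Winters model against a holdout period. The synthetic series, the search-space bounds, and the MSE holdout loss are illustrative assumptions, not the chapter’s actual implementation.

import numpy as np
from hyperopt import STATUS_FAIL, STATUS_OK, Trials, fmin, hp, tpe
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly passenger-style series: trend + yearly seasonality + noise
rng = np.random.default_rng(42)
n = 120
y = (
    np.linspace(100.0, 300.0, n)
    + 25.0 * np.sin(np.arange(n) * 2.0 * np.pi / 12.0)
    + rng.normal(0.0, 10.0, n)
)
train, holdout = y[:-24], y[-24:]

def objective(params):
    """Fit Holt-Winters with fixed smoothing parameters; score on the holdout."""
    try:
        fit = ExponentialSmoothing(
            train, trend="add", seasonal="add", seasonal_periods=12
        ).fit(
            smoothing_level=params["alpha"],
            smoothing_trend=params["beta"],
            smoothing_seasonal=params["gamma"],
            optimized=False,
        )
        mse = float(np.mean((fit.forecast(len(holdout)) - holdout) ** 2))
        return {"loss": mse, "status": STATUS_OK}
    except Exception:
        # Skip parameter combinations the model rejects instead of crashing
        return {"status": STATUS_FAIL}

space = {
    "alpha": hp.uniform("alpha", 0.01, 0.99),   # smoothing_level
    "beta": hp.uniform("beta", 0.01, 0.99),     # smoothing_trend
    "gamma": hp.uniform("gamma", 0.01, 0.99),   # smoothing_seasonal
}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=trials)
print(best)  # the best smoothing parameters TPE found

Note the pattern in the objective function: returning STATUS_FAIL for parameter combinations the model rejects lets the search continue rather than abort, which matters once hundreds of trials are running unattended.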

7.1 Tuning: Automating the annoying stuff

7.1.1 Tuning options

7.1.2 Hyperopt primer

7.1.3 Using Hyperopt to tune a complex forecasting problem

7.2 Choosing the right tech for the platform and the team

7.2.1 Why Spark?

7.2.2 Handling tuning from the driver with SparkTrials

7.2.3 Handling tuning from the workers with a pandas_udf

7.2.4 Using new paradigms for teams: Platforms and technologies

Summary