In the preceding chapter, we arrived at a solution to one of the most time-consuming and monotonous tasks that we face as ML practitioners: tuning models. With techniques in hand to handle the tedium of tuning, we can greatly reduce the risk of producing ML-backed solutions that are so inaccurate as to be worthless. In the process of applying those techniques, however, we quietly welcomed an enormous elephant into the room of our project: tracking.
Throughout the last several chapters, we have been required to retrain our time-series models each time we run inference. For the vast majority of other ML tasks, this won't be the case. Those other applications of modeling, both supervised and unsupervised, have periodic retraining events, between which each model is called for inference (prediction) many times.
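To make that contrast concrete, here is a minimal sketch of the two serving patterns. It uses scikit-learn's LinearRegression purely as a stand-in model; the function and variable names are illustrative and not part of the project we have been building.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Pattern A (our time-series chapters): refit on the full history for every forecast request.
def forecast_per_request(history: np.ndarray) -> float:
    X = np.arange(len(history)).reshape(-1, 1)
    model = LinearRegression().fit(X, history)        # retrained on every call
    return float(model.predict([[len(history)]])[0])  # one-step-ahead forecast

# Pattern B (most other ML tasks): fit once per retraining event,
# then serve many prediction requests from that same fitted artifact.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 3)), rng.normal(size=100)
model = LinearRegression().fit(X_train, y_train)      # periodic retraining event

for X_request in rng.normal(size=(5, 3)):             # many inference calls in between
    prediction = model.predict(X_request.reshape(1, -1))
```

In pattern B, the fitted model is the long-lived artifact, which is exactly why keeping track of which version was trained, when, and with what parameters becomes a problem worth solving.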