Chapter 9

9 Time series and state-space models: Evolving a Bayesian belief over time


This chapter covers

  • The state space representation
  • Filters and the predict–update cycle
  • The Kalman filter

So far, we have treated every learning and modeling problem as a one-shot procedure: we start with a prior belief, and after observing some data, we update that belief to a posterior, and inference ends there. Outside the context of Bayesian probability, this is also often how we build predictive models in practice: we initialize the parameters of the model, train for a while, and then evaluate the result (test accuracy, the loss curve, the confusion matrix, …) as the final step.
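As a concrete (hypothetical) illustration of this one-shot pattern, consider a Beta–Binomial coin-flipping model, where the conjugate prior makes the update a single arithmetic step: we observe all the data at once and report the posterior as the final answer.

```python
# One-shot Bayesian update with a Beta-Binomial model (illustrative example).
# Beta(alpha, beta) prior over a coin's heads probability.
alpha, beta = 1.0, 1.0  # uniform prior

# Observe the entire dataset in one shot: 7 heads out of 10 flips.
heads, tails = 7, 3

# Conjugate update: the posterior is Beta(alpha + heads, beta + tails).
alpha_post, beta_post = alpha + heads, beta + tails

# Posterior mean = 8 / 12, roughly 0.667; inference ends here.
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior mean: {posterior_mean:.3f}")
```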

However, in many cases, learning is an iterative procedure: we continually generate predictions, observe the ground truth from the real world, update our belief, and repeat the process as new data arrive. In these scenarios, we often work with time series, which are sequences of data ordered by time. Instead of thinking about what the model believes after seeing all the data (the data aren't always available in their entirety, but are revealed sequentially, as in a time series), we need to account for how the model's belief changes as information arrives. Instead of asking "What does the model predict at the end?" we ask "What does the model predict at each step along the way?"
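The same Beta–Binomial coin model can illustrate this sequential style (again a hypothetical sketch, not the chapter's model): observations arrive one at a time, and after each one the posterior becomes the prior for the next step, so we can inspect the belief at every point along the way.

```python
# Sequential Bayesian updating: the posterior at step t is the prior at t+1.
alpha, beta = 1.0, 1.0  # initial prior over the heads probability

stream = [1, 0, 1, 1, 0, 1, 1]  # 1 = heads, 0 = tails, revealed one at a time
for t, x in enumerate(stream, start=1):
    # Predict before seeing the observation: the current mean of the belief.
    predicted_p = alpha / (alpha + beta)
    # Update: fold the new observation into the belief.
    alpha += x
    beta += 1 - x
    print(f"step {t}: predicted p = {predicted_p:.3f}")
```

Note that after the last observation, the belief is identical to the one-shot posterior on the same data; what sequential processing adds is the trajectory of intermediate predictions, which is exactly what filters formalize.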

Setting up: Yearly Mauna Loa CO₂ concentration

The state space formulation

Filters as algorithms for time series

Bayesian filters

The random walk model

The state transition model

The observation model

The predict–update cycle

A filter in action

Controlling filtering behavior

Multivariate filters

Actively modeling changes in CO₂ concentration

Multivariate filters in action

Advanced filters and real-world applications

Summary