7 Forecasting non-stationary time series

 

This chapter covers

  • Examining the autoregressive integrated moving average model, or ARIMA(p,d,q)
  • Applying the general modeling procedure for non-stationary time series
  • Forecasting using the ARIMA(p,d,q) model

In chapters 4, 5, and 6 we covered the moving average model, MA(q); the autoregressive model, AR(p); and the ARMA model, ARMA(p,q). We saw that these models can only be applied to stationary time series, which required us to transform our data, mainly by differencing, and to test for stationarity using the augmented Dickey-Fuller (ADF) test. In the examples we worked through, each model produced forecasts of the differenced values, forcing us to reverse the transformation to bring the forecasts back to the scale of the original data.
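The following is a minimal sketch of that workflow using statsmodels. The simulated random walk stands in for the datasets from earlier chapters, and the ARMA(2,2) orders are arbitrary illustration values, not the orders chosen in those chapters.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.statespace.sarimax import SARIMAX

np.random.seed(42)
# Simulated non-stationary series (a random walk), standing in for real data
series = pd.Series(np.cumsum(np.random.normal(size=200)))

# Step 1: difference the series and check stationarity with the ADF test
diff = series.diff().dropna()
adf_stat, p_value, *_ = adfuller(diff)
print(f"ADF statistic: {adf_stat:.2f}, p-value: {p_value:.4f}")

# Step 2: fit an ARMA(2,2) on the differenced data (the middle term is 0)
arma_fit = SARIMAX(diff, order=(2, 0, 2)).fit(disp=False)

# Step 3: forecast differenced values, then reverse the transformation by
# cumulatively summing the forecasts onto the last observed value
diff_forecast = arma_fit.forecast(steps=10)
forecast = series.iloc[-1] + diff_forecast.cumsum()
print(forecast)
```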

Now we’ll add another component to the ARMA(p,q) model so we can forecast non-stationary time series. This component is the integration order, denoted by the variable d, which gives us the autoregressive integrated moving average (ARIMA) model, or ARIMA(p,d,q). With this model we can work with non-stationary time series directly, skipping the separate steps of modeling on differenced data and inverse transforming the forecasts.
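As a hedged sketch of the same task with ARIMA(p,d,q): setting d = 1 tells the model to difference the series internally, so it is fit on the raw data and the forecasts come back on the original scale. The simulated series and the (2,1,2) orders are again illustrative assumptions, not values from the book's datasets.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

np.random.seed(42)
# Same simulated random walk as in the previous sketch
series = pd.Series(np.cumsum(np.random.normal(size=200)))

# Fit directly on the raw, non-stationary series: the middle term d = 1
# makes the model apply the differencing internally
arima_fit = SARIMAX(series, order=(2, 1, 2)).fit(disp=False)

# Forecasts are already on the original scale; no inverse transform is needed
forecast = arima_fit.forecast(steps=10)
print(forecast)
```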

7.1 Defining the autoregressive integrated moving average model

7.2 Modifying the general modeling procedure to account for non-stationary series

7.3 Forecasting a non-stationary time series

7.4 Next steps

7.5 Exercises

7.5.1 Apply the ARIMA(p,d,q) model on the datasets from chapters 4, 5, and 6

Summary