8 Forecasting as a language task

 

This chapter covers

  • Framing a forecasting problem as a language task
  • Forecasting with large language models (LLMs)
  • Cross-validation with LLMs
  • Anomaly detection with LLMs

In previous chapters, we have discovered and experimented with many large models built specifically for time series forecasting. Still, as highlighted by the researchers behind Chronos, predicting the next value of a time series is analogous to predicting the next word in a sentence. While Chronos is a framework for retraining existing LLMs for forecasting, here we experiment with prompting LLMs directly to solve forecasting tasks.

This approach has already been studied under the name PromptCast [1]. The idea is very simple: turn a numerical prediction task into a natural language task that LLMs can understand, as illustrated in figure 8.1.
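To make the input side concrete, here is a minimal sketch of such a reframing in Python. The make_prompt helper and its template wording are hypothetical, for illustration only; they are not the exact template used by PromptCast.

# A minimal sketch of the input side: turning a numeric window into a
# natural-language prompt. The template wording here is hypothetical,
# not the exact one used in PromptCast.
def make_prompt(values: list[float], horizon: int) -> str:
    history = ", ".join(f"{v:.1f}" for v in values)
    return (
        f"The last {len(values)} values of the series were: {history}. "
        f"Predict the next {horizon} values. Answer with numbers only."
    )

print(make_prompt([112.0, 118.0, 132.0, 129.0], horizon=2))
# The last 4 values of the series were: 112.0, 118.0, 132.0, 129.0.
# Predict the next 2 values. Answer with numbers only.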

Figure 8.1 General steps of forecasting with large language models. The values of the input sequence must be formatted as a prompt. This prompt is sent to the LLM, which outputs a string. The string must then be processed to extract the future values.

We can see that framing a forecasting problem as a language task involves processing both the input and the output. First, the values of the input series are formatted as a prompt. Then, we feed this prompt to the language model, which outputs a string. Finally, this string is processed to extract the predictions.
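The output side is the mirror image of the input side: the model replies with free-form text, and we must pull the forecast values back out. Here is a minimal parsing sketch, assuming the reply contains the predictions as plain decimal numbers; the parse_forecast helper is hypothetical, and real replies may need more defensive handling.

import re

# A minimal sketch of the output side: extracting the forecast values
# from the model's free-form reply. Assumes the reply contains the
# predictions as plain decimal numbers.
def parse_forecast(reply: str, horizon: int) -> list[float]:
    numbers = re.findall(r"-?\d+(?:\.\d+)?", reply)
    if len(numbers) < horizon:
        raise ValueError(f"expected {horizon} values, got {len(numbers)}")
    return [float(n) for n in numbers[:horizon]]

print(parse_forecast("The next two values are 135.2 and 140.8.", horizon=2))
# [135.2, 140.8]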

8.1 Quick overview of LLMs and prompting techniques

8.1.1 Exploring Flan-T5 and Llama-3.2

8.1.2 Understanding the basics of prompting

8.2 Forecasting with Flan-T5

8.2.1 Cross-validation with Flan-T5

8.2.2 Forecasting with exogenous features using Flan-T5

8.2.3 Anomaly detection with Flan-T5

8.3 Forecasting with Llama-3.2

8.3.1 Cross-validation with Llama-3.2

8.3.2 Anomaly detection with Llama-3.2

8.4 Next steps

8.5 Summary