8 Forecasting as a language task

This chapter covers

  • Framing a forecasting problem as a language task
  • Forecasting with large language models
  • Cross-validating with LLMs
  • Detecting anomalies with LLMs

In previous chapters, we explored and experimented with several large models built specifically for time-series forecasting. Still, as the researchers behind Chronos highlighted, predicting the next value of a time series is analogous to predicting the next word in a sentence. While Chronos is a framework for retraining existing language model architectures on time-series data, in this chapter we experiment with prompting LLMs directly to solve forecasting tasks.

This approach has already been studied under the name PromptCast [1]. The idea is simple: turn a numerical prediction task into a natural language task that LLMs can understand.

Framing a forecasting problem as a language task involves processing both the input and the output. First, the values of the input series must be formatted as a prompt. We then feed this prompt to the language model, which in turn outputs a string. That string must be parsed to extract the predictions. Thus, these models make sense only if we already have access to an LLM, need a natural language interface, and can construct robust prompts to guide the model.
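The two processing steps above can be sketched in a few lines of Python. This is a minimal illustration, not the exact template used later in the chapter: the helper names `series_to_prompt` and `parse_forecast`, the prompt wording, and the mock model reply are all hypothetical.

```python
import re

def series_to_prompt(values, horizon):
    # Step 1: format the input series as a natural language prompt
    # (hypothetical template; the chapter's prompts may differ).
    history = ", ".join(f"{v:g}" for v in values)
    return (
        f"The values of the series are: {history}. "
        f"Predict the next {horizon} values, separated by commas."
    )

def parse_forecast(text, horizon):
    # Step 2: the LLM returns free-form text, so we extract the
    # numbers from the output string to recover the predictions.
    numbers = [float(m) for m in re.findall(r"-?\d+(?:\.\d+)?", text)]
    return numbers[:horizon]

prompt = series_to_prompt([12, 15, 14, 18], horizon=2)
# A real pipeline would send `prompt` to an LLM; here we parse a mock reply.
forecast = parse_forecast("The next values are 20.5, 22.", horizon=2)
```

Robust parsing matters in practice: nothing forces the model to answer with exactly `horizon` numbers, so the extraction step should handle extra text, missing values, or malformed output gracefully.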

8.1 Overview of LLMs and prompting techniques

8.1.1 Exploring Flan-T5 and Llama-3.2

8.1.2 Understanding the basics of prompting

8.2 Forecasting with Flan-T5

8.2.1 Function to forecast with Flan-T5

8.2.2 Forecast with Flan-T5

8.3 Cross-validation with Flan-T5

8.3.1 Running cross-validation

8.3.2 Evaluating Flan-T5

8.4 Forecasting with exogenous features with Flan-T5

8.4.1 Including exogenous features with Flan-T5

8.4.2 Extracting future values of exogenous variables

8.4.3 Cross-validating with external features

8.4.4 Evaluating Flan-T5 forecasts with exogenous features

8.5 Detecting anomalies with Flan-T5

8.5.1 Defining a function for anomaly detection with Flan-T5

8.5.2 Running anomaly detection

8.6 Forecasting with Llama-3.2