This chapter covers
- Framing a forecasting problem as a language task
- Forecasting with large language models
- Cross-validating with LLMs
- Detecting anomalies with LLMs
In previous chapters, we explored and experimented with large time models built specifically for time-series forecasting. Still, as the researchers behind Chronos highlighted, predicting the next value of a time series is analogous to predicting the next word in a sentence. Whereas Chronos is a framework for retraining existing language models for forecasting, in this chapter we experiment with prompting LLMs directly to solve forecasting tasks.
This approach has already been studied under the name PromptCast [1]. The idea is simple: turn a numerical prediction task into a natural language task that an LLM can understand.
Framing a forecasting problem as a language task involves processing both the input and the output. First, the values of the input series must be formatted as a prompt. We then feed this prompt to the language model, which in turn outputs a string. This string must be parsed to extract the predictions. Thus, we should use these models only if we already have access to an LLM, need a natural language interface, and know how to construct robust prompts to guide the model.
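To make these two processing steps concrete, here is a minimal sketch of the round trip: formatting a series of values as a prompt, and parsing a model's textual reply back into numbers. The prompt template and the `series_to_prompt` and `parse_response` helpers are illustrative assumptions, not the exact PromptCast templates, and the LLM call itself is simulated with a hard-coded reply.

```python
import re

def series_to_prompt(values, horizon=1):
    """Format historical values as a natural language question.

    This template is a hypothetical example in the spirit of
    PromptCast, not the paper's exact wording.
    """
    history = ", ".join(f"{v:.1f}" for v in values)
    return (
        f"The values for the last {len(values)} days were {history}. "
        f"What will the values be for the next {horizon} days?"
    )

def parse_response(text, horizon):
    """Extract the forecasted numbers from the model's reply.

    The reply may contain other numbers (e.g., 'the next 2 days'),
    so we keep only the last `horizon` numbers found.
    """
    numbers = [float(m) for m in re.findall(r"-?\d+(?:\.\d+)?", text)]
    return numbers[-horizon:]

prompt = series_to_prompt([21.0, 22.5, 23.1], horizon=2)
# A real LLM call would go here; we simulate a plausible reply.
reply = "The values for the next 2 days will be 23.8 and 24.2."
forecast = parse_response(reply, horizon=2)
```

Note that the parsing step is the fragile part of this pipeline: an LLM is free to phrase its answer however it likes, so robust extraction logic (or strict output-format instructions in the prompt) is essential.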