9 Reprogram an LLM for forecasting
This chapter covers
- Discovering the architecture of Time-LLM
- Forecasting with Time-LLM
- Applying Time-LLM for anomaly detection
In the previous chapter, we applied large language models directly to forecasting tasks. While it is possible to produce forecasts and detect anomalies with LLMs, they remain ill-suited to time series forecasting because they were not trained for that type of task.
To overcome this hurdle, researchers have proposed Time-LLM, a framework that reprograms existing large language models for time series forecasting.
Time-LLM is therefore not a foundation model itself, but a tool that lets us take an off-the-shelf LLM and repurpose it for time series forecasting. As we will see in the next section of this chapter, Time-LLM is also effectively a multimodal model: we can feed it both historical time series data and a textual prompt that provides context about our series.
Multimodality
A model is termed multimodal when it accepts different types of input data. For example, if we can feed both an image and text to a model to get an output, it is a multimodal model. Time-LLM enables multimodality because we can feed it both time series data and a textual prompt to obtain forecasts.
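To make this multimodal input more concrete, here is a minimal sketch of how a textual prompt might be assembled from a dataset description and simple statistics of the input window, in the spirit of Time-LLM's prompt-as-prefix approach. The function name and the prompt wording are illustrative assumptions, not Time-LLM's actual API.

```python
def build_prompt(description: str, window: list[float], horizon: int) -> str:
    """Illustrative sketch: combine a dataset description with
    input-window statistics to form the textual part of a
    multimodal input (hypothetical, not Time-LLM's real API)."""
    # Compute first differences to summarize the overall trend
    diffs = [b - a for a, b in zip(window, window[1:])]
    trend = "upward" if sum(diffs) > 0 else "downward"
    return (
        f"Dataset description: {description} "
        f"Task: forecast the next {horizon} steps given the previous "
        f"{len(window)} steps. "
        f"Input statistics: min value {min(window)}, "
        f"max value {max(window)}, overall trend is {trend}."
    )

# Example: a short window of monthly airline passenger counts
prompt = build_prompt(
    "Monthly airline passenger counts.",
    [112.0, 118.0, 132.0, 129.0],
    horizon=2,
)
print(prompt)
```

The numerical window is still fed to the model as time series data; the generated prompt simply enriches it with context the LLM can exploit.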
As we will see, this model is especially useful when we want to enrich our forecasts with contextual information and when we have enough computing resources to reprogram an LLM.