This chapter covers
- Discovering the architecture of Time-LLM
- Forecasting with Time-LLM
- Applying Time-LLM to anomaly detection
In chapter 8, we applied large language models (LLMs) directly to forecasting tasks. Although LLMs can produce forecasts and detect anomalies, they remain ill-suited to time-series forecasting because they were never trained specifically for this type of task. To overcome this hurdle, researchers have proposed Time-LLM, a framework that reprograms existing large language models for time-series forecasting [1].
Time-LLM is not a foundation model but a tool that lets us repurpose off-the-shelf LLMs for time-series forecasting. As we'll see in this chapter, Time-LLM is effectively a multimodal model: we can feed it both historical time-series data and a textual prompt that provides context about the series, and obtain forecasts in return.
A model is multimodal when it accepts more than one type of input data. For example, a model that takes both an image and text to produce an output is multimodal. This capability is especially useful when we want to enrich our forecasts with contextual information and have enough computing resources to reprogram an LLM.
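To make this interface concrete, here is a minimal sketch of what forecasting with Time-LLM looks like, assuming the TimeLLM model from Nixtla's neuralforecast library (exact parameter names may vary between library versions). The prompt_prefix string supplies the textual modality, while the dataframe of historical values supplies the time-series modality.

```python
# A minimal sketch, assuming the TimeLLM model from Nixtla's
# neuralforecast library; exact arguments may differ across versions.
from neuralforecast import NeuralForecast
from neuralforecast.models import TimeLLM
from neuralforecast.utils import AirPassengersDF  # toy monthly series

# Textual modality: a prompt describing the series, which Time-LLM
# combines with the time-series embeddings inside the backbone LLM.
prompt_prefix = (
    "The dataset contains monthly totals of international airline "
    "passengers, with a clear upward trend and yearly seasonality."
)

model = TimeLLM(
    h=12,                         # forecast horizon: 12 months ahead
    input_size=36,                # length of the historical input window
    prompt_prefix=prompt_prefix,  # contextual text fed alongside the series
    batch_size=16,
    max_steps=100,                # keep training short for illustration
)

nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=AirPassengersDF)        # time-series modality: historical values
forecasts = nf.predict()          # dataframe with a TimeLLM forecast column
print(forecasts.head())
```

Note that reprogramming leaves the backbone LLM's own weights frozen and trains only the layers that map the time series into and out of the language model, which is what distinguishes Time-LLM from fine-tuning.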