Preface

In October 2023, I used TimeGPT, one of the foundation forecasting models that we explore in this book, for the first time. After running it on a project, I found that it made better predictions than the models I had carefully built and tuned on my own data.

That’s when I knew that large time models were about to change the field of time-series forecasting. A pretrained model not only performed better than my own but was also much faster and more convenient. This is the ultimate promise of foundation models: a single model enables you to deliver state-of-the-art forecasting performance without the hassle of training from scratch or maintaining a separate model for each use case.

Since then, many more models have been proposed and developed, and a major shift has occurred in the scientific community, where a great deal of effort now goes into building better foundation forecasting models. Just as data professionals are expected to know large language models (LLMs), I anticipate that large time models will become must-know technology for practitioners, so I set out to write a book that brings readers up to speed.

This book explores the major contributions to large time models. It can’t cover everything that has been done or anticipate everything that will happen next, of course, but it will enable you to use and optimize current large models. I have included the most recent refinements to the methods covered in this book to ensure that what you read is as up to date as possible.