preface

Though it hardly seems possible, in the two years since the release of the first edition of Introduction to Generative AI, generative artificial intelligence has only grown in the public consciousness. Large language models (LLMs) were once of interest mostly to developers of natural language processing applications, like the two of us; now, each new model release is covered breathlessly in the tech press. LLMs and multimodal models have already transformed the creation of text, images, audio, and video, and each passing day brings new applications that test the limits of AI capabilities.

In this second edition, we again aim to build an understanding of how LLMs are trained, the data they are trained on, and the algorithms that shape their final output, which is increasingly indistinguishable from what a human might produce. We have added new material on reasoning models and AI agents, among other updates that reflect the state of the industry today. Rather than uncritically reporting on these developments, however, we highlight their nuances and implications alongside their fascinating technical foundations.