1 Understanding large language models


This chapter covers

  • High-level explanations of the fundamental concepts behind large language models (LLMs)
  • Insights into the transformer architecture from which LLMs are derived
  • A plan for building an LLM from scratch

Large language models (LLMs), such as the models powering OpenAI’s ChatGPT, are deep neural networks developed over the past few years, and they ushered in a new era for natural language processing (NLP). Before the advent of LLMs, traditional methods excelled at categorization tasks such as email spam classification and at straightforward pattern recognition that could be captured with handcrafted rules or simpler models. However, they typically underperformed in language tasks that demanded complex understanding and generation abilities, such as parsing detailed instructions, conducting contextual analysis, and creating coherent, contextually appropriate original text. For example, previous generations of language models could not write an email from a list of keywords, a task that is trivial for contemporary LLMs.
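
To make the contrast concrete, here is a minimal sketch of the kind of task traditional methods handled well: a bag-of-words spam classifier. The training emails and labels are toy examples invented purely for illustration, not data from any real system.

# A minimal sketch of a traditional NLP approach: a bag-of-words
# spam classifier. The training emails and labels below are toy
# examples invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "win a free prize now",               # spam
    "claim your free money today",        # spam
    "meeting rescheduled to 3pm",         # not spam
    "please review the attached report",  # not spam
]
train_labels = ["spam", "spam", "ham", "ham"]

# Count word occurrences, then fit a naive Bayes classifier on the counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["free prize money"]))       # ['spam']
print(model.predict(["see the report at 3pm"]))  # ['ham']

A classifier like this can only choose among a fixed set of labels; it has no mechanism for producing new text. That is why generation tasks such as writing an email from keywords were out of reach before LLMs.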

LLMs have remarkable capabilities to understand, generate, and interpret human language. However, it’s important to clarify that when we say language models “understand,” we mean that they can process and generate text in ways that appear coherent and contextually relevant, not that they possess human-like consciousness or comprehension.

1.1 What is an LLM?

1.2 Applications of LLMs

1.3 Stages of building and using LLMs

1.4 Introducing the transformer architecture

1.5 Utilizing large datasets

1.6 A closer look at the GPT architecture

1.7 Building a large language model

Summary
