1 Understanding reasoning models

 

This chapter covers

  • What "reasoning" means specifically in the context of LLMs
  • How reasoning differs from pattern matching
  • The conventional pre-training and post-training stages of LLMs
  • Key approaches to improving reasoning abilities in LLMs
  • Why building reasoning models from scratch can improve our understanding of their strengths, limitations, and practical trade-offs

Welcome to the next stage of large language models (LLMs): reasoning. LLMs have transformed how we process and generate text, but their success has been driven largely by statistical pattern recognition. Recent advances in reasoning methodologies, however, enable LLMs to tackle more complex tasks, such as solving logical puzzles and advanced math problems that involve multi-step arithmetic. Importantly, reasoning is not just an academic pursuit; it is also an essential technique for making "agentic" AI practical. Understanding these reasoning methodologies is the central focus of this book.

In Build a Reasoning Model (From Scratch), you will learn the inner workings of LLM reasoning methods through a hands-on, code-first approach. Starting from a pre-trained LLM, we will extend it step by step with reasoning capabilities, implementing each component ourselves, from scratch, to see how these methods work in practice.

1.1 Defining reasoning in the context of LLMs

1.2 Understanding the standard LLM training pipeline

1.3 Modeling language through pattern matching

1.4 Simulating reasoning without explicit rules

1.5 Improving reasoning with training and inference techniques

1.6 Why build reasoning models from scratch?

1.7 A roadmap to reasoning models from scratch

1.8 Summary