
4 Context Engineering: Optimizing context for AI Agents


This chapter covers

  • Why imprecise context leads to errors in AI agents
  • The difference between prompt engineering and context engineering
  • Strategies for writing, selecting, compressing, and isolating context
  • Designing effective planning, execution, and debugging loops with AI agents
  • Proven optimization techniques for achieving high code accuracy

An LLM doesn’t read minds - it feeds on the crumbs you throw at it.

You have probably heard of prompt engineering, which focuses on precisely formulating instructions for language models. Unfortunately, prompt engineering alone isn’t sufficient. It runs into limitations and so-called “context failures,” which are the primary cause of AI agent breakdowns - not errors in the model itself.

The solution to this problem is context engineering. This discipline provides strategies for writing, selecting, compressing, and isolating context, as well as for designing effective planning, execution, and debugging loops with AI agents. You will see how to apply these proven techniques for high code accuracy, through comprehensive examples with an easy-to-consume structure.
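To make the selecting and compressing strategies concrete before we dive in, here is a minimal sketch: rank candidate context snippets by relevance to the task, then pack the best ones into a fixed token budget. The keyword-overlap scoring and whitespace token counting are toy stand-ins; a real agent would use embeddings and a proper tokenizer.

```python
def score(snippet: str, task: str) -> int:
    # Naive relevance: count snippet words that also appear in the task.
    task_words = set(task.lower().split())
    return sum(1 for w in snippet.lower().split() if w in task_words)

def token_count(text: str) -> int:
    # Crude proxy for a real tokenizer such as tiktoken.
    return len(text.split())

def build_context(snippets: list[str], task: str, budget: int) -> str:
    # Select: most relevant snippets first (stable sort keeps original
    # order for ties).
    ranked = sorted(snippets, key=lambda s: score(s, task), reverse=True)
    # Compress: add snippets only while they fit in the token budget.
    chosen, used = [], 0
    for s in ranked:
        cost = token_count(s)
        if used + cost <= budget:
            chosen.append(s)
            used += cost
    return "\n".join(chosen)

snippets = [
    "def parse_date(s): ...  # raises ValueError on empty input",
    "CHANGELOG: bumped linter version",
    "test_parse_date_empty fails with ValueError",
]
context = build_context(snippets, task="fix ValueError in parse_date", budget=15)
```

With a 15-token budget, the two snippets mentioning `ValueError` make it into the context and the irrelevant changelog entry is dropped - the essence of selecting and compressing before the prompt ever reaches the model.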

4.1 Vibe Coding traps: Garbage in, garbage out

To understand the kinds of problems we’re talking about, consider a typical scenario. A developer - let’s call him Bob (convention over configuration FTW) - is struggling with a bug in his code. His first instinct is to ask the AI for help.

Bob’s prompt:

4.2 Context Vacuum: First potential mistake

4.2.1 From single-shot to multi-shot examples

4.2.2 Good multi-shot prompts vs. bad multi-shot prompts

4.3 Building context together with LLMs

4.3.1 Using the Model Context Protocol to instrument LLMs

4.3.2 Building context for a UI component with MCPs

4.3.3 Accessing external knowledge through MCPs

4.3.4 Deep integration with the Language Server Protocol

4.4 Context Rot: Is too much context a bad thing?

4.4.1 The “lost in the middle” problem

4.4.2 Manual reordering: the "sandwich" method

4.5 Using AI coding tools to manage context

4.5.1 Automated reordering using Retrieval-Augmented Generation (RAG)

4.5.2 Context anchor: a to-do list for the LLM

4.5.3 Context Compaction

4.5.4 …can I be lousy again if I’m using Coding AI?

4.6 Context through reasoning

4.6.1 Chain-of-Thought: forcing the LLM to “show its work”

4.6.2 Chain-of-Verification: Internal fact-checking loop

4.6.3 How to introduce self-correction?

4.6.4 Is Reasoning always THE solution?

4.7 Summary