Chapter Three

3 Context fabric: optimizing context for AI agents


This chapter covers

  • Why imprecise context leads to errors in AI agents
  • The difference between prompt engineering and context engineering
  • Strategies for writing, selecting, compressing, and isolating context
  • Designing effective planning, execution, and debugging loops with AI agents
  • Proven optimization techniques for achieving high code accuracy

An LLM doesn’t read minds; it feeds on the crumbs you throw at it.

You’ve probably heard of prompt engineering, which focuses on precisely formulating instructions for language models. Unfortunately, prompt engineering alone isn’t sufficient. It runs into limits and so-called “context failures,” which are the primary cause of AI agent breakdowns, not errors in the model itself.

The solution to this problem is context engineering. Context engineering is the discipline of designing, selecting, structuring, and maintaining the information that an AI system receives as input (its “context”) so that it can produce reliable, high-quality output. Where prompt engineering focuses on how you phrase a question, context engineering focuses on the entire information environment surrounding that question: what data the model sees, in what order, at what level of detail, and from which sources.
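The distinction can be sketched in code. The toy example below is a minimal illustration, not a real API: `ask_llm` and `gather_context` are hypothetical helpers, and the repository snippets are placeholders. The point is only where the engineering effort goes, into the wording of one instruction versus into the surrounding information environment.

```python
def ask_llm(messages):
    """Stand-in for a chat-completion call; here it just reports input size."""
    total = sum(len(m["content"]) for m in messages)
    return f"(model sees {total} chars of input)"

# Prompt engineering: effort goes into phrasing a single instruction.
prompt_only = [
    {"role": "user",
     "content": "You are an expert Python developer. Fix the bug in parse_date()."},
]

# Context engineering: effort goes into what the model sees, in what
# order, and at what level of detail, before the question is even asked.
def gather_context(task):
    return [
        {"role": "system", "content": "You are a maintainer of this repository."},
        {"role": "user", "content": "Relevant source:\ndef parse_date(s): ..."},
        {"role": "user", "content": "Failing test output:\nAssertionError: ..."},
        {"role": "user", "content": task},
    ]

engineered = gather_context("Fix the bug in parse_date().")
print(ask_llm(prompt_only))   # one cleverly worded message
print(ask_llm(engineered))    # the same ask, embedded in curated context
```

Both calls pose the same task; only the second gives the model the source, the failure evidence, and a role grounded in the project, which is the whole subject of this chapter.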

3.1 Vibe coding traps: garbage in, garbage out

3.2 Context vacuum: the first potential mistake

3.2.1 From single-shot to multi-shot examples

3.2.2 Good multi-shot prompts vs. bad multi-shot prompts

3.3 Building context together with LLMs

3.3.1 Using Model Context Protocol to instrument LLMs

3.3.2 Building context for a UI component with MCPs

3.3.3 Accessing external knowledge through MCPs

3.3.4 Deep integration with Language Server Protocol

3.3.5 MCP governance

3.4 Context rot: is too much context a bad thing?

3.4.1 “Lost in the middle” problem

3.4.2 Manual reordering: the “sandwich” method

3.5 Using AI coding tools to manage context

3.5.1 Automated reordering using Retrieval-Augmented Generation (RAG)

3.5.2 Context anchor: todo list for LLM

3.5.3 Beyond compaction: meta-prompting and state externalization

3.5.4 …can I be lousy again if I’m using coding AI?

3.6 Context through reasoning

3.6.1 Chain-of-Thought: forcing the LLM to “show its work”

3.6.2 Chain-of-Verification: internal fact-checking loop

3.6.3 How to introduce self-correction?

3.6.4 Is reasoning always THE solution?

3.7 Summary