3 Context fabric: optimizing context for AI agents
This chapter covers
- Why imprecise context leads to errors in AI agents
- The difference between prompt engineering and context engineering
- Strategies for writing, selecting, compressing, and isolating context
- Designing effective planning, execution, and debugging loops with AI agents
- Proven optimization techniques for achieving high code accuracy
An LLM doesn’t read minds - it feeds on the crumbs you throw at it.
You’ve probably heard about prompt engineering, which focuses on precisely formulating instructions for language models. Unfortunately, prompt engineering alone isn’t sufficient. On its own it runs into hard limits and produces so-called “context failures,” which are the primary cause of AI agent breakdowns - not errors in the model itself.
The solution to this problem is context engineering. Context engineering is the discipline of designing, selecting, structuring, and maintaining the information that an AI system receives as input - its "context" - so that it can produce reliable, high-quality output. Where prompt engineering focuses on how you phrase a question, context engineering focuses on the entire information environment surrounding that question: what data the model sees, in what order, at what level of detail, and from which sources.
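The distinction can be sketched in a few lines of code. The example below is purely illustrative - the function names, the message format, and the `topic`/`priority` fields are assumptions for the sketch, not part of any specific library. Prompt engineering changes only the wording of the question; context engineering decides what information surrounds it, in what order, and at what level of detail.

```python
def prompt_engineering(question: str) -> str:
    # Prompt engineering: rephrase the question itself.
    return f"You are an expert. Answer step by step: {question}"

def context_engineering(question: str, sources: list[dict]) -> list[dict]:
    # Context engineering: shape the whole information environment -
    # what the model sees, in what order, at what level of detail.
    relevant = [s for s in sources if s["topic"] in question]    # select
    relevant.sort(key=lambda s: s["priority"])                   # order
    trimmed = [{"role": "user", "content": s["text"][:500]}      # compress
               for s in relevant]
    system = {"role": "system",
              "content": "Answer only from the provided sources."}
    return [system, *trimmed, {"role": "user", "content": question}]
```

The same question can produce very different answers depending on which sources make it into the message list and in what order - that surrounding structure, not the question's wording, is what context engineering controls.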