Part 2: Summarization

This part is all about transforming too much information into something clear, concise, and useful. You’ll start by learning practical techniques for summarizing long or numerous documents, from massive reports to folders full of mixed file types, while staying within an LLM’s context limits. You’ll explore when to use strategies like map-reduce or refine, and how to connect everything with the LangChain Expression Language (LCEL) to produce fast, accurate, and maintainable summaries.
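As a preview of the map-reduce strategy, the sketch below shows the general shape of such a pipeline in LCEL: a map chain summarizes each chunk in parallel, and a reduce chain merges the partial summaries into one result. This is a minimal illustration rather than the chapters' implementation; the model name, prompt wording, and pre-split chunks are assumptions, and it presumes the langchain-openai and langchain-core packages are installed.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Assumption: any chat model will do; gpt-4o-mini is only a placeholder.
llm = ChatOpenAI(model="gpt-4o-mini")

# Map step: summarize each chunk independently.
map_chain = (
    ChatPromptTemplate.from_template(
        "Summarize this passage in a few sentences:\n\n{chunk}"
    )
    | llm
    | StrOutputParser()
)

# Reduce step: merge the partial summaries into a single summary.
reduce_chain = (
    ChatPromptTemplate.from_template(
        "Combine these partial summaries into one concise summary:\n\n{summaries}"
    )
    | llm
    | StrOutputParser()
)

def map_reduce_summarize(chunks: list[str]) -> str:
    # batch() runs the map step over all chunks concurrently.
    partial = map_chain.batch([{"chunk": c} for c in chunks])
    return reduce_chain.invoke({"summaries": "\n\n".join(partial)})
```

By contrast, the refine strategy replaces the parallel map step with a sequential pass that updates a running summary chunk by chunk, trading speed for continuity across the document.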

From there, you’ll move beyond simple “summarize this text” tasks to more advanced, research-oriented applications. You’ll build a summarization engine that searches the web, collects relevant information, and composes a coherent, well-grounded report—all powered by prompt engineering and modular chains. Finally, you’ll evolve your summarization workflows into agentic systems using LangGraph, introducing explicit state management and conditional branching so your pipelines can adapt intelligently, scale smoothly, and serve as a foundation for more autonomous AI agents later in the book.
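To hint at what the LangGraph material looks like, here is a heavily simplified sketch of a summarization graph with explicit state and a conditional edge. The state fields, node names, and placeholder "summaries" are illustrative assumptions; in a real pipeline the nodes would invoke LLM chains such as the map-reduce sketch above.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class SummaryState(TypedDict):
    chunks: list[str]       # input documents, already split
    summaries: list[str]    # partial summaries accumulated so far
    final_summary: str      # result produced by the combine node

def summarize_next_chunk(state: SummaryState) -> dict:
    # Placeholder node: a real graph would call an LLM chain here.
    chunk = state["chunks"][len(state["summaries"])]
    return {"summaries": state["summaries"] + [f"(summary of {chunk[:20]}...)"]}

def combine(state: SummaryState) -> dict:
    return {"final_summary": " ".join(state["summaries"])}

def route(state: SummaryState) -> str:
    # Conditional branching: keep summarizing until every chunk is covered.
    return "summarize" if len(state["summaries"]) < len(state["chunks"]) else "combine"

graph = StateGraph(SummaryState)
graph.add_node("summarize", summarize_next_chunk)
graph.add_node("combine", combine)
graph.set_entry_point("summarize")
graph.add_conditional_edges(
    "summarize", route, {"summarize": "summarize", "combine": "combine"}
)
graph.add_edge("combine", END)
app = graph.compile()

result = app.invoke({
    "chunks": ["first chunk...", "second chunk..."],
    "summaries": [],
    "final_summary": "",
})
print(result["final_summary"])
```

The explicit state object and the routing function are what make the pipeline agentic: every step reads and writes shared state, and the graph decides at runtime whether to loop back or move on, which is the pattern the later chapters build on for more autonomous agents.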