
1 Introduction to AI agents and applications

 

This chapter covers

  • Core challenges in building LLM-powered applications
  • LangChain’s modular architecture and components
  • Patterns for engines, chatbots, and agents
  • Foundations of prompt engineering and RAG

Large Language Models (LLMs) like GPT, Gemini, and Claude have moved from novelty to necessity. LLMs enable applications to answer complex questions, generate tailored content, summarize long documents, and coordinate actions across systems. More recently, LLMs have unlocked a new class of applications: AI agents. Agents take input in natural language, decide which tools or services to call, orchestrate multi-step workflows, and return results in a clear, human-friendly format.
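To make that loop concrete, here is a minimal, hypothetical sketch of the agent pattern just described: take a natural-language request, decide which tool to call, run it, and phrase the result. The tool registry and the decide() stub are purely illustrative stand-ins for what an LLM would do, not a real API.

```python
# Hypothetical sketch of the agent loop: decide -> call tool -> answer.
def get_weather(city: str) -> str:
    """Toy tool: a real agent would call a weather API here."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def decide(request: str) -> tuple[str, dict]:
    """Stand-in for the LLM's decision step: pick a tool and its arguments."""
    return "get_weather", {"city": request.split()[-1]}

def run_agent(request: str) -> str:
    tool_name, args = decide(request)              # 1. choose a tool
    observation = TOOLS[tool_name](**args)         # 2. call the tool
    return f"Here is what I found: {observation}"  # 3. phrase the answer

print(run_agent("What is the weather in Paris"))
```

Real agents replace the decide() stub with an LLM call and may repeat the loop several times, but the shape of the workflow is the same.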

Building AI applications and agent systems can be complex: you need to ingest and manage data, structure prompts, chain model calls reliably, and integrate external APIs and services. Fortunately, frameworks like LangChain, LangGraph, and LangSmith provide modular building blocks that eliminate boilerplate, promote best practices, and let you focus on application logic instead of low-level wiring. In this book, you'll learn how to design, build, and scale real LLM-based applications and agents using best-in-class tools and frameworks.
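As a small taste of what that looks like in practice, the sketch below composes a prompt template and a chat model into a single chain with LangChain's pipe syntax. It assumes the langchain-openai package is installed and an OPENAI_API_KEY is set; the model name is just an example.

```python
# Minimal LangChain sketch: a prompt template piped into a chat model.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template(
    "Summarize the following in one sentence: {text}"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # example model name

chain = prompt | llm  # compose prompt and model into a runnable chain
result = chain.invoke({"text": "LangChain offers modular building blocks for LLM apps."})
print(result.content)
```

The framework handles prompt formatting, the API call, and output parsing, so the application code stays focused on what you want the model to do. Later chapters build on this pattern to construct retrieval pipelines, chatbots, and full agents.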

1.1 Building LLM-based applications and agents

1.1.1 LLM-based applications: summarization and Q&A engines

1.1.2 LLM-based chatbots

1.1.3 AI agents

1.2 Introducing LangChain

1.2.1 LangChain architecture

1.3 LangChain core object model

1.4 Typical LLM use cases

1.5 How to adapt an LLM to your needs

1.5.1 Prompt engineering

1.5.2 Retrieval-Augmented Generation (RAG)

1.5.3 Fine-tuning

1.6 Which LLMs to choose

1.7 What you'll learn from this book

1.8 Recap on LLM terminology

1.9 Summary