1 Introduction to AI agents and applications

 

This chapter covers

  • Core challenges in building applications powered by large language models
  • LangChain’s modular architecture and components
  • Patterns for engines, chatbots, and agents
  • Foundations of prompt engineering and Retrieval-Augmented Generation

Large language models (LLMs) such as GPT, Gemini, and Claude have moved from novelty to necessity. LLMs enable applications to answer complex questions, generate tailored content, summarize long documents, and coordinate actions across systems. More recently, LLMs have unlocked a new class of applications: AI agents. Agents take input in natural language, decide which tools or services to call, orchestrate multi-step workflows, and return results in a clear, human-friendly format.
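The agent pattern described above can be sketched in a few lines. This is a minimal, framework-free illustration, not a real agent: the two tools and the keyword-based routing step are hypothetical stand-ins for what an LLM would decide in a real system.

```python
# Minimal sketch of the agent loop: take a natural-language request,
# decide which tool to call, run it, and return a readable answer.

def get_weather(city: str) -> str:
    """Hypothetical tool: a real agent would call a weather API here."""
    return f"It is sunny in {city}."

def calculate(expression: str) -> str:
    """Hypothetical tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"weather": get_weather, "math": calculate}

def agent(request: str) -> str:
    # Stand-in for the LLM's tool choice: route on simple keywords.
    if "weather" in request.lower():
        city = request.rsplit(" ", 1)[-1].strip("?")
        return TOOLS["weather"](city)
    if any(op in request for op in "+-*/"):
        return TOOLS["math"](request)
    return "Sorry, I don't have a tool for that."

print(agent("What is the weather in Paris?"))  # It is sunny in Paris.
print(agent("2 + 3"))                          # 5
```

In later chapters, the keyword routing is replaced by an LLM that reads tool descriptions and decides which tool to invoke, but the overall loop (input, decision, tool call, response) stays the same.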

AI applications and agent systems can be complex. They need to ingest and manage data, structure prompts, chain model calls together reliably, and integrate external APIs and services. Fortunately, frameworks such as LangChain, LangGraph, and LangSmith provide modular building blocks that reduce boilerplate, encode best practices, and let you focus on application logic instead of low-level wiring. In this book, you'll learn how to design, build, and scale real LLM-based applications and agents using these frameworks and tools.
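The "chaining" idea behind these frameworks can be illustrated without any dependencies. The sketch below composes small steps (prompt template, model call, output parser) into one pipeline; `echo_model` is a hypothetical stand-in for a real LLM client, and `chain` is a simplified version of the composition that frameworks like LangChain provide.

```python
from typing import Callable

def chain(*steps: Callable) -> Callable:
    """Compose steps left to right into a single callable pipeline."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

def prompt(topic: str) -> str:
    # Prompt template: turn raw input into a full instruction.
    return f"Summarize the following topic in one sentence: {topic}"

def echo_model(prompt_text: str) -> str:
    # Stand-in for an LLM call; a real chain would hit a model API here.
    return f"MODEL RESPONSE to: {prompt_text}"

def parser(raw: str) -> str:
    # Output parser: strip the model wrapper, keep the useful text.
    return raw.removeprefix("MODEL RESPONSE to: ")

summarize = chain(prompt, echo_model, parser)
print(summarize("AI agents"))
```

Because each step is an ordinary function with one input and one output, steps can be swapped, reordered, or reused, which is exactly the modularity that makes these frameworks productive.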

1.1 Building LLM-based applications and agents

1.1.1 LLM-based applications: Summarization and Q&A engines

1.1.2 LLM-based chatbots

1.1.3 AI agents

1.2 Introducing LangChain

1.2.1 LangChain architecture

1.2.2 LangChain’s core object model

1.3 Typical LLM use cases

1.4 How to adapt an LLM to your needs

1.4.1 Prompt engineering

1.4.2 RAG

1.4.3 Fine-tuning

1.5 Which LLMs to choose

1.6 What you’ll learn from this book

Summary