1 Introduction to AI Agents and Applications
This chapter covers

  • Core challenges in building LLM-powered applications
  • LangChain’s modular architecture and components
  • Patterns for summarization and Q&A engines, chatbots, and AI agents
  • Foundations of prompt engineering and RAG

After the launch of ChatGPT in late 2022, developers quickly began experimenting with applications powered by large language models (LLMs). Since then, LLMs have moved from novelty to necessity, becoming a staple of the application development toolbox, much like databases or web frameworks in earlier eras. They give applications the ability to answer complex questions, generate tailored content, summarize long documents, and coordinate actions across systems. More importantly, LLMs have unlocked a new class of applications: AI agents. An agent takes input in natural language, decides which tools or services to call, orchestrates a multi-step workflow, and returns the result in a clear, human-friendly format.
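
To make that loop concrete, here is a minimal sketch of such an agent built with LangChain. It is illustrative rather than canonical: the get_weather tool is a stub, the model name is only an example, and it assumes the langchain-openai and langgraph packages are installed with an OpenAI API key configured (exact imports can vary by version). The patterns behind it are developed step by step in the sections that follow.

# A minimal tool-calling agent sketch (illustrative, not the book's final recipe).
# Assumes langchain-openai and langgraph are installed and OPENAI_API_KEY is set.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city (stubbed for the example)."""
    return f"It is sunny in {city}."

model = ChatOpenAI(model="gpt-4o-mini")   # any chat model with tool calling works
agent = create_react_agent(model, tools=[get_weather])

# The agent reads the question, decides to call get_weather, and phrases the answer.
result = agent.invoke({"messages": [("user", "What's the weather in Paris today?")]})
print(result["messages"][-1].content)

Even in this toy example the agent follows the loop just described: it interprets the natural-language question, decides that the get_weather tool is relevant, calls it, and turns the raw return value into a readable answer.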

1.1 Introducing LangChain

1.1.1 LangChain architecture

1.2 LangChain core object model

1.3 Building LLM applications and AI agents

1.3.1 LLM-based applications: summarization and Q&A engines

1.3.2 LLM-based chatbots

1.3.3 AI agents

1.4 Typical LLM use cases

1.5 How to adapt an LLM to your needs

1.5.1 Prompt engineering

1.5.2 Retrieval-Augmented Generation (RAG)

1.5.3 Fine-tuning

1.6 Which LLMs to choose

1.7 What you'll learn from this book

1.8 Recap of LLM terminology

1.9 Summary