
1 Developing LLM applications with LangChain


This chapter covers

  • LangChain’s architecture and object model
  • Most common LLM applications
  • LangChain programming examples
  • Background on LLMs

An LLM application uses a large language model to solve real-world problems such as answering questions, generating content, summarizing text, or interacting with other systems. If you're planning to build one, you might consider starting from scratch in your preferred programming language. However, most LLM applications follow similar workflows and design patterns, so using a specialized framework is often far more efficient. It accelerates development, encourages best practices, and helps you avoid common pitfalls.

In this book, you'll learn how to build LLM applications using the LangChain framework—a powerful open-source toolkit designed to simplify the development, testing, and deployment of AI-powered solutions. LangChain abstracts core components such as document loaders, vector stores, and language models, and integrates with over 1,000 third-party services. This enables seamless access to data, orchestration of complex workflows, and easy integration with external tools—dramatically accelerating development and expanding what your applications can do.
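To give a first taste of the composition style that LangChain builds on, here is a framework-free sketch of the "chain" idea: small steps (a prompt template, a model, an output parser) piped together into one callable pipeline. Everything in this snippet—the `Runnable` class, the `FakeModel`-style lambda, and the step names—is invented for illustration and is not LangChain's actual API; later chapters use the real library, where a chat model replaces the stand-in below.

```python
class Runnable:
    """A minimal composable step, loosely mimicking the pipe-style
    chaining you'll meet later with LCEL. Each step wraps a function;
    the | operator glues steps into a new step."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        # Run this step on an input value.
        return self.fn(value)

    def __or__(self, other):
        # Compose: the output of this step feeds the next one.
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Three illustrative steps: format a prompt, call a (fake) model, clean up.
prompt = Runnable(lambda topic: f"Summarize the topic: {topic}")
fake_model = Runnable(lambda text: f"[model output for: {text!r}]")
parser = Runnable(lambda text: text.strip())

# Piping the steps yields a single pipeline with one entry point.
chain = prompt | fake_model | parser
print(chain.invoke("LLM applications"))
```

The point is not the toy classes but the shape: once each component shares a common interface, pipelines can be assembled, swapped, and reused freely, which is exactly what LangChain provides at scale.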

1.1 Introducing LangChain

1.1.1 LangChain architecture

1.2 LangChain core object model

1.3 Building LLM applications

1.3.1 LLM-based engines: summarization and Q&A engines

1.3.2 LLM-based chatbots

1.3.3 LLM-based autonomous agents

1.4 Trying out LangChain in a Jupyter Notebook environment

1.4.1 Sentence completion example

1.4.2 Prompt engineering examples

1.4.3 Creating chains and executing them with LCEL

1.5 What is a Large Language Model?

1.6 Typical LLM use cases

1.7 How to adapt an LLM to your needs

1.7.1 Prompt engineering

1.7.2 Retrieval-Augmented Generation (RAG)

1.7.3 Fine-tuning

1.8 Which LLMs to choose

1.9 What you'll learn from this book

1.10 Recap of LLM terminology

1.11 Summary