1 Developing LLM applications with LangChain


This chapter covers

  • LangChain’s architecture and object model
  • The most common types of LLM applications
  • Background on LLMs

If you're planning to build an LLM application, you might consider starting from scratch with your preferred programming language and development environment. However, most LLM applications follow similar workflows and share common technical foundations, making it more efficient to use a specialized framework. This approach not only streamlines development but also helps you adopt best practices and avoid common pitfalls. In this book, you'll learn to build LLM applications using the LangChain framework.

LangChain provides a comprehensive set of tools that simplify the process of building, testing, and deploying LLM applications. It abstracts key components like text loaders, vector stores, and LLMs, and integrates seamlessly with over 600 third-party providers. This open-source toolkit enables you to access data sources, manage complex workflows, and allow LLMs to interact with external tools, significantly accelerating development and enhancing functionality. LangChain also includes LangSmith for debugging, testing, and monitoring applications, and LangGraph for building stateful, multi-agent systems.
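A central idea behind these abstractions is composition: components such as prompt templates, models, and output parsers are chained so that each step's output feeds the next (LangChain's own composition syntax, LCEL, appears in section 1.4.3). The idea can be sketched in plain Python; the `Step` class and the fake components below are illustrative stand-ins, not LangChain's actual API:

```python
# A minimal sketch of the "chain" idea behind LangChain:
# components are composable steps, and `|` pipes one step's
# output into the next. Illustrative plain Python only.

class Step:
    """Wraps a function so steps can be chained with the | operator."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed the result to `other`.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for a prompt template, a model, and an output parser.
prompt = Step(lambda topic: f"Tell me a fact about {topic}.")
fake_llm = Step(lambda p: f"[model reply to: {p}]")
parser = Step(lambda reply: reply.strip("[]"))

chain = prompt | fake_llm | parser
print(chain.invoke("LangChain"))
# → model reply to: Tell me a fact about LangChain.
```

In real LangChain code, the fake model would be replaced by a chat model from one of the integrated providers, but the composition pattern is the same.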

In this chapter, you'll explore LLMs and LLM-based applications, and familiarize yourself with LangChain's architecture and object model. You'll also dive into coding right away. Let's get started!

1.1 Introducing LangChain

1.1.1 LangChain architecture

1.2 LangChain core object model

1.3 Building LLM applications

1.3.1 LLM-based engines: summarization and Q&A engines

1.3.2 LLM-based chatbots

1.3.3 LLM-based autonomous agents

1.4 Trying out LangChain in a Jupyter Notebook environment

1.4.1 Sentence completion example

1.4.2 Prompt engineering examples

1.4.3 Creating chains and executing them with LCEL

1.5 What is a Large Language Model?

1.6 Typical LLM use cases

1.7 How to adapt an LLM to your needs

1.7.1 Prompt engineering

1.7.2 Retrieval Augmented Generation (RAG)

1.7.3 Fine-tuning

1.8 Which LLMs to choose

1.9 What you'll learn from this book

1.10 Summary