
7 Creating LLM-based applications using LangChain and LlamaIndex

 

This chapter covers

  • Introducing large language models (LLMs)
  • Creating LLM applications using LangChain
  • Connecting LLMs to your private data

In chapter 3, you learned how to use a transformers pipeline to access diverse pretrained models for various natural language processing (NLP) tasks, such as sentiment classification, named entity extraction, and text summarization. In practice, however, you often need to integrate models from different providers, including Hugging Face and OpenAI, into your own applications. This is where LangChain comes in: it lets you build custom NLP applications by chaining together components, such as prompt templates, LLMs, and conversation memory, to match your requirements.
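
To preview what this looks like, here is a minimal sketch of a LangChain chain that pairs a prompt template with an OpenAI chat model. The package names reflect the langchain-core/langchain-openai split, the model name gpt-4o-mini and the prompt text are illustrative assumptions, and an OpenAI API key must be set in your environment; section 7.2 walks through each of these steps in detail.

from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# A prompt template with a single input variable, {review}
prompt = PromptTemplate.from_template(
    "Summarize the following product review in one sentence:\n{review}"
)

# An OpenAI chat model (requires the OPENAI_API_KEY environment variable)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Chain the prompt and the model together, then run the chain on some input
chain = prompt | llm
result = chain.invoke({"review": "The headphones arrived quickly and sound great."})
print(result.content)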

Although pretrained models are useful, keep in mind that they were trained on external data, not yours. Often, you need a model that can answer questions about your own dataset. Imagine you have a collection of receipts and invoices: you might want a pretrained model to summarize your purchases or identify the vendors associated with specific items. This is where LlamaIndex comes in. With LlamaIndex, you can connect an LLM to your private data so it can answer queries specific to your dataset.
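
As a rough sketch of that workflow, the snippet below loads documents from a local folder, builds a vector index over them, and queries the index in natural language. It assumes the llama-index package is installed and relies on its default OpenAI embedding model and LLM, so an OpenAI API key is required; the data folder name and the question are placeholders. Section 7.3 covers each of these steps, including how to choose the embedding model and the LLM used for querying.

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load your own files (e.g., receipts and invoices) from a local folder
documents = SimpleDirectoryReader("data").load_data()

# Embed and index the documents so they can be searched semantically
index = VectorStoreIndex.from_documents(documents)

# Ask a question about your data; the LLM answers using the retrieved passages
query_engine = index.as_query_engine()
response = query_engine.query("Which vendor supplied the office chairs?")
print(response)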

7.1 Introducing LLMs

7.2 Introducing LangChain

7.2.1 Installing LangChain

7.2.2 Creating a prompt template

7.2.3 Specifying an LLM

7.2.4 Creating an LLM chain

7.2.5 Running the chain

7.2.6 Maintaining a conversation

7.2.7 Using the RunnableWithMessageHistory class

7.2.8 Using other LLMs

7.3 Connecting LLMs to your private data using LlamaIndex

7.3.1 Installing the packages

7.3.2 Preparing the documents

7.3.3 Loading the documents

7.3.4 Using an embedding model

7.3.5 Indexing the document

7.3.6 Loading the embeddings

7.3.7 Using an LLM for querying

7.3.8 Asking questions

7.3.9 Using LlamaIndex with OpenAI