7 Creating LLM-based applications using LangChain and LlamaIndex
This chapter covers
- Introducing large language models (LLMs)
- Creating LLM applications using LangChain
- Connecting LLMs to your private data
In chapter 3, you learned how to use a transformers pipeline to access a range of pretrained models for natural language processing (NLP) tasks such as sentiment classification, named entity extraction, and text summarization. In practice, however, you usually need to integrate several models, including those from Hugging Face and OpenAI, into a custom application. Enter LangChain, a framework that lets you build NLP applications by chaining components, such as prompts, models, and output parsers, together according to your specific requirements.
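To make this concrete, here is a minimal sketch of a LangChain chain that wires a prompt template to an OpenAI model to form a small summarization component. The snippet uses the classic LangChain API; exact import paths and class names vary between LangChain versions, and it assumes an OPENAI_API_KEY environment variable is set, so treat it as an illustration rather than a definitive recipe.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Prompt component: a reusable template with a {text} placeholder
prompt = PromptTemplate(
    input_variables=["text"],
    template="Summarize the following text in one sentence:\n\n{text}",
)

# LLM component: assumes OPENAI_API_KEY is set in the environment
llm = OpenAI(temperature=0)

# Chain the two components together into a reusable application piece
chain = LLMChain(llm=llm, prompt=prompt)

summary = chain.run(
    text="LangChain links prompts, models, and parsers into applications."
)
print(summary)

The point is the composition: the prompt, the model, and the chain are separate pieces, so you can swap the OpenAI model for a Hugging Face one, or reuse the same prompt in a larger pipeline, without rewriting the rest.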
Although pretrained models are useful, keep in mind that they were trained on external data, not on yours. Often, you need a model that can answer questions about your own dataset. Imagine you have a collection of receipts and invoices, and you want an LLM to summarize your purchases or identify the vendors associated with specific items. This is where LlamaIndex comes in: it connects an LLM to your private data, so the model can answer queries grounded in that data.
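As a preview, the following sketch shows how LlamaIndex can index a folder of documents and answer questions over them. The folder name and the question are hypothetical, the import paths vary across LlamaIndex versions (newer releases import from llama_index.core), and the example assumes an OpenAI API key is configured for the underlying LLM and embedding model.

from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load your private documents (hypothetical folder of receipts and invoices)
documents = SimpleDirectoryReader("./receipts").load_data()

# Build a vector index over the documents; by default this calls an
# embedding model, so an OpenAI API key is assumed to be configured
index = VectorStoreIndex.from_documents(documents)

# Ask a question grounded in your own data
query_engine = index.as_query_engine()
response = query_engine.query("Which vendors appear on my March invoices?")
print(response)

Later in the chapter, you will see how this indexing-and-querying pattern fits together with LangChain to build applications over your own documents.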