8 Creating LLM-based Applications using LangChain and LlamaIndex
This chapter covers:
- Introducing Large Language Models (LLMs)
- Creating LLM Applications using LangChain
- Connecting LLMs to Your Private Data using LlamaIndex
- Running Local Models for Vector Embedding
In chapter 3, you learned how to use the Hugging Face Transformers `pipeline` to access diverse pre-trained models for various natural language processing (NLP) tasks, including sentiment classification, named entity recognition, and text summarization. In practice, however, the goal is usually to integrate several models, whether from Hugging Face, OpenAI, or elsewhere, into a custom application. Enter LangChain, a framework that lets you build NLP applications by chaining together components to match your specific requirements.
Pre-trained models are useful, but they were trained on external data, not yours. Often you need a model to answer questions about your own dataset. Imagine, for instance, a dataset containing numerous receipts and invoices: you might want a pre-trained model to summarize your purchases or identify the vendors associated with specific items. This is where LlamaIndex becomes indispensable. LlamaIndex lets you connect an LLM to your proprietary data, empowering it to answer queries specific to your dataset.