12 Using LLMs to Query Your Local Data
This chapter covers
- Using GPT4All to query your own private data
- Loading PDF documents for querying by an LLM
- Preparing PDF documents for embedding
- Using a GPT4All model to answer questions about your own PDF documents
- Loading CSV and JSON files for querying
- Using LLMs to analyze your own data files
Up to this point, you've explored the capabilities of LLMs and their usage through platforms like OpenAI and Hugging Face. While these services relieve you of the burden of hosting models yourself, they come at a cost; conversely, running powerful models locally can demand significant setup effort and hardware expense.
A common challenge for developers is using LLMs to answer questions about their own data, while businesses emphasize the need to keep that data private. In Chapter 8, we discussed sending data to OpenAI for embedding and querying with LangChain and LlamaIndex.
In this chapter, we delve deeper into this topic, focusing on querying private documents locally without compromising data privacy. We will discuss two approaches: