8 Chatting with your data
This chapter covers
- Understanding how bringing your own data benefits enterprises
- Installing and using a vector database and vector index
- Planning and retrieving your proprietary data
- Searching using a vector database
- Implementing an end-to-end chat powered by RAG using a vector database and an LLM
- Understanding the combined benefits of bringing your own data and RAG
- Understanding how RAG improves AI safety for enterprises
Using LLMs for a chat-with-data implementation is a promising strategy, and one uniquely suited to enterprises that want to apply generative AI to their specific business requirements. By combining the capabilities of LLMs with enterprise-specific data sources and tools, businesses can build intelligent, context-aware chatbots that deliver valuable insights and recommendations to their customers and stakeholders.
At a high level, there are two ways to chat with your data using an LLM: one is to use a retrieval engine, as implemented by the RAG pattern; the other is to custom-train (or fine-tune) the LLM on your data. The second approach is far more involved and complex, and it is not a practical option for most enterprises.
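The retrieval side of the RAG pattern can be sketched in a few lines. The snippet below is a minimal illustration, not a production implementation: it uses a toy bag-of-words "embedding" and an in-memory list in place of a real embedding model and vector database, and the documents, function names, and query are invented for the example. It shows the core flow the chapter builds on: embed the documents, embed the query, retrieve the closest match by cosine similarity, and stuff it into a prompt for the LLM.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term counts. A real system would call an
    # embedding model and store the resulting vectors in a vector database.
    return Counter(word.strip(".,?!") for word in text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical proprietary documents standing in for enterprise data.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The enterprise plan includes priority support and SSO.",
]

# "Index" the documents: pair each one with its vector.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    # Rank documents by similarity to the query and return the top k.
    query_vec = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(query_vec, pair[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    # Ground the LLM by placing retrieved context ahead of the question.
    context = "\n".join(retrieve(query))
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {query}")

prompt = build_prompt("What is the refund policy for returns?")
print(prompt)
```

In a real deployment, `embed` would call an embedding model, `retrieve` would query a vector database, and the final prompt would be sent to an LLM; the chapter walks through that end-to-end implementation.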