Chapter Eight

8 Chatting with your data

 

This chapter covers

  • Understanding how bringing your own data benefits enterprises
  • Installing and using a vector database and vector index
  • Planning for and retrieving your proprietary data
  • Searching using a vector database
  • Implementing an end-to-end chat powered by RAG, using a vector database and an LLM
  • Understanding the combined benefits of bringing your own data and RAG
  • Understanding how RAG benefits AI safety for enterprises

Using LLMs to chat with your data is a promising strategy for enterprises seeking to harness the power of generative AI for their specific business requirements. By combining the capabilities of LLMs with enterprise-specific data sources and tools, businesses can build intelligent, context-aware chatbots that deliver valuable insights and recommendations to their customers and stakeholders.

At a high level, there are two ways to chat with your data using an LLM: one is to use a retrieval engine, as implemented by the retrieval-augmented generation (RAG) pattern; the other is to custom-train the LLM on your data. The latter approach is more involved and complex, and is not practical for most organizations.
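To make the RAG pattern concrete, the following is a minimal sketch of its three steps: index your documents as vectors, retrieve the most similar ones for a question, and build a grounded prompt for the LLM. The `embed` function here is a toy word-count stand-in for a real embedding model, the sample documents are invented, and the actual LLM call is omitted; a production system would use a real embedding model and a vector database instead.

```python
import math

# Toy stand-in for a real embedding model (hypothetical):
# represents text as a bag-of-words count vector for illustration only.
def embed(text: str) -> dict:
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

# Cosine similarity between two sparse count vectors.
def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 1 -- Index: embed each proprietary document and store the vectors
# (a vector database would hold these in production).
documents = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm on weekdays.",
]
index = [(doc, embed(doc)) for doc in documents]

# Step 2 -- Retrieve: embed the question and rank documents by similarity.
def retrieve(question: str, k: int = 1) -> list:
    qv = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Step 3 -- Augment: splice the retrieved context into the LLM prompt.
# The actual model call (e.g., to an Azure OpenAI endpoint) is omitted.
def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is the refund policy?")
```

Because the question and answer never require retraining the model, new or updated documents are usable as soon as they are re-indexed, which is a large part of RAG's appeal over custom training.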

8.1 Advantages for Enterprises of Using Their Data

8.2 Using a Vector Database

8.3 Planning for Retrieving the Information

8.3.1 Create the Index

8.4 Retrieve the Data

8.4.1 Retriever Pipeline Best Practices

8.5 Search using Redis

8.6 An End-to-End Chat Implementation Powered by RAG

8.7 Using Azure OpenAI on your data

8.8 Benefits of Bringing Your Data Using RAG

8.9 Summary