10 Safeguarding Generative AI
This chapter covers
- Applying user authorization to RAG results
- Securing tool invocations
- Mitigating adversarial prompts
Most organizations have documents that fall under one or more levels of classification, ranging from Restricted to Confidential to Top Secret. Likewise, not every person has the same access to the applications and services needed to do their job. Security and information rights management are as important in software as they are in any other part of an organization.
You’ve seen how Retrieval-Augmented Generation (RAG) and tools make it possible to integrate Generative AI with your documents and data. But not all documents and tools are intended for all users. It’s important to secure access to them so that unauthorized users can’t reach them indirectly through an LLM.
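To make that concrete, here’s a minimal sketch in Python, using hypothetical Document and User types rather than any particular library, of filtering retrieved documents against a user’s roles before they’re added to the prompt:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    content: str
    # Roles permitted to see this document, stored as retrieval metadata
    allowed_roles: set[str] = field(default_factory=set)

@dataclass
class User:
    username: str
    roles: set[str] = field(default_factory=set)

def retrieve(query: str, store: list[Document]) -> list[Document]:
    # Stand-in for a real vector-store similarity search
    return [doc for doc in store if query.lower() in doc.content.lower()]

def retrieve_authorized(query: str, store: list[Document],
                        user: User) -> list[Document]:
    # Filter *before* the documents are stuffed into the prompt, so an
    # unauthorized user never reaches them indirectly via the LLM
    return [doc for doc in retrieve(query, store)
            if doc.allowed_roles & user.roles]

store = [
    Document("Q3 earnings forecast", allowed_roles={"finance"}),
    Document("Q3 office party plans", allowed_roles={"finance", "staff"}),
]
alice = User("alice", roles={"staff"})
print([d.content for d in retrieve_authorized("Q3", store, alice)])
# ['Q3 office party plans'] -- the finance-only forecast is filtered out
```

In practice you’d push the same check down into the vector store as a metadata filter, so that unauthorized documents are never fetched in the first place.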
Moreover, a cleverly phrased prompt from a malicious user could trick the LLM into taking an unintended action or revealing information that shouldn’t be exposed, an attack commonly known as prompt injection. You’ll need to apply guardrails that intercept both the user’s questions and the LLM’s responses to ensure that sensitive information is never returned to the user.
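As a rough illustration, again in Python with made-up patterns rather than a real guardrail library, a guardrail can be a pair of checks wrapped around the LLM call: one screening the incoming prompt for injection attempts and one scanning the outgoing response for sensitive data:

```python
import re

# Hypothetical patterns; real guardrails use far more robust detection
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal your system prompt",
]
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # US Social Security number format
]

def screen_prompt(prompt: str) -> str:
    # Reject prompts that look like injection attempts before calling the LLM
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError("Prompt rejected by input guardrail")
    return prompt

def screen_response(response: str) -> str:
    # Redact sensitive data from the LLM's response before returning it
    for pattern in SENSITIVE_PATTERNS:
        response = re.sub(pattern, "[REDACTED]", response)
    return response

def guarded_ask(prompt: str, llm) -> str:
    # Both checks wrap the LLM call: suspicious input never goes in,
    # and sensitive output never comes back out
    return screen_response(llm(screen_prompt(prompt)))

# Example with a stubbed-out LLM
fake_llm = lambda p: "The employee's SSN is 123-45-6789."
print(guarded_ask("What is on file for Pat?", fake_llm))
# The employee's SSN is [REDACTED].
```

Simple pattern matching like this is easy to evade, so production guardrails typically layer it with model-based classification, but the interception points, before and after the LLM call, are the same.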