10 Safeguarding generative AI
This chapter covers
- Authorization-enabling retrieval-augmented generation results
- Securing tool invocations
- Mitigating adversarial prompts
Most organizations have documents that fall under one or more levels of classification, ranging from Top Secret to Confidential to Restricted access. And not every person has the same access to applications and services that others need to do their job. Security and information rights management are important aspects of any organization, as well as in software.
You’ve seen how retrieval-augmented generation (RAG) and tools make it possible to integrate generative AI with your documents and data. But not all documents and tools are intended for all users. It’s essential to secure access to documents and tools to prevent unauthorized users from gaining indirect access via an LLM.
Moreover, a cleverly phrased prompt submitted by a sneaky user could trick the LLM into doing something or revealing information that shouldn’t be exposed. You’ll need to apply guardrails that intercept a user’s questions and the LLM’s responses to ensure that sensitive responses aren’t returned to the users.