Chapter 9
This chapter covers
- Mitigating prompt injection and API abuse
- Securing API keys and managing rate limits
- Maintaining GDPR and CCPA compliance
- Monitoring and observing AI workflows
- Deploying on hosted or self-hosted systems
- Detecting injections and applying privacy controls
Deploying AI applications introduces risks that traditional software rarely faces. A single misconfigured API endpoint can expose sensitive user data to adversarial prompts, and unmonitored large language model (LLM) usage can lead to runaway costs or regulatory violations. In one respect, though, an LLM provider is like any other paid service: you must abide by its terms of service and usage policies.
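As a first taste of the kind of control this chapter builds toward, here is a minimal sketch of a deny-list filter for obvious injection attempts. The patterns and function name are illustrative assumptions, not a production defense; real deployments layer heuristics like this with model-based classifiers and output filtering.

```python
import re

# Hypothetical deny-list of phrasings commonly seen in injection attempts.
# These patterns are illustrative only; attackers routinely rephrase.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system )?prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

A check like this would run before the user's text is ever concatenated into a prompt, rejecting or flagging suspicious inputs rather than forwarding them to the model.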
Consider again the technologies in our stack and some of the unique challenges each one poses: