4 Securing GenAI
This chapter covers
- Security tradeoffs among SaaS, API, and self-hosted deployments
- Real-world failures: prompt injection, data leakage, model theft, and data poisoning
- Guardrails: system prompts, policies, and output checks
Security in conventional software hinges on predictability. Inputs are validated, outputs are constrained, and execution paths are deterministic. When something breaks, it’s usually because of a bug, a misconfiguration, or a known exploit. Generative AI upends this model. Instead of simply processing instructions, these systems infer, generalize, and improvise, producing different outputs for similar inputs. That flexibility introduces a new kind of risk: not from broken code, but from unexpected, emergent behaviors that are difficult to anticipate or control.
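To make the contrast concrete, here is a minimal sketch in Python. The `generate` parameter stands in for a hypothetical model call, and the SSN-like pattern is just an illustrative output check; both are assumptions, not part of any particular library. The point is that conventional code can stop at validating its inputs, while a GenAI handler must also inspect its outputs, because even a validated input can produce an unanticipated response.

```python
import re

def handle_conventional(user_input: str) -> str:
    # Conventional software: validate the input up front; the output
    # is then fully determined by the code path that follows.
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", user_input):
        raise ValueError("expected a date in YYYY-MM-DD format")
    return f"Report scheduled for {user_input}"

def handle_genai(user_input: str, generate) -> str:
    # GenAI: the same validated input can yield a different output on
    # every call, so the output itself must be checked (a guardrail).
    response = generate(user_input)  # hypothetical model call
    # Illustrative output check: withhold anything matching an SSN-like pattern.
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", response):
        return "[response withheld: possible sensitive data]"
    return response

if __name__ == "__main__":
    # Stub model that leaks sensitive data, to show the guardrail firing.
    leaky_model = lambda prompt: "Sure! The customer's SSN is 123-45-6789."
    print(handle_genai("2025-01-01", leaky_model))
    # -> [response withheld: possible sensitive data]
```

The design choice this sketch illustrates recurs throughout the chapter: because the model's behavior can't be fully constrained at the input, the trust boundary has to move to the output as well.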