
4 Securing GenAI


This chapter covers

  • Security trade-offs across SaaS, API, and self-hosted deployments
  • Real-world failures: prompt injection, data leakage, model theft, and data poisoning
  • Guardrails: prompt design, policies, and output checks

Security in conventional software hinges on predictability. Inputs are validated, outputs are constrained, and execution paths are deterministic. When something breaks, it’s usually because of a bug, misconfiguration, or known exploit. Generative AI upends this model. Instead of simply processing instructions, these systems infer, generalize, and improvise, producing different outputs for similar inputs. That flexibility introduces a new kind of risk: not from broken code, but from unexpected and emergent behaviors that are difficult to anticipate or control.
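To make that contrast concrete, the following minimal Python sketch places a classical, deterministic input check next to a generative path that can only be policed after the fact. The generate_reply stub and the banned-term list are hypothetical stand-ins for illustration, not a real model API or a complete policy:

    import re

    def validate_username(value: str) -> bool:
        """Classical security: a deterministic allowlist check.
        The same input always yields the same verdict."""
        return re.fullmatch(r"[a-z0-9_]{3,16}", value) is not None

    def generate_reply(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM call. In practice the
        output varies from run to run, even for identical prompts."""
        return f"Sure! Here is what I found about {prompt} ..."

    def output_check(text: str, banned: tuple[str, ...] = ("password", "ssn")) -> bool:
        """GenAI security: valid outputs cannot be enumerated up front,
        so we inspect what the model actually produced."""
        lowered = text.lower()
        return not any(term in lowered for term in banned)

    reply = generate_reply("summarize our refund policy")
    if output_check(reply):
        print(reply)
    else:
        print("Response withheld by guardrail.")

The point is not these specific checks but where they sit: classical controls gate what goes in, while generative systems also need controls on what comes out, because the space of possible outputs cannot be enumerated in advance.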

4.1 Real-World Scenarios

4.1.1 Why Is Classical Security Not Enough?

4.2 SaaS Consumers

4.2.1 SaaS Consumer Threats

4.2.2 GenAI SaaS Provider Threats

4.2.3 Reflection and Moving Forward

4.3 API Integrators

4.3.1 API Integrator Threats

4.3.2 API Provider Threats

4.4 Model Hosters

4.4.1 Model Artifacts & Provenance

4.4.2 Tuning & Behavior Drift

4.4.3 Serving & Runtime Surface

4.4.4 Access, Misuse & Theft

4.5 Summary