
2 Adoption Models for GenAI


This chapter covers

  • How SaaS, API, and self-hosted GenAI models shift control and risk
  • Key risk dimensions across deployment choices
  • A primer on emerging Agentic AI, what it offers, and which risks it introduces
  • MediAssist, a fictional use case that moves through all three adoption models

Before you set up governance controls, even before you decide which compliance requirements apply, you need to answer a simple question: How is your organization actually adopting GenAI? It’s tempting to think the answer is obvious. After all, using these tools seems straightforward: type a prompt, get a response. But what happens behind the scenes varies dramatically depending on the adoption path you choose. That choice determines where accountability sits, how much oversight you must perform, and how your compliance exposure shifts.

We call this your GenAI posture: your operational stance toward generative AI. Identifying which posture fits your organization matters because it shapes how you manage data, security, and regulatory compliance as you deploy AI solutions.

Most organizations will fit into one or more of three broad postures:

2.1 SaaS or Application Consumers

2.2 API Integrators

2.2.1 Organization-Controlled Zone: Enhanced Customization and Governance

2.2.2 Vendor-Controlled Zone: Stability and Reliability

2.2.3 Governance and Risk Management Opportunities

2.2.4 Achieving Effective Governance for an API Integrator

2.3 Model Hosters

2.3.1 Architecture Overview: What’s Inside the Controlled Zone

2.3.2 Governance and Risk Management Opportunities

2.3.3 Achieving Effective Governance for a Model Hoster

2.4 Risk Dimensions Across Deployment Postures

2.5 Agentic AI

2.5.1 What Does “Agentic” Mean?

2.5.2 Common Properties of Agentic AI

2.5.3 Why Agentic AI Is Still Experimental

2.6 MediAssist: A Case Study in AI Governance

2.7 Summary