
1 Governing Generative AI


This chapter covers

  • Why governance matters
  • Overview of governance, risk, and compliance

Generative AI (GenAI), the family of artificial intelligence models that can write text, draft code, and create images on demand, is already reshaping day-to-day work. But breakthroughs come hand in hand with failures.

Late one Friday afternoon, a Fortune 500 legal team discovers that two attorneys have cited court cases hallucinated by an AI assistant. The lawyers had relied on the tool's fluent answers despite clear on-screen disclaimers that "content may be inaccurate"; now a judge is threatening sanctions for filing fictitious precedents[1].

Elsewhere, security researchers uncover a disturbing vulnerability in Google's Gemini chatbot: it can be tricked into permanently storing false "memories" about users through invisible instructions hidden within ordinary documents. In one unsettling example, Gemini was manipulated into believing a user was 102 years old and living in a fictional world[2].

Across the Pacific, DeepSeek (best known for publishing a cost-efficient LLM architecture) suffers a very different failure. A separate web app operated by the company is found to be exposing more than a million private chat logs, API keys, and internal operational details through an unauthenticated database. The incident ignites debate not only about cybersecurity hygiene but also about data-privacy obligations, unauthorized data reuse, and systemic AI supply chain risks[3].

1.1 Why Governance, Risk, and Compliance (GRC) for GenAI Matters Now

1.2 GRC for AI: More Than a Compliance Checklist

1.3 A Mental Model for GenAI GRC

1.4 Challenges in Practice: Illustrative Scenarios

1.5 Tools and Practices You’ll Need

1.6 Conclusion: Motivation and Map for What’s Ahead