5 Privacy


This chapter covers

  • When GenAI privacy fails in practice
  • Four pillars of data protection in GenAI
  • Practical steps to reduce risks in each pillar
  • How deployment posture changes privacy risks
  • What evidence regulators look for

Consider a hypothetical scenario: when Elena Rodriguez filed an insurance claim with ElectraMotors, she had no idea that her conversation would be stored, analyzed, and eventually used in ways she never consented to. Six months later, when she called to dispute an unrelated billing issue, the chatbot greeted her by name and referenced details from a claim filed ten years earlier, details she had never mentioned. Worse, some of what it “remembered” was wrong: it claimed she had a prior claim for a rear collision that never happened. Elena’s experience illustrates a new class of privacy failure that traditional data protection frameworks were never designed to handle.

5.1 The Four Pillars of GenAI Privacy

5.2 Collection and Purpose

5.2.1 No Valid Legal Basis

5.2.2 Overcollection

5.2.3 Vendor’s Inappropriate Use of Organizational Data

5.2.4 Adoption Model Differences

5.3 Storage and Memorization

5.3.1 Memorization and Membership Inference

5.3.2 Insufficient Deletion of Embeddings

5.3.3 Insufficient Deletion of (Meta)data

5.3.4 Adoption Model Differences

5.4 Output Integrity

5.4.1 Hallucinations and Defamation

5.4.2 Overreliance and Automated Decision Making

5.4.3 Adoption Model Differences

5.5 User Rights & Governance

5.5.1 Making Models Forget and Correct

5.5.2 Data Subject Access Requests (DSARs)

5.5.3 Transparency

5.5.4 Adoption Model Differences

5.6 Agentic AI and Privacy

5.6.1 Purpose Limitation (and Data Minimization) Under Strain

5.6.2 Lawful Basis Challenges for Autonomous Processing

5.6.3 Storage Limitation and Erasure Rights in an Agent Context

5.6.4 Governance and Operational Challenges in the Long Term

5.6.5 Summary of Privacy Risks and Mitigations in Agentic AI

5.7 Summary