10 The future of responsible AI: Risks, practices, and policy


This chapter covers

  • Where LLM development is headed
  • The social and technical risks identified throughout the book
  • Best practices for responsible AI development and use
  • Regional and global approaches to AI regulation
  • Envisioning possible paths toward global AI governance

In a now-infamous 1995 article for Newsweek, astronomer Clifford Stoll wrote the following about the nascent online community:

Today, I’m uneasy about this most trendy and oversold community. Visionaries see a future of telecommuting workers, interactive libraries, and multimedia classrooms. They speak of electronic town meetings and virtual communities. Commerce and business will shift from offices and malls to networks and modems. And the freedom of digital networks will make government more democratic. Baloney. Do our computer pundits lack all common sense? The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher, and no computer network will change the way government works [1].

For better and for worse, the internet has done far more than Stoll expected. Digital networks have made government more democratic in some ways but have concentrated authoritarian power in others; they have connected people across the globe but have also been linked to rising social isolation; and they have reshaped the global economy.

10.1 Where are LLM developments headed?

10.1.1 Language as the universal interface

10.1.2 From tools to agentic systems

10.1.3 The rise of personalized AI

10.1.4 On the horizon

10.2 Sociotechnical risks of generative AI

10.2.1 Bias, toxicity, and representational harms

10.2.2 Hallucinations, fabrications, and epistemic harm

10.2.3 Autonomy and emergent agentic risks

10.2.4 Misuse across domains

10.2.5 Dependency, emotional harm, and relationship risks

10.2.6 Labor and economic disruption

10.2.7 A holistic view of harm

10.3 Best practices for responsible AI development and use

10.3.1 Curating datasets and standardizing documentation

10.3.2 Protecting data privacy

10.3.3 Explainability, transparency, and bias

10.3.4 Design interventions and architectures

10.3.5 Model training strategies for safety

10.3.6 Red teaming and evaluation

10.3.7 Detecting and tracing synthetic media

10.3.8 Platform responsibility and user safeguards

10.3.9 Humans in the loop

10.3.10 Education and digital literacy

10.3.11 Toward responsible generative AI

10.4 AI regulations in practice

10.4.1 The United States

10.4.2 The European Union

10.4.3 China

10.4.4 Corporate self-governance

10.5 Toward an AI governance framework

10.6 Conclusion