13 Guide to ethical GenAI: Principles, practices, and pitfalls


This chapter covers

  • GenAI risks, including hallucinations
  • Challenges and weaknesses of LLMs
  • Recent GenAI threats and how to prevent them
  • Responsible AI lifecycle and its various stages
  • Responsible AI tooling available today
  • Content safety and enterprise safety systems

Generative AI has transformed our ability to create and innovate. We stand at the threshold of this technological revolution, with the power to shape its effects on software, entertainment, and every facet of daily life. This chapter examines the crucial balance between harnessing the power of GenAI and mitigating its risks, a balance that is particularly pertinent in enterprise deployments.

For all its power, generative AI has inherent challenges that call for a cautious approach to deployment. Using generative AI models and applications raises numerous ethical and social considerations, including explainability, fairness, privacy, model reliability, content authenticity, copyright, plagiarism, and environmental impact. The potential for data privacy breaches, algorithmic bias, and misuse underscores the need for a robust framework that prioritizes ethics and safety.

13.1 GenAI risks

13.1.1 LLM limitations

13.1.2 Hallucination

13.2 Understanding GenAI attacks

13.2.1 Prompt injection

13.2.2 Insecure output handling example

13.2.3 Model denial of service

13.2.4 Data poisoning and backdoors

13.2.5 Sensitive information disclosure

13.2.6 Overreliance

13.2.7 Model theft

13.3 A responsible AI lifecycle

13.3.1 Identifying harms

13.3.2 Measuring and evaluating harms

13.3.3 Mitigating harms

13.3.4 Transparency and explainability

13.4 Red-teaming

13.4.1 Red-teaming example

13.4.2 Red-teaming tools and techniques

Summary