1 Introduction


This chapter covers

  • Different types of machine learning systems
  • How machine learning systems are built
  • What interpretability is and its importance
  • How interpretable machine learning systems are built
  • A summary of interpretability techniques covered in this book

Welcome to this book! I’m really happy that you are embarking on this journey through the world of Interpretable AI, and I look forward to being your guide. In the last five years alone, we have seen major breakthroughs in the field of artificial intelligence (AI), especially in areas such as image recognition, natural language understanding, and board games like Go. As AI augments critical human decisions in industries like healthcare and finance, it is becoming increasingly important that the machine learning models driving these AI systems are robust and unbiased. This book is a practical guide to interpretable AI systems and how to build them. Through a concrete example, this chapter explains why interpretability is important and lays the foundation for the rest of the book.

1.1 Diagnostics+ AI—an example AI system

Let’s look at a concrete example: a healthcare center called Diagnostics+ that provides a service to help diagnose different types of diseases. Doctors working for Diagnostics+ analyze blood smear samples and provide a diagnosis, which can be either positive or negative. The current state of Diagnostics+ is shown in figure 1.1.

1.2 Types of machine learning systems

1.2.1 Representation of data

1.2.2 Supervised learning

1.2.3 Unsupervised learning

1.2.4 Reinforcement learning

1.2.5 Machine learning system for Diagnostics+ AI

1.3 Building Diagnostics+ AI

1.4 Gaps in Diagnostics+ AI

1.4.1 Data leakage

1.4.2 Bias

1.4.3 Regulatory noncompliance

1.4.4 Concept drift

1.5 Building a robust Diagnostics+ AI system