
2 White-Box Models


This chapter covers:

  • Characteristics that make white-box models inherently transparent and interpretable
  • How to interpret simple white-box models such as linear regression and decision trees
  • What Generalized Additive Models (GAMs) are, and the properties that give them both high predictive power and high interpretability
  • How to implement and interpret GAMs
  • What black-box models are, and the characteristics that make them inherently opaque

To build an interpretable AI system, it is important to understand the different types of models that can drive the system and the techniques that can be applied to interpret them. In this chapter, I will cover three key white-box models that are inherently transparent: linear regression, decision trees, and Generalized Additive Models (GAMs). You will learn how they can be implemented, when they can be applied, and how they can be interpreted. I will also briefly introduce black-box models, explaining when they can be applied and the characteristics that make them hard to interpret. This chapter focuses on interpreting white-box models; the rest of the book is dedicated to interpreting complex black-box models.
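As a brief preview of what "inherently transparent" means in practice, the sketch below fits two of the white-box models named above and reads their interpretable structure directly. It uses scikit-learn and its bundled diabetes-progression dataset as assumptions; these stand in for the Diagnostics+ data and tooling introduced later in the chapter.

```python
# A minimal, illustrative sketch (scikit-learn and its bundled diabetes
# dataset are assumptions standing in for the Diagnostics+ data).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Load the diabetes-progression data: 10 patient features, one target.
data = load_diabetes()
X, y = data.data, data.target

# Linear regression is interpretable via one learned weight per feature:
# each coefficient states how the prediction moves with that feature.
lin = LinearRegression().fit(X, y)
for name, coef in zip(data.feature_names, lin.coef_):
    print(f"{name}: {coef:.1f}")

# A decision tree is interpretable by tracing its learned if/then splits
# from the root to a leaf; a shallow tree keeps that trace readable.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print("tree depth:", tree.get_depth())
```

Both models expose their reasoning directly (coefficients, split rules) rather than requiring post-hoc explanation, which is the property this chapter develops in detail.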

2.1      White-Box Models

2.2      Diagnostics+ AI – Diabetes Progression

2.3      Linear Regression

2.3.1   Interpreting Linear Regression

2.3.2   Limitations of Linear Regression

2.4      Decision Trees

2.4.1   Interpreting Decision Trees

2.4.2   Limitations of Decision Trees

2.5      Generalized Additive Models (GAMs)

2.5.1   Regression Splines

2.5.2   GAM for Diagnostics+ Diabetes

2.5.3   Interpreting GAMs

2.5.4   Limitations of GAMs

2.6      Looking Ahead to Black-Box Models

2.7      Summary