2 White-box models


This chapter covers

  • Characteristics that make white-box models inherently transparent and interpretable
  • How to interpret simple white-box models such as linear regression and decision trees
  • What generalized additive models (GAMs) are and the properties that give them both high predictive power and high interpretability
  • How to implement and interpret GAMs
  • What black-box models are and the characteristics that make them inherently opaque

To build an interpretable AI system, we must understand the different types of models that can drive it and the techniques we can apply to interpret them. In this chapter, I cover three key white-box models that are inherently transparent: linear regression, decision trees, and generalized additive models (GAMs). You will learn how to implement these models, when to apply them, and how to interpret them. I also briefly introduce black-box models: you will learn when they can be applied and what characteristics make them hard to interpret. This chapter focuses on interpreting white-box models; the rest of the book is dedicated to interpreting complex black-box models.
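To preview what "inherently transparent" means in practice, here is a minimal sketch of interpreting the simplest white-box model in the list above. It fits a linear regression and reads the learned feature effects directly from the model's coefficients. The use of scikit-learn and its bundled diabetes dataset is an assumption for illustration, standing in for the Diagnostics+ data introduced later in this chapter:

```python
# A minimal sketch of white-box interpretability: in a linear
# regression, each learned coefficient directly states how much the
# prediction changes per unit change in that feature.
# (Assumption: scikit-learn and its bundled diabetes dataset, as a
# stand-in for the Diagnostics+ data discussed in this chapter.)
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Print one interpretable effect per feature: the change in the
# predicted disease-progression score per unit change in the feature.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>5}: {coef:+.1f}")
```

Because the whole model is just these coefficients plus an intercept, no additional interpretation machinery is needed; that is the defining property of a white-box model, and it contrasts with the black-box models introduced at the end of the chapter.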

2.1 White-box models

2.2 Diagnostics+—diabetes progression

2.3 Linear regression

2.3.1 Interpreting linear regression

2.3.2 Limitations of linear regression

2.4 Decision trees

2.4.1 Interpreting decision trees

2.4.2 Limitations of decision trees

2.5 Generalized additive models (GAMs)

2.5.1 Regression splines

2.5.2 GAM for Diagnostics+ diabetes

2.5.3 Interpreting GAMs

2.5.4 Limitations of GAMs

2.6 Looking ahead to black-box models

Summary