5 Saliency mapping

 

This chapter covers

  • Characteristics of convolutional neural networks that make them inherently black box
  • How to implement convolutional neural networks for image classification tasks
  • How to interpret convolutional neural networks using saliency mapping techniques such as vanilla backpropagation, guided backpropagation, integrated gradients, SmoothGrad, Grad-CAM, and guided Grad-CAM
  • Strengths and weaknesses of these saliency mapping techniques and how to perform sanity checks on them

In the previous chapter, we looked at deep neural networks and learned how to interpret them using model-agnostic methods that are local in scope. We specifically learned three techniques: LIME, SHAP, and anchors. In this chapter, we will focus on convolutional neural networks (CNNs), a more complex neural network architecture used mostly for visual tasks such as image classification, image segmentation, object detection, and facial recognition. We will learn how to apply the techniques from the previous chapter to CNNs. We will also focus on saliency mapping, a local, model-dependent, post hoc interpretability technique. Saliency mapping is a great tool for interpreting CNNs because it helps us visualize the features that are salient, or important, to the model. We will specifically cover techniques such as vanilla backpropagation, guided backpropagation, integrated gradients, SmoothGrad, Grad-CAM, and guided Grad-CAM.
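To make the core idea concrete before we dive into the chapter, the sketch below shows vanilla backpropagation saliency in PyTorch: we backpropagate the predicted class score to the input pixels and take the magnitude of the gradient as the saliency map. This is only a minimal illustration; the pretrained ResNet-18 model and the image path are stand-ins, not the Diagnostics+ model or data used later in the chapter.

```python
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Stand-in classifier; newer torchvision versions use the `weights` argument
# instead of `pretrained=True`.
model = models.resnet18(pretrained=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical image path
x = preprocess(image).unsqueeze(0)                # shape: (1, 3, 224, 224)
x.requires_grad_(True)                            # track gradients w.r.t. pixels

# Forward pass, then backpropagate the score of the predicted class.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency map: maximum absolute gradient across the color channels.
# Large values mark pixels whose small changes most affect the class score.
saliency = x.grad.abs().max(dim=1)[0].squeeze()   # shape: (224, 224)
```

Plotting this map as a heatmap over the input image highlights the regions the model's prediction is most sensitive to, which is the basic visualization that the techniques in this chapter refine and improve on.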

5.1 Diagnostics+ AI: Invasive ductal carcinoma detection

5.2 Exploratory data analysis

5.3 Convolutional neural networks

5.3.1 Data preparation

5.3.2 Training and evaluating CNNs

5.4 Interpreting CNNs

5.4.1 Probability landscape

5.4.2 LIME

5.4.3 Visual attribution methods

5.5 Vanilla backpropagation

5.6 Guided backpropagation

5.7 Other gradient-based methods

5.8 Grad-CAM and guided Grad-CAM

5.9 Which attribution method should I use?

Summary