
5 Saliency mapping


This chapter covers:

  • Characteristics that make convolutional neural networks inherently black-box models
  • How to implement convolutional neural networks for image classification tasks
  • How to interpret convolutional neural networks using saliency mapping techniques such as vanilla backpropagation, guided backpropagation, SmoothGrad, Grad-CAM, and guided Grad-CAM
  • Strengths and weaknesses of these saliency mapping techniques and how to perform sanity checks on them

In the previous chapter, we looked at deep neural networks and learned how to interpret them using model-agnostic methods that are local in scope: LIME, SHAP, and Anchors. In this chapter, we will focus on Convolutional Neural Networks (CNNs), a more complex neural network architecture used mostly for visual tasks such as image classification, image segmentation, object detection, and facial recognition. The techniques learned in the previous chapter can be applied to CNNs, and we will learn how to do that. We will also focus on saliency mapping, a local, model-dependent, post-hoc interpretability technique. Saliency mapping is a great tool for interpreting CNNs because it helps us visualize the features that are most salient, or important, to the model. We will specifically cover techniques such as vanilla backpropagation, guided backpropagation, integrated gradients, SmoothGrad, Grad-CAM, and guided Grad-CAM.
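
To make the idea concrete before we dive into the data, here is a minimal sketch of how the simplest of these techniques, vanilla backpropagation, produces a saliency map in PyTorch. The trained model, the preprocessed input tensor, and the target class index are assumed to come from your own pipeline; they are placeholders for illustration, not the chapter's actual code.

```python
import torch

def vanilla_saliency(model, image, target_class):
    """Vanilla backpropagation saliency: gradient of the class score w.r.t. the input.

    model:        a trained CNN classifier (assumed, e.g. a torchvision model)
    image:        a preprocessed input tensor of shape (1, C, H, W) (assumed)
    target_class: index of the class whose score we want to attribute
    """
    model.eval()
    image = image.clone().requires_grad_(True)  # track gradients w.r.t. the input pixels

    scores = model(image)                       # forward pass: class scores (1, num_classes)
    scores[0, target_class].backward()          # backpropagate only the target class score

    # Take the absolute gradient and the maximum over color channels
    # to get a single (H, W) heat map of pixel importance.
    saliency = image.grad.detach().abs().max(dim=1)[0].squeeze(0)
    return saliency
```

The resulting map assigns one value per pixel: the larger the value, the more a small change to that pixel would move the target class score, which is why those pixels are considered salient.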

5.1 Diagnostics+ AI – Invasive Ductal Carcinoma Detection

5.2 Exploratory Data Analysis

5.3 Convolutional Neural Networks

5.3.1 Data Preparation

5.3.2 Training and Evaluating CNNs

5.4 Interpreting CNNs

5.4.1 Probability Landscape

5.4.2 LIME

5.4.3 Visual Attribution Methods

5.5 Vanilla Backpropagation

5.6 Guided Backpropagation

5.7 Other Gradient-based Methods

5.8 Grad-CAM and Guided Grad-CAM

5.9 Which Attribution Method Should I Use?

5.10 Summary