13 Explainable outlier detection


This chapter covers

  • Introducing eXplainable AI (XAI)
  • Describing XAI in outlier detection
  • Presenting methods to explain black-box outlier detectors
  • Presenting interpretable outlier detectors not covered previously

When performing outlier detection, it’s often important to know not just the scores given to each record but why the records were given those scores. There are at least two situations where this is necessary. The first is in assessing the detectors, as introduced in chapter 8. During this step, we determine whether the detectors produce sensible scores for the known outliers we test with, but we also wish to know why the records were given these scores. To be confident that we have a useful outlier detection system, and that the detectors will produce reasonable scores for future data as well, we want to know that the detectors are not only correct but correct for the right reasons. The second is in examining the records flagged once the system is run on real data: to investigate a flagged record and decide how to act on it, we need to understand what about it the detector found anomalous.
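To make this concrete, here is a minimal sketch of one simple way to ask why a detector scored a record as it did: replace each feature in turn with a typical value and watch how the score moves. This is not a method from this chapter; the detector, the synthetic data, and the occlusion-style importance heuristic are all illustrative assumptions.

# A minimal sketch (illustrative, not the chapter's method): estimate how much
# each feature contributes to a record's anomaly score by replacing the
# feature with its median value and measuring the change in score.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
X[0, 2] = 8.0                            # Plant an outlier in feature 2

det = IsolationForest(random_state=0).fit(X)
base_score = det.score_samples(X[[0]])[0]  # Lower score = more anomalous

medians = np.median(X, axis=0)
for j in range(X.shape[1]):
    x_mod = X[[0]].copy()
    x_mod[0, j] = medians[j]             # "Remove" feature j's unusual value
    delta = det.score_samples(x_mod)[0] - base_score
    print(f"Feature {j}: score change when set to median = {delta:+.3f}")

The feature whose replacement most increases the score (makes the record look most normal) is the one most responsible for the record being flagged; here that is feature 2. This is a crude relative of the feature importance techniques covered in section 13.2.1, which provide more principled attributions.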

13.1 Introducing XAI

13.1.1 Interpretability vs. explainability

13.1.2 Global vs. local explanations

13.2 Post hoc explanations

13.2.1 Feature importances

13.2.2 Proxy models

13.2.3 Plotting

13.2.4 Counterfactuals

13.2.5 General notes on post hoc explanations

13.3 Interpretable outlier detectors

13.3.1 Outlier detection on sets of 2D subspaces

13.3.2 Bayesian Histogram-based Anomaly Detection

13.3.3 CountsOutlierDetector

13.3.4 DataConsistencyChecker

Summary