13 Explainable outlier detection
This chapter covers
- Introducing eXplainable AI (XAI)
- Describing XAI in outlier detection
- Presenting methods to explain black-box outlier detectors
- Presenting interpretable outlier detectors not covered previously
When performing outlier detection, it’s often important to know not just the scores given to each record but why the records received those scores. There are at least two situations where this is necessary. The first is in assessing the detectors, as introduced in chapter 8. During this step, we determine whether the detectors produce sensible scores for the known outliers we test with, but we also wish to know why the records were given those scores. To be confident that we have a useful outlier detection system, and that the detectors will produce reasonable scores for future data as well, we want to know that the detectors are not only correct but correct for the right reasons.
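The first part of the assessment step above, confirming that known outliers receive sensible scores, can be sketched as follows. The synthetic data and the choice of scikit-learn's IsolationForest are illustrative assumptions, not the chapter's specific setup; understanding *why* the detector assigns these scores is the harder problem this chapter takes up.

```python
# A minimal sketch, assuming synthetic data and scikit-learn's
# IsolationForest as the detector: fit on mostly-normal data, then check
# that known outliers score as more anomalous than typical records.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_data = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
# Two records we know to be outliers, far from the normal cluster
known_outliers = np.array([[8.0, 8.0], [-9.0, 7.5]])

det = IsolationForest(random_state=0)
det.fit(normal_data)

# score_samples: higher values mean more normal, lower more anomalous
normal_scores = det.score_samples(normal_data)
outlier_scores = det.score_samples(known_outliers)

# Sanity check: the known outliers should score lower (more anomalous)
# than the average normal record
print(outlier_scores.max() < normal_scores.mean())
```

A check like this tells us the scores are sensible, but not whether the detector is flagging these records for the right reasons, which is where explainability methods come in.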