We are now approaching the end of our journey through the world of interpretable AI; Figure 9.1 provides a map of that journey. Let’s take a moment to reflect on and summarize what we have learned.

Interpretability is all about understanding cause and effect within an AI system. It is the degree to which we can consistently estimate what the underlying models will predict for a given input, understand how the models arrived at that prediction, understand how the prediction changes when the input or the algorithmic parameters are modified, and, finally, recognize when the models have made a mistake.

Interpretability is becoming increasingly important because machine learning models are proliferating across industries such as finance, healthcare, technology, and law. Decisions made by these models demand transparency and fairness, and the techniques we have learned in this book are powerful tools to improve transparency and help ensure fairness.
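To make one part of this definition concrete, the sketch below probes how a model’s prediction changes when the input is modified. It is a minimal one-feature-at-a-time sensitivity check, not one of the more principled techniques covered in this book (such as SHAP or partial dependence); the dataset and model are illustrative assumptions chosen so the snippet runs on its own with scikit-learn.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative choices: a standard dataset and a random forest,
# not the book's own examples.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Baseline predicted probability for one instance.
x = X[0].copy()
base = model.predict_proba(x.reshape(1, -1))[0, 1]

# Perturb each feature by one standard deviation, holding the others
# fixed, and record how much the predicted probability moves.
deltas = []
for j in range(X.shape[1]):
    x_pert = x.copy()
    x_pert[j] += X[:, j].std()
    prob = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    deltas.append((data.feature_names[j], prob - base))

# Report the features whose perturbation shifts the prediction most.
for name, delta in sorted(deltas, key=lambda d: abs(d[1]), reverse=True)[:5]:
    print(f"{name}: {delta:+.3f}")
```

A large shift for a given feature suggests the model is locally sensitive to it, which is the intuition that more rigorous attribution methods build on.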