Chapter 7: How do you measure classification models? Accuracy and its friends
This chapter covers:
- How accuracy can help us evaluate models.
- Types of errors a model can make: false positives and false negatives.
- How to put these errors in a table: the confusion matrix.
- What is precision, and which models require this metric?
- What is recall, and which models require this metric?
- Metrics that combine both recall and precision, such as the F1-score and the Fβ-score.
- Two new ways to evaluate a classification model: sensitivity and specificity.
- What is the threshold of a classification model, and which models have one?
- How does changing the threshold of a model affect the sensitivity and specificity?
- What is the ROC curve, and how does it keep track of sensitivity and specificity as the model's threshold changes?
- What is the area under the curve (AUC), and how does it help us evaluate classification models?
- How to trade off sensitivity and specificity to pick the best model for the problem at hand (all of these metrics appear in the sketch after this list).
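To make these ideas concrete before diving in, here is a minimal sketch, assuming scikit-learn is installed, that computes each of the chapter's metrics on a tiny made-up example. Every label, prediction, and score below is hypothetical, chosen only for illustration.

```python
# A minimal sketch of this chapter's metrics, assuming scikit-learn is installed.
from sklearn.metrics import (accuracy_score, confusion_matrix, precision_score,
                             recall_score, f1_score, fbeta_score, roc_curve,
                             roc_auc_score)

# Hypothetical ground-truth labels and a model's hard (0/1) predictions
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print(accuracy_score(y_true, y_pred))         # fraction of correct predictions
print(confusion_matrix(y_true, y_pred))       # rows: true class, columns: predicted class
print(precision_score(y_true, y_pred))        # TP / (TP + FP)
print(recall_score(y_true, y_pred))           # TP / (TP + FN), a.k.a. sensitivity
print(f1_score(y_true, y_pred))               # harmonic mean of precision and recall
print(fbeta_score(y_true, y_pred, beta=2.0))  # Fβ-score: beta > 1 weights recall more

# The ROC curve and AUC need the model's raw scores (probabilities), not hard
# predictions, because they are built by sweeping the threshold from 0 to 1.
# These scores are made up for illustration.
y_score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.7, 0.6, 0.3, 0.95, 0.05]
fpr, tpr, thresholds = roc_curve(y_true, y_score)  # specificity = 1 - fpr
print(roc_auc_score(y_true, y_score))              # area under the ROC curve
```

Note the split the code makes explicit: accuracy, precision, recall, and the F-scores are computed from hard predictions, while the ROC curve and the AUC take the model's raw scores, since they track sensitivity and specificity across every possible threshold.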