
7   How do you measure classification models? Accuracy and its friends

 

This chapter covers:

  • How accuracy can help us evaluate models.
  • The types of errors a model can make: false positives and false negatives.
  • How to put these errors in a table: the confusion matrix.
  • What is precision, and which models require this metric?
  • What is recall, and which models require this metric?
  • Metrics that combine recall and precision, such as the F1-score and the Fβ-score.
  • Two new ways to evaluate a classification model: sensitivity and specificity.
  • What is the threshold of a classification model, and which models have this feature?
  • How does changing the threshold of a model affect its sensitivity and specificity?
  • What is the ROC curve, and how does it keep track of sensitivity and specificity as the threshold changes?
  • What is the area under the curve (AUC), and how does it help us evaluate our classification models?
  • Making a trade-off between sensitivity and specificity to pick the model that best solves the problem at hand.

7.1   Accuracy - How often is my model correct?
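Before the examples below, here is a minimal sketch of the definition this section relies on: accuracy is simply the fraction of points the model classifies correctly. The labels are made up for illustration and are not data from the chapter.

    # Hypothetical labels: 1 = positive (say, sick or spam), 0 = negative
    y_true = [1, 1, 0, 0, 1, 0, 0, 1]   # what actually happened
    y_pred = [1, 0, 0, 0, 1, 0, 1, 1]   # what the model predicted

    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)    # correct predictions / total predictions
    print(accuracy)                     # 0.75, since 6 of the 8 points are classified correctly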

7.1.1   Two examples of models - Coronavirus and spam email

7.1.2   A super effective yet super useless model

7.2   How to fix the problem? Defining different types of errors and how to measure them

7.2.1   False positives and false negatives - Which one is worse?

7.2.2   Storing the correctly and incorrectly classified points in a table - the confusion matrix
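As a quick sketch of the table this section describes, here is the same hypothetical example run through scikit-learn's confusion_matrix (using scikit-learn here is an assumption; the chapter may build the table by hand). Rows correspond to the true class and columns to the predicted class.

    from sklearn.metrics import confusion_matrix

    y_true = [1, 1, 0, 0, 1, 0, 0, 1]
    y_pred = [1, 0, 0, 0, 1, 0, 1, 1]

    # With labels ordered [0, 1], the layout is:
    # [[true negatives, false positives],
    #  [false negatives, true positives]]
    print(confusion_matrix(y_true, y_pred))
    # [[3 1]
    #  [1 3]]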

7.2.3   Recall - Among the positive examples, how many did we correctly classify?
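In terms of the confusion-matrix counts above, recall can be sketched as follows (TP and FN are the usual abbreviations for true positives and false negatives):

    # recall = true positives / (true positives + false negatives)
    # "Among the points that are actually positive, how many did the model catch?"
    TP, FN = 3, 1                 # counts from the hypothetical confusion matrix above
    recall = TP / (TP + FN)
    print(recall)                 # 0.75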

7.2.4   Precision - Among the examples we classified as positive, how many did we correctly classify?
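Precision uses the other off-diagonal count; here is a sketch with the same hypothetical numbers:

    # precision = true positives / (true positives + false positives)
    # "Among the points the model labeled positive, how many really are positive?"
    TP, FP = 3, 1
    precision = TP / (TP + FP)
    print(precision)              # 0.75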

7.2.5   Combining recall and precision as a way to optimize both - The F1-score and the Fβ-score
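The standard way to combine the two is a (weighted) harmonic mean. Here is a small sketch of both formulas, continuing the hypothetical numbers above; the choice beta = 2 is only an example.

    # F1 = 2 * precision * recall / (precision + recall)                  (harmonic mean)
    # Fβ = (1 + β**2) * precision * recall / (β**2 * precision + recall)
    #      β > 1 weighs recall more heavily, β < 1 weighs precision more heavily
    precision, recall = 0.75, 0.75
    beta = 2
    f1     = 2 * precision * recall / (precision + recall)
    f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    print(f1, f_beta)             # 0.75 0.75 (equal here only because precision == recall)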

7.2.6   Recall, precision, or F-scores - Which one should I use?

7.3   A very useful tool to evaluate our model - The receiver operating characteristic (ROC) curve

7.3.1   Sensitivity and specificity - Two new ways to evaluate our model (actually only one of them is new)
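For reference, here are the two definitions this section uses, sketched with the same hypothetical counts; sensitivity is exactly recall, which is the "not new" one.

    # sensitivity = true positives / (true positives + false negatives)   -> same as recall
    # specificity = true negatives / (true negatives + false positives)
    TP, FN, TN, FP = 3, 1, 3, 1
    sensitivity = TP / (TP + FN)   # 0.75
    specificity = TN / (TN + FP)   # 0.75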

7.3.2   The receiver operating characteristic (ROC) curve
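A minimal sketch of how the curve is built: sweep the threshold over the model's scores and record one (1 - specificity, sensitivity) point per threshold. The scores below are invented, and using scikit-learn's roc_curve is an assumption, not the chapter's own code.

    from sklearn.metrics import roc_curve

    y_true  = [0, 0, 1, 0, 1, 1, 0, 1]
    y_score = [0.1, 0.4, 0.35, 0.2, 0.8, 0.7, 0.55, 0.9]   # predicted probabilities

    # fpr = 1 - specificity (x-axis), tpr = sensitivity (y-axis)
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    for f, t, th in zip(fpr, tpr, thresholds):
        print(f"threshold={th:.2f}  1-specificity={f:.2f}  sensitivity={t:.2f}")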

7.3.3   A metric that tells us how good our model is - The AUC (area under the curve)
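And a one-line sketch of the metric itself, on the same invented scores (again assuming scikit-learn):

    from sklearn.metrics import roc_auc_score

    y_true  = [0, 0, 1, 0, 1, 1, 0, 1]
    y_score = [0.1, 0.4, 0.35, 0.2, 0.8, 0.7, 0.55, 0.9]

    # 1.0 means every positive is scored above every negative; 0.5 is no better than chance
    print(roc_auc_score(y_true, y_score))   # 0.875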

7.3.4   How to make decisions using the ROC curve