Chapter seven

7 How do you measure classification models? Accuracy and its friends

 

In this chapter

  • types of errors a model can make: false positives and false negatives
  • putting these errors in a table: the confusion matrix
  • what are accuracy, recall, precision, F-score, sensitivity, and specificity, and how are they used to evaluate models?
  • what is the ROC curve, and how does it keep track of sensitivity and specificity at the same time?

This chapter is slightly different from the previous two—it doesn’t focus on building classification models; instead, it focuses on evaluating them. For a machine learning professional, being able to evaluate the performance of different models is as important as being able to train them. We seldom train a single model on a dataset; we usually train several different models and select the one that performs best. We also need to make sure models are of good quality before putting them in production. Because the quality of a model is not always trivial to measure, in this chapter we learn several techniques to evaluate our classification models. In chapter 4, we learned how to evaluate regression models, so we can think of this chapter as its analog for classification models.
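To preview the metrics this chapter develops, here is a minimal sketch (not the book's code) that computes them from a hypothetical list of true and predicted labels, where 1 means "positive" and 0 means "negative." The function and variable names are illustrative choices, not names used later in the chapter.

```python
def confusion_counts(y_true, y_pred):
    """Count true positives, false positives, true negatives, and
    false negatives—the four cells of the confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def accuracy(tp, fp, tn, fn):
    # Fraction of all examples classified correctly
    return (tp + tn) / (tp + fp + tn + fn)

def recall(tp, fp, tn, fn):
    # Among the positive examples, how many did we correctly classify?
    return tp / (tp + fn)

def precision(tp, fp, tn, fn):
    # Among the examples we classified as positive, how many are correct?
    return tp / (tp + fp)

def f_score(tp, fp, tn, fn):
    # Harmonic mean of precision and recall, which rewards models
    # that keep both of them high at the same time
    p = precision(tp, fp, tn, fn)
    r = recall(tp, fp, tn, fn)
    return 2 * p * r / (p + r)
```

For example, with true labels `[1, 1, 1, 0, 0, 0, 0, 0]` and predictions `[1, 1, 0, 1, 0, 0, 0, 0]`, we get two true positives, one false positive, four true negatives, and one false negative, so the accuracy is 6/8 = 0.75 while recall and precision are both 2/3—a first hint of why accuracy alone can paint too rosy a picture.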

Accuracy: How often is my model correct?

Two examples of models: Coronavirus and spam email

A super effective yet super useless model

How to fix the accuracy problem? Defining different types of errors and how to measure them

False positives and false negatives: Which one is worse?

Storing the correctly and incorrectly classified points in a table: The confusion matrix

Recall: Among the positive examples, how many did we correctly classify?

Precision: Among the examples we classified as positive, how many did we correctly classify?

Combining recall and precision as a way to optimize both: The F-score

Calculating the F-score

Recall, precision, or F-scores: Which one should we use?

A useful tool to evaluate our model: The receiver operating characteristic (ROC) curve

Sensitivity and specificity: Two new ways to evaluate our model

The receiver operating characteristic (ROC) curve: A way to optimize sensitivity and specificity in a model

A metric that tells us how good our model is: The AUC (area under the curve)

How to make decisions using the ROC curve

Recall is sensitivity, but precision and specificity are different
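The ROC sections above can be sketched in code. This is a minimal, assumed implementation (not the book's code, and the example data is made up): for each score threshold, we compute sensitivity (the true-positive rate) and specificity (the true-negative rate), and then approximate the AUC with the trapezoidal rule over the points (1 − specificity, sensitivity).

```python
def roc_points(y_true, scores):
    """Return (1 - specificity, sensitivity) pairs, one per threshold,
    sweeping the threshold from high (predict nothing positive) to low
    (predict everything positive)."""
    positives = sum(y_true)
    negatives = len(y_true) - positives
    points = [(0.0, 0.0)]  # highest threshold: no positive predictions
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= t)
        points.append((fp / negatives, tp / positives))
    return points

def auc(points):
    """Area under the ROC curve, via the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area
```

For instance, with labels `[0, 0, 1, 1]` and model scores `[0.1, 0.4, 0.35, 0.8]`, the curve passes through (0, 0), (0, 0.5), (0.5, 0.5), (0.5, 1), and (1, 1), giving an AUC of 0.75—better than the 0.5 of a random classifier, but short of a perfect 1.0.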

Summary

Exercises