7 How do you measure classification models? Accuracy and its friends
This chapter covers
- Types of errors a model can make: false positives and false negatives
- Putting these errors in a table: the confusion matrix
- What are accuracy, recall, precision, F-score, sensitivity, and specificity, and how are they used to evaluate models?
- What is the ROC curve, and how does it keep track of sensitivity and specificity at the same time?
- What is the area under the curve (AUC), and how does it evaluate our classification models?
This chapter is slightly different from the previous two, because it focuses not on building classification models but on evaluating them. For a machine learning professional, being able to evaluate the performance of different models is as important as being able to train them. There are several reasons for this. One is that we seldom train a single model on a dataset; instead, we train several different models and select the one that performs best. Another is that before putting a model into production, we need to make sure it is of good quality. The quality of a model is not always trivial to measure, and in this chapter I teach you several techniques to evaluate your classification models. In chapter 4 you learned how to evaluate regression models, so you can think of this chapter as its analog for classification models.
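As a quick preview of where we are headed, the sketch below computes each of the metrics listed above using scikit-learn on a small, made-up set of labels. This is only an illustration under assumed inputs: the labels, predictions, and scores are hypothetical, and scikit-learn is not required to follow this chapter, which builds each metric by hand.

```python
# A minimal preview (assuming scikit-learn is installed) of the metrics
# developed in this chapter, computed on hypothetical labels and predictions.
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    f1_score,
    precision_score,
    recall_score,
    roc_auc_score,
)

# Hypothetical ground-truth labels and model predictions (1 = positive class)
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
# Hypothetical predicted probabilities, which the ROC AUC needs instead of
# hard 0/1 predictions
y_scores = [0.9, 0.2, 0.4, 0.8, 0.1, 0.7, 0.95, 0.3, 0.6, 0.05]

# The confusion matrix: rows are true classes, columns are predicted classes
print(confusion_matrix(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))  # also called sensitivity
print("F-score:  ", f1_score(y_true, y_pred))
print("ROC AUC:  ", roc_auc_score(y_true, y_scores))
```

Each of these one-line calls hides a simple calculation over the counts of false positives and false negatives, and the rest of the chapter unpacks them one by one.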