Concept: overfitting (category: Python)

This is an excerpt from Manning's book Deep Learning with Python, Second Edition MEAP V04.
The test-set accuracy turns out to be 97.8% — that’s quite a bit lower than the training-set accuracy (98.9%). This gap between training accuracy and test accuracy is an example of overfitting: the fact that machine-learning models tend to perform worse on new data than on their training data. Overfitting is a central topic in chapter 3.
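To make that gap concrete, here is a minimal sketch of how the two numbers can be compared with Keras, assuming an MNIST-style dense classifier; the architecture and training settings below are illustrative placeholders, not necessarily the book's exact listing:

```python
# Sketch: measuring the train/test accuracy gap with Keras.
# Assumes a simple dense classifier on MNIST; settings are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

(train_images, train_labels), (test_images, test_labels) = keras.datasets.mnist.load_data()
train_images = train_images.reshape((-1, 28 * 28)).astype("float32") / 255
test_images = test_images.reshape((-1, 28 * 28)).astype("float32") / 255

model = keras.Sequential([
    layers.Dense(512, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_images, train_labels, epochs=5, batch_size=128)

# The difference between these two numbers is the overfitting gap.
_, train_acc = model.evaluate(train_images, train_labels)
_, test_acc = model.evaluate(test_images, test_labels)
print(f"train accuracy: {train_acc:.3f}  test accuracy: {test_acc:.3f}")
```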
Overfitting is particularly likely to occur when your data is noisy, when it involves uncertainty, or when it includes rare features. Let’s look at concrete examples.
Figure 5.4. Dealing with outliers: robust fit vs. overfitting
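The figure itself is not reproduced in this excerpt. As a purely illustrative sketch (not code from the book), the same contrast can be seen by fitting models of different capacity to data containing a single outlier: the low-capacity fit stays close to the underlying trend, while the high-capacity fit bends toward the outlier.

```python
# Illustrative sketch (not from the book): a low-capacity fit is robust to a
# single outlier, while a high-capacity fit chases it (overfitting).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2.0 * x + rng.normal(scale=0.05, size=x.shape)  # underlying trend: y = 2x
y[10] += 2.0  # inject one outlier

robust_coeffs = np.polyfit(x, y, deg=1)    # low capacity: slope stays close to 2
overfit_coeffs = np.polyfit(x, y, deg=15)  # high capacity: curve bends toward the outlier

x_new = 0.55  # a point near where the outlier sits (x[10] is about 0.53)
print("robust prediction:", np.polyval(robust_coeffs, x_new))
print("overfit prediction:", np.polyval(overfit_coeffs, x_new))
```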

This is an excerpt from Manning's book Deep Learning with Python.
The model quickly starts overfitting, which is unsurprising given the small number of training samples. Validation accuracy has high variance for the same reason, but it seems to top out in the high 50s (percent).
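One way to see both the overfitting and the variance is to plot the per-epoch training and validation accuracy curves. The sketch below assumes a `history` object returned by an earlier call such as `model.fit(..., validation_data=...)`; the function name and variable names are placeholders rather than the book's exact listing.

```python
# Sketch: visualizing the training/validation accuracy gap from a Keras History.
# Assumes `history` was returned by model.fit(..., validation_data=...).
import matplotlib.pyplot as plt

def plot_accuracy(history):
    """Plot per-epoch training vs. validation accuracy from a Keras History."""
    acc = history.history["accuracy"]          # older Keras versions use "acc"
    val_acc = history.history["val_accuracy"]  # older Keras versions use "val_acc"
    epochs = range(1, len(acc) + 1)
    plt.plot(epochs, acc, "bo", label="Training accuracy")
    plt.plot(epochs, val_acc, "b", label="Validation accuracy")
    plt.title("Training and validation accuracy")
    plt.xlabel("Epochs")
    plt.legend()
    plt.show()

# Usage, once a model has been trained with validation data:
# plot_accuracy(history)
```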