7 Teaching machines to see better: Improving CNNs and making them confess
- Reducing overfitting in image classifiers with data augmentation, regularization, and smart model-training schemes
- Choosing and implementing alternative model architectures in a principled way (e.g., Minception, inspired by the Inception-ResNet-v2 model) that deliver better performance
- Implementing a robust, performant image classifier using Keras pretrained models and transfer learning
- Interpreting an image classifier's knowledge using modern machine learning interpretability techniques such as GradCAM
7.1 Techniques for reducing overfitting
7.1.1 Image data augmentation with Keras
7.1.2 Dropout: Randomly switching off parts of your network to improve generalizability
7.1.3 Early stopping: Halting the training process if the network starts to underperform
7.2 Towards minimalism: Minception instead of Inception
7.2.1 Implementing the stem
7.2.2 Implementing the Inception-ResNet A block
7.2.3 Implementing the Inception-ResNet B block
7.2.4 Implementing the Reduction block
7.2.5 Putting everything together
7.2.6 Training Minception
7.3 If you can't beat them, join 'em: Using pretrained networks for enhancing performance
7.3.1 Transfer learning: Reusing existing knowledge in deep neural networks
7.4 GradCAM: Making CNNs confess
7.5 Summary
7.6 Answers