4 Structuring Deep Learning Projects and Hyperparameter Tuning


 “With four parameters I can fit an elephant and with five I can make him wiggle his trunk”

-- John von Neumann

This chapter concludes the first part of this book, the deep learning foundation. In chapter 2, you learned how to build a multilayer perceptron (MLP). In chapter 3, you learned about convolutional neural networks (CNNs), a network architecture that is very commonly used in computer vision problems. In this chapter, we will wrap up this part by discussing how to structure your machine learning project from start to finish. You will learn strategies to get your deep learning systems working quickly and efficiently, analyze the results, and improve the network's performance.

As you might have already noticed from the previous projects, deep learning is a very empirical process. It relies on running experiments and observing model performance more than on one magic formula for success that fits all problems. We often start with an idea for a solution, code it up, run the experiment to see how it does, and then use the outcome of that experiment to refine our ideas. When building and tuning a neural network, you will find yourself making many seemingly arbitrary decisions.
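The experiment-driven loop described above can be sketched in a few lines. This is a minimal illustration, not code from the chapter: the model (a tiny logistic-regression "network" trained with gradient descent), the synthetic data, and the learning-rate grid are all illustrative assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data standing in for a real dataset:
# label is 1 when x0 + x1 > 0, so the problem is linearly separable.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def train_and_evaluate(lr, epochs=200):
    """One 'experiment': train a tiny logistic-regression model
    with the given learning rate and return its training accuracy."""
    w = np.zeros(2)
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)         # gradient of the log loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return np.mean((p > 0.5) == y)

# The empirical loop: run one experiment per hyperparameter setting,
# observe the outcome, and keep the best-performing configuration.
results = {lr: train_and_evaluate(lr) for lr in [0.001, 0.1, 1.0]}
best_lr = max(results, key=results.get)
print(f"best learning rate: {best_lr}, accuracy: {results[best_lr]:.2f}")
```

In a real project the inner "experiment" would be a full training run and the observed metric would be validation performance, but the shape of the loop (propose, run, observe, refine) is the same.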

4.1   Define the performance metrics

4.1.1   Is accuracy the best metric to evaluate the model?

4.1.2   Confusion matrix

4.1.3   Precision and Recall

4.1.4   F-Score

4.3.1   Split your data into train/validation/test datasets

4.3.2   Data preprocessing

4.4.1   Diagnose for overfitting and underfitting

4.4.2   Plot the learning curves

4.4.3   Exercise: build, train, and evaluate a simple network

4.5.1   When to collect more data vs. tune hyperparameters?

4.5.2   Parameters vs. hyperparameters

4.5.3   Neural network hyperparameters
