4 Structuring DL projects and hyperparameter tuning


This chapter covers

  • Defining performance metrics
  • Designing baseline models
  • Preparing training data
  • Evaluating a model and improving its performance

This chapter concludes the first part of this book, which provides a foundation for deep learning (DL). In chapter 2, you learned how to build a multilayer perceptron (MLP). In chapter 3, you learned about a neural network architecture commonly used in computer vision (CV) problems: convolutional neural networks (CNNs). In this chapter, we wrap up this foundation by discussing how to structure your machine learning (ML) project from start to finish. You will learn strategies to get your DL systems working quickly and efficiently, analyze the results, and improve network performance.

As you might have already noticed in the previous projects, DL is a very empirical process. It relies more on running experiments and observing model performance than on a single go-to formula that fits all problems. We often start with an initial idea for a solution, code it up, run an experiment to see how it performs, and then use the outcome of that experiment to refine the idea. When building and tuning a neural network, you will find yourself making many seemingly arbitrary decisions: which architecture to use, how many layers and hidden units to include, which learning rate and batch size to try, and how long to train. The sections that follow give you a systematic process for making these choices.
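To make this idea-experiment-refine loop concrete, here is a minimal sketch of one such cycle, assuming Keras (as in the earlier chapters) and MNIST as a stand-in dataset. The `build_model` helper, the small learning-rate grid, and the epoch count are illustrative choices for this sketch, not recommendations from the chapter:

```python
# A minimal sketch of the idea -> experiment -> refine loop, assuming
# Keras and MNIST as stand-ins; build_model() and the learning-rate
# grid below are illustrative, not prescriptions.
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical

# Load and flatten the images; we use the held-out split as a
# validation set here purely for illustration (section 4.3.1 covers
# proper train/validation/test splitting).
(x_train, y_train), (x_val, y_val) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_val = x_val.reshape(-1, 784).astype("float32") / 255.0
y_train, y_val = to_categorical(y_train), to_categorical(y_val)

def build_model(learning_rate):
    # Stand-in architecture: a small MLP like the one from chapter 2
    model = Sequential([
        Dense(64, activation="relu", input_shape=(784,)),
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# One "seemingly arbitrary" decision, made empirically: try a few
# learning rates, observe validation performance, keep the best.
best_acc, best_lr = 0.0, None
for lr in [0.1, 0.01, 0.001]:
    model = build_model(lr)
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    _, val_acc = model.evaluate(x_val, y_val, verbose=0)
    print(f"lr={lr}: validation accuracy {val_acc:.3f}")
    if val_acc > best_acc:
        best_acc, best_lr = val_acc, lr

print(f"Best so far: lr={best_lr} ({best_acc:.3f}); refine and repeat")
```

Each pass through this loop is one experiment. The rest of the chapter is about running that loop well: measuring the outcome with the right metric (section 4.1) and deciding what to change next (section 4.5).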

4.1 Defining performance metrics

4.1.1 Is accuracy the best metric for evaluating a model?

4.1.2 Confusion matrix

4.1.3 Precision and recall

4.1.4 F-score

4.2 Designing a baseline model

4.3 Getting your data ready for training

4.3.1 Splitting your data for train/validation/test

4.3.2 Data preprocessing

4.4 Evaluating the model and interpreting its performance

4.4.1 Diagnosing overfitting and underfitting

4.4.2 Plotting the learning curves

4.4.3 Exercise: Building, training, and evaluating a network

4.5 Improving the network and tuning hyperparameters
