2 Deep learning and neural networks


This chapter covers

  • Understanding perceptrons and multilayer perceptrons
  • Working with the different types of activation functions
  • Training networks with feedforward, error functions, and error optimization
  • Performing backpropagation

In the last chapter, we discussed the components of the computer vision (CV) pipeline: the input image, preprocessing, feature extraction, and the learning algorithm (classifier). We also saw that in traditional ML algorithms, we manually extract features to produce a feature vector that is then classified by the learning algorithm, whereas in deep learning (DL), a neural network acts as both the feature extractor and the classifier. The network automatically recognizes patterns in the image, extracts its features, and classifies them into labels (figure 2.1).

Figure 2.1 Traditional ML algorithms require manual feature extraction. A deep neural network automatically extracts features by passing the input image through its layers.

In this chapter, we will take a short pause from the CV context to open the DL algorithm box from figure 2.1. We will dive deeper into how neural networks learn features and make predictions. Then, in the next chapter, we will come back to CV applications with one of the most popular DL architectures: convolutional neural networks.
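As a quick preview of the ideas ahead, the perceptron we start with in section 2.1 boils down to a weighted sum of inputs plus a bias, passed through an activation function. The sketch below is illustrative only; the function name and the weight values are made up for this example, and it uses a simple step activation:

```python
import numpy as np

def perceptron(x, w, b):
    """A single perceptron: weighted sum of inputs plus bias,
    passed through a step activation function."""
    z = np.dot(w, x) + b        # weighted sum: z = w . x + b
    return 1 if z > 0 else 0    # step activation: fire (1) or not (0)

# Example: with these (illustrative) weights and bias, the
# perceptron computes logical AND of its two inputs.
w = np.array([0.5, 0.5])        # one weight per input
b = -0.7                        # bias shifts the decision boundary

print(perceptron(np.array([1, 1]), w, b))   # -> 1 (both inputs on)
print(perceptron(np.array([1, 0]), w, b))   # -> 0
```

Training, which the rest of the chapter covers, is the process of finding weight and bias values like these automatically instead of setting them by hand.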

The high-level layout of this chapter is as follows:

2.1 Understanding perceptrons

2.1.1 What is a perceptron?

2.1.2 How does the perceptron learn?

2.1.3 Is one neuron enough to solve complex problems?

2.2 Multilayer perceptrons

2.2.1 Multilayer perceptron architecture

2.2.2 What are hidden layers?

2.2.3 How many layers, and how many nodes in each layer?

2.2.4 Some takeaways from this section

2.3 Activation functions

2.3.1 Linear transfer function