6 Using a neural network to fit the data

 

This chapter covers

  • Nonlinear activation functions as the key difference compared with linear models
  • Working with PyTorch’s nn module
  • Solving a linear-fit problem with a neural network

So far, we’ve taken a close look at how a linear model can learn and how to make that happen in PyTorch. We’ve focused on a very simple regression problem that used a linear model with only one input and one output. Such a simple example allowed us to dissect the mechanics of a model that learns, without getting overly distracted by the implementation of the model itself. As we saw in the overview diagram in chapter 5, figure 5.2 (repeated here as figure 6.1), the exact details of a model are not needed to understand the high-level process that trains it. Backpropagating the error to the parameters and then updating those parameters using the gradient of the loss with respect to them is the same no matter what the underlying model is.
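
In code, the loop looks the same no matter the model. Here is a minimal sketch in the spirit of the training loop from chapter 5, where model, loss_fn, and optimizer stand in for whatever model, loss function, and optimizer we choose, and t_u and t_c are illustrative names for the input and target tensors:

import torch

def training_loop(n_epochs, optimizer, model, loss_fn, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        t_p = model(t_u)            # forward pass, whatever the model is
        loss = loss_fn(t_p, t_c)    # measure how wrong the predictions are

        optimizer.zero_grad()       # clear gradients from the previous step
        loss.backward()             # backpropagate: gradient of loss w.r.t. parameters
        optimizer.step()            # update the parameters

        if epoch % 1000 == 0:
            print(f"Epoch {epoch}, Loss {loss.item():.4f}")

Nothing in this loop refers to the internals of model; swapping the linear model for a neural network leaves it untouched.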

Figure 6.1 Our mental model of the learning process, as implemented in chapter 5

6.1 Artificial neurons
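
At its core, an artificial neuron is just a linear transformation of the input (a weighted sum plus a bias) passed through a nonlinear function, the activation function. As a minimal sketch, assuming tanh as the activation and a scalar input, weight, and bias:

import torch

x = torch.ones(())     # input
w = torch.ones(())     # weight
b = torch.zeros(())    # bias

o = torch.tanh(w * x + b)   # neuron output: activation(linear transformation)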

 
 
 
 

6.1.1 Composing a multilayer network
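
A multilayer network is built by composing such neurons: the outputs of one layer of neurons become the inputs of the next. A sketch with two layers follows; the sizes are arbitrary, chosen only for illustration:

import torch

def layer(w, b, x):
    return torch.tanh(w @ x + b)    # one layer of neurons sharing the same input

x = torch.randn(3)                           # a 3-dimensional input
w0, b0 = torch.randn(4, 3), torch.randn(4)   # first layer: 3 inputs -> 4 outputs
w1, b1 = torch.randn(1, 4), torch.randn(1)   # second layer: 4 inputs -> 1 output

x1 = layer(w0, b0, x)    # the output of the first layer ...
x2 = layer(w1, b1, x1)   # ... is the input of the second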

 
 
 

6.1.2 Understanding the error function
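
As with the linear model, we need an error (loss) function that measures how far the predictions land from the targets. One common choice for regression, sketched here with made-up predictions and targets, is the mean squared error:

import torch
import torch.nn as nn

loss_fn = nn.MSELoss()                 # mean squared error
t_p = torch.tensor([1.0, 2.0, 3.0])   # hypothetical predictions
t_c = torch.tensor([1.5, 2.0, 2.5])   # hypothetical targets
print(loss_fn(t_p, t_c))              # the mean of the squared differences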

 

6.1.3 All we need is activation
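
The activation function is what makes the composition genuinely nonlinear: stacking linear transformations without one collapses into a single linear transformation, no matter how many layers we pile up. A small sketch makes the point:

import torch

x = torch.randn(5)
w0 = torch.randn(4, 5)
w1 = torch.randn(3, 4)

y_stacked = w1 @ (w0 @ x)       # two linear layers with no activation ...
y_combined = (w1 @ w0) @ x      # ... equal a single linear layer
print(torch.allclose(y_stacked, y_combined))   # True

y_nonlinear = w1 @ torch.tanh(w0 @ x)   # an activation in between breaks the collapse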

 
 
 

6.1.4 More activation functions
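
tanh is only one of many activation functions PyTorch provides. A few common ones, sketched on the same inputs:

import torch
import torch.nn.functional as F

x = torch.linspace(-3, 3, 7)

print(torch.tanh(x))      # saturates at -1 and 1
print(torch.sigmoid(x))   # saturates at 0 and 1
print(F.relu(x))          # zero for negative inputs, identity for positive ones
print(F.leaky_relu(x))    # like relu, but with a small slope below zero
print(F.softplus(x))      # a smooth relative of relu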

 
 
 

6.1.5 Choosing the best activation function

 
 

6.1.6 What learning means for a neural network

 
 
 

6.2 The PyTorch nn module
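
The nn module provides ready-made building blocks, called modules, for neural networks. The block most relevant to our problem is nn.Linear, which applies an affine transformation to its input. A minimal sketch, with illustrative input values:

import torch
import torch.nn as nn

linear_model = nn.Linear(1, 1)   # one input feature, one output feature

t_u = torch.tensor([[35.7], [55.9], [58.2]])   # a batch of inputs, one sample per row
t_p = linear_model(t_u)                        # modules are called like functions
print(t_p.shape)                               # torch.Size([3, 1])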

 
 

6.2.1 Using __call__ rather than forward
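
Calling a module instance as a function invokes nn.Module.__call__, which runs any registered hooks before and after dispatching to the module’s forward method. That’s why we call the instance rather than forward directly:

import torch
import torch.nn as nn

linear_model = nn.Linear(1, 1)
x = torch.ones(1, 1)

y = linear_model(x)           # correct: goes through nn.Module.__call__
y = linear_model.forward(x)   # runs, but silently skips the hook machinery; avoid it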

 
 

6.2.2 Returning to the linear model
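
With nn.Linear standing in for our hand-rolled weight and bias, the module also hands its parameters to the optimizer for us, through its parameters() method. A sketch, with an illustrative learning rate:

import torch.nn as nn
import torch.optim as optim

linear_model = nn.Linear(1, 1)
optimizer = optim.SGD(linear_model.parameters(),   # the module exposes its
                      lr=1e-2)                     # weight and bias for us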

 

6.3 Finally a neural network

 
 

6.3.1 Replacing the linear model
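
nn.Sequential lets us chain modules end to end, so swapping the linear model for a small neural network is a local change. A sketch, with an arbitrary hidden size of 13:

import torch.nn as nn

seq_model = nn.Sequential(
    nn.Linear(1, 13),   # hidden layer: 1 input feature, 13 hidden units
    nn.Tanh(),          # the nonlinearity between the two linear layers
    nn.Linear(13, 1))   # output layer: 13 hidden units, 1 output

Everything else, including the training loop, stays exactly as it was.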

 
 
 

6.3.2 Inspecting the parameters
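
Calling named_parameters() on a module lets us check what the optimizer will be updating. For the sequential model sketched in the previous section:

import torch.nn as nn

seq_model = nn.Sequential(nn.Linear(1, 13), nn.Tanh(), nn.Linear(13, 1))

for name, param in seq_model.named_parameters():
    print(name, param.shape)
# 0.weight torch.Size([13, 1])
# 0.bias torch.Size([13])
# 2.weight torch.Size([1, 13])
# 2.bias torch.Size([1])

The names come from the submodules’ positions in the Sequential container; nn.Tanh at position 1 has no parameters of its own.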

 
 
 

6.3.3 Comparing to the linear model

 
 

6.4 Conclusion

 
 