5 The mechanics of learning

 

This chapter covers

  • Understanding how algorithms can learn from data
  • Reframing learning as parameter estimation, using differentiation and gradient descent
  • Walking through a simple learning algorithm
  • How PyTorch supports learning with autograd

With the blooming of machine learning that has occurred over the last decade, the notion of machines that learn from experience has become a mainstream theme in both technical and journalistic circles. Now, how exactly does a machine learn? What are the mechanics of this process, or, in other words, what is the algorithm behind it? From the point of view of an observer, a learning algorithm is presented with input data that is paired with desired outputs. Once learning has occurred, the algorithm will be capable of producing correct outputs when it is fed new data that is similar enough to the input data it was trained on. With deep learning, this process works even when the input data and the desired output are far from each other: when they come from different domains, like an image and a sentence describing it, as we saw in chapter 2.

5.1 A timeless lesson in modeling

 
 
 

5.2 Learning is just parameter estimation

 
 
 
 

5.2.1 A hot problem

 
 
 

5.2.2 Gathering some data
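
The section titles suggest that the running example is calibrating a thermometer that reports readings in unknown units against known temperatures in Celsius. As a minimal sketch of such a dataset (the pairing of the two tensors matters more than the particular numbers, which are illustrative):

    import torch

    # Temperatures in Celsius, and the corresponding readings from a
    # thermometer whose units we don't know yet.
    t_c = torch.tensor([0.5, 14.0, 15.0, 28.0, 11.0, 8.0,
                        3.0, -4.0, 6.0, 13.0, 21.0])
    t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9,
                        33.9, 21.8, 48.4, 60.4, 68.4])

The later sketches in this chapter reuse the names t_c and t_u.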

 
 
 

5.2.3 Visualizing the data
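
Before modeling anything, it helps to look at the data. A quick scatter plot, assuming the t_u and t_c tensors from the previous sketch and matplotlib:

    import matplotlib.pyplot as plt

    fig = plt.figure()
    plt.xlabel("Measurement (unknown units)")
    plt.ylabel("Temperature (Celsius)")
    plt.plot(t_u.numpy(), t_c.numpy(), 'o')  # one marker per data point
    plt.show()

If the points line up roughly along a straight line, a linear model is a reasonable first guess.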

 
 
 
 

5.2.4 Choosing a linear model as a first try
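
A linear model has just two parameters: a weight w that scales the input and a bias b that offsets it. A sketch, reusing t_u from above:

    def model(t_u, w, b):
        # Predicted temperature in Celsius as a linear function of the
        # unknown-unit reading; w and b are the parameters to estimate.
        return w * t_u + b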

 

5.3 Less loss is what we want
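
A loss function measures how far the model's predictions t_p are from the measured values t_c; learning then means driving that number down. Mean squared error is the usual first choice for a regression problem like this one:

    def loss_fn(t_p, t_c):
        # Mean squared error: the average of the squared differences
        # between predictions and measurements.
        squared_diffs = (t_p - t_c) ** 2
        return squared_diffs.mean()

Squaring makes the loss differentiable everywhere and penalizes large errors more heavily than small ones.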

 
 
 

5.3.1 From problem back to PyTorch
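
Putting the pieces together in PyTorch, assuming the model, loss_fn, t_u, and t_c defined in the earlier sketches:

    w = torch.ones(())   # scalar weight, initialized to 1
    b = torch.zeros(())  # scalar bias, initialized to 0

    t_p = model(t_u, w, b)    # broadcasting: scalar w and b against the vector t_u
    loss = loss_fn(t_p, t_c)
    print(loss)

Nothing has been learned yet; this just evaluates the loss at an arbitrary starting point.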

 
 

5.4 Down along the gradient
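
Gradient descent updates each parameter by a small step in the direction opposite the gradient of the loss with respect to that parameter; the step size is set by the learning rate. In sketch form, with hypothetical gradient values standing in for ones computed in the next sections:

    learning_rate = 1e-2

    w, b = 1.0, 0.0             # current parameter values
    grad_w, grad_b = 45.0, 1.2  # hypothetical gradients, for illustration only

    # Step downhill: subtract the gradient scaled by the learning rate.
    w = w - learning_rate * grad_w
    b = b - learning_rate * grad_b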

 
 
 

5.4.1 Decreasing loss
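
One way to estimate how the loss changes with a parameter is to evaluate the loss at two nearby parameter values and divide by the distance between them (a finite difference). A sketch for w, reusing the earlier model, loss_fn, t_u, and t_c, with w and b as the scalar tensors defined above:

    delta = 0.1
    learning_rate = 1e-2

    # Symmetric finite-difference estimate of d(loss)/dw.
    loss_rate_of_change_w = (
        loss_fn(model(t_u, w + delta, b), t_c) -
        loss_fn(model(t_u, w - delta, b), t_c)) / (2.0 * delta)

    w = w - learning_rate * loss_rate_of_change_w

The same recipe applies to b. Finite differences are easy to write but noisy and expensive, which motivates computing the derivatives analytically instead.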

 
 

5.4.2 Getting analytical
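
For this model the derivatives can be written down by hand and combined with the chain rule: d(loss)/dw = d(loss)/d(t_p) * d(t_p)/dw, and likewise for b. A sketch:

    def dloss_fn(t_p, t_c):
        # Derivative of mean squared error with respect to the predictions.
        return 2 * (t_p - t_c) / t_p.size(0)

    def dmodel_dw(t_u, w, b):
        return t_u   # d(w * t_u + b)/dw

    def dmodel_db(t_u, w, b):
        return 1.0   # d(w * t_u + b)/db

    def grad_fn(t_u, t_c, t_p, w, b):
        # Chain rule, summed over the data points.
        dloss_dtp = dloss_fn(t_p, t_c)
        dloss_dw = dloss_dtp * dmodel_dw(t_u, w, b)
        dloss_db = dloss_dtp * dmodel_db(t_u, w, b)
        return torch.stack([dloss_dw.sum(), dloss_db.sum()])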

 
 

5.4.3 Iterating to fit the model
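
With a way to compute gradients in hand, training is a loop: forward pass, loss, gradients, parameter update, repeated for some number of epochs. A sketch building on the previous definitions, with params holding w and b in a single tensor:

    def training_loop(n_epochs, learning_rate, params, t_u, t_c):
        for epoch in range(1, n_epochs + 1):
            w, b = params
            t_p = model(t_u, w, b)               # forward pass
            loss = loss_fn(t_p, t_c)
            grad = grad_fn(t_u, t_c, t_p, w, b)  # gradients, by hand
            params = params - learning_rate * grad
            if epoch % 500 == 0:
                print('Epoch %d, Loss %f' % (epoch, float(loss)))
        return params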

 
 
 

5.4.4 Normalizing inputs
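
The gradient with respect to w is roughly t_u times larger than the gradient with respect to b, so a single learning rate that suits one parameter is poorly suited to the other. Rescaling the input so that it spans a range comparable to the parameters evens things out. A sketch:

    # Bring the input into roughly the [0, 10] range.
    t_un = 0.1 * t_u

    params = training_loop(
        n_epochs=5000,
        learning_rate=1e-2,
        params=torch.tensor([1.0, 0.0]),
        t_u=t_un,   # note: the normalized input
        t_c=t_c)

Any model fit on t_un expects normalized inputs from then on.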

 
 
 

5.4.5 Visualizing (again)
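
After training, plotting the fitted line over the original points shows how well the model explains the data. A sketch, assuming the params and t_un from the previous step:

    import matplotlib.pyplot as plt

    t_p = model(t_un, *params)  # predictions at the training inputs

    fig = plt.figure()
    plt.xlabel("Measurement (unknown units)")
    plt.ylabel("Temperature (Celsius)")
    plt.plot(t_u.numpy(), t_p.detach().numpy())  # fitted line
    plt.plot(t_u.numpy(), t_c.numpy(), 'o')      # original data points
    plt.show()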

 
 
 

5.5 PyTorch’s autograd: Backpropagating all things
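
PyTorch can build the gradient function for us. Any tensor created with requires_grad=True has its operation history recorded as it flows through a computation, and calling backward() on the resulting loss populates each such tensor's .grad attribute with the derivative of the loss with respect to it. A sketch, reusing model, loss_fn, t_c, and the normalized input t_un:

    params = torch.tensor([1.0, 0.0], requires_grad=True)

    t_p = model(t_un, *params)   # forward pass, recorded by autograd
    loss = loss_fn(t_p, t_c)
    loss.backward()              # backpropagate through the recorded graph

    print(params.grad)           # d(loss)/dw and d(loss)/db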

 
 
 
 

5.5.1 Computing the gradient automatically
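
The hand-written grad_fn then disappears from the training loop. Two details matter: gradients accumulate into .grad across backward() calls, so they must be zeroed before each one, and the parameter update itself must be kept out of the recorded graph, here with torch.no_grad(). A sketch:

    def training_loop(n_epochs, learning_rate, params, t_u, t_c):
        for epoch in range(1, n_epochs + 1):
            if params.grad is not None:
                params.grad.zero_()   # gradients accumulate; reset each epoch

            t_p = model(t_u, *params)
            loss = loss_fn(t_p, t_c)
            loss.backward()

            with torch.no_grad():
                params -= learning_rate * params.grad  # in-place update, untracked

            if epoch % 500 == 0:
                print('Epoch %d, Loss %f' % (epoch, float(loss)))
        return params

    training_loop(
        n_epochs=5000,
        learning_rate=1e-2,
        params=torch.tensor([1.0, 0.0], requires_grad=True),
        t_u=t_un,
        t_c=t_c)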

 
 
 