7 Auto Encoding and Self Supervision

 

This chapter covers

  • Training without labels.
  • Auto Encoding to project data.
  • Constraining networks with bottlenecks.
  • Adding noise to improve performance.
  • Predicting the next item to make generative models.

At this point we have learned several approaches to specifying a neural network, and we have applied them to classification and regression problems. These are the classic machine learning problems, where each data point x (e.g., a picture of a fruit) comes with an associated answer y (e.g., fresh or rotten). But what if we do not have a label y? Is there any useful way for us to learn? You should recognize this as an unsupervised learning scenario.

7.1    Auto Encoding

 
 

7.1.1 Principal Component Analysis is an Auto Encoder
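The idea can be sketched directly in numpy: PCA behaves as a linear auto encoder, where projecting onto the top-k principal directions is the encoder and mapping back with the transpose is the decoder. The data and sizes below are illustrative stand-ins, not the chapter's dataset.

```python
import numpy as np

# Illustrative sketch: PCA as a linear auto encoder. The encoder
# projects x onto the top-k principal directions W, and the decoder
# maps the code back with W^T, reconstructing the input.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
X = X - X.mean(axis=0)          # center the data, as PCA requires

# The principal directions come from the SVD of the centered data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)

def reconstruct(X, Vt, k):
    W = Vt[:k]                  # encoder: R^20 -> R^k
    Z = X @ W.T                 # latent codes (the "bottleneck")
    return Z @ W                # decoder: R^k -> R^20

err5 = np.mean((X - reconstruct(X, Vt, 5)) ** 2)
err15 = np.mean((X - reconstruct(X, Vt, 15)) ** 2)
# Keeping more components can only lower the reconstruction error.
```

The bottleneck size k controls how much information survives the round trip, which is exactly the constraint that makes auto encoding useful.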

 
 

7.1.2 Implementing PCA

 
 

7.1.3 Visualizing Auto Encoder Results

 
 

7.2    Auto-Encoding Networks

 
 

7.2.1 Implementing an Auto Encoder
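A minimal PyTorch sketch of the idea follows; the layer sizes, activation, and synthetic data are illustrative assumptions, not the chapter's exact architecture. The encoder squeezes 20-dimensional inputs through a 2-dimensional bottleneck, and the decoder tries to reconstruct the original input.

```python
import torch
from torch import nn

# Hedged sketch of a nonlinear auto encoder (illustrative sizes).
torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(20, 8), nn.Tanh(), nn.Linear(8, 2))
decoder = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 20))
model = nn.Sequential(encoder, decoder)

X = torch.randn(256, 20)                 # stand-in for real data
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

first_loss = None
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), X)          # the input is also the target
    loss.backward()
    opt.step()
    if first_loss is None:
        first_loss = loss.item()
final_loss = loss.item()
```

Note the key move: there is no label y anywhere, because the input itself serves as the training target.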

 
 

7.2.2 Auto Encoder Results

 
 
 

7.3    Bigger Auto Encoders

 
 
 

7.3.1 Robustness to Noise

 

7.3.2 Denoising Auto Encoders
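The denoising variant changes only one line of the training loop, sketched below with illustrative sizes and synthetic data: corrupt the input with Gaussian noise, but keep the original, clean input as the target. This prevents the network from simply learning the identity function and pushes it toward more robust features.

```python
import torch
from torch import nn

# Hedged sketch of a denoising auto encoder (illustrative sizes).
torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(20, 8), nn.Tanh(),   # encoder
    nn.Linear(8, 20),              # decoder
)
X = torch.randn(256, 20)           # stand-in for real data
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

first_loss = None
for step in range(200):
    noisy = X + 0.3 * torch.randn_like(X)   # corrupt the input...
    opt.zero_grad()
    loss = loss_fn(model(noisy), X)         # ...but target the clean x
    loss.backward()
    opt.step()
    if first_loss is None:
        first_loss = loss.item()
final_loss = loss.item()
```

The noise scale (0.3 here) is a knob: too little and the identity shortcut survives, too much and the clean signal is unrecoverable.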

 
 
 

7.4    Autoregressive Models

 
 
 

7.4.1 Implementing Char-RNN
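A tiny char-RNN can be sketched as follows; the training string, GRU, and sizes are illustrative choices rather than the chapter's exact setup. Each character is embedded, fed through a recurrent layer, and the output layer predicts the next character, so the text supplies its own labels.

```python
import torch
from torch import nn

# Hedged char-RNN sketch: predict the next character of a string.
torch.manual_seed(0)
text = "hello world, hello world"
vocab = sorted(set(text))
stoi = {c: i for i, c in enumerate(vocab)}
data = torch.tensor([stoi[c] for c in text])

emb = nn.Embedding(len(vocab), 16)
rnn = nn.GRU(16, 32, batch_first=True)
head = nn.Linear(32, len(vocab))
params = list(emb.parameters()) + list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Inputs are the string; targets are the same string shifted by one.
x, y = data[:-1].unsqueeze(0), data[1:]
first_loss = None
for step in range(100):
    opt.zero_grad()
    h, _ = rnn(emb(x))                      # h: (1, seq_len, 32)
    loss = loss_fn(head(h).squeeze(0), y)
    loss.backward()
    opt.step()
    if first_loss is None:
        first_loss = loss.item()
final_loss = loss.item()
```

The shifted-target trick is the whole self-supervision story: every position in the text is a free training example.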

 
 
 

7.4.2 Autoregressive Models are Generative Models
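The generative side can be sketched as a sampling loop, with an illustrative vocabulary and an untrained model (so the output here is gibberish), but the loop is identical after training: sample one item, feed it back in as the next input, and repeat.

```python
import torch
from torch import nn

# Hedged sketch of autoregressive sampling (untrained, illustrative model).
torch.manual_seed(0)
vocab = list("abcdefgh ")
emb = nn.Embedding(len(vocab), 16)
rnn = nn.GRU(16, 32, batch_first=True)
head = nn.Linear(32, len(vocab))

generated = [0]                       # start from an arbitrary seed token
h = None
for _ in range(20):
    x = torch.tensor([[generated[-1]]])   # only the most recent token
    out, h = rnn(emb(x), h)               # carry the hidden state forward
    probs = torch.softmax(head(out[0, -1]), dim=-1)
    generated.append(torch.multinomial(probs, 1).item())

sample = "".join(vocab[i] for i in generated)
```

Because each output is drawn from a probability distribution over the vocabulary, the model defines a distribution over whole sequences; sampling from it is what makes the model generative.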

 
 
 

7.4.3 Faster Sampling

 
 
 
 

7.5    Exercises

 
 

7.6    Summary

 
 