6 AutoEncoders
This chapter covers
- The design principles and patterns for deep neural network and convolutional neural network autoencoders.
- Coding examples of these models using the procedural design pattern.
- Regularization when training an autoencoder.
- Using an autoencoder for compression, denoising, and super resolution.
- Using an autoencoder for pretraining, as an unsupervised pretext task, to improve the model’s ability to generalize.
Up to now we’ve only discussed models for supervised learning. An autoencoder falls into the category of unsupervised learning. That is, in supervised learning the data consists of features (e.g., image data) and labels (e.g., classes), and we train the model to predict the labels. In unsupervised learning, we either have no labels or we don’t use them, and instead we train the model to find patterns and correlations in the data itself.
So you might ask, what can we do without labels? We can do a lot of things! Autoencoders are the fundamental deep learning models for unsupervised learning. Even without human labeling, autoencoders can be used for image compression, representation learning, image denoising, super-resolution, and pretext tasks, each of which we cover in this chapter.
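To make the idea concrete before we dive in, here is a minimal sketch of a fully connected autoencoder written in the procedural style used for the coding examples in this chapter. The framework (TF.Keras), the 28×28 input shape, and the layer sizes are illustrative assumptions, not a prescribed architecture; the key point is that the model is trained to reconstruct its own input, so no labels are required.

```python
# A minimal sketch of a fully connected autoencoder (assumed TF.Keras,
# illustrative layer sizes), written in the procedural design pattern.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Flatten, Dense, Reshape

def encoder(x, latent_dim):
    """ Compress the input into a low-dimensional latent vector. """
    x = Flatten()(x)
    x = Dense(128, activation='relu')(x)
    x = Dense(latent_dim, activation='relu')(x)
    return x

def decoder(x, output_shape):
    """ Reconstruct the original input from the latent vector. """
    units = output_shape[0] * output_shape[1]
    x = Dense(128, activation='relu')(x)
    x = Dense(units, activation='sigmoid')(x)
    x = Reshape(output_shape)(x)
    return x

# Assemble the encoder and decoder into a single model.
inputs  = Input((28, 28))
latent  = encoder(inputs, latent_dim=32)
outputs = decoder(latent, output_shape=(28, 28))
autoencoder = Model(inputs, outputs)

# The input is also the target: the model learns to reproduce its input
# through the low-dimensional bottleneck, e.g. autoencoder.fit(x, x).
autoencoder.compile(optimizer='adam', loss='mse')
```

Because the target is the input itself, the loss measures how faithfully the decoder can rebuild the data from the compressed latent vector; this reconstruction objective is what lets the later sections reuse the same pattern for compression, denoising, super-resolution, and pretraining.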