9 AutoEncoders
This chapter covers
- Understanding the design principles and patterns for DNN and CNN autoencoders.
- Coding these models using the procedural design pattern.
- Regularization when training an autoencoder.
- Using an autoencoder for compression, denoising, and super-resolution.
- Using an autoencoder for pretraining as an unsupervised pretext task to improve the model’s ability to generalize.
Up to now we’ve only discussed models for supervised learning. An autoencoder model falls into the category of unsupervised learning. As a reminder, in supervised learning our data consists of the features (e.g., image data) and labels (e.g., classes), and we train the model to learn to predict the labels from the features. In unsupervised learning, we either have no labels or we don’t use them, and we train the model to find correlating patterns in the data. You might ask, what can we do without labels? We can do a lot of things, and autoencoders are one type of model architecture that can learn from unlabeled data.
Autoencoders are the fundamental deep learning models for unsupervised learning. Even without human labeling, autoencoders can learn image compression, representation learning, image denoising, super-resolution, and pretext tasks, and we’ll cover each of these in this chapter.
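To make the core idea concrete before we get to the full models, here is a minimal sketch of what every autoencoder does: squeeze the input through a low-dimensional bottleneck (the encoder) and then reconstruct the input from that bottleneck (the decoder), trained only on the reconstruction error, with no labels anywhere. This toy example uses plain NumPy with a linear encoder/decoder and made-up toy data; the dimensions, learning rate, and step count are illustrative choices, not the chapter's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: 200 samples of 8-dimensional features that secretly
# live on a 2-dimensional subspace, so a 2-unit bottleneck suffices.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 8))
X = latent @ basis

# Linear autoencoder: encoder W_enc (8 -> 2), decoder W_dec (2 -> 8).
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))
lr = 0.01

for step in range(3000):
    Z = X @ W_enc                      # encode: compress to the bottleneck
    X_hat = Z @ W_dec                  # decode: reconstruct the input
    err = X_hat - X                    # reconstruction error (the only signal)
    # Gradients of the mean squared reconstruction loss.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean(((X @ W_enc) @ W_dec - X) ** 2)
print(mse)  # reconstruction error after training; far below np.mean(X**2)
```

Note that no labels appear anywhere: the input itself is the training target. The DNN and CNN autoencoders later in the chapter follow exactly this encode/decode shape, just with deeper nonlinear layers in place of the two weight matrices.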