This chapter covers
- Understanding the design principles and patterns for DNN and CNN autoencoders
- Coding these models using the procedural design pattern
- Regularization when training an autoencoder
- Using an autoencoder for compression, denoising, and super-resolution
- Using an autoencoder for pretraining to improve the model’s ability to generalize
Up to now, we’ve discussed only models for supervised learning. An autoencoder falls into the category of unsupervised learning. As a reminder, in supervised learning our data consists of features (for example, image data) and labels (for example, classes), and we train the model to predict the labels from the features. In unsupervised learning, we either have no labels or don’t use them, and we train the model to find patterns and correlations in the data on its own. You might ask, what can we do without labels? Quite a lot, as it turns out, and autoencoders are one type of model architecture that can learn from unlabeled data.
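To make the idea concrete before we get into the design patterns, here is a minimal sketch of the autoencoder training loop using plain NumPy rather than a deep learning framework. The key point is that no labels appear anywhere: the input itself serves as the training target, and the model learns to compress the data through a narrow latent code and reconstruct it. The data, layer sizes, learning rate, and variable names here are illustrative choices, not anything prescribed by the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled toy data: 200 samples with 8 features -- no labels anywhere.
X = rng.normal(size=(200, 8))

# A linear autoencoder: the encoder compresses 8 features down to a
# 3-dimensional latent code, and the decoder reconstructs the input from it.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

def reconstruction_loss(X, W_enc, W_dec):
    # Mean squared reconstruction error: the input itself is the target.
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

loss_before = reconstruction_loss(X, W_enc, W_dec)

lr = 0.1
for _ in range(1000):
    Z = X @ W_enc            # latent code (the compressed representation)
    err = Z @ W_dec - X      # reconstruction error per sample and feature
    # Gradients of the mean squared reconstruction error.
    grad_dec = (Z.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss_after = reconstruction_loss(X, W_enc, W_dec)
# The loss drops as the 3-D code learns to capture the dominant structure.
```

A real autoencoder would use nonlinear activations and more layers, which is where the DNN and CNN design patterns in this chapter come in; this linear version only shows the mechanics of learning from the features alone.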