Chapter 7. A peek into autoencoders

This chapter covers

  • Getting to know neural networks
  • Designing autoencoders
  • Representing images by using an autoencoder

Have you ever heard a person humming a melody and identified the song? It might be easy for you, but I’m comically tone-deaf when it comes to music. Humming, by itself, is an approximation of a song. An even better approximation could be singing. Add some instrumentals, and sometimes a cover of a song sounds indistinguishable from the original.

Instead of songs, in this chapter, you’ll approximate functions. Functions are a general notion of relations between inputs and outputs. In machine learning, you typically want to find the function that relates inputs to outputs. Finding the best possible function fit is difficult, but approximating the function is much easier.

Conveniently, an artificial neural network is a machine-learning model that can approximate any function. As you’ve learned, your model is a function that gives the output you’re looking for, given the inputs you have. In ML terms, given training data, you want to build a neural network model that best approximates the implicit function that might have generated the data—one that might not give you the exact answer but that’s good enough to be useful.
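To make the idea of function approximation concrete, here is a minimal sketch (an illustrative example, not code from this chapter): a one-hidden-layer network, written in plain NumPy, trained by gradient descent to approximate f(x) = x² on [-1, 1]. The architecture and hyperparameters are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200).reshape(-1, 1)  # inputs
y = x ** 2                                   # target function to approximate

# Randomly initialized 1 -> 16 -> 1 network
w1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
w2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.2
for _ in range(5000):
    # Forward pass: tanh hidden layer, linear output
    h = np.tanh(x @ w1 + b1)
    y_hat = h @ w2 + b2
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backward pass: gradients of mean squared error
    grad_y = 2 * err / len(x)
    grad_h = (grad_y @ w2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    w2 -= lr * h.T @ grad_y
    b2 -= lr * grad_y.sum(axis=0)
    w1 -= lr * x.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(loss)  # the mean squared error shrinks as the fit improves
```

The network never recovers x² exactly; it only bends its tanh units until the output curve is close enough to be useful, which is precisely the sense of "approximation" this chapter relies on.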

7.1. Neural networks

7.2. Autoencoders

7.3. Batch training

7.4. Working with images

7.5. Application of autoencoders

7.6. Summary