8 Evolving autoencoders

 

This chapter covers

  • Introducing convolutional autoencoders
  • Discussing genetic encoding in a convolutional autoencoder network
  • Applying mutation and mating to develop an evolutionary autoencoder
  • Building and evolving autoencoder architecture
  • Introducing a convolutional variational autoencoder

In the last chapter, we covered how convolutional neural network (CNN) architectures can be adapted using evolutionary algorithms. We used genetic algorithms to encode a gene sequence defining a CNN model for image classification, which let us build better-optimized networks for image recognition tasks.

In this chapter, we continue to build on those fundamentals and explore evolving autoencoders (AEs). We apply our experience from evolving CNN architectures in the last chapter to convolutional AEs. Then, we move on to more advanced variational AEs and explore novel ways of evolving the model loss.

AEs are foundational to deep learning (DL), introducing unsupervised and representation learning. If you have spent any time studying DL, chances are you have encountered AEs and variational AEs. From the perspective of evolutionary deep learning (EDL), they open up some novel applications that we explore in this chapter.

AEs come in several variations, from undercomplete, or standard, to deep and convolutional. The deep convolutional AE is a great one to begin with, since it extends many ideas from previous chapters, and it’s where we start this chapter.
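Before diving into the sections that follow, the core idea of a convolutional AE can be sketched in a few lines. The example below is a minimal sketch, assuming PyTorch and 28 × 28 grayscale images (both are assumptions; the chapter's own implementation may differ): an encoder compresses each image into a compact feature map, and a decoder reconstructs the original image from it.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder sketch (illustrative only)."""
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions downsample 28x28 -> 14x14 -> 7x7
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 1x28x28 -> 16x14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 16x14x14 -> 32x7x7
            nn.ReLU(),
        )
        # Decoder: transposed convolutions upsample 7x7 back to 28x28
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),  # squash reconstructed pixels into [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
batch = torch.rand(4, 1, 28, 28)  # a batch of 4 fake grayscale images
recon = model(batch)
print(recon.shape)  # torch.Size([4, 1, 28, 28])
```

Training such a model means minimizing a reconstruction loss (for example, mean squared error between `batch` and `recon`); later sections evolve the architecture and loss rather than fixing them by hand.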

8.1 The convolutional autoencoder

8.1.1 Introducing autoencoders

8.1.2 Building a convolutional autoencoder

8.1.3 Learning exercises

8.1.4 Generalizing a convolutional AE

8.1.5 Improving the autoencoder

8.2 Evolutionary AE optimization

8.2.1 Building the AE gene sequence

8.2.2 Learning exercises

8.3 Mating and mutating the autoencoder gene sequence

8.4 Evolving an autoencoder

8.4.1 Learning exercises