9 Generative deep learning and evolution

 

This chapter covers

  • Overviewing generative adversarial networks
  • Understanding problems in generative adversarial network optimization
  • Fixing generative adversarial network problems by applying Wasserstein loss
  • Creating a generative adversarial network encoder for evolutionary optimization
  • Evolving a deep convolutional generative adversarial network with genetic algorithms

In the last chapter, we were introduced to autoencoders (AEs) and learned how they can extract features from data. We applied evolution to optimizing the network architecture of an AE and then covered the variational AE, which introduced the concept of generative deep learning, or representation learning.

In this chapter, we continue exploring representation learning, this time by looking at generative adversarial networks (GANs). GANs are a fascinating topic worthy of several books in their own right, but for our purposes, we only need the basics: how a GAN works, why it is hard to train, and how that training can be optimized with evolution.

GANs are notoriously difficult to train, so being able to optimize the training process with evolution is especially valuable. We start in the next section by introducing the basic, or what is often referred to as the “vanilla,” GAN.
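To make the idea concrete before we dive in, the following is a minimal sketch of what a vanilla GAN looks like in Keras: a generator that maps random noise vectors to images and a discriminator that scores images as real or fake, chained together so that training the combined model teaches the generator to fool the discriminator. The layer sizes and the names used here (build_generator, build_discriminator, LATENT_DIM) are illustrative placeholders, not the chapter's actual code, which we build up properly in section 9.1.2.

from tensorflow.keras import layers, models

LATENT_DIM = 100  # size of the random noise vector fed to the generator (illustrative)

def build_generator():
    # Maps a latent noise vector to a 28 x 28 grayscale image in [-1, 1]
    return models.Sequential([
        layers.Dense(256, activation="relu", input_shape=(LATENT_DIM,)),
        layers.Dense(28 * 28, activation="tanh"),
        layers.Reshape((28, 28, 1)),
    ])

def build_discriminator():
    # Scores an image with the estimated probability that it is real
    return models.Sequential([
        layers.Flatten(input_shape=(28, 28, 1)),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# The combined model trains only the generator: the discriminator is frozen
# here so that learning to fool it updates the generator's weights, not its own.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

Freezing the discriminator inside the combined model is the standard Keras pattern for alternating adversarial training: the discriminator is updated separately on batches of real and generated images, while the combined model updates the generator alone.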

9.1 Generative adversarial networks

9.1.1 Introducing GANs

9.1.2 Building a convolutional generative adversarial network in Keras

9.1.3 Learning exercises

9.2 The challenges of training a GAN

9.2.1 The GAN optimization problem

9.2.2 Observing vanishing gradients

9.2.3 Observing mode collapse in GANs

9.2.4 Observing convergence failures in GANs

9.2.5 Learning exercises

9.3 Fixing GAN problems with Wasserstein loss

9.3.1 Understanding Wasserstein loss

9.3.2 Improving the DCGAN with Wasserstein loss