This chapter covers
- Overviewing generative adversarial networks
- Understanding problems in generative adversarial network optimization
- Fixing generative adversarial network problems by applying Wasserstein loss
- Creating a generative adversarial network encoder for evolutionary optimization
- Evolving a deep convolutional generative adversarial network with genetic algorithms
In the last chapter, we were introduced to autoencoders (AEs) and learned how they can extract features. We applied evolution to optimize an AE's network architecture, and then we covered the variational AE, which introduced the concept of generative deep learning, also known as representation learning.
In this chapter, we continue exploring representation learning, this time by looking at generative adversarial networks (GANs). GANs are a fascinating topic worthy of several books, but for our purposes, we need to explore only the basics. In this chapter, we therefore look at the fundamentals of GANs and how they can be optimized with evolution.
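Before diving in, it helps to recall the core idea behind a GAN: a generator and a discriminator play a minimax game over the value function V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))]. The following sketch (the function name `gan_value` is our own, for illustration) simply evaluates that value function for given discriminator outputs; at the theoretical equilibrium, where the discriminator outputs 0.5 for every sample, the value is −log 4:

```python
import math

def gan_value(d_real, d_fake):
    """Evaluate the GAN value function V(D, G) for batches of
    discriminator outputs on real samples (d_real) and on
    generated samples (d_fake), each a probability in (0, 1)."""
    # E[log D(x)]: average log-confidence on real data
    real_term = sum(math.log(p) for p in d_real) / len(d_real)
    # E[log(1 - D(G(z)))]: average log-confidence that fakes are fake
    fake_term = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return real_term + fake_term

# At equilibrium the discriminator cannot tell real from fake,
# so it outputs 0.5 everywhere: V = log(1/2) + log(1/2) = -log 4
v = gan_value([0.5] * 100, [0.5] * 100)
print(v)  # ≈ -1.3863
```

The discriminator tries to maximize this value while the generator tries to minimize it, and training alternates between the two objectives; the difficulties that this tug-of-war creates are exactly the optimization problems this chapter addresses.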