9 Generative adversarial networks
This chapter covers
- Working with generative models for fully connected and convolutional networks
- Encoding concepts using latent vectors
- Training two networks that compete with each other
- Manipulating generation using a conditional model
- Manipulating generation with vector arithmetic
Most of what we have learned thus far has been a one-to-one mapping: every input has one correct class/output. The dog can only be a “dog”; the sentence is only “positive” or “negative.” But we can also encounter one-to-many problems, where there is more than one valid answer. For example, we may have the concept of “seven” as input and need to create several different kinds of pictures of the digit 7. Or, to colorize an old black-and-white photograph, we could produce multiple possible color images that are all equally valid. For one-to-many problems, we can use a generative adversarial network (GAN).

Like other unsupervised models such as the autoencoder, a GAN learns a representation we can feed into other AI/ML algorithms and tasks. But the representation a GAN learns is often more meaningful, allowing us to manipulate our data in new ways. For example, we could take a picture of a frowning person and have the algorithm alter the image so the person is smiling.
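The one-to-many idea can be sketched with a toy “generator”: pair the fixed concept (a class label) with a random latent vector, so the same concept produces a different output on every call. This is only an illustrative sketch with made-up names and dimensions (`generate`, `LATENT_DIM`, etc.), not the networks built later in the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "generator": a fixed random linear map from
# (one-hot label + latent vector) to a small output vector.
LATENT_DIM, N_CLASSES, OUT_DIM = 4, 10, 8
W = rng.normal(size=(N_CLASSES + LATENT_DIM, OUT_DIM))

def generate(label, z):
    """Map a concept (label) plus latent noise z to an output vector."""
    one_hot = np.zeros(N_CLASSES)
    one_hot[label] = 1.0
    return np.tanh(np.concatenate([one_hot, z]) @ W)

# The same concept "7" with two different latent vectors
# yields two different outputs: one input, many possible answers.
a = generate(7, rng.normal(size=LATENT_DIM))
b = generate(7, rng.normal(size=LATENT_DIM))
print(np.allclose(a, b))  # False
```

Holding the label fixed while varying the latent vector is exactly how a trained GAN produces many distinct pictures of the same digit.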