This chapter covers
- The idea behind CycleGAN and cycle consistency loss
- Building a CycleGAN model to translate images from one domain to another
- Training a CycleGAN by using any dataset with two domains of images
- Converting black hair to blond hair and vice versa
The generative adversarial network (GAN) models we discussed in the last three chapters all aim to produce images that are indistinguishable from those in the training set.
You may be wondering: Can we translate images from one domain to another, such as transforming horses into zebras, converting black hair to blond and vice versa, adding or removing eyeglasses in images, turning photographs into paintings, or converting winter scenes to summer scenes? It turns out you can, and you'll acquire these skills in this chapter through CycleGAN!
CycleGAN was introduced in 2017.1 Its key innovation is the ability to learn to translate between two domains without paired examples. CycleGAN has a variety of interesting and useful applications, such as simulating aging or rejuvenation on faces to assist digital identity verification, or visualizing clothing in different colors or patterns without physically creating each variant, streamlining the design process.
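To preview the idea of cycle consistency before we build the full model, here is a minimal sketch in PyTorch. The two `nn.Linear` layers `G` and `F` are hypothetical stand-ins for the real CycleGAN generators we construct later in the chapter: translating an image to the other domain and back should recover the original, and the L1 distance between the input and its round-trip reconstruction is the cycle consistency loss.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two CycleGAN generators (assumed shapes):
# G translates domain X -> Y; F translates domain Y -> X.
G = nn.Linear(3, 3)
F = nn.Linear(3, 3)

l1 = nn.L1Loss()

x = torch.randn(8, 3)  # a batch of samples from domain X
y = torch.randn(8, 3)  # a batch of samples from domain Y

# Forward cycle: x -> G(x) -> F(G(x)) should recover x.
# Backward cycle: y -> F(y) -> G(F(y)) should recover y.
cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)
print(cycle_loss.item())
```

Because the loss is built from ordinary PyTorch operations, it can simply be added to the generators' adversarial losses and minimized with the same optimizer step, which is exactly how training will work later in the chapter.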