3 Generative adversarial networks: Shape and number generation


This chapter covers

  • Building the generator and discriminator networks of a generative adversarial network from scratch
  • Using GANs to generate data points that form a shape (e.g., an exponential growth curve)
  • Generating integer sequences that are all multiples of 5
  • Training, saving, loading, and using GANs
  • Evaluating GAN performance and deciding when to stop training

Close to half of the generative models in this book belong to a category called generative adversarial networks (GANs). The method was first proposed by Ian Goodfellow and his coauthors in 2014.1 GANs are celebrated for their ease of implementation and versatility: even readers with only rudimentary knowledge of deep learning can build them from the ground up. The word “adversarial” in GAN refers to the fact that the two neural networks in a GAN compete against each other in a zero-sum game. The generator network tries to create data instances indistinguishable from real samples, while the discriminator network tries to tell the generated samples apart from the real ones. These models can generate content in many formats, from geometric shapes and number sequences to high-resolution color images and even realistic-sounding musical compositions.
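To make the zero-sum game concrete, below is a minimal sketch of the two competing networks and their opposing objectives, assuming PyTorch and two-dimensional data points. The layer sizes, latent dimension, and binary cross-entropy loss here are illustrative placeholders, not the exact design the chapter builds in sections 3.2 through 3.4.

# A minimal sketch of the adversarial setup, assuming PyTorch and 2-D data
# points (x, y). Layer sizes and the latent dimension are illustrative only.
import torch
from torch import nn

latent_dim = 2  # size of the random noise vector fed to the generator (assumed)

# Generator: maps random noise to a fake data point.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32),
    nn.ReLU(),
    nn.Linear(32, 2),          # outputs a 2-D point (x, y)
)

# Discriminator: maps a data point to the probability that it is real.
discriminator = nn.Sequential(
    nn.Linear(2, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),              # probability in (0, 1)
)

# The zero-sum game in one step: the discriminator is rewarded for telling
# real from fake, while the generator is rewarded for fooling it.
loss_fn = nn.BCELoss()
noise = torch.randn(16, latent_dim)   # a batch of 16 noise vectors
fake_points = generator(noise)
d_on_fake = discriminator(fake_points)

# Discriminator's goal on these samples: output 0 ("fake").
d_loss_fake = loss_fn(d_on_fake, torch.zeros(16, 1))
# Generator's goal: make the discriminator output 1 ("real") on the same samples.
g_loss = loss_fn(d_on_fake, torch.ones(16, 1))

In an actual training loop these two objectives are minimized in alternation, one optimizer per network; the chapter's own loss functions, optimizers, and stopping criterion are covered in section 3.3.3.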

3.1 Steps involved in training GANs

3.2 Preparing training data

3.2.1 A training dataset that forms an exponential growth curve

3.2.2 Preparing the training dataset

3.3 Creating GANs

3.3.1 The discriminator network

3.3.2 The generator network

3.3.3 Loss functions, optimizers, and early stopping

3.4 Training and using GANs for shape generation

3.4.1 The training of GANs

3.4.2 Saving and using the trained generator

3.5 Generating numbers with patterns

3.5.1 What are one-hot variables?

3.5.2 GANs to generate numbers with patterns

3.5.3 Training the GANs to generate numbers with patterns

3.5.4 Saving and using the trained model

Summary