Chapter 5. Training and common challenges: GANing for success

 

This chapter covers

  • Meeting the challenges of evaluating GANs
  • Min-Max, Non-Saturating, and Wasserstein GANs
  • Using tips and tricks to best train a GAN
Note

When reading this chapter, please remember that GANs are notoriously hard to both train and evaluate. As with any other cutting-edge field, opinions about what is the best approach are always evolving.

Papers such as “How to Train Your DRAGAN” are a testament both to the incredible capacity of machine learning researchers for bad jokes and to the difficulty of training Generative Adversarial Networks well. Dozens of arXiv papers are devoted solely to improving GAN training, and numerous workshops at top academic conferences have been dedicated to various aspects of training (including at Neural Information Processing Systems, or NIPS, one of the most prominent machine learning conferences[1]).

1 NIPS 2016 featured a workshop on GAN training with many important researchers in the field; parts of this chapter are based on that workshop. NIPS has since changed its abbreviation to NeurIPS.

5.1. Evaluation

5.1.1. Evaluation framework

5.1.2. Inception score

5.1.3. Fréchet inception distance

5.2. Training challenges

5.2.1. Adding network depth

5.2.2. Game setups

5.2.3. Min-Max GAN

5.2.4. Non-Saturating GAN

5.2.5. When to stop training

5.2.6. Wasserstein GAN

5.3. Summary of game setups

5.4. Training hacks

5.4.1. Normalizations of inputs

5.4.2. Batch normalization

5.4.3. Gradient penalties

5.4.4. Train the Discriminator more

5.4.5. Avoid sparse gradients

5.4.6. Soft and noisy labels

Summary
