This chapter covers:
- What generative deep learning is, its applications, and how it differs from the deep-learning tasks we’ve seen so far
- How to generate text using a recurrent neural network
- What latent space is and how it can form the basis for generating novel images, illustrated through variational autoencoders (VAEs)
- The basics of generative adversarial networks (GANs)
Some of the most impressive feats demonstrated by deep neural networks involve generating images, sounds, and text that look or sound real. Nowadays, deep neural networks can create highly realistic images of human faces[134], synthesize natural-sounding speech[135], and compose coherent and compelling text[136], just to name a few achievements. Such generative models are useful for many reasons, including aiding artistic creation, conditionally modifying existing content, and augmenting existing datasets to support other deep-learning tasks[137].