12 Generative deep learning


This chapter covers:

  • Text generation
  • DeepDream
  • Neural style transfer
  • Variational autoencoders
  • Generative adversarial networks

The potential of artificial intelligence to emulate human thought processes goes beyond passive tasks such as object recognition and mostly reactive tasks such as driving a car. It extends well into creative activities. When I first made the claim that in a not-so-distant future, most of the cultural content that we consume will be created with substantial help from AIs, I was met with utter disbelief, even from long-time machine-learning practitioners. That was in 2014. Fast-forward three years, and the disbelief had receded at an incredible speed. In the summer of 2015, we were entertained by Google's DeepDream algorithm turning an image into a psychedelic mess of dog eyes and pareidolic artifacts; in 2016, we used the Prisma application to turn photos into paintings of various styles. In the summer of 2016, an experimental short movie, Sunspring, was directed using a script, dialogue included, written by a Long Short-Term Memory (LSTM) algorithm. Maybe you've recently listened to music that was tentatively generated by a neural network.

12.1 Text generation

12.1.1 A brief history of generative deep learning for sequence generation

12.1.2 How do you generate sequence data?

12.1.3 The importance of the sampling strategy

12.1.4 Implementing text generation with Keras

12.1.5 A text-generation callback with variable-temperature sampling

12.1.6 Wrapping up

12.2 DeepDream

12.2.1 Implementing DeepDream in Keras

12.2.2 Wrapping up

12.3 Neural style transfer

12.3.1 The content loss

12.3.2 The style loss

12.3.3 Neural style transfer in Keras

12.3.4 Wrapping up

12.4 Generating images with variational autoencoders

12.4.1 Sampling from latent spaces of images

12.4.2 Concept vectors for image editing

12.4.3 Variational autoencoders

12.4.4 Implementing a VAE with Keras
