12 Generative deep learning

 

This chapter covers

  • Text generation
  • DeepDream
  • Neural style transfer
  • Variational autoencoders
  • Generative adversarial networks

The potential of artificial intelligence to emulate human thought processes goes beyond passive tasks such as object recognition and mostly reactive tasks such as driving a car. It extends well into creative activities. When I first made the claim that in a not-so-distant future, most of the cultural content that we consume will be created with substantial help from AIs, I was met with utter disbelief, even from long-time machine learning practitioners. That was in 2014. Fast-forward a few years, and the disbelief had receded at an incredible speed. In the summer of 2015, we were entertained by Google’s DeepDream algorithm turning an image into a psychedelic mess of dog eyes and pareidolic artifacts; in 2016, we started using smartphone applications to turn photos into paintings of various styles. In the summer of 2016, an experimental short movie, Sunspring, was directed using a script written by a Long Short-Term Memory (LSTM) network. Maybe you’ve recently listened to music that was tentatively generated by a neural network.

12.1 Text generation

 
 
 

12.1.1 A brief history of generative deep learning for sequence generation

 
 

12.1.2 How do you generate sequence data?

 
 
 

12.1.3 The importance of the sampling strategy
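The sampling strategy boils down to how you pick the next token from the probability distribution the model outputs. A minimal sketch of temperature-based reweighting is shown below; the function name reweight_distribution and the default temperature value are illustrative assumptions, not something fixed by this outline. A low temperature sharpens the distribution (more predictable text), a high temperature flattens it (more surprising, more error-prone text).

import numpy as np

def reweight_distribution(original_distribution, temperature=0.5):
    # original_distribution: 1D NumPy array of probabilities summing to 1.
    # Lower temperature -> sharper, more predictable sampling;
    # higher temperature -> flatter, more surprising sampling.
    distribution = np.log(original_distribution + 1e-9) / temperature
    distribution = np.exp(distribution)
    # Renormalize so the reweighted values sum to 1 again.
    return distribution / np.sum(distribution)

For example, reweight_distribution(np.array([0.7, 0.2, 0.1]), temperature=2.0) returns roughly [0.52, 0.28, 0.20], visibly flatter than the original distribution, whereas a temperature of 0.1 would push almost all of the mass onto the most likely token.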

 

12.1.4 Implementing text generation with Keras

 

12.1.5 A text-generation callback with variable-temperature sampling
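One way such a callback could be structured is sketched below, assuming a sequence model whose predict output has shape (batch, sequence_length, vocabulary_size). The class name TextGenerator, the prompt handling, and the printed output are illustrative assumptions rather than the book's listing.

import numpy as np
from tensorflow import keras

class TextGenerator(keras.callbacks.Callback):
    """Sketch: generate a token sequence at the end of each epoch,
    once per temperature, so sampling behaviors can be compared."""

    def __init__(self, prompt_tokens, generate_length,
                 temperatures=(0.2, 0.5, 1.0, 1.5)):
        super().__init__()
        self.prompt_tokens = list(prompt_tokens)
        self.generate_length = generate_length
        self.temperatures = temperatures

    def sample_next(self, probabilities, temperature):
        # Temperature-reweight the distribution, then draw one sample.
        probabilities = np.log(np.asarray(probabilities, dtype="float64") + 1e-9)
        probabilities = np.exp(probabilities / temperature)
        probabilities /= np.sum(probabilities)
        return int(np.argmax(np.random.multinomial(1, probabilities, 1)))

    def on_epoch_end(self, epoch, logs=None):
        for temperature in self.temperatures:
            tokens = list(self.prompt_tokens)
            for _ in range(self.generate_length):
                # Assumes predictions of shape (1, len(tokens), vocab_size);
                # take the distribution over the next token.
                predictions = self.model.predict(np.array([tokens]), verbose=0)
                tokens.append(self.sample_next(predictions[0, -1], temperature))
            print(f"Temperature {temperature}: token ids {tokens}")

You would pass an instance of this class to model.fit(..., callbacks=[...]) so that sample generations are printed as training progresses.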

 
 
 
 

12.1.6 Wrapping up

 
 
 
 

12.2 DeepDream

 
 
 

12.2.1 Implementing DeepDream in Keras
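As a rough sketch of the core mechanic only (not the full multi-scale "octave" procedure), the snippet below runs gradient ascent on the input image to maximize the activations of selected layers. Here feature_extractor is an assumed keras.Model that returns a list of layer activations; the gradient normalization and learning-rate handling are illustrative choices.

import tensorflow as tf

def compute_loss(image, feature_extractor):
    # Sum of mean squared activations over the chosen layers:
    # the quantity we want to *maximize* via gradient ascent.
    activations = feature_extractor(image)
    loss = tf.zeros(shape=())
    for activation in activations:
        loss += tf.reduce_mean(tf.square(activation))
    return loss

@tf.function
def gradient_ascent_step(image, feature_extractor, learning_rate):
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = compute_loss(image, feature_extractor)
    gradients = tape.gradient(loss, image)
    # Normalize the gradients so the step size does not depend on their scale.
    gradients /= tf.maximum(tf.reduce_mean(tf.abs(gradients)), 1e-8)
    return loss, image + learning_rate * gradients

Repeating this step many times, and optionally re-running it at several image scales, is what produces the characteristic DeepDream patterns.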

 

12.2.2 Wrapping up

 
 
 
 

12.3 Neural style transfer

 
 
 

12.3.1 The content loss
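In the usual formulation, the content loss is the L2 distance between the activations that a pretrained convnet computes for the content image and for the generated image at one upper layer, since upper layers encode the global, abstract content of an image. A one-function sketch:

import tensorflow as tf

def content_loss(base_features, combination_features):
    # base_features / combination_features: activations of the same upper
    # layer for the content image and the generated image, respectively.
    return tf.reduce_sum(tf.square(combination_features - base_features))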

 

12.3.2 The style loss
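The style loss is usually expressed through Gram matrices: the inner products between a layer's feature maps, which capture texture statistics while discarding spatial layout. Below is a sketch assuming activations with a known static shape (height, width, channels); the 1 / (4 * channels^2 * size^2) scaling follows the classic Gatys et al. formulation.

import tensorflow as tf

def gram_matrix(features):
    # features: activations of shape (height, width, channels).
    channels = int(features.shape[-1])
    flattened = tf.reshape(features, (-1, channels))           # (h*w, channels)
    return tf.matmul(flattened, flattened, transpose_a=True)   # (channels, channels)

def style_loss(style_features, combination_features):
    S = gram_matrix(style_features)
    C = gram_matrix(combination_features)
    channels = int(style_features.shape[-1])
    size = int(style_features.shape[0]) * int(style_features.shape[1])
    return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))

This loss is typically evaluated at several layers of the convnet, so that style is matched at multiple spatial scales rather than at a single level of abstraction.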

 
 
 

12.3.3 Neural style transfer in Keras

 
 
 

Summary

 
 
 