11 Advanced deep learning algorithms

 

This chapter covers

  • Variational autoencoders for time series anomaly detection
  • Mixture density networks via amortized variational inference
  • Attention and transformers
  • Graph neural networks
  • Current ML research directions in deep learning

In the previous chapter, we looked at fundamental deep learning algorithms for representing numerical, image, and text data. In this chapter, we continue our discussion with advanced deep learning algorithms, selected for their state-of-the-art performance and wide range of applications. We will investigate generative models based on variational autoencoders (VAEs) and implement, from scratch, an anomaly detector for time series data. We'll continue our journey with an intriguing combination of neural networks and classical Gaussian mixture models (GMMs) via amortized variational inference (VI), implementing mixture density networks (MDNs). We will then focus on the concept of attention and implement a transformer architecture from scratch for a classification task. Finally, we'll examine graph neural networks (GNNs) and use one to perform node classification on a citation graph. Throughout this chapter, we will use the Keras/TensorFlow deep learning library.

11.1 Autoencoders

11.1.1 VAE anomaly detection in time series

11.2 Amortized variational inference

11.2.1 Mixture density networks

11.3 Attention and transformers

11.4 Graph neural networks

11.5 ML research: Deep learning

11.6 Exercises

Summary