5 Graph autoencoders
This chapter covers
- Distinguishing between discriminative and generative models
- Applying autoencoders and variational autoencoders to graphs
- Building graph autoencoders with PyTorch Geometric
- Understanding over-squashing in graph neural networks
- Performing link prediction and graph generation
So far, we’ve covered how classical deep learning architectures can be extended to work on graph-structured data. In chapter 3, we considered convolutional graph neural networks (GNNs), which apply the convolution operator to identify patterns within the data. In chapter 4, we explored the attention mechanism and how it can be used to improve performance on graph-learning tasks such as node classification.
Both convolutional GNNs and attention-based GNNs are examples of discriminative models: they learn to discriminate between different instances of data, such as whether a photo shows a cat or a dog. In this chapter, we introduce generative models and explore them through two of the most common architectures, autoencoders and variational autoencoders (VAEs). Rather than learning the boundaries that separate classes within the data space, as discriminative models do, generative models aim to learn the entire data space. For example, a generative model learns how to generate images of cats and dogs, reproducing the overall appearance of a cat or a dog rather than just the features that separate the classes, such as the pointed ears of a cat or the long ears of a spaniel.
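To make the distinction concrete, the following sketch contrasts a minimal discriminative model (a classifier that maps input features to class scores) with a minimal generative autoencoder (which learns to compress and reconstruct the inputs themselves). This is an illustrative example in plain PyTorch rather than code from this chapter; the layer sizes, batch size, and variable names are arbitrary choices for the sketch.

```python
import torch
import torch.nn as nn

# Discriminative model: learns p(y | x), a decision boundary that
# separates classes (e.g., cat vs. dog) given input features x.
discriminator = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 2),   # two logits, one per class
)

# Generative autoencoder: learns to reproduce x itself by squeezing it
# through a low-dimensional latent code, so it must capture the data
# space rather than just a class boundary.
autoencoder = nn.Sequential(
    nn.Linear(64, 16),  # encoder: compress input to a latent code
    nn.ReLU(),
    nn.Linear(16, 64),  # decoder: reconstruct the input
)

x = torch.randn(8, 64)              # a batch of feature vectors
labels = torch.randint(0, 2, (8,))  # cat/dog labels for the batch

# The two models are trained against very different targets:
clf_loss = nn.CrossEntropyLoss()(discriminator(x), labels)  # predict the label
rec_loss = nn.MSELoss()(autoencoder(x), x)                  # reproduce the input
```

Note that the autoencoder’s training signal never mentions the labels at all: its loss compares the reconstruction to the input itself, which is what lets it model the data space as a whole.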