3 Graph Embeddings
This chapter covers
- Understanding graph embeddings and their limitations
- Using transductive and inductive techniques to create node embeddings
- Creating node embeddings with the example dataset
In this chapter, we’re going to discuss how to take the graph we built in Chapter 2 and generate graph embeddings from it. Graph embeddings are low-dimensional representations that can be generated for entire graphs, subgraphs, nodes, and edges. They are central to graph-based learning and can be generated in many different ways, including with graph algorithms, linear algebra methods, and GNNs. As a concrete illustration of a non-GNN approach, a short sketch of a linear-algebra method follows.
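The snippet below is a minimal sketch of one linear-algebra way to produce node embeddings, spectral embedding via Laplacian eigenmaps, using networkx and numpy. The karate-club toy graph and the embedding size k are illustrative assumptions here; they are not the Chapter 2 dataset or the method we use later in the chapter.

```python
# Minimal sketch: node embeddings from the graph Laplacian (Laplacian eigenmaps).
# The toy karate-club graph is a stand-in, not the Chapter 2 dataset.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()

# Normalized graph Laplacian as a dense symmetric matrix.
L = nx.normalized_laplacian_matrix(G).toarray()

# eigh returns eigenvalues (and matching eigenvectors) in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(L)

# Skip the trivial first eigenvector; the next k eigenvectors give each
# node a k-dimensional coordinate, i.e., its embedding.
k = 2
node_embeddings = eigenvectors[:, 1 : k + 1]

print(node_embeddings.shape)  # (number_of_nodes, k)
```

Methods like this compute embeddings as a standalone preprocessing step, which is exactly the separation from model training that the next paragraph contrasts with GNNs.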
Importantly, embeddings are inherent to the architecture of a GNN. This is because embeddings are constructed with each message-passing step, which is equivalent to a pass through a layer of the neural network. In many other machine learning approaches, embeddings are computed separately from model training and can be used as a form of dimensionality reduction before downstream tasks like regression or classification. With GNNs, the embeddings and the model's task are learned simultaneously when training the model.
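To see how the embeddings fall out of the layers themselves, here is a minimal PyTorch Geometric sketch. It is illustrative rather than the model we build in this book: the two-layer architecture, layer sizes, and names are assumptions chosen only to show that the same forward pass produces both the node embeddings and the task output.

```python
# Illustrative sketch: in a GNN, each convolution layer performs one round of
# message passing, and its hidden output doubles as the node embeddings.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class TwoLayerGCN(torch.nn.Module):
    def __init__(self, num_features, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden_dim)  # message-passing layer 1
        self.conv2 = GCNConv(hidden_dim, num_classes)   # message-passing layer 2

    def forward(self, x, edge_index):
        # The hidden representation after the first layer is already a node
        # embedding; training the task refines these same representations.
        embeddings = F.relu(self.conv1(x, edge_index))
        logits = self.conv2(embeddings, edge_index)     # task output (e.g., class scores)
        return embeddings, logits


# Usage sketch (assumed data object and masks): optimizing the task loss
# updates the same weights that produce the embeddings, so embedding and
# task are learned together.
# model = TwoLayerGCN(num_features=34, hidden_dim=16, num_classes=4)
# embeddings, logits = model(data.x, data.edge_index)
# loss = F.cross_entropy(logits[train_mask], labels[train_mask])
```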