
3 Graph Embeddings

This chapter covers

  • Understanding graph embeddings and their limitations
  • Using transductive and inductive techniques to create node embeddings
  • Creating node embeddings with the example dataset

In this chapter, we're going to discuss how to take the graph we built in Chapter 2 and generate graph embeddings from it. Graph embeddings are low-dimensional vector representations that can be created for entire graphs, subgraphs, nodes, and edges. They are central to graph-based learning and can be produced in many different ways, including with graph algorithms, linear-algebra methods, and GNNs.
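
To make the idea concrete, below is a minimal sketch (not drawn from the book's dataset) of the simplest kind of node embedding: a trainable lookup table that maps each node ID to a low-dimensional vector, nudged during training so that connected nodes end up with similar vectors. The toy graph, dimensions, and training loop here are illustrative assumptions; it uses only plain PyTorch.

import torch
import torch.nn as nn

# A toy graph with 4 nodes; each row is one (source, target) edge.
# This tiny example is hypothetical and only for illustration.
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [0, 2]])

num_nodes, embedding_dim = 4, 2  # low-dimensional: 2 values per node
emb = nn.Embedding(num_nodes, embedding_dim)  # the embedding table
optimizer = torch.optim.Adam(emb.parameters(), lr=0.1)

for _ in range(100):
    optimizer.zero_grad()
    src, dst = emb(edges[:, 0]), emb(edges[:, 1])
    # Random nodes serve as negative samples, as in skip-gram-style methods.
    neg = emb(torch.randint(0, num_nodes, (edges.size(0),)))
    # Pull connected pairs together, push random pairs apart.
    pos_loss = -torch.log(torch.sigmoid((src * dst).sum(dim=1))).mean()
    neg_loss = -torch.log(torch.sigmoid(-(src * neg).sum(dim=1))).mean()
    (pos_loss + neg_loss).backward()
    optimizer.step()

print(emb.weight.data)  # one 2-dimensional embedding per node

Methods such as Node2Vec, which we cover in section 3.2, refine this same idea by choosing which node pairs to pull together using random walks rather than raw edges.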

Importantly, embeddings are inherent to the architecture of a GNN: each message-passing step, which corresponds to a pass through one layer of the neural network, constructs a new set of embeddings. In many other machine learning algorithms, embedding is separate from model training and can serve as a form of dimensionality reduction before downstream tasks such as regression or classification. With GNNs, the embedding and the model's task are learned simultaneously during training.
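
As a rough illustration of this point, the sketch below (a hypothetical toy, not the book's pipeline) defines a two-layer GCN in PyTorch Geometric. Each layer performs one round of message passing, so the hidden activations h are node embeddings shaped by the very same training loop that optimizes the classification task. The class name, dimensions, and toy data are all assumptions.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """Two GCN layers: each forward pass runs one round of message
    passing per layer, and the hidden activations are node embeddings."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))  # layer-1 node embeddings
        out = self.conv2(h, edge_index)        # task head: class scores
        return out, h                          # embeddings come along free

# Toy data: 4 nodes with 3 features each; edges listed in both directions.
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
y = torch.tensor([0, 0, 1, 1])  # made-up node labels

model = GCN(in_dim=3, hidden_dim=8, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):
    optimizer.zero_grad()
    out, h = model(x, edge_index)
    loss = F.cross_entropy(out, y)  # training the task also shapes h
    loss.backward()
    optimizer.step()

Returning h alongside the task output highlights the contrast drawn above: rather than being a separate dimension-reduction step, the embeddings here are a byproduct of training the model end to end.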

3.1 Graph Representations

3.1.1 Overview of Embeddings

3.1.2 Node Similarity or Context

3.1.3 Transductive and Inductive Methods

3.2 Transductive Embedding Technique: Node2Vec

3.2.1 Random Walks Across Graphs

3.2.2 Optimization

3.2.3 Implementations and Uses of Node2Vec

3.3 Inductive Embedding Technique: GNN

3.3.1 Traits of Inductive Embeddings

3.3.2 Message Passing as Deep Learning

3.3.3 Using PyTorch Geometric

3.3.4 Our Process/Pipeline

3.4 Summary

3.5 References