4 Graph Convolutional Networks (GCNs) & GraphSage
This chapter covers
- Introducing GraphSage and GCN and how they fit into the GNN universe
- Understanding convolution and how it is applied to graphs and graph learning
- Implementing convolutional GNNs in a node-prediction problem
In Part 1 of the book, we explored fundamental concepts related to graphs and graph representation learning. That groundwork sets us up for Part 2, where we explore distinct types of GNN architectures, including convolutional GNNs, Graph Attention Networks, and Graph Auto-Encoders.
In this chapter, our goal is to understand and apply Graph Convolutional Networks (GCN) [Kipf] and GraphSage [Hamilton]. These two architectures belong to a larger class of GNNs that apply convolutions to graph data. Convolutional operations are common in deep learning models, particularly for image problems, where the convolutional neural network (CNN) architecture has proven highly effective. These operations can be understood as performing a spatial or local averaging. For example, in images, CNNs build up representations over incrementally larger neighborhoods of pixels.
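To make the "local averaging" intuition concrete before we introduce the full architectures, here is a minimal NumPy sketch of one neighborhood-averaging step on a small, made-up graph. The adjacency matrix and node features are hypothetical, and the code is an illustration of the idea rather than the GCN or GraphSage implementation we will build later in the chapter.

```python
import numpy as np

# A tiny, hypothetical 4-node graph described by its adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# One scalar feature per node: the "signal" we want to smooth locally.
X = np.array([[1.0], [2.0], [3.0], [4.0]])

# Add self-loops so each node also keeps its own signal,
# then average each node's signal with those of its neighbors.
A_hat = A + np.eye(A.shape[0])
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
X_smoothed = D_inv @ A_hat @ X   # each row is a local average over a neighborhood

print(X_smoothed)
```

Each node's new value is the mean of its own value and its neighbors' values, which is exactly the kind of spatial averaging a convolution performs on an image, only here the "neighborhood" is defined by graph edges rather than by a fixed pixel grid.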