11 Graph representation learning and graph neural networks
This chapter covers
- Graph representation learning and its role in scaling machine learning on graphs
- How deep learning automates the feature engineering process
- The fundamentals of graph embeddings and their applications
- Introducing Graph Neural Networks (GNNs)
In chapters 9 and 10, we explored the fundamental concepts of machine learning on graphs, demonstrating how these techniques solve tasks such as node classification, link prediction, and community detection. Through carefully crafted examples and hands-on implementations, we showed how manual feature engineering can capture graph properties and relationships to power downstream machine learning tasks. These approaches provided valuable insight into what makes graph-based machine learning work, offering complete transparency into how our models make decisions.
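To make the contrast with representation learning concrete, here is a minimal sketch of manual feature engineering in the style of chapters 9 and 10. It assumes networkx and pandas are available; the specific features (degree, clustering coefficient, PageRank, betweenness) are illustrative choices, not necessarily the exact ones used in the earlier chapters.

```python
import networkx as nx
import pandas as pd

# A small example graph (Zachary's karate club ships with networkx).
G = nx.karate_club_graph()

# Hand-crafted node features: each column is a structural property
# we chose and computed ourselves.
features = pd.DataFrame({
    "degree": dict(G.degree()),
    "clustering": nx.clustering(G),
    "pagerank": nx.pagerank(G),
    "betweenness": nx.betweenness_centrality(G),
})

print(features.head())
# These columns can feed any downstream classifier, but every new
# property requires another manually engineered column. Closing that
# gap is what graph representation learning does in this chapter.
```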
11.1 Embeddings in graph representation learning
11.1.1 Understanding graph embeddings: From discrete to continuous
11.1.2 Real-world applications and examples
11.2 The encoder-decoder model
11.2.1 The encoder: Converting graph structure to vectors
11.2.2 The decoder: Reconstructing graph properties
11.2.3 The power of the framework
11.2.4 Node2vec: An example of the encoder-decoder framework
11.3 Shallow embeddings: A first approach to graph representation
11.3.1 Understanding shallow embeddings
11.3.2 Limitations of shallow embeddings
11.4 Embeddings in knowledge graphs
11.4.1 Loss function
11.4.2 Multi-relationship decoder
11.5 Message passing and Graph Neural Networks (GNNs)
11.5.1 The message passing framework: A neural conversation
11.5.2 Motivation and intuition: Why message passing works
11.5.3 The basic GNN model
11.5.4 Message passing with self-loops
11.6 Generalized aggregation and update methods
11.6.1 Neighborhood normalization
11.6.2 Neighborhood attention
11.6.3 Multi-head attention and transformer connections
11.6.4 Generalized update methods
11.7 The synergy of GNNs and LLMs
11.8 Summary
11.9 References