6 Dynamic graphs: Spatiotemporal GNNs


This chapter covers

  • Introducing memory into your deep learning models
  • Understanding the different ways to model temporal relations using graph neural networks
  • Implementing dynamic graph neural networks
  • Evaluating your temporal graph neural network models

So far, our models have operated on single snapshots in time. In practice, the world is dynamic and in constant flux. Objects move physically, following trajectories in front of our eyes, and we're able to predict their future positions from these observed trajectories. Traffic flow, weather patterns, and the spread of disease across networks of people are all examples where more information can be gained by modeling with spatiotemporal graphs instead of static graphs.

Models that we build today may quickly lose performance and accuracy once deployed in the real world. This problem is intrinsic to any machine learning model, including deep learning models, and is known as out-of-distribution (OOD) generalization: how well a model generalizes to data unlike anything it was trained on.

In this chapter, we consider how to make models that are suitable for dynamic events. While this doesn’t mean they can deal with OOD data, our dynamic models will be able to make predictions about unseen events in the future using the recent past.
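Before we bring graphs into the picture, it helps to see the core idea of "predicting the future from the recent past" in its simplest form. The following sketch (not code from this chapter; the function and data are illustrative) extrapolates an object's next 2D position from its last observed displacement, assuming constant velocity:

```python
import numpy as np

def predict_next(trajectory: np.ndarray) -> np.ndarray:
    """Extrapolate the next 2D position from the most recent step,
    assuming the object keeps moving at constant velocity."""
    velocity = trajectory[-1] - trajectory[-2]  # most recent displacement
    return trajectory[-1] + velocity

# An object moving with constant velocity (1, 2) per time step
observed = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
print(predict_next(observed))  # -> [3. 6.]
```

The dynamic models in this chapter replace this hand-coded physics assumption with learned temporal components (such as recurrent units) and learned spatial structure (the graph), but the goal is the same: use recent observations to predict unseen future states.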

6.1 Temporal models: Relations through time

6.2 Problem definition: Pose estimation

6.2.1 Setting up the problem

6.2.2 Building models with memory

6.3 Dynamic graph neural networks

6.3.1 Graph attention network for dynamic graphs

6.4 Neural relational inference

6.4.1 Encoding pose data

6.4.2 Decoding pose data using a GRU

6.4.3 Training the NRI model

6.5 Under the hood

6.5.1 Recurrent neural networks

6.5.2 Temporal adjacency matrices

6.5.3 Combining autoencoders with RNNs

6.5.4 Gumbel-Softmax

Summary