14 Latent space and generative modeling, autoencoders, and variational autoencoders


This chapter covers

  • Representing inputs with latent vectors
  • Geometrical view, smoothness, continuity, and regularization for latent spaces
  • PCA and linear latent spaces
  • Autoencoders and reconstruction loss
  • Variational autoencoders (VAEs) and regularizing latent spaces

Mapping input vectors to a transformed space is often beneficial in machine learning. The transformed vector is called a latent vector (latent because it is not directly observable), while the input is the observed vector. The latent vector (a.k.a. embedding) is a simpler representation of the input in which only the features that help accomplish the ultimate goal (such as estimating the probability that an input belongs to a specific class) are retained; other features are discarded. Typically, the latent representation has fewer dimensions than the input: that is, encoding an input into a latent vector results in dimensionality reduction.
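As a quick illustration of this idea, the following minimal PyTorch sketch maps an observed input vector to a lower-dimensional latent vector with a small encoder network. The layer sizes and variable names here are illustrative assumptions for this example, not the chapter's own code (the chapter's PCA and autoencoder listings appear in later sections).

import torch
import torch.nn as nn

# Illustrative encoder: maps a 784-dimensional observed input
# (for example, a flattened 28x28 image) to an 8-dimensional latent vector.
encoder = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 8),   # 8-dimensional latent (embedding) space
)

x = torch.randn(1, 784)   # one observed input vector
z = encoder(x)            # its latent representation
print(z.shape)            # torch.Size([1, 8]) -- dimensionality reduced from 784 to 8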

14.1 Geometric view of latent spaces

14.2 Generative classifiers

14.3 Benefits and applications of latent-space modeling

14.4 Linear latent space manifolds and PCA

14.4.1 PyTorch code for dimensionality reduction using PCA

14.5 Autoencoders

14.5.1 Autoencoders and PCA

14.6 Smoothness, continuity, and regularization of latent spaces

14.7 Variational autoencoders

14.7.1 Geometric overview of VAEs

14.7.2 VAE training, losses, and inferencing

14.7.3 VAEs and Bayes’ theorem

14.7.4 Stochastic mapping leads to latent-space smoothness

14.7.5 Direct minimization of the posterior requires prohibitively expensive normalization

14.7.6 ELBO and VAEs

14.7.7 Choice of prior: Zero-mean, unit-covariance Gaussian

14.7.8 Reparameterization trick

Summary
