14 Latent space and generative modeling, autoencoders, and variational autoencoders
This chapter covers
- Representing inputs with latent vectors
- Geometrical view, smoothness, continuity, and regularization for latent spaces
- PCA and linear latent spaces
- Autoencoders and reconstruction loss
- Variational autoencoders (VAEs) and regularizing latent spaces
Mapping input vectors to a transformed space is often beneficial in machine learning. The transformed vector is called a latent vector—latent because it is not directly observable—whereas the input is the observed vector that the latent vector underlies. The latent vector (aka embedding) is a simpler representation of the input in which only the features that help accomplish the ultimate goal (such as estimating the probability that an input belongs to a specific class) are retained; other features are discarded. Typically, the latent representation has fewer dimensions than the input: that is, encoding an input into a latent vector results in dimensionality reduction.
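The following is a minimal sketch of such an encoding in PyTorch. The input size (784, e.g., a flattened 28 × 28 image), latent size (32), and hidden-layer width are illustrative assumptions, not a prescribed architecture; the point is only that the encoder maps each observed vector to a lower-dimensional latent vector.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a high-dimensional observed input to a lower-dimensional latent vector."""
    def __init__(self, input_dim=784, latent_dim=32):  # illustrative sizes
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),  # latent_dim < input_dim: dimensionality reduction
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
x = torch.randn(16, 784)   # a batch of 16 observed input vectors
z = encoder(x)             # corresponding latent vectors (embeddings)
print(z.shape)             # torch.Size([16, 32])
```

How such an encoder is trained—and what makes the resulting latent space useful—is the subject of the rest of this chapter.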