3 Variational Inference
This chapter covers
- Introduction to KL-divergence-based variational inference
- The mean-field approximation
- Image denoising in the Ising model
- Mutual information maximization
In the previous chapter, we covered one of the two main approaches to Bayesian inference: Markov Chain Monte Carlo (MCMC), in which we approximated the posterior distribution using samples drawn by different sampling algorithms. In this chapter, we turn to the second approach: Variational Inference (VI), an important class of approximate inference algorithms. The basic idea behind VI is to choose an approximating distribution q(x) from a family of tractable, easy-to-compute distributions with trainable parameters, and then make this approximation as close as possible to the true posterior distribution p(x), typically by minimizing the Kullback-Leibler (KL) divergence between q and p.
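To make this idea concrete, here is a minimal sketch (a hypothetical example, not this chapter's code) in which both the "posterior" p and the approximation q are univariate Gaussians, so the KL divergence KL(q || p) has a closed form. We fit q's trainable parameters by gradient descent on that KL:

```python
import numpy as np

# Stand-in "true posterior" p(x) = N(mu_p, sigma_p); in real problems p is
# intractable, but a Gaussian target lets us write KL(q || p) in closed form.
mu_p, sigma_p = 3.0, 2.0

def kl_gauss(mu_q, sigma_q):
    """Closed-form KL(q || p) between two univariate Gaussians."""
    return (np.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
            - 0.5)

# Trainable parameters of q: mean, and log of the std (to keep sigma_q > 0)
mu_q, log_sigma_q = 0.0, 0.0
lr, eps = 0.1, 1e-5

for _ in range(2000):
    # Numerical gradients via central differences (for clarity; in practice
    # an autodiff library would compute these)
    g_mu = (kl_gauss(mu_q + eps, np.exp(log_sigma_q))
            - kl_gauss(mu_q - eps, np.exp(log_sigma_q))) / (2 * eps)
    g_ls = (kl_gauss(mu_q, np.exp(log_sigma_q + eps))
            - kl_gauss(mu_q, np.exp(log_sigma_q - eps))) / (2 * eps)
    mu_q -= lr * g_mu
    log_sigma_q -= lr * g_ls

print(mu_q, np.exp(log_sigma_q))  # q's parameters approach p's: (3.0, 2.0)
```

Because the KL divergence is zero exactly when q equals p, gradient descent drives q's parameters toward the target's. The same recipe, with a Monte Carlo estimate replacing the closed-form KL, underlies the methods developed in the rest of this chapter.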