3 Variational Inference

 

This chapter covers

  • Introduction to KL variational inference
  • Mean-field approximation
  • Image denoising in the Ising model
  • Mutual information maximization

In the previous chapter, we covered one of the two main camps of Bayesian inference: Markov Chain Monte Carlo (MCMC). We examined different sampling algorithms and approximated the posterior distribution using samples. In this chapter, we are going to look at the second camp: Variational Inference. Variational Inference (VI) is an important class of approximate inference algorithms. The basic idea behind VI is to choose an approximating distribution q(x) from a family of tractable, easy-to-compute distributions with trainable parameters, and then make this approximation as close as possible to the true posterior distribution p(x), typically by minimizing the Kullback-Leibler (KL) divergence between q(x) and p(x).
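
To make this concrete, the short sketch below (an illustrative example, not code from this book) fits a Gaussian q(x; mu, sigma) to a heavy-tailed target density by minimizing a Monte Carlo estimate of the reverse KL divergence KL(q || p). The Student-t target, the fixed-noise reparameterization, and the Nelder-Mead optimizer are assumed illustrative choices, not the chapter's implementation; they simply show the "pick a tractable family and push it toward p" idea in a few lines of Python.

import numpy as np
from scipy import stats, optimize

# Heavy-tailed target density used as a stand-in for an intractable posterior p(x).
# (Illustrative choice, not the book's example.)
def log_p(x):
    return stats.t.logpdf(x, df=4.0, loc=1.0, scale=1.5)

# Variational family: q(x; mu, log_sigma) = Normal(mu, sigma^2).
# Reverse KL(q || p) = E_q[log q(x) - log p(x)], estimated by Monte Carlo
# with fixed base noise (common random numbers) so the objective is a smooth
# function of the variational parameters.
eps = np.random.default_rng(0).standard_normal(2000)

def kl_estimate(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    x = mu + sigma * eps                        # reparameterized samples from q
    log_q = stats.norm.logpdf(x, loc=mu, scale=sigma)
    return np.mean(log_q - log_p(x))

# Fit the variational parameters by minimizing the Monte Carlo KL estimate.
result = optimize.minimize(kl_estimate, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"q(x) = Normal(mu={mu_hat:.3f}, sigma={sigma_hat:.3f})")

Because the reverse KL penalizes q for placing mass where p has little, the fitted Gaussian tends to be mode-seeking and to underestimate the spread of the heavy-tailed target; this behavior is a recurring theme when choosing a variational objective.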

3.1 KL Variational Inference

 

3.2 Mean-Field

 
 
 

3.3 Image Denoising in the Ising Model

 
 
 

3.4 Mutual Information Maximization

 
 
 

3.5 Exercises

 
 
 
 

3.6 Summary

 
 
 