Chapter three

3 Conjugate priors for closed-form Bayesian updates: Skipping over heavy computations with clever math

 

This chapter covers

  • Conjugate priors in Bayesian models
  • Modeling a proportion with the beta–binomial conjugate pair
  • Modeling a real-valued quantity with the normal–normal conjugate pair

In chapter 2, we built a Bayesian model for preferences between tea and coffee, where we rounded the unknown proportion 𝑝 of coworkers who prefer tea over coffee to one decimal place. This discretization allowed us to directly apply Bayes’ theorem to the finite set of competing hypotheses that model our uncertainty about 𝑝, but it ultimately yields an imprecise model. Wouldn’t it be nice if we could implement the same Bayesian procedure on a continuous parameter space, so that we could account for all values of 𝑝 between 0 and 1 (like 0.63432 or 0.34269), without discretization or any other form of approximation?
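To recall the discretized procedure we're about to generalize, here is a minimal sketch of a grid-based Bayesian update of the kind used in chapter 2. The uniform prior and the 7-of-10 survey result are hypothetical numbers chosen for illustration:

```python
import numpy as np

# Grid of hypotheses: p rounded to one decimal place, as in chapter 2.
p_grid = np.round(np.arange(0.0, 1.1, 0.1), 1)

# A uniform prior over the 11 hypotheses (a hypothetical choice).
prior = np.full(p_grid.size, 1 / p_grid.size)

# Hypothetical data: 7 of 10 surveyed coworkers prefer tea.
tea, n = 7, 10

# Binomial likelihood of the data under each hypothesis (the constant
# binomial coefficient is omitted; it cancels in the normalization below).
likelihood = p_grid**tea * (1 - p_grid)**(n - tea)

# Bayes' theorem: posterior is proportional to likelihood times prior,
# normalized so the probabilities sum to 1.
posterior = likelihood * prior
posterior /= posterior.sum()

print(p_grid[np.argmax(posterior)])  # most probable hypothesis: 0.7
```

The imprecision is visible here: no matter how much data we collect, the model can never conclude that 𝑝 is, say, 0.63432, because that value is not on the grid.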

In this chapter, we will learn to do just that, by building a more general model for 𝑝 using continuous probability distributions. This is a demanding endeavor, however: a continuous probability distribution means we’re dealing with an infinite number of competing hypotheses, which complicates the math of Bayes’ theorem that we saw in chapter 2.
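As a preview of where we're headed, the conjugate-prior trick this chapter develops sidesteps the heavy computation entirely: for a binomial likelihood, a Beta prior yields a Beta posterior whose parameters are updated by simple addition. A minimal sketch using SciPy, where the Beta(2, 2) prior and the 7-of-10 data are hypothetical choices for illustration:

```python
from scipy.stats import beta

# Hypothetical prior: Beta(2, 2), a mild belief that p is near 0.5.
a, b = 2, 2

# Hypothetical data: 7 of 10 surveyed coworkers prefer tea.
tea, n = 7, 10

# Conjugate update: with a binomial likelihood, the posterior is again
# a Beta distribution, its parameters shifted by the observed counts.
a_post, b_post = a + tea, b + (n - tea)

posterior = beta(a_post, b_post)
print(posterior.mean())  # posterior mean of p: (a + tea) / (a + b + n) = 9/14
```

No discretization and no numerical integration are needed; the entire infinite family of hypotheses is handled by two additions.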

Revisiting tea vs. coffee with the beta–binomial model

Specifying a continuous prior distribution

Working with a continuous prior distribution

Using the same likelihood for the data

The difficulty of computing the posterior

Introducing conjugate priors

Beta priors to model proportions

Working with a continuous posterior distribution

The effects of the prior

Linear regression the Bayesian way

Accounting for all possible lines

Using a normal prior for a real-valued quantity

The likelihood for the linear regression model

Working with the posterior linear regression model

Working with posterior predictions

Extensions of Bayesian linear regression

Further resources on lookup tables for conjugate priors

Summary