2 Priors, likelihoods, and posteriors—What Bayes is all about: Combining intuition and data to update our knowledge


This chapter covers

  • Bayes’ theorem
  • Different types of priors in a Bayesian model
  • Computing the likelihoods of data
  • Representing updated belief via the posterior distribution

We have learned, at a high level, that Bayesian probability allows us to combine prior knowledge with data into a probability distribution called the posterior, which represents our updated belief about an unknown quantity of interest. This procedure conveniently mirrors how we humans use data to make sense of the world around us.

We now dive into how a prior belief gets updated into the posterior using Bayes’ theorem, which powers all the Bayesian models we will learn throughout this book. Bayes’ theorem, named after Reverend Thomas Bayes, who made the first attempt at formulating it, gives a mathematical formula for updating the probability of a hypothesis given observed data. The formula is simple and easy to remember, yet incredibly powerful: it gives us a way to mathematically formulate and answer the ubiquitous question, “Given what I knew before, and what I have observed, how should I change my mind?”
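As a quick preview (using notation introduced here, which may differ from what the chapter adopts later), Bayes’ theorem states that for a hypothesis H and observed data D,

$$
P(H \mid D) = \frac{P(D \mid H) \, P(H)}{P(D)}
$$

where P(H) is the prior, P(D | H) is the likelihood, P(H | D) is the posterior, and P(D) is a normalizing constant that makes the posterior probabilities sum to one. To make the update concrete, here is a minimal Python sketch of the discrete case; the function name and the prior and likelihood values are purely illustrative, not taken from the chapter:

# A minimal sketch of a discrete Bayesian update (illustrative numbers only)
def bayes_update(priors, likelihoods):
    """Return posterior probabilities for a set of competing hypotheses.

    priors      -- P(H) for each hypothesis, summing to 1
    likelihoods -- P(D | H) for each hypothesis
    """
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(unnormalized)  # P(D), the normalizing constant
    return [u / evidence for u in unnormalized]

# Two hypotheses, "rain" and "no rain", with made-up prior and likelihood values
print(bayes_update(priors=[0.3, 0.7], likelihoods=[0.8, 0.2]))
# [0.631..., 0.368...] -- observing the data shifts our belief toward "rain"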

What makes it all work: Bayes’ theorem

  • The formula and how it works
  • Visualizing Bayes’ theorem
  • One more time, will it rain today?
  • Testing for a rare disease
  • The effects of priors and likelihoods

A full Bayesian model: Do people like tea or coffee more?

  • The problem setup
  • The prior distribution: What you believe before seeing the data
  • The likelihood: How well each hypothesis explains the data
  • The posterior: Putting it all together

Working with the posterior probability

  • The Bayesian belief under increasing data
  • The effects of the prior

Summary