2 Priors, likelihoods, and posteriors—What Bayes is all about: Combining intuition and data to update our knowledge
This chapter covers
- Bayes’ theorem
- Different types of priors in a Bayesian model
- Computing the likelihoods of data
- Representing updated belief via the posterior distribution
We have learned, at a high level, that Bayesian probability allows us to combine prior knowledge with data into a probability distribution, called the posterior, that represents our updated belief about an unknown quantity of interest. This procedure conveniently mirrors how we humans use data to make sense of the world around us.
We now dive into how a prior belief gets updated to the posterior using Bayes’ theorem, which powers all the Bayesian models we will learn throughout this book. Bayes’ theorem is named after Reverend Thomas Bayes, who made the first attempt at formulating it. It provides a mathematical formula for updating the probability of a hypothesis in light of observed data. The formula is quite simple and easy to remember, yet it is incredibly powerful: it gives us a way to mathematically formulate and answer the ubiquitous question, “Given what I knew before, and what I have observed, how should I change my mind?”
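In its standard form, with H denoting a hypothesis and D the observed data, Bayes’ theorem reads

$$
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
$$

where P(H) is the prior (our belief before seeing the data), P(D | H) is the likelihood of the data under the hypothesis, P(D) is the overall probability of the data, and P(H | D) is the posterior, our updated belief. As a minimal illustration of the update in code (the hypotheses and numbers here are purely hypothetical), the following Python sketch applies the formula to two competing hypotheses:

```python
# A minimal, self-contained illustration of Bayes' theorem with two
# competing hypotheses. All numbers are hypothetical, for demonstration only.

# Prior beliefs: how probable we consider each hypothesis before seeing data.
prior = {"H1": 0.5, "H2": 0.5}

# Likelihoods: how probable the observed data D is under each hypothesis.
likelihood = {"H1": 0.8, "H2": 0.2}

# P(D): total probability of the data, summed over all hypotheses.
p_data = sum(prior[h] * likelihood[h] for h in prior)

# Posterior: updated belief in each hypothesis after observing D.
posterior = {h: prior[h] * likelihood[h] / p_data for h in prior}

print(posterior)  # {'H1': 0.8, 'H2': 0.2}
```

Notice that the hypothesis under which the observed data is more likely, H1, gains belief, while H2 loses it; this is exactly the “changing of our mind” that the theorem formalizes.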