4 Testing what we assume to know: Neyman, Pearson, and the principles of hypothesis testing
This chapter covers
- Jerzy Neyman and Egon Pearson’s *On the Problem of the Most Efficient Tests of Statistical Hypotheses* (1933) and their stepwise hypothesis testing procedure
- The hypothesis testing procedure, from formulating null and alternative hypotheses to drawing conclusions—anchored in Neyman and Pearson’s structured framework
- How hypothesis testing separates true signals from random noise in fields such as medicine, safety, and beyond
- The stakes of error—why false positives and false negatives matter, and why balancing them remains critical in science, business, and AI
- Modern applications of hypothesis testing in statistics, data science, and machine learning, from A/B testing to model evaluation
By the early 1930s, the foundations of modern inference were in motion. Bayes had provided a way to update belief in light of new evidence. Fisher had given statisticians powerful tools for estimation and even introduced the notion of significance testing, yet his approach left ambiguity around how to balance different types of error and how to translate test results into systematic decisions. In *On the Problem of the Most Efficient Tests of Statistical Hypotheses*, Jerzy Neyman and Egon Pearson offered an answer: a framework that defined hypothesis testing as a structured and systematic process of decision-making under uncertainty.
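The core of that framework can be previewed with a minimal sketch: fix the Type I error rate α in advance, derive the rejection threshold for the test statistic, and then evaluate the test's power against a specific alternative. The numbers below (a null mean of 100, an alternative mean of 103, known σ = 10, sample size 50) are hypothetical values chosen purely for illustration, not an example from the 1933 paper.

```python
from statistics import NormalDist

# Hypothetical setup: test H0: mu = 100 against H1: mu = 103,
# with known sigma = 10 and a sample of n = 50 observations.
mu0, mu1, sigma, n = 100.0, 103.0, 10.0, 50
alpha = 0.05                    # Type I error rate, fixed in advance
se = sigma / n ** 0.5           # standard error of the sample mean

z = NormalDist()                # standard normal distribution
# Reject H0 when the sample mean exceeds this critical value
critical = mu0 + z.inv_cdf(1 - alpha) * se

# Power = P(reject H0 | H1 is true) = 1 - Type II error rate
power = 1 - z.cdf((critical - mu1) / se)

print(f"critical value: {critical:.3f}")
print(f"power: {power:.3f}")
```

The order of operations is the point: the false-positive rate is controlled by design (here 5%), and the quality of the test is then judged by how much power it delivers, which is exactly the trade-off the chapter explores.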