9 Preventing overfitting: ridge regression, LASSO, and elastic net


This chapter covers:

  • What does overfitting look like for regression problems?
  • What is regularization?
  • What are ridge regression, LASSO, and elastic net?
  • What are the L1 and L2 norms and how are they used to shrink parameters?

Our societies are full of checks and balances. In our political systems, parties balance each other to (in theory) find solutions that lie at neither extreme of their opposing views. Professional fields, such as financial services, have regulatory bodies that prevent wrongdoing and ensure that what practitioners say and do is truthful and correct. When it comes to machine learning, it turns out we can apply our own form of regulation to the learning process to prevent algorithms from overfitting the training set. In machine learning, we call this form of regulation regularization.
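To make the idea concrete before we dive in, here is a minimal numeric sketch (in Python with NumPy, which is not necessarily the tooling used later in this chapter) of what regularization does. It fits the same overparameterized polynomial model twice using the closed-form ridge solution, once with no penalty (plain OLS) and once with an L2 penalty, and shows that the penalty shrinks the coefficients:

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny dataset: 10 noisy samples of a linear trend, expanded into a
# degree-8 polynomial basis so that unpenalized OLS has room to overfit.
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(scale=0.2, size=x.size)
X = np.vander(x, N=9, increasing=True)  # columns: 1, x, x^2, ..., x^8

def fit(X, y, lam):
    """Closed-form ridge solution (X'X + lam*I)^-1 X'y; lam=0 gives OLS."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_ols = fit(X, y, lam=0.0)    # unregularized: large, wiggly coefficients
w_ridge = fit(X, y, lam=1.0)  # L2-penalized: coefficients shrunk toward 0

print("OLS   |w| sum:", np.abs(w_ols).sum())
print("Ridge |w| sum:", np.abs(w_ridge).sum())
```

The penalty term `lam * np.eye(n_features)` is the "regulator": it discourages extreme coefficient values, trading a little training-set fit for a model that generalizes better. The chapter's three methods (ridge, LASSO, and elastic net) differ only in the form of this penalty.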

9.1  What is regularization?

9.2  What is ridge regression?

9.3  What is the L2 norm and how does ridge regression use it?

9.4  What is the L1 norm and how does LASSO use it?

9.5  What is elastic net?

9.6  Building our first ridge, LASSO, and elastic net models

9.6.1  Loading and exploring the Iowa dataset

9.6.2  Training the ridge regression model

9.6.3  Training the LASSO model

9.6.4  Training the elastic net model

9.7  Benchmarking ridge, LASSO, elastic net, and OLS against each other

9.8  Strengths and weaknesses of ridge, LASSO, and elastic net

9.9  Summary

9.10  Solutions to exercises