Chapter 11. Preventing overfitting with ridge regression, LASSO, and elastic net
This chapter covers
- Managing overfitting in regression problems
- Understanding regularization
- Using the L1 and L2 norms to shrink parameters
Our societies are full of checks and balances. In our political systems, parties balance each other (in theory) to find solutions that sit at neither extreme of each other's views. Professions such as financial services have regulatory bodies that prevent wrongdoing and ensure that what practitioners say and do is truthful and correct. When it comes to machine learning, it turns out we can apply our own form of regulation to the learning process to prevent the algorithms from overfitting the training set. In machine learning, we call this regulation regularization.
In this section, I’ll explain what regularization is and why it’s useful. Regularization (also sometimes called shrinkage) is a technique that prevents a model’s parameters from becoming too large by “shrinking” them toward 0. The result of regularization is a model that, when making predictions on new data, has lower variance.
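To make the idea of shrinkage concrete, here is a minimal sketch in Python (using scikit-learn, chosen purely for illustration; the data are synthetic). Ridge regression adds a penalty proportional to the sum of the squared coefficients (the L2 norm) to the least-squares loss, while LASSO penalizes the sum of their absolute values (the L1 norm); both pull the estimated coefficients toward 0 relative to ordinary least squares.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# A small, noisy training set with more predictors than the signal
# needs: prime territory for ordinary least squares to overfit.
rng = np.random.default_rng(42)
n_samples, n_features = 20, 10
X = rng.normal(size=(n_samples, n_features))
true_coefs = np.zeros(n_features)
true_coefs[:3] = [3.0, -2.0, 1.5]  # only three predictors truly matter
y = X @ true_coefs + rng.normal(scale=1.0, size=n_samples)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=5.0).fit(X, y)  # L2 penalty shrinks the coefficients
lasso = Lasso(alpha=0.3).fit(X, y)  # L1 penalty can zero some out entirely

# The regularized coefficients are pulled toward 0 compared with OLS.
print("OLS:  ", np.round(ols.coef_, 2))
print("Ridge:", np.round(ridge.coef_, 2))
print("LASSO:", np.round(lasso.coef_, 2))
```

Running the sketch, you should see the ridge coefficients pulled toward 0 relative to OLS, and the LASSO zeroing out several of the irrelevant predictors entirely. The penalty strengths (alpha) are arbitrary here; in practice they are tuned, a point we'll return to in this chapter.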