Regularization in Deep Learning (MEAP V07)
Thank you for purchasing the MEAP edition of Regularization in Deep Learning.

One of the most important goals in building machine learning models, and deep learning models in particular, is to achieve good generalization performance on the test dataset. Training is considered complete when we have obtained a generalizable model, often with the help of proper regularization during the training process. While the theory of generalization remains largely a mystery, it is an active research area in which new insights are continually being proposed.

Currently, many regularization techniques have proved empirically effective in specific training contexts. However, the resources describing them are often scattered and disconnected. This book intends to bridge that gap by offering a systematic and well-illustrated perspective on different regularization techniques, covering the data, the model, the cost function, and the optimization procedure. It goes one step further by pairing the most recent research breakthroughs with practical coding examples of regularization in deep learning models.
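To give a flavor of the cost-function category mentioned above, here is a minimal sketch of one classic technique, an L2 weight penalty added to a data-fit loss. The function names and numbers below are illustrative, not taken from the book:

```python
# Minimal sketch of L2 regularization (a cost-function technique).
# All names and values here are illustrative, not from the book.

def mse(preds, targets):
    """Mean squared error: the data-fit term of the loss."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def l2_penalty(weights, lam):
    """L2 penalty: lam times the sum of squared weights."""
    return lam * sum(w ** 2 for w in weights)

def regularized_loss(preds, targets, weights, lam=0.01):
    """Total training loss = data-fit term + weight penalty."""
    return mse(preds, targets) + l2_penalty(weights, lam)

preds, targets = [1.0, 2.0], [1.0, 2.5]
weights = [0.5, -1.0]
# Data term: ((0)^2 + (-0.5)^2) / 2 = 0.125; penalty: 0.1 * (0.25 + 1.0) = 0.125
print(regularized_loss(preds, targets, weights, lam=0.1))  # 0.25
```

The penalty discourages large weights, trading a slightly worse fit on the training data for a simpler model that tends to generalize better, which is the theme the chapters ahead develop in depth.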

This book approaches this complex and ever-growing topic in a unique way. It introduces minimal mathematics, presents technical concepts in a well-illustrated manner, and provides practical examples and code walkthroughs in a step-by-step fashion. The teaching is designed to be intuitive, natural, and progressive, rather than forcing any particular concept on the reader.