front matter


preface

Over the past couple of years, it has become increasingly difficult to ignore the breakneck speed at which the field of natural language processing (NLP) has been advancing. Over this period, you have likely been bombarded with news articles about trending NLP models such as ELMo, BERT, and, more recently, GPT-3. The excitement around this technology is warranted, because these models have enabled NLP applications we couldn't have imagined being practical just three years earlier, such as generating production code from a mere description of it, or automatically generating believable poetry and blog posts.

A major driver of this advance has been the focus on increasingly sophisticated transfer learning techniques for NLP models. Transfer learning is an increasingly popular and exciting paradigm in NLP because it enables you to adapt, or transfer, knowledge acquired in one scenario to a different scenario, such as a different language or task. It is a big step forward for the democratization of NLP and, more broadly, artificial intelligence (AI), allowing knowledge to be reused in new settings at a fraction of the previously required resources.

acknowledgments

about this book

Who should read this book?

Road map

Software requirements

About the code

liveBook discussion forum

about the author

about the cover illustration