This chapter covers
- Using pretrained word embeddings in a semisupervised fashion to transfer knowledge to a new problem
- Using pretrained embeddings of larger sections of text in a semisupervised fashion to transfer knowledge to a new problem
- Using multitask learning to develop better-performing models
- Modifying target domain data to reuse knowledge from a resource-rich source domain
In this chapter, we will cover some prominent shallow transfer learning approaches and concepts. This lets us explore major themes in transfer learning within the context of relatively simple models in the class of eventual interest: shallow neural networks. Several authors have proposed classification systems for categorizing transfer learning methods into groups.¹,²,³ Roughly speaking, categorization is based on whether transfer occurs across different languages, tasks, or data domains. These three cases are usually referred to as cross-lingual learning, multitask learning, and domain adaptation, respectively, as visualized in figure 4.1.
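To make the first of these ideas concrete before diving in, the following is a minimal sketch (not the chapter's own code) of semisupervised transfer with pretrained word embeddings: GloVe vectors, pretrained without labels on a large corpus, are averaged into sentence features for a small supervised classifier. The gensim model name and the toy sentiment data are illustrative assumptions.

```python
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

# Load pretrained GloVe vectors (trained unsupervised on a large corpus).
vectors = api.load("glove-wiki-gigaword-50")

def embed(sentence):
    """Average the pretrained vectors of in-vocabulary words."""
    words = [w for w in sentence.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

# Tiny labeled set for the target task (toy sentiment data; 1 = positive).
texts = ["great movie", "terrible plot", "wonderful acting", "awful film"]
labels = [1, 0, 1, 0]

# The supervised step reuses the knowledge in the unsupervised pretraining.
X = np.stack([embed(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([embed("great acting")]))
```

The same pattern extends to the second bullet by swapping word vectors for pretrained embeddings of larger sections of text; the chapter develops both in detail.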