9 ULMFiT and knowledge distillation adaptation strategies


This chapter covers

  • Implementing the strategies of discriminative fine-tuning and gradual unfreezing
  • Executing knowledge distillation between teacher and student BERT models

In this chapter and the following chapter, we cover some adaptation strategies for the deep NLP transfer learning modeling architectures presented so far. In other words, given a pretrained architecture such as ELMo, BERT, or GPT, how can we carry out transfer learning more efficiently? Several measures of efficiency could be employed here. We choose to focus on parameter efficiency, where the goal is to yield a model with as few parameters as possible while suffering minimal loss in performance. The purpose is to produce a smaller model that is easier to store and therefore easier to deploy, on smartphones, for instance. Alternatively, smart adaptation strategies may be needed simply to reach an acceptable level of performance in some difficult transfer scenarios.
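To make parameter efficiency concrete, the short sketch below (an illustrative assumption: the Hugging Face transformers library and PyTorch are available in your environment) counts the parameters of a full BERT model and of DistilBERT, the kind of distilled student model we work with later in this chapter. Running it should report roughly 110 million parameters for bert-base-uncased versus roughly 66 million for distilbert-base-uncased, that is, the student model is about 40% smaller.

from transformers import AutoModel   # Hugging Face transformers

def count_parameters(model_name):
    # Load a pretrained model and return its total parameter count
    model = AutoModel.from_pretrained(model_name)
    return sum(p.numel() for p in model.parameters())

# Compare the full-sized model with its distilled counterpart
for name in ["bert-base-uncased", "distilbert-base-uncased"]:
    print(f"{name}: {count_parameters(name) / 1e6:.1f}M parameters")

The exact counts depend on the model checkpoints, but the relative gap is what matters: the distilled student retains most of the teacher's performance at a fraction of the storage and memory cost, which is precisely the trade-off this chapter explores.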

9.1 Gradual unfreezing and discriminative fine-tuning

9.1.1 Pretrained language model fine-tuning

9.1.2 Target task classifier fine-tuning

9.2 Knowledge distillation

9.2.1 Transfer DistilmBERT to monolingual Twi data with pretrained tokenizer

Summary