11 Transfer learning


This chapter covers

  • Using prebuilt and pretrained models from TF.Keras and TensorFlow Hub
  • Performing transfer learning between tasks in similar and distinct domains
  • Initializing models with domain-specific weights for transfer learning
  • Determining when to reuse a high- or low-dimensionality latent space

TensorFlow and TF.Keras offer a wide variety of prebuilt and pretrained models. Pretrained models can be used as is, while prebuilt models must be trained from scratch. By replacing the task group (the classifier layers), a pretrained model can also be reconfigured to perform any number of new tasks. This process of replacing or reconfiguring the task group and then retraining is called transfer learning.
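
The following is a minimal sketch of these three uses with TF.Keras's ResNet50; the 20-class output layer is a stand-in for a hypothetical new task, not anything prescribed by the API:

import tensorflow as tf
from tensorflow.keras.applications import ResNet50

# Pretrained: ImageNet weights, usable as is for prediction
pretrained = ResNet50(weights='imagenet')

# Prebuilt: same architecture, random weights, trained from scratch
# (classes=20 is a hypothetical new task's number of classes)
prebuilt = ResNet50(weights=None, classes=20)

# Transfer learning: keep the pretrained base, replace the task group
base = ResNet50(weights='imagenet', include_top=False, pooling='avg')
outputs = tf.keras.layers.Dense(20, activation='softmax')(base.output)
model = tf.keras.Model(inputs=base.input, outputs=outputs)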

In essence, transfer learning means transferring the knowledge learned for solving one task to solving another. The benefit over training a model from scratch is that the new task can be learned faster and with less data. Think of it as a form of reuse: we reuse the model along with its learned weights.
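
As a minimal sketch of that reuse, assuming the same ResNet50 base as above, a hypothetical 20-class task, and placeholder data, we freeze the reused layers so that only the new task group is trained:

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import ResNet50

# Reuse the learned weights: freeze the convolutional base
base = ResNet50(weights='imagenet', include_top=False, pooling='avg')
base.trainable = False

# New task group: a 20-class classifier head for the new task
outputs = tf.keras.layers.Dense(20, activation='softmax')(base.output)
model = tf.keras.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Placeholder data; substitute the (smaller) dataset for the new task
x_small = np.random.rand(32, 224, 224, 3).astype('float32')
y_small = np.random.randint(0, 20, size=(32,))
model.fit(x_small, y_small, epochs=1)

Because only the small task group is trainable, each training pass updates far fewer weights than training the whole network, which is why transfer learning converges faster and tolerates smaller datasets.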

11.1 TF.Keras prebuilt models

11.1.1 Base model

11.1.2 Pretrained ImageNet models for prediction

11.1.3 New classifier

11.2 TF Hub prebuilt models

11.2.1 Using TF Hub pretrained models

11.2.2 New classifier

11.3 Transfer learning between domains

11.3.1 Similar tasks

11.3.2 Distinct tasks

11.3.3 Domain-specific weights

11.3.4 Domain transfer weight initialization

11.3.5 Negative transfer