5 Transfer Learning: Reusing Pretrained Neural Networks
This chapter covers:
What transfer learning is and why it is better than training models from scratch for many types of problems
How to leverage the feature-extraction power of state-of-the-art pretrained convolutional neural networks (convnets) by converting them from Keras and importing them into TensorFlow.js
What SymbolicTensors are and how they help you achieve flexible “plug and play” of model components
Why you should let only some of the model's layers update during transfer learning by freezing the remaining layers
How to replace the output layer of a pretrained convnet with new output layers to solve different types of transfer-learning tasks and datasets (see the sketch after this list)
What the fine-tuning technique is and how it helps you get more accurate models from transfer learning
How to use transfer learning to achieve object detection in TensorFlow.js
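As a preview of the workflow the chapter develops, the following is a minimal sketch of loading a Keras-converted convnet into TensorFlow.js, freezing its layers, and wiring a new output head onto it through SymbolicTensors. The model URL and the layer name 'dense_embedding' are hypothetical placeholders; the chapter's examples use real pretrained models and datasets.

// A minimal sketch, assuming a hypothetical model URL and layer name.
import * as tf from '@tensorflow/tfjs';

async function buildTransferModel() {
  // Load a pretrained convnet that was converted from Keras into the
  // TensorFlow.js format (placeholder URL).
  const baseModel = await tf.loadLayersModel(
      'https://example.com/pretrained/model.json');

  // Freeze the base model's layers so their weights stay fixed
  // while the new head is trained.
  for (const layer of baseModel.layers) {
    layer.trainable = false;
  }

  // The output of an intermediate layer is a SymbolicTensor, which can be
  // used as the feature input to new layers (layer name is hypothetical).
  const featureOutput = baseModel.getLayer('dense_embedding').output;

  // Attach a fresh output layer for the new task (here, 10 target classes).
  const newHead = tf.layers.dense({units: 10, activation: 'softmax'});
  const newOutput = newHead.apply(featureOutput);

  // Assemble the combined model from SymbolicTensors: "plug and play"
  // wiring of pretrained and new layers into a single trainable model.
  const model = tf.model({inputs: baseModel.inputs, outputs: newOutput});
  model.compile({
    optimizer: 'adam',
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy']
  });
  return model;
}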
5.1 Introduction to transfer learning: Reusing pretrained models
5.1.1 Transfer learning based on compatible output shapes: Freezing layers
5.1.2 Transfer learning on incompatible output shapes: Creating a new model using outputs from the base model
5.1.3 Getting the most out of transfer learning through fine-tuning: An audio example
5.2 Object detection through transfer learning on a convnet
5.2.1 A simple object detection problem based on synthesized scenes