6 Applying active learning to different machine learning tasks


This chapter covers

  • Calculating uncertainty and diversity for object detection
  • Calculating uncertainty and diversity for semantic segmentation
  • Calculating uncertainty and diversity for sequence labeling
  • Calculating uncertainty and diversity for language generation
  • Calculating uncertainty and diversity for speech, video, and information retrieval
  • Choosing the right number of samples for human review

In chapters 3, 4, and 5, the examples and algorithms focused on document-level or image-level labels. In this chapter, you will learn how the same principles of uncertainty sampling and diversity sampling can be applied to more complex computer vision tasks, such as object detection and semantic segmentation (pixel labeling), and to more complex natural language processing (NLP) tasks, such as sequence labeling and natural language generation. The general principles are the same, and in many cases, there is no change at all. The biggest difference is how you sample the items selected by active learning, which will depend on the real-world problem that you are trying to solve.
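To make this concrete, here is a minimal sketch of how an item-level uncertainty score can be lifted to the image level for object detection: score each predicted object with normalized least confidence and then score the image by its most uncertain object. The detection format (a list of `(label, confidence, box)` tuples per image) and the function names are hypothetical illustrations, not code from this book.

```python
# Hypothetical sketch: lifting object-level least confidence to
# image-level uncertainty scores for object detection.

def least_confidence(prob: float, num_labels: int) -> float:
    """Normalized least confidence: 0.0 = fully certain, 1.0 = maximally uncertain."""
    return (1.0 - prob) * (num_labels / (num_labels - 1))

def image_uncertainty(detections, num_labels: int) -> float:
    """Score an image by its single most uncertain predicted object."""
    if not detections:
        return 0.0  # images with no predictions need separate handling
    return max(least_confidence(conf, num_labels)
               for _label, conf, _box in detections)

def rank_for_review(predictions: dict, num_labels: int, top_n: int):
    """Return the ids of the top_n most uncertain images for human review."""
    ranked = sorted(predictions,
                    key=lambda img: image_uncertainty(predictions[img], num_labels),
                    reverse=True)
    return ranked[:top_n]
```

Aggregating by the maximum is only one choice; averaging over objects, or weighting by box size, changes which images are sampled, which is exactly the kind of task-specific decision this chapter explores.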

6.1 Applying active learning to object detection

6.1.1 Accuracy for object detection: Label confidence and localization

6.1.2 Uncertainty sampling for label confidence and localization in object detection

6.1.3 Diversity sampling for label confidence and localization in object detection

6.1.4 Active transfer learning for object detection

6.1.5 Setting a low object detection threshold to avoid perpetuating bias

6.1.6 Creating training data samples for representative sampling that are similar to your predictions

6.1.7 Sampling for image-level diversity in object detection

6.1.8 Considering tighter masks when using polygons

6.2 Applying active learning to semantic segmentation
