
13 Using segmentation to find suspected nodules


This chapter covers:

  • Implementing step 2 of our project: segmentation
  • Treating segmentation as per-pixel classification with the UNet architecture
  • Building a 2D segmentation dataset and ground truth from our 3D CT data
  • Training with Dice loss and monitoring segmentation images in TensorBoard

In the last four chapters, we have accomplished a lot. We’ve learned about CT scans and lung tumors, datasets and data loaders, and metrics and monitoring. We have also applied many of the things we learned in part 1, and we have a working classifier. We are still operating in a somewhat artificial environment, however, since we require hand-annotated nodule information to load into our classifier. We don’t have a good way of creating that input automatically. Feeding overlapping 32x32x32 patches of data into the classifier would result in 31x31x7 = 6,727 patches per CT, or about 10 times the number of annotated samples we have. We would also need to overlap the edges, since our classifier expects the nodule to be centered; and even then, the inconsistent positioning would likely cause problems.
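To make the patch arithmetic concrete, here is a minimal sketch of where counts like 31x31x7 come from when sliding a fixed-size window over a CT volume. The specific volume dimensions and strides below are illustrative assumptions (they are not stated in the text); they are chosen so the counts match the figures above.

```python
def patches_along(axis_len, patch, stride):
    """Number of window positions when sliding a patch of size `patch`
    along an axis of length `axis_len` with step `stride`."""
    return (axis_len - patch) // stride + 1

# Assumed dimensions: 512 x 512 in-plane, ~224 slices; a half-patch
# stride in-plane and a full-patch stride along the slice axis.
rows  = patches_along(512, 32, 16)   # 31
cols  = patches_along(512, 32, 16)   # 31
depth = patches_along(224, 32, 32)   # 7

total = rows * cols * depth
print(rows, cols, depth, total)      # 31 31 7 6727
```

The point is simply that even coarse tiling of a single CT produces thousands of candidate patches, dwarfing the annotated samples we have available.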

That’s not to say these issues are impossible to handle! This book takes the multiple-step approach, however, both because a large number of well-performing open source projects are structured the same way, and because we feel that the multi-step approach introduces new concepts more gradually, making the learning process easier.

13.1  Segmentation is per-pixel classification

13.1.1  The UNet architecture

13.1.2  An off-the-shelf model: adding UNet to our project

13.2  A 3D Dataset in 2D

13.2.1  UNet has very specific input size requirements

13.2.2  UNet in 3D would use too much RAM

13.2.3  Building the ground truth data

13.2.4  Implementing the Luna2dSegmentationDataset

13.3  Updating the training script

13.3.1  Getting images into tensorboard

13.3.2  Dice loss

13.3.3  Updating our metrics logging

13.3.4  Saving our model

13.4  Conclusion

13.5  Exercises

13.6  Summary