In this second part, you’ll learn how to design and code models using the procedural reuse design pattern. I will show you how easy it is to apply procedural reuse, a fundamental principle in software engineering, to deep learning models. You’ll see how to decompose a model into its standard three components—stem, learner, and task—along with the interface between the components, and how to apply a procedural reuse pattern when coding each piece.
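To give a sense of what that decomposition looks like in code, here is a minimal sketch in Keras. The specific layers and function names are illustrative placeholders, not the book’s exact architectures; what matters is the three-component structure and the shared interface, where each component takes a tensor and returns a tensor.

```python
# Minimal sketch of the stem / learner / task decomposition.
# The layer choices are illustrative, not a specific SOTA model.
from tensorflow.keras import Input, Model, layers

def stem(inputs):
    """Stem: coarse feature extraction from the raw input."""
    return layers.Conv2D(32, (3, 3), strides=2, padding='same',
                         activation='relu')(inputs)

def learner(x):
    """Learner: groups of convolutional blocks that learn representations."""
    for filters in (64, 128):
        x = layers.Conv2D(filters, (3, 3), padding='same', activation='relu')(x)
        x = layers.MaxPooling2D((2, 2))(x)
    return x

def task(x, n_classes):
    """Task: pooling plus the classifier head."""
    x = layers.GlobalAveragePooling2D()(x)
    return layers.Dense(n_classes, activation='softmax')(x)

# Each component takes a tensor and returns a tensor, so they compose
# directly into a single model.
inputs = Input(shape=(224, 224, 3))
outputs = task(learner(stem(inputs)), n_classes=10)
model = Model(inputs, outputs)
```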
Next, you’ll see how to apply this design pattern to a variety of seminal state-of-the-art (SOTA) computer vision models, as well as several examples from structured data and NLP. I’ll walk you through coding a progression of SOTA models and cover their contributions to the development of deep learning: VGG, ResNet, ResNeXt, Inception, DenseNet, WRN, Xception, and SE-Net. Then we will turn our attention to mobile models for memory-constrained devices, such as mobile phones or IoT sensors. We’ll look at the progression of design principles developed to make models run on such devices, starting with MobileNet, then SqueezeNet and ShuffleNet. Again, we’ll code each of these mobile models with the procedural reuse design pattern, and then you’ll see how to deploy and serve these models using TensorFlow Lite.
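As a small preview of that deployment step, the sketch below converts a trained Keras model to the TensorFlow Lite format. The output file name is an illustrative choice, and the details of serving on-device are covered later in this part.

```python
# Minimal sketch: convert a trained tf.keras model to TensorFlow Lite.
# Assumes `model` is a trained Keras model, e.g. the one built above.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the flatbuffer so a TFLite interpreter on a phone or IoT
# device can load it; the file name here is illustrative.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```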