7 Alternative connectivity patterns


This chapter covers

  • Understanding alternative connectivity patterns for deeper and wider layers
  • Increasing accuracy with feature map reuse, further refactoring of convolutions, and squeeze-and-excitation
  • Coding alternatively connected models (DenseNet, Xception, SE-Net) with the procedural design pattern

So far, we’ve looked at convolutional networks with deep layers and convolutional networks with wide layers. In particular, we’ve seen how the corresponding connectivity patterns, both between and within convolutional blocks, addressed the issues of vanishing and exploding gradients and the problem of memorization caused by overcapacity.

Those methods of deepening and widening layers, along with regularization at the deeper layers (adding noise to reduce overfitting), reduced the problem of memorization but certainly did not eliminate it. So researchers explored other connectivity patterns, within and between residual convolutional blocks, to further reduce memorization without substantially increasing the number of parameters and compute operations.

We’ll cover three of those alternative connectivity patterns in this chapter: DenseNet, Xception, and SE-Net. All three patterns had a similar goal: reducing computational complexity in the connectivity component. But they differed in their approaches to the problem. Let’s first get an overview of those differences; then we’ll spend the rest of the chapter looking at the specifics of each pattern.
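
To make "alternative connectivity" concrete before we dive into the details, here's a minimal sketch of one of the three patterns, a squeeze-and-excitation block, in tf.keras. It follows the standard formulation from the SE-Net paper; the function name se_block and the default reduction ratio of 16 are illustrative (16 is the paper's default), not necessarily the code we'll develop later.

from tensorflow.keras import layers

def se_block(x, ratio=16):
    """ Squeeze-and-excitation: recalibrate the channels of feature maps x """
    n_channels = x.shape[-1]
    # Squeeze: pool global spatial information into one value per channel
    s = layers.GlobalAveragePooling2D()(x)
    # Excitation: a bottleneck MLP learns a weight in (0, 1) per channel
    s = layers.Dense(n_channels // ratio, activation='relu')(s)
    s = layers.Dense(n_channels, activation='sigmoid')(s)
    s = layers.Reshape((1, 1, n_channels))(s)
    # Scale: reweight the original feature maps channel by channel
    return layers.Multiply()([x, s])

Notice the connectivity at the end: rather than adding feature maps (residual) or concatenating them (as DenseNet does, next), squeeze-and-excitation multiplies, reweighting the feature maps the block receives.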

7.1 DenseNet: Densely connected convolutional neural network

7.1.1 Dense group

7.1.2 Dense block

7.1.3 DenseNet macro-architecture

7.1.4 Dense transition block
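
As a preview of the pieces this section assembles, here's a minimal tf.keras sketch of dense connectivity, assuming the standard DenseNet formulation and omitting the optional 1 × 1 bottleneck for brevity. The helper names and the growth_rate and reduction parameters follow the DenseNet paper's terminology; the code we develop in this section may differ in detail.

from tensorflow.keras import layers

def dense_block(x, growth_rate=32):
    """ One BN-ReLU-Conv step whose output is concatenated onto its input """
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    y = layers.Conv2D(growth_rate, (3, 3), padding='same', use_bias=False)(y)
    # Dense connectivity: later layers reuse all prior feature maps
    return layers.Concatenate()([x, y])

def dense_group(x, n_blocks=6, growth_rate=32):
    """ A group of dense blocks; channels grow by growth_rate per block """
    for _ in range(n_blocks):
        x = dense_block(x, growth_rate)
    return x

def transition_block(x, reduction=0.5):
    """ Compress the channels with a 1 x 1 convolution, then downsample """
    n_filters = int(x.shape[-1] * reduction)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(n_filters, (1, 1), use_bias=False)(x)
    return layers.AveragePooling2D((2, 2), strides=2)(x)

The key difference from residual connectivity is the Concatenate: instead of adding a block's output to its input, DenseNet stacks them, so feature maps are reused rather than recomputed. The transition block keeps that stacking affordable by compressing the accumulated channels between groups.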

7.2 Xception: Extreme Inception

7.2.1 Xception architecture

7.2.2 Entry flow of Xception

7.2.3 Middle flow of Xception

7.2.4 Exit flow of Xception

7.2.5 Depthwise separable convolution

7.2.6 Depthwise convolution

7.2.7 Pointwise convolution
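
Here's a minimal tf.keras sketch of how those last two operations compose into a depthwise separable convolution: a depthwise convolution filters each input channel independently, and a pointwise (1 × 1) convolution then mixes the channels. The composition below is illustrative, not this book's exact implementation; Keras also bundles both steps into a single layers.SeparableConv2D layer.

from tensorflow.keras import layers

def depthwise_separable_conv(x, n_filters):
    """ Factor a standard convolution into depthwise + pointwise steps """
    # Depthwise convolution: one (3, 3) spatial filter per input channel
    x = layers.DepthwiseConv2D((3, 3), padding='same', use_bias=False)(x)
    # Pointwise convolution: a (1, 1) convolution mixes the channels
    return layers.Conv2D(n_filters, (1, 1), use_bias=False)(x)

The savings come from the factorization: per output position, a standard 3 × 3 convolution with C_in input and C_out output channels costs 3 × 3 × C_in × C_out multiplications, while the separable version costs 3 × 3 × C_in + C_in × C_out.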