Up to now, we’ve focused on networks with deeper layers, block layers, and shortcuts in residual networks for image-related tasks such as classification, object localization, and image segmentation. Now we are going to take a look at networks with wide, rather than deep, convolutional layers. Starting in 2014 with Inception v1 (GoogLeNet), followed by Inception v2 in 2015 and ResNeXt in 2016, neural network designs moved toward wide layers, reducing the need to go ever deeper. Essentially, a wide-layer design means having multiple convolutions operate in parallel on the same input and then concatenating their outputs. In contrast, a deep design stacks convolutions sequentially, each layer transforming the output of the previous one.
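The parallel-then-concatenate idea can be sketched framework-agnostically. The following NumPy example (a minimal sketch; the kernel sizes, channel counts, and the naive `conv2d_same` helper are illustrative, not from any particular library) runs 1×1, 3×3, and 5×5 convolutions side by side on the same input and joins their outputs along the channel axis, Inception-style:

```python
import numpy as np

def conv2d_same(x, kernels):
    """Naive 'same'-padded convolution.
    x: (H, W, C_in), kernels: (k, k, C_in, C_out) -> (H, W, C_out)."""
    k = kernels.shape[0]
    pad = k // 2
    H, W, _ = x.shape
    C_out = kernels.shape[-1]
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((H, W, C_out))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]  # (k, k, C_in) window
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 3))  # one 8x8 image with 3 channels

# Wide design: three branches with different kernel sizes, all fed the SAME input.
branches = [conv2d_same(x, rng.standard_normal((k, k, 3, 4))) for k in (1, 3, 5)]

# Concatenate branch outputs along the channel axis.
wide_out = np.concatenate(branches, axis=-1)
print(wide_out.shape)  # spatial size is preserved; branch channels add up: (8, 8, 12)
```

Each branch preserves the spatial dimensions (thanks to "same" padding), so the only thing that grows is the channel count, which is the sum of the branches' output channels. A deep design would instead feed each convolution the output of the one before it.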