5 State-of-the-art in deep learning: Transformers
This chapter covers
- Understanding the Transformer model architecture at a high level (e.g., the different layers used to build Transformer models)
- Implementing the various sub-layers of the Transformer model with the Keras sub-classing API so they can be reused later (a minimal preview sketch follows this list)
- Understanding the concept of attention in Transformers, the importance of the self-attention mechanism, and how it compares to the RNNs we saw previously
- Implementing a basic, small-scale Transformer
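As a preview of the kind of sub-layer this chapter builds, here is a minimal sketch of a self-attention layer written with the Keras sub-classing API. The class name SelfAttention and the d_model dimension are illustrative assumptions for this sketch, not the chapter's final implementation:

```python
import tensorflow as tf
from tensorflow.keras import layers

class SelfAttention(layers.Layer):
    """A minimal scaled dot-product self-attention sub-layer (illustrative sketch)."""

    def __init__(self, d_model):
        super().__init__()
        # Dense projections that produce queries, keys, and values from the input
        self.wq = layers.Dense(d_model)
        self.wk = layers.Dense(d_model)
        self.wv = layers.Dense(d_model)
        self.d_model = d_model

    def call(self, x):
        # x has shape [batch, time, features]
        q, k, v = self.wq(x), self.wk(x), self.wv(x)
        # Attention scores between every pair of time steps, scaled by sqrt(d_model)
        scores = tf.matmul(q, k, transpose_b=True)
        scores /= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
        weights = tf.nn.softmax(scores, axis=-1)
        # Each output position is a weighted sum of all value vectors
        return tf.matmul(weights, v)

# Usage: a batch of 2 sequences, 5 time steps, 16 features each
layer = SelfAttention(d_model=16)
out = layer(tf.random.normal((2, 5, 16)))
print(out.shape)  # (2, 5, 16)
```

Unlike an RNN, which processes one time step at a time, this layer lets every position attend to every other position in a single step; the chapter unpacks why that matters.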
We have seen many different deep learning models so far: fully connected networks, convolutional neural networks, and recurrent neural networks. We used a fully connected network to reconstruct corrupted images, a convolutional neural network to distinguish images of vehicles from other images, and finally an RNN to predict future CO2 concentration values. In this chapter, we are going to talk about a new type of model known as the Transformer.