8 Deep Learning for Sequences and Text
This chapter covers
- How sequential data differs from non-sequential data
- What deep-learning techniques are suitable for problems that involve sequential data
- How to represent text data in deep learning, including one-hot encoding, multi-hot encoding, and word embeddings
- What recurrent neural networks (RNNs) are and why they are suitable for sequential problems
- What 1D convolution is and why it is an attractive alternative to RNNs
- The unique properties of sequence-to-sequence (seq2seq) tasks and how to use the attention mechanism to solve them
8.1 Second attempt at weather prediction: Introducing recurrent neural networks
8.1.1 Why dense layers fail to model sequential order
8.1.2 How recurrent neural networks model sequential order
8.2 Building deep-learning models for text
8.2.1 How text is represented in machine learning: One-hot and multi-hot encoding
8.2.2 First attempt at the sentiment analysis problem
8.2.3 A more efficient representation of text: Word embeddings
8.2.4 1D convolutional neural networks
8.3 Sequence-to-sequence tasks with the attention mechanism
8.3.1 Formulation of the sequence-to-sequence task
8.3.2 The encoder-decoder architecture and the attention mechanism
8.3.3 Deep dive into the attention-based encoder-decoder model
8.4 Materials for further reading