Chapter 10. Recurrent neural networks


This chapter covers

  • Understanding the components of a recurrent neural network
  • Designing a predictive model of time-series data
  • Using the time-series predictor on real-world data

10.1. Contextual information

Back in school, I remember my sigh of relief when one of my midterm exams consisted of only true-or-false questions. I can’t be the only one who assumed that half the answers would be True and the other half would be False.

I figured out answers to most of the questions and left the rest to random guessing. But that guessing was based on something clever, a strategy that you might have employed as well. After counting my True answers, I noticed that my tally was short on False answers. So, to balance the distribution, I marked the majority of my remaining guesses False.

It worked. I sure felt sly in the moment. What exactly is this feeling of craftiness that makes us feel so confident in our decisions, and how can we give a neural network the same power?

One answer is to use context when answering questions. Contextual cues are important signals that can improve the performance of machine-learning algorithms. For example, imagine you want to examine an English sentence and tag the part of speech of each word. A word such as book can be a noun or a verb, and the surrounding words are often the only way to tell which.
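To make the idea concrete before we turn to neural networks, here is a minimal sketch (not from the book) of a toy rule-based tagger. The word list, tag names, and the single "previous word" rule are illustrative assumptions; real taggers learn such context from data rather than hand-coding it:

```python
# Toy illustration of contextual cues in part-of-speech tagging.
# "book" is a noun after a determiner ("the book") but a verb
# after "to" ("to book a flight"). The word lists and rules here
# are made up for illustration, not a real tagging scheme.

AMBIGUOUS = {"book": ("NOUN", "VERB"), "flies": ("NOUN", "VERB")}
DETERMINERS = {"the", "a", "an"}

def tag(words):
    """Tag each word, using only the previous word as a contextual cue."""
    tags = []
    for i, word in enumerate(words):
        if word in AMBIGUOUS:
            noun_tag, verb_tag = AMBIGUOUS[word]
            prev = words[i - 1] if i > 0 else None
            # The single hand-coded rule: "to" before the word suggests
            # a verb reading; otherwise default to the noun reading.
            tags.append(verb_tag if prev == "to" else noun_tag)
        elif word in DETERMINERS:
            tags.append("DET")
        else:
            tags.append("OTHER")
    return tags

print(tag(["i", "want", "to", "book", "a", "flight"]))
# → ['OTHER', 'OTHER', 'OTHER', 'VERB', 'DET', 'OTHER']
print(tag(["she", "read", "the", "book"]))
# → ['OTHER', 'OTHER', 'DET', 'NOUN']
```

The same token, book, receives different tags purely because of its neighbors. A recurrent neural network generalizes this idea: instead of one hand-written rule looking one word back, it learns to carry arbitrary context forward through the sequence.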

10.2. Introduction to recurrent neural networks

10.3. Implementing a recurrent neural network

10.4. A predictive model for time-series data

10.5. Application of recurrent neural networks

10.6. Summary