16 Recurrent neural networks

 

This chapter covers

  • Understanding the components of a recurrent neural network
  • Designing a predictive model of time-series data
  • Using the time-series predictor on real-world data

Back in school, I remember my sigh of relief when one of my midterm exams consisted of only true-or-false questions. I can’t be the only one who assumed that half the answers would be true and the other half would be false.

I figured out answers to most of the questions and left the rest to guessing. But that guessing was based on something clever, a strategy you might have employed as well. After counting the true answers I had already marked, I noticed that false answers were disproportionately underrepresented, so I marked the majority of my guesses false to balance the distribution.

It worked. I sure felt sly in the moment. What is this feeling of craftiness that makes us feel so confident in our decisions, and how can we give a neural network the same power?

One answer is to use context. Contextual cues are important signals that can improve the performance of machine-learning algorithms. Suppose that you want to examine an English sentence and tag the part of speech of each word (a problem that may be more familiar to you after chapter 10). Knowing the words that came before makes the current word far easier to tag, and that is exactly the kind of memory a recurrent neural network provides.
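To make the idea concrete before the chapter builds it properly, here is a minimal sketch in plain NumPy (not the chapter's TensorFlow model; the weight names and sizes are illustrative assumptions). A recurrent cell folds each word into a hidden state, so the network's view of the current word is informed by every word before it.

import numpy as np

# Illustrative sketch of a simple recurrent cell; all names and sizes are assumptions.
input_dim, hidden_dim = 4, 3
rng = np.random.default_rng(0)

W_x = rng.normal(size=(hidden_dim, input_dim))   # input-to-hidden weights
W_h = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights
b = np.zeros(hidden_dim)                         # bias

def rnn_step(h_prev, x_t):
    # One recurrence step: combine the previous state with the current input.
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

# Each "word" is a toy feature vector; the hidden state accumulates context.
sentence = [rng.normal(size=input_dim) for _ in range(5)]
h = np.zeros(hidden_dim)
for x_t in sentence:
    h = rnn_step(h, x_t)
    # h now summarizes everything seen so far, which is what lets a tagger
    # use earlier words as context for the current one.

Section 16.2 revisits this recurrence with a full implementation.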

16.1 Introduction to RNNs

16.2 Implementing a recurrent neural network

16.3 Using a predictive model for time-series data

16.4 Applying RNNs

Summary