5 Baby steps with neural networks (perceptrons and backpropagation)


This chapter covers

  • Learning the history of neural networks
  • Stacking perceptrons
  • Understanding backpropagation
  • Seeing which knobs to turn to tune neural networks
  • Implementing a basic neural network in Keras

In recent years, a lot of hype has developed around the promise of neural networks and their ability to classify and identify input data, and more recently around the ability of certain network architectures to generate original content. Companies large and small are using them for everything from image captioning and self-driving car navigation to identifying solar panels in satellite images and recognizing faces in security camera video. And luckily for us, many NLP applications of neural nets exist as well. While deep neural networks have inspired plenty of hype and hyperbole, our robot overlords are probably further off than the clickbait headlines suggest. Neural networks are, however, quite powerful tools, and you can easily use them in an NLP chatbot pipeline to classify input text, summarize documents, and even generate novel works.
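Before we dig into the history, here's a quick taste of where the chapter is headed. The sketch below is an illustrative example, not one of this chapter's listings; the layer sizes, learning rate, and epoch count are arbitrary choices. It stacks two layers of perceptron-like units in Keras and trains them with backpropagation to learn XOR, the classic function that a single perceptron cannot represent:

    import numpy as np
    from tensorflow import keras  # assumes TensorFlow 2.x with its bundled Keras

    x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all four XOR inputs
    y_train = np.array([[0], [1], [1], [0]])              # XOR truth table

    model = keras.Sequential([
        keras.Input(shape=(2,)),
        keras.layers.Dense(10, activation='tanh'),    # hidden layer: the "stacked" perceptrons
        keras.layers.Dense(1, activation='sigmoid'),  # single output neuron
    ])
    model.compile(loss='binary_crossentropy',
                  optimizer=keras.optimizers.SGD(learning_rate=0.1),
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=1000, verbose=0)  # weights adjusted by backpropagation
    print(model.predict(x_train).round())  # [[0.] [1.] [1.] [0.]] for most random initializations

Without the hidden layer, this model would be a lone perceptron, and no amount of training could make it fit XOR; the extra layer is what lets the network carve out classes that aren't linearly separable. Sections 5.1.1 through 5.1.7 build up each piece of this example from scratch.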

5.1 Neural networks, the ingredient list

5.1.1 Perceptron

5.1.2 A numerical perceptron

5.1.3 Detour through bias

5.1.4 Let’s go skiing—the error surface

5.1.5 Off the chair lift, onto the slope

5.1.6 Let’s shake things up a bit

5.1.7 Keras: Neural networks in Python

5.1.8 Onward and deepward

5.1.9 Normalization: input with style

Summary