10 Basics of Deep Reinforcement Learning
This chapter covers:
- How reinforcement learning (RL) differs from the supervised learning covered in the previous chapters
- The basic paradigm of reinforcement learning: agent, environment, action, and reward, and how they interact
- The general ideas behind two major approaches to solving RL problems: policy-based and value-based methods
- A policy-based RL algorithm by example: using the policy-gradient (PG) method to solve the cart-pole problem
- A value-based RL algorithm by example: using a deep Q-network (DQN) to solve the snake game
Up to this point in the book, we have focused primarily on one type of machine learning: supervised learning. In supervised learning, we train a model to produce the correct answer for a given input. Whether it is assigning a class label to an input image (Chapter 4) or predicting future temperature from past weather data (Chapter 8), the paradigm is the same: mapping a static input to a static output. The sequence-generating models we visited in Chapters 8 and 9 were slightly more complicated in that their output is a sequence of items rather than a single item. But even those problems can be reduced to one-input-one-output mappings by breaking the sequences into steps.
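To make that reduction concrete, the sketch below shows the one-input-one-output pattern in both forms: a single prediction, and a sequence generated by repeatedly feeding each output back in as the next input. The Model class and its predict method are hypothetical stand-ins for illustration, not code from the earlier chapters.

```python
from typing import List


class Model:
    """Hypothetical trained supervised model: maps one input to one output."""

    def predict(self, x: float) -> float:
        # A real model would apply learned weights; here we simply double the input.
        return 2.0 * x


model = Model()

# Case 1: a single static input mapped to a single static output
# (e.g., an image to a class label, or past weather to a temperature).
y = model.predict(3.0)

# Case 2: sequence generation reduced to repeated one-input-one-output steps,
# where each step's output becomes the next step's input.
sequence: List[float] = []
x = 1.0
for _ in range(5):
    x = model.predict(x)
    sequence.append(x)

print(y)         # 6.0
print(sequence)  # [2.0, 4.0, 8.0, 16.0, 32.0]
```

In both cases the model itself only ever answers the same kind of question: given this input, what is the output? As we will see shortly, reinforcement learning breaks this mold, because the model's outputs (actions) change the environment and hence influence its own future inputs.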