Chapter 12. Utility landscape

This chapter covers

  • Implementing a neural network for ranking
  • Image embedding using VGG16
  • Visualizing utility

A household vacuuming robot, like the Roomba, needs sensors to “see” the world. The ability to process sensory input enables robots to adjust their model of the world around them. In the case of the vacuuming robot, the furniture in the room may change from day to day, so the robot must be able to adapt to chaotic environments.

Let’s say you own a futuristic housemaid robot, which comes with a few basic skills but also with the ability to learn new skills from human demonstrations. For example, maybe you’d like to teach it how to fold clothes.

Teaching a robot how to accomplish a new task is a tricky problem. Some immediate questions come to mind:

  • Should the robot simply mimic a human’s sequence of actions? Such a process is referred to as imitation learning.
  • How do a robot’s arms and joints match up to human poses? This dilemma is often referred to as the correspondence problem.
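An alternative to mimicking actions directly, and the one this chapter builds toward, is to learn a utility function that scores states, so that preferred states (say, neatly folded clothes) rank higher than others. As a minimal sketch, assuming toy 4-dimensional state features and a linear utility, here is a pairwise ranking model trained so that the preferred state in each pair scores higher. The data, feature size, and learning rate are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each state is a 4-D feature vector.
# In every pair, the first state is the one a human prefers.
preferred = rng.normal(loc=1.0, size=(50, 4))
other = rng.normal(loc=-1.0, size=(50, 4))

# Linear utility u(x) = w . x, trained with a pairwise ranking
# loss: -log sigmoid(u(preferred) - u(other)).
w = np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    margin = preferred @ w - other @ w  # u(a) - u(b) for each pair
    # Gradient of the mean ranking loss with respect to w
    grad = -((1 - sigmoid(margin))[:, None] * (preferred - other)).mean(axis=0)
    w -= 0.5 * grad  # plain gradient-descent step

# After training, preferred states should score higher than the others.
accuracy = np.mean(preferred @ w > other @ w)
print(accuracy)
```

The same pairwise-loss idea carries over when the linear utility is replaced by a neural network and the raw states by image embeddings, which is the shape of the model developed in the sections that follow.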

12.1. Preference model

12.2. Image embedding

12.3. Ranking images

12.4. Summary

12.5. What’s next?