Chapter 9. Dynamic programming
- You learn dynamic programming, a technique to solve a hard problem by breaking it up into subproblems and solving those subproblems first.
- Using examples, you learn how to come up with a dynamic programming solution to a new problem.
Let’s revisit the knapsack problem from chapter 8. You’re a thief with a knapsack that can carry 4 lb of goods.
What items should you steal to get the maximum money's worth of goods?
The simplest algorithm is this: you try every possible set of goods and find the set that gives you the most value.
This works, but it’s really slow. For 3 items, you have to calculate 8 possible sets. For 4 items, you have to calculate 16 sets. With every item you add, the number of sets you have to calculate doubles! This algorithm takes O(2^n) time, which is very, very slow.
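To make that blow-up concrete, here's a minimal brute-force sketch in Python. The helper name `brute_force_knapsack` and the item weights and prices are just illustrative, not code from this book:

```python
from itertools import combinations

def brute_force_knapsack(items, capacity):
    """Try every possible subset of items and keep the most valuable one
    that fits. With n items there are 2^n subsets, so this is O(2^n)."""
    best_value = 0
    best_names = []
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(item[1] for item in subset)
            value = sum(item[2] for item in subset)
            if weight <= capacity and value > best_value:
                best_value = value
                best_names = [item[0] for item in subset]
    return best_names, best_value

# Illustrative items: (name, weight in lb, price in dollars)
items = [("guitar", 1, 1500), ("stereo", 4, 3000), ("laptop", 3, 2000)]
print(brute_force_knapsack(items, 4))  # (['guitar', 'laptop'], 3500)
```

Every item you add doubles the number of subsets the inner loop has to check, which is exactly the O(2^n) growth described above.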
That’s impractical for any reasonable number of goods. In chapter 8, you saw how to calculate an approximate solution. That solution will be close to optimal, but it may not be the optimal one.
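Here's a rough sketch of that approximate approach, assuming a greedy strategy like the one in chapter 8 (repeatedly grab the most valuable item that still fits) and the same illustrative items. It fills the 4 lb knapsack with the $3,000 stereo and never finds the better guitar-plus-laptop combination:

```python
def greedy_knapsack(items, capacity):
    """Greedy sketch: repeatedly grab the most valuable item that still
    fits. Fast, but the answer is only approximately optimal."""
    remaining = capacity
    chosen, total = [], 0
    for name, weight, value in sorted(items, key=lambda item: item[2], reverse=True):
        if weight <= remaining:
            chosen.append(name)
            remaining -= weight
            total += value
    return chosen, total

items = [("guitar", 1, 1500), ("stereo", 4, 3000), ("laptop", 3, 2000)]
print(greedy_knapsack(items, 4))  # (['stereo'], 3000), but the best haul is worth $3,500
```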
So how do you calculate the optimal solution?
Answer: With dynamic programming! Let’s see how the dynamic-programming algorithm works here. Dynamic programming starts by solving subproblems and builds up to solving the big problem.
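As a preview of where this chapter is headed, here's a minimal sketch using the same illustrative items. It builds a grid in which `cell[i][w]` holds the best value you can get from the first i items with a knapsack that holds w pounds; every cell is filled in from smaller, already-solved subproblems:

```python
def dp_knapsack(items, capacity):
    """Dynamic-programming sketch: cell[i][w] is the best value you can get
    from the first i items with a knapsack that holds w pounds. Each cell is
    computed from previously solved subproblems in the row above."""
    cell = [[0] * (capacity + 1) for _ in range(len(items) + 1)]
    for i, (name, weight, value) in enumerate(items, start=1):
        for w in range(1, capacity + 1):
            without_item = cell[i - 1][w]        # best value without this item
            with_item = 0
            if weight <= w:                      # item fits: its value plus the
                with_item = value + cell[i - 1][w - weight]  # best use of the leftover space
            cell[i][w] = max(without_item, with_item)
    return cell[len(items)][capacity]

items = [("guitar", 1, 1500), ("stereo", 4, 3000), ("laptop", 3, 2000)]
print(dp_knapsack(items, 4))  # 3500
```

Filling the grid takes O(n × capacity) work instead of O(2^n), and that reuse of subproblem answers is the whole point of dynamic programming.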