Tuesday, January 23, 2007

Dynamic Programming Algorithms

Notes on dynamic programming, from CLR Algorithms Ch 16.

Key points:
  • Solves problems by combining solutions to subproblems.
  • Requires that the problem possess optimal substructure: an optimal solution contains within it optimal solutions to its subproblems.
  • Differs from divide-and-conquer in that the subproblems overlap rather than being independent.
  • Can use table lookup (memoization) to avoid solving a subproblem more than once.
    • Memoization allows for a top-down strategy, instead of DP's usual bottom-up approach (see the sketch after this list).
  • DP is typically used for optimization problems, where a minimum or maximum value is sought.
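To make the overlapping-subproblems and memoization points concrete, here is a minimal Python sketch (my own toy example, not one of CLR's) that computes Fibonacci numbers two ways: top-down with memoization, and bottom-up with an explicit table.

  from functools import lru_cache

  # Top-down with memoization: recurse exactly as the naive definition does,
  # but cache each subproblem so it is solved at most once.
  @lru_cache(maxsize=None)
  def fib_memo(n):
      if n < 2:
          return n
      return fib_memo(n - 1) + fib_memo(n - 2)

  # Bottom-up DP: fill a table in order of increasing subproblem size,
  # so each subproblem's answer is ready before it is needed.
  def fib_bottom_up(n):
      table = [0, 1] + [0] * max(0, n - 1)
      for i in range(2, n + 1):
          table[i] = table[i - 1] + table[i - 2]
      return table[n]

  print(fib_memo(30), fib_bottom_up(30))  # both print 832040

Without the cache, the top-down version would recompute the same subproblems exponentially many times; with it, both versions solve each subproblem once.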
Four steps to developing a dynamic programming algorithm:
  1. Describe the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution in bottom-up fashion.
  4. Construct an optimal solution from the computed information.
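To make the four steps concrete, here is a rough sketch (my own code, not the book's pseudocode) for the longest-common-subsequence problem from the list of textbook examples below: steps 1 and 2 appear as comments, step 3 fills the table bottom-up, and step 4 reconstructs an actual subsequence from the computed table.

  def lcs(x, y):
      # Step 1 (structure): an LCS of x[:i] and y[:j] either extends an LCS of
      # the shorter prefixes (when x[i-1] == y[j-1]) or is an LCS obtained by
      # dropping the last symbol of one sequence -- optimal substructure.
      # Step 2 (recurrence): c[i][j] = c[i-1][j-1] + 1            if x[i-1] == y[j-1]
      #                              = max(c[i-1][j], c[i][j-1])  otherwise
      m, n = len(x), len(y)
      c = [[0] * (n + 1) for _ in range(m + 1)]

      # Step 3: compute the table bottom-up, smaller prefixes first.
      for i in range(1, m + 1):
          for j in range(1, n + 1):
              if x[i - 1] == y[j - 1]:
                  c[i][j] = c[i - 1][j - 1] + 1
              else:
                  c[i][j] = max(c[i - 1][j], c[i][j - 1])

      # Step 4: walk back through the table to construct an actual LCS.
      out, i, j = [], m, n
      while i > 0 and j > 0:
          if x[i - 1] == y[j - 1]:
              out.append(x[i - 1])
              i, j = i - 1, j - 1
          elif c[i - 1][j] >= c[i][j - 1]:
              i -= 1
          else:
              j -= 1
      return ''.join(reversed(out))

  print(lcs("ABCBDAB", "BDCABA"))  # prints "BCBA", one LCS of length 4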
Textbook examples where dynamic programming solutions apply:
  • Matrix-chain multiplication
  • Longest common subsequence of two sequences
  • Optimal triangulation of a convex polygon
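And a similar sketch for the first example, matrix-chain multiplication, where the table entry m[i][j] holds the minimum number of scalar multiplications needed to compute the product of matrices i through j (again my own code, not CLR's pseudocode; the test dimensions are, as far as I recall, the ones from the book's worked example).

  def matrix_chain_order(p):
      # p lists the dimensions: matrix i is p[i-1] x p[i] for i = 1..n,
      # so a chain of n matrices is described by n + 1 numbers.
      n = len(p) - 1
      # m[i][j] = minimum scalar multiplications to compute A_i ... A_j.
      m = [[0] * (n + 1) for _ in range(n + 1)]
      # Solve chains of increasing length; every split point k refers
      # only to shorter chains, which are already in the table.
      for length in range(2, n + 1):
          for i in range(1, n - length + 2):
              j = i + length - 1
              m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                            for k in range(i, j))
      return m[1][n]

  # Chain of six matrices: 30x35, 35x15, 15x5, 5x10, 10x20, 20x25.
  print(matrix_chain_order([30, 35, 15, 5, 10, 20, 25]))  # prints 15125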
CLR says: "In general practice, if all subproblems must be solved at least once, a bottom-up DP algorithm usually outperforms a top-down memoized algorithm by a constant factor, because there is no overhead for recursion and less overhead for maintaining the table. Moreover, there are some problems for which the regular pattern of table accesses in the DP algorithm can be exploited to reduce time or space requirements even further. Alternatively, if some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of only solving those subproblems that are definitely required."
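The remark about exploiting "the regular pattern of table accesses" is easy to see in the LCS sketch above: row i of the table is computed only from row i - 1, so the length of an LCS needs just two rows of space rather than the full table (again my own sketch, not from the book; reconstructing the subsequence itself then takes extra work, since the full table is no longer available for the traceback).

  def lcs_length_two_rows(x, y):
      # Row i of the LCS table depends only on row i - 1, so two rows
      # of length len(y) + 1 suffice to compute the LCS length.
      prev = [0] * (len(y) + 1)
      for i in range(1, len(x) + 1):
          curr = [0] * (len(y) + 1)
          for j in range(1, len(y) + 1):
              if x[i - 1] == y[j - 1]:
                  curr[j] = prev[j - 1] + 1
              else:
                  curr[j] = max(prev[j], curr[j - 1])
          prev = curr
      return prev[-1]

  print(lcs_length_two_rows("ABCBDAB", "BDCABA"))  # prints 4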