Tuesday, January 23, 2007

Greedy Algorithms

A greedy algorithm is very similar to a dynamic programming algorithm. Both require the problem to have optimal substructure. However, whereas DP also needs overlapping subproblems, a greedy algorithm needs the opposite -- its subproblems must not be interdependent. Furthermore, the subproblems must be structured so that making a locally optimal choice at each step always leads to a globally optimal solution. This is known as the greedy-choice property. To know that a greedy algorithm will correctly solve a problem, you generally need to prove that the problem has the greedy-choice property. According to CLR, that proof can sometimes require "some cleverness."

The textbook examples are:
  • Activity Selection (scheduling)
  • Fractional Knapsack Problem (but not 0-1 Knapsack Problem)
  • Huffman Codes
  • Dijkstra's Algorithm
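
The greedy choice for activity selection -- always take the compatible activity that finishes earliest -- can be sketched as follows (class and method names here are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Activity selection: pick the maximum number of mutually compatible activities.
// Greedy choice: among the remaining activities, take the one that finishes
// earliest. The greedy-choice property guarantees this local choice is optimal.
class ActivitySelection {
    // start[i] and finish[i] describe activity i; returns the chosen indices.
    static List<Integer> select(int[] start, int[] finish) {
        Integer[] byFinish = new Integer[start.length];
        for (int i = 0; i < byFinish.length; i++) byFinish[i] = i;
        Arrays.sort(byFinish, Comparator.comparingInt(i -> finish[i]));

        List<Integer> chosen = new ArrayList<>();
        int lastFinish = Integer.MIN_VALUE;
        for (int i : byFinish) {
            if (start[i] >= lastFinish) {  // compatible with everything chosen so far
                chosen.add(i);
                lastFinish = finish[i];
            }
        }
        return chosen;
    }
}
```

Note that the subproblem left after each choice (the activities starting after lastFinish) is solved without ever revisiting the choice already made.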

Dynamic Programming Algorithms

Notes on dynamic programming, from CLR Algorithms Ch 16.

Key points:
  • Solves problems by combining solutions to subproblems.
  • Requires that the problem possess optimal substructure.
  • Differs from divide-and-conquer in that subproblems are overlapping.
  • Can use table lookup (memoization) to avoid solving a subproblem more than once.
    • Memoization allows for a top-down strategy, instead of DP's usual bottom-up approach.
  • DP is typically used for optimization problems, where a minimum or maximum value is sought.
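
As a minimal sketch of the memoized (top-down) strategy, consider Fibonacci numbers, where the naive recursion solves the same subproblems exponentially many times:

```java
import java.util.HashMap;
import java.util.Map;

// Top-down DP: the recursion is unchanged, but a table lookup (memoization)
// ensures each subproblem is solved at most once.
class MemoFib {
    private final Map<Integer, Long> memo = new HashMap<>();

    long fib(int n) {
        if (n <= 1) return n;
        Long cached = memo.get(n);           // reuse an already-solved subproblem
        if (cached != null) return cached;
        long value = fib(n - 1) + fib(n - 2);
        memo.put(n, value);
        return value;
    }
}
```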
Four steps to developing a dynamic programming algorithm:
  1. Describe the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution in bottom-up fashion.
  4. Construct an optimal solution from the computed information.
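
The four steps can be sketched for the longest-common-subsequence problem (one of the textbook examples). Step 2's recurrence is c[i][j] = c[i-1][j-1] + 1 when the characters match, and max(c[i-1][j], c[i][j-1]) otherwise:

```java
// Bottom-up DP for longest common subsequence. Steps 1-2 give the recurrence,
// step 3 fills the table c[][] in bottom-up order, and step 4 walks the table
// backwards to reconstruct one optimal solution.
class Lcs {
    static String lcs(String a, String b) {
        int m = a.length(), n = b.length();
        int[][] c = new int[m + 1][n + 1];   // c[i][j] = LCS length of the prefixes
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                if (a.charAt(i - 1) == b.charAt(j - 1)) c[i][j] = c[i - 1][j - 1] + 1;
                else c[i][j] = Math.max(c[i - 1][j], c[i][j - 1]);
            }
        }
        StringBuilder out = new StringBuilder();   // step 4: backtrack
        int i = m, j = n;
        while (i > 0 && j > 0) {
            if (a.charAt(i - 1) == b.charAt(j - 1)) { out.append(a.charAt(i - 1)); i--; j--; }
            else if (c[i - 1][j] >= c[i][j - 1]) i--;
            else j--;
        }
        return out.reverse().toString();
    }
}
```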
Textbook examples where dynamic programming solutions apply:
  • Matrix-chain multiplication
  • Longest common subsequence of two sequences
  • Optimal triangulation of a convex polygon
CLR says: "In general practice, if all subproblems must be solved at least once, a bottom-up DP algorithm usually outperforms a top-down memoized algorithm by a constant factor, because there is no overhead for recursion and less overhead for maintaining the table. Moreover, there are some problems for which the regular pattern of table accesses in the DP algorithm can be exploited to reduce time or space requirements even further. Alternatively if some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of only solving those subproblems that are definitely required."

Monday, January 22, 2007

class Enum<K extends Enum<K>>

In Java, the java.lang.Enum class is declared as:

public abstract class Enum<E extends Enum<E>>

Why is the type parameter declared as <E extends Enum<E>>? What would happen if it were declared as <E extends Enum> instead?
Solution

This pattern is used when a generic base class needs to reference the type of the subclass. For example, suppose we had:

public enum CardSuit { HEARTS, CLUBS, DIAMONDS, SPADES }

This gets compiled as something like:

public final class CardSuit extends Enum<CardSuit> { ... }

which means that every occurrence of the type parameter E in Enum's declaration becomes CardSuit. Through the type parameter, then, the superclass (Enum) can refer to the subclass's type in its method signatures and return types.

If just <E extends Enum> were used instead, that would still be the case, but some information would be lost. The type parameter E is mentioned in three places in Enum:
  1. implements Comparable<E>
  2. public int compareTo(E o)
  3. public Class<E> getDeclaringClass()
If the bound were just E extends Enum, code that is generic over E would see E only as a raw Enum, which implements raw Comparable rather than Comparable<E>. For example, a method declared <T extends Enum<T>> knows that every T is Comparable<T>, so it can pass a Collection<T> to Collections.min; with the raw bound, that call would be unchecked. So E extends Enum<E> carries extra information -- that each enum type is comparable to its own type -- and that is what makes the idiom more typesafe.
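
A small sketch of what the recursive bound buys, using CardSuit from above plus a second, purely illustrative enum for contrast:

```java
// CardSuit compiles to a subclass of Enum<CardSuit>, so its compareTo and
// getDeclaringClass are expressed in terms of CardSuit, not raw Enum.
enum CardSuit { HEARTS, CLUBS, DIAMONDS, SPADES }
enum Direction { NORTH, SOUTH }   // hypothetical second enum, for contrast

class EnumBoundDemo {
    static int compareSuits() {
        // compareTo(E) is compareTo(CardSuit); comparing against another enum
        // is a compile-time error, e.g.
        //     CardSuit.HEARTS.compareTo(Direction.NORTH);   // does not compile
        return CardSuit.HEARTS.compareTo(CardSuit.SPADES);   // ordinal 0 vs ordinal 3
    }

    static Class<CardSuit> declaringClass() {
        return CardSuit.HEARTS.getDeclaringClass();          // Class<E> is Class<CardSuit>
    }
}
```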