## Thursday, January 31, 2008

### Solving algorithmic problems

There are some general techniques that I've found quite useful for coming up with a solution to an algorithmic problem.

Understand the problem
1. Make sure the problem is well defined and understood. Try rephrasing the problem in different ways to double-check that your understanding is correct.
2. Understand the constraints and optimization goals of the expected solution. A solution has many conflicting dimensions (e.g. performance, security, scalability, etc.), so knowing what can be traded off is important.
Design the solution
1. While you are thinking about the solution, try to make it visual, e.g. sketch the solution on a whiteboard to help you think.
2. See if you can solve the problem with a simple-minded, brute-force search: try all the possible candidate solutions to determine which one works. But first get a sense of the Big-O complexity to judge whether the brute-force approach is feasible.
3. See if you can translate the problem into a search problem. Then you can utilize an existing search algorithm (e.g. hill climbing, A*, BFS, DFS, etc.) to help.
4. By exploiting the nature of the problem, see if you can define your solution recursively. Assume you already have a solution to a smaller instance of the problem; can you use it to construct the current solution? e.g. can you define F(n) in terms of F(n-1), F(n-2), etc.? Also try to see if you can translate the recursive solution into an iterative one.
5. At this point, you may have found multiple solutions. Do a Big-O analysis in both time and space and pick the one that aligns with your optimization goal.
Verify the solution
1. Walk through some simple examples to test your solution, and include some boundary cases to see how it handles those corner cases.
2. Prototype your solution. Define some measurements and instrument your prototype. Run it and compare the result with what you expect.
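To make the brute-force and Big-O steps concrete, here is a small example of my own (the problem and function names are illustrative, not from the text above): searching a list for a pair of numbers that sum to a target, first by trying all combinations, then refined by trading space for time.

```python
def has_pair_brute_force(nums, target):
    """Brute force: try all pairs. O(n^2) time, O(1) space."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_hashed(nums, target):
    """Refined: remember values seen so far. O(n) time, O(n) space."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```

Both give the same answer; the Big-O analysis (step 5) is what tells you which one to pick for large inputs, or whether the brute-force version is already good enough.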

## Friday, January 25, 2008

### Recursion vs Iteration

Recursion is a very commonly used technique for expressing mathematical relationships as well as implementing computer algorithms. Defining a function recursively usually results in a very concise form and eases understanding. However, a recursive function is not performance friendly in terms of stack growth and processing time.

Here is a general form of a recursive function ...

```
def f(n)
  if (n <= j)
    return known_value[n]
  else
    return g(f(n-1), f(n-2), ... f(n-j))
```

When this function is invoked, the stack depth grows linearly, O(n), while the processing time grows exponentially, O(j**n).

However, the above function can be rewritten in an iterative form. The idea is to reverse the order of execution, moving from the known points toward the unknown values.

```
def f(n)
  if (n <= j)
    return known_value[n]
  for i in 0..j
    cache[i] = known_value[i]
  for i in (j+1)..n
    cache[i] = g(cache[i-1], cache[i-2], ... cache[i-j])
  return cache[n]
```

In this iterative form, the stack does not grow and the processing time grows linearly, O(n). Therefore, performance improves significantly when the recursive form is rewritten in the iterative form.
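As a concrete illustration of the two general forms above, here is my own Python example with j = 2, g being addition, and the Fibonacci sequence as f:

```python
def fib_recursive(n):
    """Direct recursive form: exponential time, O(n) stack depth."""
    if n <= 1:
        return n  # known values: fib(0) = 0, fib(1) = 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """Iterative form: build up from the known values. O(n) time, no stack growth."""
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

The iterative version keeps only the last j = 2 cached values instead of the whole cache array, which reduces the space from O(n) to O(1) as well.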

## Thursday, January 24, 2008

### A* Search

Given a start state S and a goal state G, find an optimal path S -> A -> B -> ... -> G

We can assign a cost to each arc, so the optimal path is the one with the lowest total cost. This problem can be modeled as a weighted graph, where nodes represent states and cost-associated arcs represent transitions from one state to another.
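As a quick sketch of that representation (the states and arc costs below are my own illustrative choices), such a weighted graph can be held in an adjacency dictionary mapping each state to its outgoing transitions:

```python
# Each state maps to {neighbor: arc_cost}.
graph = {
    "S": {"A": 1, "B": 4},
    "A": {"B": 2, "G": 6},
    "B": {"G": 3},
    "G": {},
}

def path_cost(graph, path):
    """Sum the arc costs along a path such as ["S", "A", "B", "G"]."""
    return sum(graph[u][v] for u, v in zip(path, path[1:]))
```

In this little graph, path_cost(graph, ["S", "A", "B", "G"]) is 6, which beats the 7 of either S -> A -> G or S -> B -> G, so S -> A -> B -> G is the optimal path.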

A* search is one of the most popular search algorithms, and it has the following characteristics:
1. If a path exists between S and G, it will always find one
2. The path it finds is always optimal (least cost)
Here is a description of the algorithm.
• Define a function: f(X) = g(X) + h(X) where ...
• g(X) = the cost of the lowest-cost known path from S to X
• h(X) = a heuristic estimate of the cost from X to G. It is important that the estimated cost h(X) never exceeds the actual cost (i.e. the heuristic is admissible)
1. The algorithm keeps track of a list of partially explored paths, each prioritized by its f(X), in a priority queue, as well as a list of explored nodes. Initially the priority queue contains just the one-node path [S], and the explored node list is empty.
2. Repeat the following ...
3. Extract the lowest-cost path (based on f(X)) from the priority queue to explore
4. Set current_node to be the last node of that lowest-cost path
5. If current_node has been explored already, skip it and go back to step 3
6. If current_node is the goal, then we are done: this is the optimal path
7. Otherwise, mark current_node as explored and find all its neighbours. For each of them, insert into the priority queue the path extended to that neighbour.

```
def A_Search(S, G)
  visited_nodes = []
  priorityQ.add([] << S)
  while priorityQ.non_empty
    lowest_cost_path = priorityQ.extract_min  # lowest f(X)
    node_to_explore = lowest_cost_path.last_node
    continue if visited_nodes contains node_to_explore
    if node_to_explore == G
      return lowest_cost_path
    else
      add node_to_explore to visited_nodes
      for each v in node_to_explore.neighbors
        new_path = lowest_cost_path << v
        insert new_path to priorityQ
      end
    end
  end
  return nil  # No solution
end
```
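Here is a runnable Python version of the same idea; the graph, arc costs, and heuristic values are my own illustrative assumptions (with h admissible, i.e. never overestimating the true cost to G):

```python
import heapq

# Weighted graph: each state maps to {neighbor: arc_cost}.
graph = {
    "S": {"A": 1, "B": 4},
    "A": {"B": 2, "G": 6},
    "B": {"G": 3},
    "G": {},
}
# Admissible heuristic estimates of the remaining cost to G.
h = {"S": 5, "A": 4, "B": 2, "G": 0}

def a_star(start, goal):
    visited = set()
    # Each queue entry is (f, g, path); heapq pops the lowest f first.
    queue = [(h[start], 0, [start])]
    while queue:
        f, g, path = heapq.heappop(queue)
        node = path[-1]
        if node in visited:
            continue
        if node == goal:
            return path, g
        visited.add(node)
        for neighbor, cost in graph[node].items():
            new_g = g + cost
            heapq.heappush(queue, (new_g + h[neighbor], new_g, path + [neighbor]))
    return None  # no path exists

print(a_star("S", "G"))  # → (['S', 'A', 'B', 'G'], 6)
```

Note that it correctly rejects the tempting direct arc A -> G (total cost 7) in favour of the cheaper detour through B (total cost 6), because the queue is ordered by f = g + h rather than by g alone.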

Proof: A* Search can always find a path if one exists

This should be obvious because the while loop only terminates when the goal is reached or all connected nodes have been explored.

Proof: A* Search will find the path with the least cost

Let's say the algorithm finds a path S -> A -> K -> ... -> G which is not the optimal path
S -> A -> B -> ... -> G

That means, in the last extraction from the priorityQ, f(G) was smaller than f(B); otherwise the partial path ending at B would have been extracted first.

This is not possible because ...
f(G) == cost(S->A->K...G) > cost(S->A->B...G) >= g(B) + h(B) == f(B)

(The second inequality holds because h(B) never exceeds the actual cost from B to G.)