A brief note on Dynamic Programming

Those who cannot remember the past are condemned to repeat it.

 – Dynamic Programming

Dynamic programming is all about remembering answers to problems you have already solved. In dynamic programming, we break a problem into a series of overlapping subproblems and build up solutions to larger and larger subproblems. If a problem can be broken down into smaller subproblems, those smaller subproblems can in turn be broken into smaller ones, and some of these subproblems overlap, then you have encountered a DP problem.

Some famous Dynamic Programming algorithms are:

  • Unix diff for comparing two files.
  • Bellman-Ford for shortest-path routing in networks.
  • TeX, the ancestor of LaTeX.
  • WASP, the Winning and Score Predictor (used in cricket).


Jonathan Paulson explains Dynamic Programming as follows:

Write down ‘1+1+1+1+1+1+1+1=’ on a sheet of paper.

“What’s that equal to?”

Counting: “Eight.”

Write down another “1+” on the left.

“What about that?”

“Nine!”

“How did you know it was nine so fast?”

“You just added one more!”

“So you didn’t need to recount because you remembered there were eight! Dynamic Programming is just a fancy way to say remembering stuff to save time later.”


Dynamic Programming and Recursion

The core idea of Dynamic Programming is to avoid repeated work by remembering partial results, which means dynamic programming is basically recursion plus common sense. Recursion allows you to express the value of a function in terms of other values of that function. Common sense tells you that if you implement your function in a way that the recursive calls are done in advance and stored for easy access, your program will be faster. This is called Memoization: memorizing the results of specific states, which can then be accessed later to solve other subproblems.


The intuition behind dynamic programming is that we trade space for time: instead of recomputing the same states over and over, which costs a lot of time but no extra space, we use extra space to store the results of all the subproblems and save time later.


Consider the following example of Fibonacci numbers.

Fibonacci(n) = 1, if n = 0
Fibonacci(n) = 1, if n = 1
Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2), otherwise

Code for it using pure recursion:

int fib(int n)
{
    if (n < 2)
        return 1;
    /* recomputes the same subproblems exponentially many times */
    return fib(n - 1) + fib(n - 2);
}

And using dynamic programming, storing every result in a table (this version fills the table bottom-up):

#define MAXN 100
int fibresult[MAXN]; /* table of results; assumes n < MAXN */

int fib(int n)
{
    fibresult[0] = 1;
    fibresult[1] = 1;
    for (int i = 2; i <= n; i++)
        fibresult[i] = fibresult[i - 1] + fibresult[i - 2];
    return fibresult[n];
}
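The version above is the bottom-up (tabulation) form: it fills the whole table in order before reading off the answer. The top-down form described earlier, Memoization, instead caches the results of the recursive calls themselves. A minimal sketch, again assuming n < MAXN and treating 0 in the table as "not computed yet":

int fibresult[MAXN]; /* globals start at 0, which here means "not computed yet" */

int fib(int n)
{
    if (n < 2)
        return 1;                /* base cases */
    if (fibresult[n] != 0)
        return fibresult[n];     /* answer already remembered: reuse it */
    return fibresult[n] = fib(n - 1) + fib(n - 2);
}

Each subproblem is now computed at most once, so the exponential recursion becomes linear.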

The majority of Dynamic Programming problems can be categorized into two types:

  1. Optimization Problems
  2. Combinatorial Problems

Optimization problems expect you to select a feasible solution so that the value of the required function is minimized or maximized. Combinatorial problems expect you to figure out the number of ways to do something, or the probability of some event happening. The sketch below contrasts the two.
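A minimal sketch using making change as the example (the coin denominations are hypothetical, chosen only for illustration): minimizing the number of coins that form an amount is an optimization problem, while counting the ways to form the amount is a combinatorial one.

#include <stdio.h>
#include <limits.h>

#define NCOINS 3
const int coins[NCOINS] = {1, 3, 5}; /* hypothetical denominations */

/* Optimization: the fewest coins that sum to amount (minimize a value). */
int minCoins(int amount)
{
    int dp[amount + 1];
    dp[0] = 0;                                  /* zero coins form amount 0 */
    for (int a = 1; a <= amount; a++) {
        dp[a] = INT_MAX;                        /* "not formable yet" */
        for (int c = 0; c < NCOINS; c++)
            if (coins[c] <= a && dp[a - coins[c]] != INT_MAX &&
                dp[a - coins[c]] + 1 < dp[a])
                dp[a] = dp[a - coins[c]] + 1;   /* use coin c on a smaller amount */
    }
    return dp[amount];
}

/* Combinatorial: how many coin combinations form amount (count something). */
int countWays(int amount)
{
    int dp[amount + 1];
    dp[0] = 1;                                  /* one way to form 0: use no coins */
    for (int a = 1; a <= amount; a++)
        dp[a] = 0;
    for (int c = 0; c < NCOINS; c++)
        for (int a = coins[c]; a <= amount; a++)
            dp[a] += dp[a - coins[c]];          /* extend smaller amounts with coin c */
    return dp[amount];
}

int main(void)
{
    printf("%d %d\n", minCoins(11), countWays(11)); /* 3 coins (5+5+1), and the number of ways */
    return 0;
}

Both functions fill the same kind of one-dimensional table indexed by amount; only the quantity stored in each cell differs.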


Every dynamic programming problem has a schema to be followed:

  • Show that the problem can be broken down into optimal subproblems.
  • Recursively define the value of an optimal solution in terms of the optimal solutions of smaller subproblems.
  • Compute the value of the optimal solution in bottom-up fashion.
  • Construct an optimal solution from the computed information.


Thus, one can think of dynamic programming as a table-filling algorithm: you know the calculations you have to do, so pick the best order to do them in and skip the entries you never need to fill in.
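For instance, the longest common subsequence computation at the heart of tools like diff fills a two-dimensional table, one cell per pair of prefixes. A minimal sketch (the example strings are arbitrary):

#include <stdio.h>
#include <string.h>

/* Length of the longest common subsequence of a and b. */
int lcs(const char *a, const char *b)
{
    int m = strlen(a), n = strlen(b);
    int dp[m + 1][n + 1]; /* dp[i][j] = LCS length of a[0..i) and b[0..j) */
    for (int i = 0; i <= m; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                dp[i][j] = 0;                    /* empty prefix: nothing in common */
            else if (a[i - 1] == b[j - 1])
                dp[i][j] = dp[i - 1][j - 1] + 1; /* extend a common subsequence */
            else
                dp[i][j] = dp[i - 1][j] > dp[i][j - 1]
                             ? dp[i - 1][j]
                             : dp[i][j - 1];     /* drop a character from one side */
        }
    return dp[m][n];
}

int main(void)
{
    printf("%d\n", lcs("dynamic", "programming")); /* prints 3, e.g. "ami" */
    return 0;
}

Each cell depends only on its left, upper, and upper-left neighbours, so filling the table row by row is one valid order.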
