## Dynamic programming

### Published on 2018-04-29

When looking at various problems, it often happens that a solution depends on values obtained from smaller versions of the same problem. For example, suppose we have 10 books on a table and we know that the term "hero" is mentioned 1000 times in total (counting across all of them). If someone brings a new (11th) book and asks how many times the term "hero" appears in all 11 books on the table, there is no need to examine all 11 books again. Instead, it's enough to count how many times "hero" is mentioned in the 11th book, and then add that count to 1000 - the number of times "hero" appears in the first 10 books.

In this post we will talk about dynamic programming, a technique for solving complex problems by breaking them down, solving the smaller problems, and then combining the results to get the solution of the original problem. When using this technique, it's important to store the values obtained from the smaller problems (using arrays, maps, etc.), so that we don't calculate them several times - just as in the books example above.

As a simple illustration of dynamic programming, we can start with the following comment, originally published on Quora (a very nice website). Let's write 1+1+1+1+1+1+1+1 on a piece of paper, and then ask someone (for example, a child) to calculate the sum. After a while, we will get the correct answer "Eight". Now, let's write "1+" to the left of the expression we already had, and again ask for the result. Clearly, we will immediately get the answer "Nine", because there is no need to re-calculate the stuff we already know. In other words, dynamic programming is just a term that indicates that we will remember certain values in order to save time in the future, instead of calculating them multiple times.

## Smallest number in an array

Assume we have an array of several numbers. Can we write a program that efficiently answers the question "What is the smallest among the first X numbers in the array?" In other words, if we have an array of 10 integers [15, 14, 12, 13, 9, 7, 13, 13, 14, 100], the smallest of the first four numbers is 12, the smallest of the first nine numbers is 7, etc. By solving this simple problem, we will get to know the basic concepts of dynamic programming, and the most common mistakes made when using this technique.

Having in mind the two examples mentioned above (the one with the books, and the one where we added numbers), you probably noticed that if we know the smallest number up to a certain position (for example, the smallest number of the first five listed in the array), then we can easily find the smallest number up to the next position (the smallest number of the first six), as that will either be the sixth number in the array, or the smallest value we already calculated for the first five numbers. The solution to the problem itself is simple - i.e., we only need to repeat the procedure outlined above for all positions. We can use an array to store the partial results. Let's see the solution.

```cpp
#include <iostream>
using namespace std;

int main() {
    const int N = 10;
    int numbers[] = {15, 14, 12, 13, 9, 7, 13, 13, 14, 100};

    //dynamic programming
    int smallest[N];
    smallest[0] = numbers[0]; //the first position is easy
    for(int i = 1; i < N; i++) {
        //the smallest up to position "i" is the smallest up to
        //the previous position, or the new number at position "i"
        smallest[i] = min(smallest[i-1], numbers[i]);
    }

    //let's print a few values
    cout << smallest[3] << endl; //12, because 12 is the smallest of the first four (0-3)
    cout << smallest[8] << endl; //7, because 7 is the smallest of the first nine
    return 0;
}
```

The code presented above is a classic example of a solution to a dynamic programming problem - source code which is short and easy to understand. Please note how we used the fact that a solution to a smaller problem (up to the Xth position) can be applied to the solution of a larger problem (up to the (X+1)-th position). In most cases, we do this by using extra memory - for example, in the solution above we have an array where we store the smallest values up to a certain position. Additionally, similar to recursion, dynamic programming also uses what is known as a "trivial case", which is basically a result we already know. In the problem given above, the trivial case is the smallest value up to the first element. In other words, since we're dealing with just one element, it is clear that this element is also the smallest value up to that position in the array.

In terms of time, the solution given above has linear complexity O(N), because we have one "for" loop that visits all N positions in the array. But what is also interesting is that once we have gone through all the elements and populated the extra array with the required values, answering our original question ("what is the smallest value up to a particular position") takes constant time O(1). In other words, without making additional calculations, we can immediately answer many similar questions: what is the smallest number among the first 5 elements, among the first 9 elements, among all elements, etc. - by simply accessing the corresponding values in the smallest[] array.

Finally, let's repeat once again that, when solving problems with the dynamic programming technique, we use the fact that a problem can be solved by solving smaller problems (often similar to the original one). Regarding the terminology used with dynamic programming, the original problem and the smaller versions of that problem are called "states". So, when discussing solutions, we talk about using a state (for example, the smallest number from the first X), to reach another state (the smallest number from the first X + 1).

It is also important to note the most common mistakes made when solving tasks with dynamic programming. Specifically, depending on how much additional memory we use, or how we use the smaller problems to solve the original one, it is often very difficult to extend or modify a solution. For example, the program given above can't answer questions like "What is the smallest number in the array if we only look at the numbers from the 3rd to the 7th position?" Instead, the solution we wrote always "looks" at values starting from the first position (and up to a certain position X), and we would need to write another solution if we want our program to answer questions about other possible intervals (interestingly, there is a dynamic programming solution for that problem as well). Again, please remember that we should always understand the problem we're solving before we start implementing a solution, because sometimes it is very difficult to change an algorithm so that it works for other problems. The worst thing we can do is waste time solving the wrong problem - even if, at the outset, it looked like a good idea.

## Fibonacci numbers

Fibonacci numbers are numbers from the so-called Fibonacci series, and their main characteristic is that each number (except for the first two) is the sum of the two numbers in the series that come before it. The first few Fibonacci numbers are: 1, 1, 2, 3, 5, 8, 13, 21, etc. Note that the first two numbers are equal to 1, and every other number is the sum of the two that come before it (for example, 13 = 5 + 8). If you have heard of recursion, you probably know that we can write functions that call themselves. Let's write a recursive function that calculates the Nth Fibonacci number:

```cpp
int fibonacci(int N) {
    if (N == 1 || N == 2) {
        return 1;
    }
    //otherwise, the N-th Fibonacci number is
    //the sum of the previous two numbers
    return fibonacci(N-1) + fibonacci(N-2);
}
```

The algorithm is simple. The first two Fibonacci numbers (N=1 and N=2) are equal to 1, and for all other numbers (N>2) we use the fact that the N-th Fibonacci number is the sum of the (N-1)th Fibonacci number and the (N-2)th Fibonacci number. This algorithm is correct, but there is still one (very big) problem. Specifically, for larger values of N, it works extremely slowly - mostly because it recalculates values over and over again. Let's see what happens when we call our function for N=6 (used just as an example). Here, the function will call itself with values N=5 and N=4, because:

fibonacci(6) = fibonacci(5) + fibonacci(4)

The function will continue to call itself again and again, recursively. So if we continue to analyze the equation above (using fibonacci(5) = fibonacci(4) + fibonacci(3)), we get the following:

fibonacci(6) = fibonacci(5) + fibonacci(4)

fibonacci(6) = [ fibonacci(4) + fibonacci(3) ] + fibonacci(4)

Can you see the problem we have now? The function will calculate the (N=4)th Fibonacci number twice, without reusing the result. Similarly, it will continue to call itself several times for other values as well, as shown in the following picture (note that the function is called fib, so that the function name does not occupy a lot of space).

The picture shows just 15 function calls only because we asked for the 6th Fibonacci number. For larger values of N, some results will be calculated hundreds or even thousands of times, and the function becomes extremely slow. For example, to calculate the 50th Fibonacci number, the algorithm given above would take several minutes to reach the result.

One solution to this problem is to use additional memory to store the intermediate results that we have already calculated (basically, the values of the smaller states). In the solution below we will use a map, but it is obvious that we can also use a plain array, for example.

```cpp
#include <map>
using namespace std;

map<int, int> fib;

int fibonacci(int N) {
    if (N == 1 || N == 2) {
        return 1;
    }
    //if the N-th number is already calculated, return it
    if (fib.count(N) > 0) {
        return fib[N];
    }
    //otherwise, calculate and save the value
    int value = fibonacci(N-1) + fibonacci(N-2);
    fib[N] = value;
    return value;
}
```

Clearly, this code works much faster than the previous one, since it does not calculate the same states multiple times. If you are interested to learn more, the specific term for this technique is "memoization" (without an "r" - not "memorization"): we store the results of certain function calls, and return the "stored" result when the function is invoked with the same arguments, instead of re-calculating them.

Apart from the algorithms mentioned above, we can also start looking at the problem from the opposite side. Specifically, if we know all the Fibonacci numbers, starting from the first, then the second, the third, and so on until the X-th number, we can also find the (X+1)th Fibonacci number, because its value is the sum of the previous two numbers. This solution is extremely simple.

```cpp
int fibonacci(int N) {
    int fib[1001];
    fib[1] = fib[2] = 1;
    for(int i = 3; i <= N; i++) {
        fib[i] = fib[i-1] + fib[i-2];
    }
    return fib[N];
}
```

To get to the next Fibonacci number, we do not need all the Fibonacci numbers - i.e. we only need the previous two; so we can stop using an array now, and use just two variables to store the previous two numbers. This is presented in the following code.

```cpp
int fibonacci(int N) {
    int oneBefore = 1, twoBefore = 1;
    int currentFibonacci = 1;
    for(int i = 3; i <= N; i++) {
        twoBefore = oneBefore;
        oneBefore = currentFibonacci;
        currentFibonacci = oneBefore + twoBefore;
    }
    return currentFibonacci;
}
```

In this solution, the variable oneBefore stores the value of the previous number, while the variable twoBefore stores the value of the number before the previous one. At each iteration of the loop, we need to update those values, because they are different at each step.

After a long discussion on how to calculate Fibonacci numbers, we can finally define the two standard methods of solving problems with the dynamic programming technique:

**Top-down**: looking at larger states by explaining how they can be calculated from smaller ones. For example, in the recursive solution above we indicated that the N-th Fibonacci number is the sum of the (N-1)th and the (N-2)th number. The function will continue to call itself for smaller values, until we reach the basic (trivial) case, where we know that the first two Fibonacci numbers are equal to 1.

**Bottom-up**: starting with smaller states, we make calculations and discover the values of the larger states. In the Fibonacci numbers example, this means that we begin by defining that the first two Fibonacci numbers are equal to 1. Then, we calculate the value of the third number by summing the first two. Similarly, we discover the fourth number, and so on, until we reach the N-th number.

Both methods are frequently used. Depending on the problem, it is sometimes easier to start solving it from the top, and sometimes from the bottom. However, since we often use recursion when working top-down, problems can arise if the function needs to call itself many times, because each function call consumes stack memory, and very deep recursion can exhaust it. Therefore, the bottom-up approach is generally recommended.

Finally, note that Fibonacci numbers grow very fast, so the fact that we used integer variables (int) to store the values means that we can accurately calculate only up to the 46th Fibonacci number - the numbers after that one are too big to fit in a 32-bit variable. For example, the 300th Fibonacci number is equal to 222232244629420445529739893461909967206666939096499764990979600. We used Fibonacci numbers to discuss how dynamic programming can help us solve a variety of problems; storing extremely large values is a separate discussion not related to dynamic programming. Fibonacci numbers are very simple to understand, and that was the reason I chose them as an example.