Solving Algorithm Problems With Dynamic Programming
Dynamic Programming solves a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions.
One way to determine whether a problem calls for Dynamic Programming is to check if it has overlapping subproblems. Overlapping subproblems means the same subproblem must be solved multiple times in order to find the overall solution. Here’s an example of an overlapping subproblem:
In order to find f(4), we need to know what f(3) and f(2) are. In order to find f(3), we need to know what f(2) and f(1) are, and to find f(2) we need to know what f(1) and f(0) are. Notice that f(2) is computed more than once — that repeated work is the overlapping subproblem.
One way to solve Dynamic Programming problems is recursively. Here’s the Fibonacci solution:
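A plain recursive version might look like this sketch (assuming the usual convention that fib(0) = 0 and fib(1) = 1):

```python
def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # Each call branches into two more calls, recomputing
    # the same subproblems over and over
    return fib(n - 1) + fib(n - 2)
```

Tracing fib(4) shows the overlap described above: fib(2) gets computed twice, and the duplication grows quickly for larger n.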
Using recursion to solve this problem is not the most efficient approach: its time complexity is O(2^n), which is even slower than quadratic runtime!
Memoization ensures that a function doesn’t run for the same inputs more than once by keeping a record of the results for the given inputs. Instead of solving a subproblem multiple times, we solve it once and reuse the cached result.
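A memoized Fibonacci could be sketched like this (the dictionary-based cache is one common way to do it; `memo` is an illustrative name):

```python
def fib_memo(n, memo=None):
    # Cache results so each subproblem is solved only once: O(n) time
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n < 2:
        return n
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```

Now each value of n is computed exactly once, dropping the runtime from O(2^n) to O(n).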
Another form of this caching, known as tabulation or the bottom-up approach, stores the results in an array and iterates over that array to build up the final result. Because it replaces recursion with a simple loop, it avoids the call stack and can lead to better space usage!
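A bottom-up sketch of the same idea, filling a table from the base cases upward:

```python
def fib_tab(n):
    if n < 2:
        return n
    # Build the answers bottom-up: table[i] holds fib(i)
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Since each step only reads the previous two entries, the array can even be replaced by two variables for O(1) space.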
Here are the expected outputs:
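The original output listing didn’t survive here, but assuming fib(0) = 0 and fib(1) = 1, a quick self-contained check might look like:

```python
def fib(n):
    # Iterative version, used here just to print sample outputs
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in [0, 1, 5, 10, 20]:
    print(n, fib(n))
# 0 -> 0, 1 -> 1, 5 -> 5, 10 -> 55, 20 -> 6765
```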
It’s always best practice to use different test cases to ensure that the solution is working properly. Please feel free to reach out or leave a comment on this blog if I’m missing anything or if there’s any feedback! For more info on Dynamic Programming make sure to check out the resources section below!