Dynamic Programming (DP) is a powerful problem-solving technique used in computer science to solve complex problems by breaking them down into simpler, overlapping subproblems. It is primarily an optimization over plain recursion: previously computed results are stored and reused so the same subproblem is never solved twice.
Dynamic Programming is commonly used for optimization and counting problems, and it can be implemented in two main ways: memoization (top-down) and tabulation (bottom-up).
Memoization is a technique in dynamic programming where you solve each subproblem only once and save the result in a look-up table.
Typically we save states in a vector or an array because look-up time is O(1). However, you can also use a map or unordered_map to store states.
// DP solution of Fibonacci using memoization (top-down) with a vector:
#include <iostream>
#include <vector>
using namespace std;

vector<int> dp;  // dp[i] holds fib(i); -1 marks "not computed yet"

int fib(int n) {
    if (dp[n] != -1) return dp[n];   // reuse a previously computed result
    if (n == 0) return dp[n] = 0;    // base cases
    if (n == 1) return dp[n] = 1;
    return dp[n] = fib(n - 1) + fib(n - 2);
}

int main() {
    int n = 10;
    dp.assign(n + 1, -1);            // initialize the memo table
    cout << "Fibonacci(" << n << ") = " << fib(n) << endl;
    return 0;
}
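When the states are sparse or are not small integers that can index an array, an unordered_map can serve as the memo table instead of a vector. Below is a minimal sketch of the same Fibonacci memoization using unordered_map (the names fib_map and memo are illustrative):

// Fibonacci memoization using unordered_map (sketch)
#include <iostream>
#include <unordered_map>
using namespace std;

unordered_map<int, long long> memo;  // memo table keyed by n

long long fib_map(int n) {
    if (n <= 1) return n;                        // base cases: fib(0) = 0, fib(1) = 1
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;     // reuse a previously computed result
    return memo[n] = fib_map(n - 1) + fib_map(n - 2);
}

int main() {
    cout << fib_map(50) << endl;                 // 12586269025
    return 0;
}

Average look-up is still O(1), but the constant factor is larger than with a plain vector, so arrays are preferred when the state range is small and dense.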
Tabulation (Bottom-Up DP) works in the opposite direction from top-down DP: you start with the base cases and iteratively build up to your answer. Here is an example of calculating Fibonacci numbers using tabulation.
// Fibonacci using Tabulation (Bottom-Up DP)
#include <iostream>
#include <vector>
using namespace std;

int fibonacci(int n) {
    if (n == 0) return 0;
    vector<int> dp(n + 1);
    dp[0] = 0;                          // base cases
    dp[1] = 1;
    for (int i = 2; i <= n; ++i) {
        dp[i] = dp[i - 1] + dp[i - 2];  // build each value from the two before it
    }
    return dp[n];
}

int main() {
    int n = 10;
    cout << "Fibonacci(" << n << ") = " << fibonacci(n) << endl;
    return 0;
}
| Aspect | Tabulation (Bottom-Up) | Memoization (Top-Down) |
| --- | --- | --- |
| Approach | Iterative (bottom-up) | Recursive (top-down) |
| Function Call Overhead | No function calls, so faster | Recursive calls add overhead |
| Risk of Stack Overflow | None (uses loops) | Possible for large inputs due to recursion depth |
| Space Optimization | Easy to optimize (e.g., O(1) space in Fibonacci) | Harder to optimize; the call stack takes extra space |
| Execution Order | Subproblems solved in a fixed order | Subproblems solved on demand as needed |
| Ease of Implementation | Sometimes more verbose | Often shorter and more intuitive |
| Cache Locality | Better (data accessed in sequence) | Poorer (scattered access via recursion) |
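As the table notes, bottom-up DP is often easy to space-optimize. In the Fibonacci tabulation above, dp[i] depends only on the previous two values, so the whole array can be replaced by two variables. A sketch of the O(1)-space version (fibonacci_opt is an illustrative name):

// Space-optimized bottom-up Fibonacci: keep only the last two values
#include <iostream>
using namespace std;

int fibonacci_opt(int n) {
    if (n == 0) return 0;
    int prev = 0, curr = 1;             // fib(0) and fib(1)
    for (int i = 2; i <= n; ++i) {
        int next = prev + curr;         // fib(i) = fib(i-1) + fib(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}

int main() {
    cout << fibonacci_opt(10) << endl;  // 55
    return 0;
}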
Despite having the same theoretical time and space complexities, tabulation is often preferred, and sometimes even necessary, over memoization due to practical constraints. Memoization adds overhead from recursive calls and can hit deep recursion stacks. For instance, on platforms like CSES, a memoized solution may fail due to time limits while an equivalent tabulated version with the same complexity passes, because it avoids function-call overhead and accesses memory in a more cache-friendly order.
The best way to get good at DP is to understand the intuition behind it through multiple examples, such as the following:
Given a target sum n, you need to compute the total number of distinct sequences of dice rolls (each roll giving 1 to 6) that sum up to exactly n.
This is a classic Dynamic Programming problem where the recurrence relation is:
f(n) = f(n-1) + f(n-2) + f(n-3) + f(n-4) + f(n-5) + f(n-6)
With the base case f(0) = 1: there is one way to make a sum of 0 (by doing nothing).
For all negative values of n, we return 0, since it is not possible to reach a negative sum with positive dice rolls.
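A bottom-up implementation of this recurrence could look like the sketch below. If this is the CSES Dice Combinations task, the answer is reported modulo 1e9 + 7; the modulo here is an assumption made to keep the counts from overflowing, and diceWays is an illustrative name.

// Counting dice-roll sequences that sum to n (bottom-up sketch)
#include <iostream>
#include <vector>
using namespace std;

const long long MOD = 1000000007;  // assumed modulus, as in the CSES version of the problem

long long diceWays(int n) {
    vector<long long> dp(n + 1, 0);
    dp[0] = 1;                                     // base case: one way to make sum 0 (roll nothing)
    for (int i = 1; i <= n; ++i) {
        for (int roll = 1; roll <= 6; ++roll) {
            if (i - roll >= 0)                     // negative sums contribute 0 ways
                dp[i] = (dp[i] + dp[i - roll]) % MOD;
        }
    }
    return dp[n];
}

int main() {
    int n = 10;
    cout << diceWays(n) << endl;                   // 492 ways to roll a total of 10
    return 0;
}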