Mirror of https://github.com/krahets/hello-algo.git (synced 2025-07-05 05:01:43 +08:00)
Translation: Update intro_to_dynamic_programming.md (#1751)
* Update intro_to_dynamic_programming.md
* Update intro_to_dynamic_programming.md: made corrections and improvements to the introduction of dynamic programming based on reviewer suggestions.
* Update intro_to_dynamic_programming.md: again, made corrections and improvements to the introduction of dynamic programming based on reviewer suggestions.
* Update intro_to_dynamic_programming.md: chore: corrected missed feedback/suggestions from review.
<u>Dynamic programming</u> is an important algorithmic paradigm that decomposes a problem into a series of smaller subproblems, and stores the solutions of these subproblems to avoid redundant computations, thereby significantly improving time efficiency.
In this section, we start with a classic problem, first presenting its brute force backtracking solution, identifying the overlapping subproblems, and then gradually deriving a more efficient dynamic programming solution.
!!! question "Climbing stairs"

    Given a staircase with $n$ steps, where you can climb $1$ or $2$ steps at a time, how many ways are there to climb to the top?
As shown in the figure below, there are $3$ ways to reach the top of a $3$-step staircase.

This problem aims to count the number of ways by **using backtracking to exhaust all possibilities**. Specifically, it treats climbing the stairs as a multi-round choice process: starting from the ground, we choose to move up either $1$ or $2$ steps in each round, increment the count of ways upon reaching the top of the stairs, and prune the process when it goes past the top. The code is as follows:
```src
[file]{climbing_stairs_backtrack}-[class]{}-[func]{climbing_stairs_backtrack}
```
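The rendered book injects the actual source file through the placeholder above. As a standalone illustration, a minimal Python sketch of this backtracking approach might look as follows; the helper signature and the `res` list are our assumptions, not necessarily the book's code:

```python
def backtrack(choices: list[int], state: int, n: int, res: list[int]) -> None:
    """Backtracking: count the ways to climb to the top of n steps"""
    # Reaching the top of the stairs adds one to the count of ways
    if state == n:
        res[0] += 1
    # Try each choice: climb 1 or 2 steps this round
    for choice in choices:
        # Pruning: do not allow going past the top
        if state + choice > n:
            continue
        # Attempt the choice and move on to the next round
        backtrack(choices, state + choice, n, res)
        # state is passed by value, so no explicit undo step is needed

def climbing_stairs_backtrack(n: int) -> int:
    """Climbing stairs: backtracking"""
    choices = [1, 2]  # Each round we may move up 1 or 2 steps
    state = 0         # Start climbing from the ground (step 0)
    res = [0]         # res[0] records the number of ways
    backtrack(choices, state, n, res)
    return res[0]
```

For example, `climbing_stairs_backtrack(3)` returns `3`, matching the figure above.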
## Method 1: Brute force search
Backtracking algorithms do not explicitly decompose the problem into subproblems. Instead, they treat the problem as a sequence of decision steps, exploring all possibilities through trial and pruning.
We can analyze this problem using a decomposition approach. Let $dp[i]$ represent the number of ways to reach the $i^{th}$ step. In this case, $dp[i]$ is the original problem, and its subproblems are:
$$
dp[i-1], dp[i-2], \dots, dp[2], dp[1]
$$
Since each move can only advance $1$ or $2$ steps, when we stand on the $i^{th}$ step, the previous round must have ended on either the $(i-1)^{th}$ or the $(i-2)^{th}$ step. In other words, we can only reach the $i^{th}$ step from the $(i-1)^{th}$ or the $(i-2)^{th}$ step.
This leads to an important conclusion: **the number of ways to reach the $(i-1)^{th}$ step plus the number of ways to reach the $(i-2)^{th}$ step equals the number of ways to reach the $i^{th}$ step**. The formula is as follows:
$$
dp[i] = dp[i-1] + dp[i-2]
$$

For example, $dp[3] = dp[2] + dp[1] = 2 + 1 = 3$, which matches the three ways shown in the figure above.

This means that in the stair climbing problem, there is a recursive relationship between each subproblem and its smaller subproblems, and the solution to the original problem can be built up from the solutions to the subproblems.

We can obtain the brute force search solution according to the recursive formula. Starting with $dp[n]$, **we recursively break a larger problem into the sum of two smaller subproblems**, until reaching the smallest subproblems $dp[1]$ and $dp[2]$ where the solutions are known, with $dp[1] = 1$ and $dp[2] = 2$, representing $1$ and $2$ ways to climb to the first and second steps, respectively.
Observe the following code, which, like standard backtracking code, belongs to depth-first search but is more concise:
```src
[file]{climbing_stairs_dfs}-[class]{}-[func]{climbing_stairs_dfs}
```
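Again as an illustration of the placeholder above, a minimal Python sketch of this brute force search might look like this (the function names are assumptions):

```python
def dfs(i: int) -> int:
    """Search: number of ways to reach the i-th step"""
    # Smallest subproblems with known solutions: dp[1] = 1, dp[2] = 2
    if i == 1 or i == 2:
        return i
    # Recursive formula: dp[i] = dp[i-1] + dp[i-2]
    return dfs(i - 1) + dfs(i - 2)

def climbing_stairs_dfs(n: int) -> int:
    """Climbing stairs: brute force search"""
    return dfs(n)
```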
The figure below shows the recursive tree formed by brute force search. For the problem $dp[n]$, the depth of its recursive tree is $n$, with a time complexity of $O(2^n)$. This exponential growth causes the program to run much more slowly when $n$ is large, leading to long wait times.

Observing the figure above, **the exponential time complexity is caused by 'overlapping subproblems'**. For example, $dp[9]$ is broken down into $dp[8]$ and $dp[7]$, and $dp[8]$ is further broken down into $dp[7]$ and $dp[6]$, both containing the subproblem $dp[7]$.
Thus, subproblems contain even smaller overlapping subproblems, without end. The vast majority of computational resources are wasted on these overlapping subproblems.
## Method 2: Memoized search

To improve efficiency, we want every overlapping subproblem to be computed only once. To that end, we declare an array `mem` that records the solution to each subproblem: the first time $dp[i]$ is computed, we store it in `mem[i]`; whenever $dp[i]$ is needed again, we retrieve it directly from `mem[i]`, thereby avoiding redundant computation. **After memoization, all overlapping subproblems need to be computed only once, optimizing the time complexity to $O(n)$**.
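The book's memoized-search source file is not shown in this diff; a minimal Python sketch under the assumptions above (an array `mem` initialized to $-1$) might look like this:

```python
def dfs(i: int, mem: list[int]) -> int:
    """Memoized search: number of ways to reach the i-th step"""
    # Smallest subproblems with known solutions: dp[1] = 1, dp[2] = 2
    if i == 1 or i == 2:
        return i
    # If dp[i] has already been recorded, return it directly
    if mem[i] != -1:
        return mem[i]
    # Recursive formula: dp[i] = dp[i-1] + dp[i-2]
    count = dfs(i - 1, mem) + dfs(i - 2, mem)
    # Record dp[i] so each subproblem is computed only once
    mem[i] = count
    return count

def climbing_stairs_dfs_mem(n: int) -> int:
    """Climbing stairs: memoized search"""
    # mem[i] records the solution to dp[i]; -1 means "not yet computed"
    mem = [-1] * (n + 1)
    return dfs(n, mem)
```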
## Method 3: Dynamic programming
**Memoized search is a 'top-down' method**: we start with the original problem (root node), recursively break larger subproblems into smaller ones until the solutions to the smallest known subproblems (leaf nodes) are reached. Subsequently, by backtracking, we collect the solutions of the subproblems, constructing the solution to the original problem.
In contrast, **dynamic programming is a 'bottom-up' method**: starting with the solutions to the smallest subproblems, it iteratively constructs the solutions to larger subproblems until the original problem is solved.
Since dynamic programming does not involve backtracking, it only requires iteration using loops and does not need recursion. In the following code, we initialize an array `dp` to store the solutions to subproblems, serving the same recording function as the array `mem` in memoized search:
```src
[file]{climbing_stairs_dp}-[class]{}-[func]{climbing_stairs_dp}
```
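As with the earlier placeholders, a minimal Python sketch of this bottom-up dynamic programming solution might look as follows:

```python
def climbing_stairs_dp(n: int) -> int:
    """Climbing stairs: dynamic programming"""
    if n == 1 or n == 2:
        return n
    # Initialize the dp table to store the solutions to subproblems
    dp = [0] * (n + 1)
    # Initial states: solutions to the smallest subproblems
    dp[1], dp[2] = 1, 2
    # State transition: build larger subproblems from smaller ones
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```

For example, `climbing_stairs_dp(5)` returns `8`.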
Observant readers may have noticed that **since $dp[i]$ is only related to $dp[i-1]$ and $dp[i-2]$, we do not need to use an array `dp` to store the solutions to all subproblems**; two variables are enough to carry the computation forward.
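The book's source snippet for this optimized version falls outside this diff; a minimal Python sketch of the rolling-variable technique described below might be (the function name is an assumption):

```python
def climbing_stairs_dp_comp(n: int) -> int:
    """Climbing stairs: space-optimized dynamic programming"""
    if n == 1 or n == 2:
        return n
    # a and b hold the two most recent solutions, dp[i-2] and dp[i-1]
    a, b = 1, 2
    for _ in range(3, n + 1):
        # Roll the variables forward: dp[i] = dp[i-1] + dp[i-2]
        a, b = b, a + b
    # b now holds dp[n]
    return b
```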
Observing the above code, since the array `dp` is no longer needed, the space complexity is reduced from $O(n)$ to $O(1)$.
In many dynamic programming problems, the current state depends only on a limited number of previous states, allowing us to retain only the necessary states and save memory space by "dimension reduction". **This space optimization technique is known as 'rolling variable' or 'rolling array'**.