mirror of
https://github.com/krahets/hello-algo.git
synced 2025-11-02 04:31:55 +08:00
Bug fixes and improvements (#1348)
* Add "reference" for EN version. Bug fixes.
* Unify the figure reference as "the figure below" and "the figure above". Bug fixes.
* Format the EN markdown files.
* Replace "" with <u></u> for EN version and bug fixes
* Fix biary_tree_dfs.png
* Fix biary_tree_dfs.png
* Fix zh-hant/biary_tree_dfs.png
* Fix heap_sort_step1.png
* Sync zh and zh-hant versions.
* Bug fixes
* Fix EN figures
* Bug fixes
* Fix the figure labels for EN version
@@ -35,7 +35,7 @@ To illustrate the problem-solving steps more vividly, we use a classic problem,

 Given an $n \times m$ two-dimensional grid `grid`, each cell in the grid contains a non-negative integer representing the cost of that cell. The robot starts from the top-left cell and can only move down or right at each step until it reaches the bottom-right cell. Return the minimum path sum from the top-left to the bottom-right.

-The following figure shows an example, where the given grid's minimum path sum is $13$.
+The figure below shows an example, where the given grid's minimum path sum is $13$.

 
@@ -45,7 +45,7 @@ Each round of decisions in this problem is to move one step down or right from t

 The state $[i, j]$ corresponds to the subproblem: the minimum path sum from the starting point $[0, 0]$ to $[i, j]$, denoted as $dp[i, j]$.

-Thus, we obtain the two-dimensional $dp$ matrix shown below, whose size is the same as the input grid $grid$.
+Thus, we obtain the two-dimensional $dp$ matrix shown in the figure below, whose size is the same as the input grid $grid$.

 
@@ -59,7 +59,7 @@ Thus, we obtain the two-dimensional $dp$ matrix shown below, whose size is the s

 For the state $[i, j]$, it can only be derived from the cell above $[i-1, j]$ or the cell to the left $[i, j-1]$. Therefore, the optimal substructure is: the minimum path sum to reach $[i, j]$ is determined by the smaller of the minimum path sums of $[i, j-1]$ and $[i-1, j]$.

-Based on the above analysis, the state transition equation shown in the following figure can be derived:
+Based on the above analysis, the state transition equation shown in the figure below can be derived:

 $$
 dp[i, j] = \min(dp[i-1, j], dp[i, j-1]) + grid[i, j]
@@ -104,7 +104,7 @@ Implementation code as follows:

 [file]{min_path_sum}-[class]{}-[func]{min_path_sum_dfs}
 ```

-The following figure shows the recursive tree rooted at $dp[2, 1]$, which includes some overlapping subproblems, the number of which increases sharply as the size of the grid `grid` increases.
+The figure below shows the recursive tree rooted at $dp[2, 1]$, which includes some overlapping subproblems, the number of which increases sharply as the size of the grid `grid` increases.

 Essentially, the reason for overlapping subproblems is: **there are multiple paths to reach a certain cell from the top-left corner**.
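The `[file]{min_path_sum}-[class]{}-[func]{min_path_sum_dfs}` include pulls the implementation from the repository; as a hedged sketch (not the repository's actual code), a brute-force DFS matching the description above might look like this in Python:

```python
from math import inf

def min_path_sum_dfs(grid: list[list[int]], i: int, j: int) -> int:
    """Brute-force DFS: minimum path sum from [0, 0] to [i, j]."""
    # Base case: the starting cell itself
    if i == 0 and j == 0:
        return grid[0][0]
    # Out of bounds: +inf ensures this branch is never the minimum
    if i < 0 or j < 0:
        return inf
    # State transition: best of arriving from above or from the left
    up = min_path_sum_dfs(grid, i - 1, j)
    left = min_path_sum_dfs(grid, i, j - 1)
    return min(up, left) + grid[i][j]
```

Because every cell can be reached by many paths, the same state such as $dp[2, 1]$ is recomputed repeatedly, which is exactly the overlapping-subproblem behavior the passage describes.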
@@ -132,7 +132,7 @@ Implement the dynamic programming solution iteratively, code as shown below:

 [file]{min_path_sum}-[class]{}-[func]{min_path_sum_dp}
 ```

-The following figures show the state transition process of the minimum path sum, traversing the entire grid, **thus the time complexity is $O(nm)$**.
+The figure below shows the state transition process of the minimum path sum, traversing the entire grid, **thus the time complexity is $O(nm)$**.

 The array `dp` is of size $n \times m$, **therefore the space complexity is $O(nm)$**.
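The `min_path_sum_dp` function referenced by the include could be sketched as follows (an illustrative Python version, not necessarily the repository's code), filling the $n \times m$ table in $O(nm)$ time and space:

```python
def min_path_sum_dp(grid: list[list[int]]) -> int:
    """Iterative DP: fill the n x m dp matrix row by row."""
    n, m = len(grid), len(grid[0])
    dp = [[0] * m for _ in range(n)]
    dp[0][0] = grid[0][0]
    # First row: cells can only be reached from the left
    for j in range(1, m):
        dp[0][j] = dp[0][j - 1] + grid[0][j]
    # First column: cells can only be reached from above
    for i in range(1, n):
        dp[i][0] = dp[i - 1][0] + grid[i][0]
    # Interior cells: dp[i][j] = min(dp[i-1][j], dp[i][j-1]) + grid[i][j]
    for i in range(1, n):
        for j in range(1, m):
            dp[i][j] = min(dp[i - 1][j], dp[i][j - 1]) + grid[i][j]
    return dp[n - 1][m - 1]
```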
@@ -39,7 +39,7 @@ From this, we obtain a two-dimensional $dp$ table of size $(i+1) \times (j+1)$.

 **Step two: Identify the optimal substructure and then derive the state transition equation**

-Consider the subproblem $dp[i, j]$, whose corresponding tail characters of the two strings are $s[i-1]$ and $t[j-1]$, which can be divided into three scenarios as shown below.
+Consider the subproblem $dp[i, j]$, whose corresponding tail characters of the two strings are $s[i-1]$ and $t[j-1]$, which can be divided into three scenarios as shown in the figure below.

 1. Add $t[j-1]$ after $s[i-1]$, then the remaining subproblem is $dp[i, j-1]$.
 2. Delete $s[i-1]$, then the remaining subproblem is $dp[i-1, j]$.
@@ -71,7 +71,7 @@ Observing the state transition equation, solving $dp[i, j]$ depends on the solut

 [file]{edit_distance}-[class]{}-[func]{edit_distance_dp}
 ```

-As shown below, the process of state transition in the edit distance problem is very similar to that in the knapsack problem, which can be seen as filling a two-dimensional grid.
+As shown in the figure below, the process of state transition in the edit distance problem is very similar to that in the knapsack problem, which can be seen as filling a two-dimensional grid.

 === "<1>"
     
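A hedged Python sketch of `edit_distance_dp` as described by the three scenarios above (insert maps to $dp[i, j-1] + 1$, delete to $dp[i-1, j] + 1$, replace to $dp[i-1, j-1] + 1$, and equal tail characters to $dp[i-1, j-1]$); this is an illustration, not the repository's exact code:

```python
def edit_distance_dp(s: str, t: str) -> int:
    """Minimum number of edits (insert, delete, replace) turning s into t."""
    n, m = len(s), len(t)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    # Converting a prefix to/from the empty string takes |prefix| edits
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                # Equal tail characters: no edit needed
                dp[i][j] = dp[i - 1][j - 1]
            else:
                # Insert, delete, or replace, whichever is cheapest
                dp[i][j] = min(dp[i][j - 1], dp[i - 1][j], dp[i - 1][j - 1]) + 1
    return dp[n][m]
```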
@@ -36,7 +36,7 @@ $$
 dp[i] = dp[i-1] + dp[i-2]
 $$

-This means that in the stair climbing problem, there is a recursive relationship between the subproblems, **the solution to the original problem can be constructed from the solutions to the subproblems**. The following image shows this recursive relationship.
+This means that in the stair climbing problem, there is a recursive relationship between the subproblems, **the solution to the original problem can be constructed from the solutions to the subproblems**. The figure below shows this recursive relationship.

 
@@ -48,11 +48,11 @@ Observe the following code, which, like standard backtracking code, belongs to d

 [file]{climbing_stairs_dfs}-[class]{}-[func]{climbing_stairs_dfs}
 ```

-The following image shows the recursive tree formed by brute force search. For the problem $dp[n]$, the depth of its recursive tree is $n$, with a time complexity of $O(2^n)$. Exponential order represents explosive growth, and entering a long wait if a relatively large $n$ is input.
+The figure below shows the recursive tree formed by brute force search. For the problem $dp[n]$, the depth of its recursive tree is $n$, with a time complexity of $O(2^n)$. Exponential order represents explosive growth; the program will enter a long wait if a relatively large $n$ is input.

 

-Observing the above image, **the exponential time complexity is caused by 'overlapping subproblems'**. For example, $dp[9]$ is decomposed into $dp[8]$ and $dp[7]$, $dp[8]$ into $dp[7]$ and $dp[6]$, both containing the subproblem $dp[7]$.
+Observing the figure above, **the exponential time complexity is caused by 'overlapping subproblems'**. For example, $dp[9]$ is decomposed into $dp[8]$ and $dp[7]$, and $dp[8]$ into $dp[7]$ and $dp[6]$, both containing the subproblem $dp[7]$.

 Thus, subproblems include even smaller overlapping subproblems, endlessly. A vast majority of computational resources are wasted on these overlapping subproblems.
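The `climbing_stairs_dfs` function the include references amounts to a direct, uncached recursion; a minimal sketch (illustrative, not the repository's exact code):

```python
def climbing_stairs_dfs(i: int) -> int:
    """Brute-force search: dp[i] = dp[i-1] + dp[i-2], with no caching."""
    # Base cases: 1 way to climb 1 step, 2 ways to climb 2 steps
    if i == 1 or i == 2:
        return i
    # Each call spawns two subcalls, producing the O(2^n) recursive tree
    return climbing_stairs_dfs(i - 1) + climbing_stairs_dfs(i - 2)
```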
@@ -69,7 +69,7 @@ The code is as follows:

 [file]{climbing_stairs_dfs_mem}-[class]{}-[func]{climbing_stairs_dfs_mem}
 ```

-Observe the following image, **after memoization, all overlapping subproblems need to be calculated only once, optimizing the time complexity to $O(n)$**, which is a significant leap.
+Observe the figure below: **after memoization, all overlapping subproblems need to be calculated only once, optimizing the time complexity to $O(n)$**, which is a significant leap.

 
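A hedged sketch of the memoized search named by the include; the `climbing_stairs` wrapper is a hypothetical helper added here for a runnable entry point:

```python
def climbing_stairs_dfs_mem(i: int, mem: list[int]) -> int:
    """Memoized search: each subproblem is computed at most once."""
    if i == 1 or i == 2:
        return i
    # Cache hit: the entire subtree below dp[i] is skipped
    if mem[i] != -1:
        return mem[i]
    mem[i] = climbing_stairs_dfs_mem(i - 1, mem) + climbing_stairs_dfs_mem(i - 2, mem)
    return mem[i]

def climbing_stairs(n: int) -> int:
    # Hypothetical wrapper: -1 marks "not yet computed"
    mem = [-1] * (n + 1)
    return climbing_stairs_dfs_mem(n, mem)
```

With the cache, each of the $n$ states is evaluated once, giving the $O(n)$ bound stated above.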
@@ -85,7 +85,7 @@ Since dynamic programming does not include a backtracking process, it only requi

 [file]{climbing_stairs_dp}-[class]{}-[func]{climbing_stairs_dp}
 ```

-The image below simulates the execution process of the above code.
+The figure below simulates the execution process of the above code.

 
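The iterative `climbing_stairs_dp` the include refers to could be sketched as follows (an illustration under the recurrence $dp[i] = dp[i-1] + dp[i-2]$, not necessarily the repository's code):

```python
def climbing_stairs_dp(n: int) -> int:
    """Bottom-up DP: fill dp[3..n] upward from the two base cases."""
    if n == 1 or n == 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```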
@@ -8,7 +8,7 @@ In this section, we will first solve the most common 0-1 knapsack problem.

 Given $n$ items, the weight of the $i$-th item is $wgt[i-1]$ and its value is $val[i-1]$, and a knapsack with a capacity of $cap$. Each item can be chosen only once. What is the maximum value of items that can be placed in the knapsack under the capacity limit?

-Observe the following figure, since the item number $i$ starts counting from 1, and the array index starts from 0, thus the weight of item $i$ corresponds to $wgt[i-1]$ and the value corresponds to $val[i-1]$.
+Observe the figure below: since the item number $i$ starts counting from 1 and the array index starts from 0, the weight of item $i$ corresponds to $wgt[i-1]$ and its value to $val[i-1]$.

 
@@ -76,19 +76,19 @@ After introducing memoization, **the time complexity depends on the number of su

 [file]{knapsack}-[class]{}-[func]{knapsack_dfs_mem}
 ```

-The following figure shows the search branches that are pruned in memoized search.
+The figure below shows the search branches that are pruned in memoized search.

 

 ### Method three: Dynamic programming

 Dynamic programming essentially involves filling the $dp$ table during the state transition, the code is shown below:

 ```src
 [file]{knapsack}-[class]{}-[func]{knapsack_dp}
 ```

-As shown in the figures below, both the time complexity and space complexity are determined by the size of the array `dp`, i.e., $O(n \times cap)$.
+As shown in the figure below, both the time complexity and space complexity are determined by the size of the array `dp`, i.e., $O(n \times cap)$.

 === "<1>"
     
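The `knapsack_dp` table-filling the hunk describes could be sketched as follows (illustrative Python under the stated $O(n \times cap)$ bounds, not necessarily the repository's code):

```python
def knapsack_dp(wgt: list[int], val: list[int], cap: int) -> int:
    """0-1 knapsack: dp[i][c] = max value using the first i items within capacity c."""
    n = len(wgt)
    dp = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(1, cap + 1):
            if wgt[i - 1] > c:
                # Item i does not fit: inherit the value without it
                dp[i][c] = dp[i - 1][c]
            else:
                # Best of skipping item i vs. taking it once
                dp[i][c] = max(dp[i - 1][c],
                               dp[i - 1][c - wgt[i - 1]] + val[i - 1])
    return dp[n][cap]
```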
@@ -40,7 +40,7 @@ Comparing the code for the two problems, the state transition changes from $i-1$

 Since the current state comes from the state to the left and above, **the space-optimized solution should perform a forward traversal for each row in the $dp$ table**.

-This traversal order is the opposite of that for the 0-1 knapsack. Please refer to the following figures to understand the difference.
+This traversal order is the opposite of that for the 0-1 knapsack. Please refer to the figure below to understand the difference.

 === "<1>"
     
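The forward-traversal space optimization described above can be sketched as follows; the function name `unbounded_knapsack_dp` is a hypothetical label for this illustration, not taken from the source:

```python
def unbounded_knapsack_dp(wgt: list[int], val: list[int], cap: int) -> int:
    """Space-optimized unbounded knapsack: one row, forward traversal."""
    dp = [0] * (cap + 1)
    for i in range(1, len(wgt) + 1):
        # Forward traversal: dp[c - wgt[i-1]] already belongs to row i,
        # so the same item can contribute again, i.e., be chosen repeatedly
        for c in range(wgt[i - 1], cap + 1):
            dp[c] = max(dp[c], dp[c - wgt[i - 1]] + val[i - 1])
    return dp[cap]
```

A backward traversal here would read row $i-1$ instead, which is exactly the 0-1 behavior; the direction of the inner loop is the whole difference the passage points at.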
@@ -117,7 +117,7 @@ For this reason, we use the number $amt + 1$ to represent an invalid solution, b

 [file]{coin_change}-[class]{}-[func]{coin_change_dp}
 ```

-The following images show the dynamic programming process for the coin change problem, which is very similar to the unbounded knapsack problem.
+The figure below shows the dynamic programming process for the coin change problem, which is very similar to the unbounded knapsack problem.

 === "<1>"
     
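A hedged sketch of `coin_change_dp` using the $amt + 1$ sentinel mentioned in the hunk header (illustrative, not necessarily the repository's code):

```python
def coin_change_dp(coins: list[int], amt: int) -> int:
    """Fewest coins summing to amt; amt + 1 serves as the 'invalid' sentinel."""
    MAX = amt + 1  # larger than any feasible coin count
    dp = [MAX] * (amt + 1)
    dp[0] = 0  # zero coins make amount 0
    for i in range(1, len(coins) + 1):
        # Forward traversal, as in the unbounded knapsack
        for a in range(coins[i - 1], amt + 1):
            dp[a] = min(dp[a], dp[a - coins[i - 1]] + 1)
    return dp[amt] if dp[amt] != MAX else -1
```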
@@ -176,7 +176,7 @@ The space optimization for the coin change problem is handled in the same way as

 !!! question

-    Given $n$ types of coins, where the denomination of the $i^{th}$ type of coin is $coins[i - 1]$, and the target amount is $amt$. **Each type of coin can be selected multiple times**, **ask how many combinations of coins can make up the target amount**. See the example below.
+    Given $n$ types of coins, where the denomination of the $i^{th}$ type of coin is $coins[i - 1]$, and the target amount is $amt$. Each type of coin can be selected multiple times; **how many combinations of coins can make up the target amount?** See the example below.

     
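The combination-counting variant posed in the question above can be sketched as follows; the name `coin_change_ii_dp` is a hypothetical label for this illustration, not taken from the source:

```python
def coin_change_ii_dp(coins: list[int], amt: int) -> int:
    """Count coin combinations (order-insensitive) that sum to amt."""
    dp = [0] * (amt + 1)
    dp[0] = 1  # one combination makes amount 0: choose nothing
    # Keeping coins in the outer loop counts combinations, not permutations:
    # each combination is built in a fixed coin order
    for i in range(1, len(coins) + 1):
        for a in range(coins[i - 1], amt + 1):
            dp[a] += dp[a - coins[i - 1]]
    return dp[amt]
```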