This commit is contained in:
krahets
2025-04-27 16:54:32 +08:00
parent cb10da3b35
commit f31d65d339
8 changed files with 88 additions and 88 deletions


@ -2388,7 +2388,7 @@ comments: true
## 9.2.3 &nbsp; Efficiency comparison
Assuming the graph contains $n$ vertices and $m$ edges, Table 9-2 compares the time efficiency and space efficiency of the adjacency matrix and the adjacency list.
Assuming the graph contains $n$ vertices and $m$ edges, Table 9-2 compares the time efficiency and space efficiency of the adjacency matrix and the adjacency list. Note that "adjacency list (linked list)" corresponds to the implementation in this text, while "adjacency list (hash table)" refers specifically to the implementation in which all linked lists are replaced with hash tables.
<p align="center"> Table 9-2 &nbsp; Comparison of adjacency matrix and adjacency list </p>
@ -2396,9 +2396,9 @@ comments: true
| | Adjacency matrix | Adjacency list (linked list) | Adjacency list (hash table) |
| ------------ | -------- | -------------- | ---------------- |
| Determine adjacency | $O(1)$ | $O(m)$ | $O(1)$ |
| Determine adjacency | $O(1)$ | $O(n)$ | $O(1)$ |
| Add an edge | $O(1)$ | $O(1)$ | $O(1)$ |
| Remove an edge | $O(1)$ | $O(m)$ | $O(1)$ |
| Remove an edge | $O(1)$ | $O(n)$ | $O(1)$ |
| Add a vertex | $O(n)$ | $O(1)$ | $O(1)$ |
| Remove a vertex | $O(n^2)$ | $O(n + m)$ | $O(n)$ |
| Memory space usage | $O(n^2)$ | $O(n + m)$ | $O(n + m)$ |
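The efficiency of the hash-table variant can be sketched as follows. This is a minimal illustration, not the book's implementation; the `GraphAdjSet` class name and the sample vertices are assumptions. With hash sets, checking adjacency and removing an edge take $O(1)$ time on average, versus $O(n)$ for a linked list.

```python
class GraphAdjSet:
    """Undirected graph stored as an adjacency structure of hash sets."""

    def __init__(self):
        self.adj = {}  # vertex -> set of neighboring vertices

    def add_vertex(self, v):
        self.adj.setdefault(v, set())

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def remove_edge(self, u, v):
        # O(1) average, vs O(n) scan in a linked-list adjacency list
        self.adj[u].discard(v)
        self.adj[v].discard(u)

    def is_adjacent(self, u, v):
        # O(1) average membership test
        return v in self.adj[u]

g = GraphAdjSet()
for v in [0, 1, 2]:
    g.add_vertex(v)
g.add_edge(0, 1)
g.add_edge(1, 2)
print(g.is_adjacent(0, 1))  # True
g.remove_edge(0, 1)
print(g.is_adjacent(0, 1))  # False
```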


@ -669,7 +669,7 @@ comments: true
### 2. &nbsp; Complete binary tree
As shown in Figure 7-5, a <u>complete binary tree</u> has only the bottom level not fully filled, and the bottom-level nodes are filled as far left as possible. Note that a perfect binary tree is also a complete binary tree.
As shown in Figure 7-5, in a <u>complete binary tree</u>, only the bottom level may be incompletely filled, and the bottom-level nodes must be filled continuously from left to right. Note that a perfect binary tree is also a complete binary tree.
![Complete binary tree](binary_tree.assets/complete_binary_tree.png){ class="animation-figure" }
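The "filled continuously from left to right" property can be verified with a level-order traversal: no real node may appear after the first gap. A minimal sketch follows; the `TreeNode` class and the example trees are assumptions for illustration.

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_complete(root) -> bool:
    """A tree is complete iff, in level order, no node appears after a None."""
    if root is None:
        return True
    queue = deque([root])
    seen_gap = False
    while queue:
        node = queue.popleft()
        if node is None:
            seen_gap = True
        else:
            if seen_gap:
                return False  # a node after a gap -> not left-compact
            queue.append(node.left)
            queue.append(node.right)
    return True

# Complete: the bottom level is filled from the left
root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(is_complete(root))  # True
# Not complete: the bottom level has a hole on the left
root.left.left = None
root.left.right = TreeNode(5)
print(is_complete(root))  # False
```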


@ -4,15 +4,15 @@ comments: true
# 13.1 &nbsp; Backtracking algorithms
<u>Backtracking algorithm</u> is a method to solve problems by exhaustive search, where the core idea is to start from an initial state and brute force all possible solutions, recording the correct ones until a solution is found or all possible choices are exhausted without finding a solution.
<u>Backtracking algorithm</u> is a method to solve problems by exhaustive search. Its core concept is to start from an initial state and brute-force search for all possible solutions, recording the correct ones until a solution is found or all possible choices have been exhausted without finding one.
Backtracking typically employs "depth-first search" to traverse the solution space. In the "Binary Tree" chapter, we mentioned that pre-order, in-order, and post-order traversals are all depth-first searches. Next, we use pre-order traversal to construct a backtracking problem to gradually understand the workings of the backtracking algorithm.
Backtracking typically employs "depth-first search" to traverse the solution space. In the "Binary tree" chapter, we mentioned that pre-order, in-order, and post-order traversals are all depth-first searches. Next, we will use pre-order traversal to solve a backtracking problem, which will help us gradually understand how the algorithm works.
!!! question "Example One"
Given a binary tree, search and record all nodes with a value of $7$, please return a list of nodes.
Given a binary tree, search and record all nodes with a value of $7$ and return them in a list.
For this problem, we traverse this tree in pre-order and check if the current node's value is $7$. If it is, we add the node's value to the result list `res`. The relevant process is shown in Figure 13-1:
To solve this problem, we traverse this tree in pre-order and check if the current node's value is $7$. If it is, we add the node's value to the result list `res`. The process is shown in Figure 13-1:
=== "Python"
@ -132,19 +132,19 @@ For this problem, we traverse this tree in pre-order and check if the current no
<p align="center"> Figure 13-1 &nbsp; Searching nodes in pre-order traversal </p>
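The pre-order search just described can be sketched in Python as follows; the `TreeNode` class and the example tree are assumptions for illustration, not the book's test data.

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def pre_order(root, res):
    """Pre-order traversal: record every node whose value is 7."""
    if root is None:
        return
    if root.val == 7:
        res.append(root)
    pre_order(root.left, res)
    pre_order(root.right, res)

root = TreeNode(1, TreeNode(7, TreeNode(3), TreeNode(4)), TreeNode(7))
res = []
pre_order(root, res)
print([node.val for node in res])  # [7, 7]
```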
## 13.1.1 &nbsp; Trying and retreating
## 13.1.1 &nbsp; Trial and retreat
**The reason it is called backtracking is that the algorithm uses a "try" and "retreat" strategy when searching the solution space**. When the algorithm encounters a state where it can no longer progress or fails to achieve a satisfying solution, it undoes the previous choice, reverts to the previous state, and tries other possible choices.
**It is called a backtracking algorithm because it uses a "trial" and "retreat" strategy when searching the solution space**. During the search, whenever it encounters a state from which it can no longer proceed to a satisfying solution, it undoes the previous choice and reverts to the previous state, so that other possible choices can be tried next.
For Example One, visiting each node represents a "try", and passing a leaf node or returning to the parent node's `return` represents "retreat".
In Example One, visiting each node represents a "trial", and passing a leaf node or returning to the parent node via the `return` statement represents a "retreat".
It's worth noting that **retreat is not merely about function returns**. We expand slightly on Example One for clarification.
It's worth noting that **retreat is not merely about function returns**. We'll expand slightly on the Example One question to explain what it means.
!!! question "Example Two"
In a binary tree, search for all nodes with a value of $7$ and **please return the paths from the root node to these nodes**.
In a binary tree, search for all nodes with a value of $7$ and, for each matching node, **please return the path from the root node to that node**.
Based on the code from Example One, we need to use a list `path` to record the visited node paths. When a node with a value of $7$ is reached, we copy `path` and add it to the result list `res`. After the traversal, `res` holds all the solutions. The code is as shown:
Based on the code from Example One, we need to use a list called `path` to record the visited node paths. When a node with a value of $7$ is reached, we copy `path` and add it to the result list `res`. After the traversal, `res` holds all the solutions. The code is as shown:
=== "Python"
@ -272,9 +272,9 @@ Based on the code from Example One, we need to use a list `path` to record the v
[class]{}-[func]{preOrder}
```
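A minimal Python sketch of the path-recording traversal described above; the `TreeNode` class and the example tree are assumptions for illustration.

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def pre_order(root, path, res):
    if root is None:
        return
    path.append(root)           # trial: add the current node to the path
    if root.val == 7:
        res.append(list(path))  # record a solution (a copy of path)
    pre_order(root.left, path, res)
    pre_order(root.right, path, res)
    path.pop()                  # retreat: restore the state before this trial

root = TreeNode(1, TreeNode(7, TreeNode(3), TreeNode(7)), TreeNode(4))
res = []
pre_order(root, [], res)
print([[n.val for n in p] for p in res])  # [[1, 7], [1, 7, 7]]
```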
In each "try", we record the path by adding the current node to `path`; before "retreating", we need to pop the node from `path` **to restore the state before this attempt**.
In each "trial", we record the path by adding the current node to `path`. Whenever we need to "retreat", we pop the node from `path` **to restore the state prior to this attempt**.
Observe the process shown in Figure 13-2, **we can understand trying and retreating as "advancing" and "undoing"**, two operations that are reverse to each other.
By observing the process shown in Figure 13-2, **we can think of the trial as "advancing" and the retreat as "undoing"**: the two operations are the reverse of each other.
=== "<1>"
![Trying and retreating](backtracking_algorithm.assets/preorder_find_paths_step1.png){ class="animation-figure" }
@ -311,15 +311,15 @@ Observe the process shown in Figure 13-2, **we can understand trying and retreat
<p align="center"> Figure 13-2 &nbsp; Trying and retreating </p>
## 13.1.2 &nbsp; Pruning
## 13.1.2 &nbsp; Prune
Complex backtracking problems usually involve one or more constraints, **which are often used for "pruning"**.
!!! question "Example Three"
In a binary tree, search for all nodes with a value of $7$ and return the paths from the root to these nodes, **requiring that the paths do not contain nodes with a value of $3$**.
In a binary tree, search for all nodes with a value of $7$ and return the paths from the root to these nodes, **with the restriction that the paths do not contain nodes with a value of $3$**.
To meet the above constraints, **we need to add a pruning operation**: during the search process, if a node with a value of $3$ is encountered, it returns early, discontinuing further search. The code is as shown:
To meet the above constraints, **we need to add a pruning operation**: during the search process, if a node with a value of $3$ is encountered, the function returns immediately, abandoning any further search down that path. The code is as shown:
=== "Python"
@ -450,7 +450,7 @@ To meet the above constraints, **we need to add a pruning operation**: during th
[class]{}-[func]{preOrder}
```
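The pruning step described above can be sketched as follows; the `TreeNode` class and the example tree are assumptions for illustration.

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def pre_order(root, path, res):
    # Prune: return early on empty nodes and on value-3 nodes,
    # so nothing below a 3 is ever explored
    if root is None or root.val == 3:
        return
    path.append(root)           # trial
    if root.val == 7:
        res.append(list(path))  # record a solution
    pre_order(root.left, path, res)
    pre_order(root.right, path, res)
    path.pop()                  # retreat

root = TreeNode(1, TreeNode(3, TreeNode(7)), TreeNode(7))
res = []
pre_order(root, [], res)
# the 7 below the 3 is pruned away
print([[n.val for n in p] for p in res])  # [[1, 7]]
```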
"Pruning" is a very vivid noun. As shown in Figure 13-3, in the search process, **we "cut off" the search branches that do not meet the constraints**, avoiding many meaningless attempts, thus enhancing the search efficiency.
"Pruning" is a very vivid noun. As shown in Figure 13-3, in the search process, **we "cut off" the search branches that do not meet the constraints**. This avoids further unnecessary attempts, thus enhancing the search efficiency.
![Pruning based on constraints](backtracking_algorithm.assets/preorder_find_constrained_paths.png){ class="animation-figure" }
@ -458,7 +458,7 @@ To meet the above constraints, **we need to add a pruning operation**: during th
## 13.1.3 &nbsp; Framework code
Next, we attempt to distill the main framework of "trying, retreating, and pruning" from backtracking to enhance the code's universality.
Now, let's try to distill the main framework of "trial, retreat, and prune" from backtracking to enhance the code's universality.
In the following framework code, `state` represents the current state of the problem, `choices` represents the choices available under the current state:
@ -475,9 +475,9 @@ In the following framework code, `state` represents the current state of the pro
return
# Iterate through all choices
for choice in choices:
# Pruning: check if the choice is valid
# Prune: check if the choice is valid
if is_valid(state, choice):
# Try: make a choice, update the state
# Trial: make a choice, update the state
make_choice(state, choice)
backtrack(state, choices, res)
# Retreat: undo the choice, revert to the previous state
@ -498,9 +498,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for (Choice choice : choices) {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if (isValid(state, choice)) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state, choice);
backtrack(state, choices, res);
// Retreat: undo the choice, revert to the previous state
@ -524,9 +524,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for (Choice choice : choices) {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if (isValid(state, choice)) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state, choice);
backtrack(state, choices, res);
// Retreat: undo the choice, revert to the previous state
@ -550,9 +550,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
foreach (Choice choice in choices) {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if (IsValid(state, choice)) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
MakeChoice(state, choice);
Backtrack(state, choices, res);
// Retreat: undo the choice, revert to the previous state
@ -576,9 +576,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for _, choice := range choices {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if isValid(state, choice) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state, choice)
backtrack(state, choices, res)
// Retreat: undo the choice, revert to the previous state
@ -602,9 +602,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for choice in choices {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if isValid(state: state, choice: choice) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state: &state, choice: choice)
backtrack(state: &state, choices: choices, res: &res)
// Retreat: undo the choice, revert to the previous state
@ -628,9 +628,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for (let choice of choices) {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if (isValid(state, choice)) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state, choice);
backtrack(state, choices, res);
// Retreat: undo the choice, revert to the previous state
@ -654,9 +654,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for (let choice of choices) {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if (isValid(state, choice)) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state, choice);
backtrack(state, choices, res);
// Retreat: undo the choice, revert to the previous state
@ -680,9 +680,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for (Choice choice in choices) {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if (isValid(state, choice)) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state, choice);
backtrack(state, choices, res);
// Retreat: undo the choice, revert to the previous state
@ -706,9 +706,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for choice in choices {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if is_valid(state, choice) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
make_choice(state, choice);
backtrack(state, choices, res);
// Retreat: undo the choice, revert to the previous state
@ -732,9 +732,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for (int i = 0; i < numChoices; i++) {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if (isValid(state, &choices[i])) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state, &choices[i]);
backtrack(state, choices, numChoices, res, numRes);
// Retreat: undo the choice, revert to the previous state
@ -758,9 +758,9 @@ In the following framework code, `state` represents the current state of the pro
}
// Iterate through all choices
for (choice in choices) {
// Pruning: check if the choice is valid
// Prune: check if the choice is valid
if (isValid(state, choice)) {
// Try: make a choice, update the state
// Trial: make a choice, update the state
makeChoice(state, choice)
backtrack(state, choices, res)
// Retreat: undo the choice, revert to the previous state
@ -782,7 +782,7 @@ In the following framework code, `state` represents the current state of the pro
```
Next, we solve Example Three based on the framework code. The `state` is the node traversal path, `choices` are the current node's left and right children, and the result `res` is the list of paths:
Now, we are able to solve Example Three using the framework code. The `state` is the node traversal path, `choices` are the current node's left and right children, and the result `res` is the list of paths:
=== "Python"
@ -1104,13 +1104,13 @@ Next, we solve Example Three based on the framework code. The `state` is the nod
[class]{}-[func]{backtrack}
```
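The framework instantiated for Example Three can be sketched as follows. The helper names mirror the framework above; the `TreeNode` class and the example tree are assumptions for illustration.

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_solution(state):
    """The current path ends at a value-7 node."""
    return len(state) > 0 and state[-1].val == 7

def record_solution(state, res):
    res.append(list(state))

def is_valid(state, choice):
    # Prune: the child must exist and must not be a value-3 node
    return choice is not None and choice.val != 3

def make_choice(state, choice):
    state.append(choice)

def undo_choice(state, choice):
    state.pop()

def backtrack(state, choices, res):
    if is_solution(state):
        record_solution(state, res)  # no return: keep searching deeper
    for choice in choices:
        if is_valid(state, choice):
            make_choice(state, choice)
            backtrack(state, [choice.left, choice.right], res)
            undo_choice(state, choice)

root = TreeNode(1, TreeNode(7, TreeNode(3), TreeNode(7)), TreeNode(4))
res = []
backtrack([], [root], res)
print([[n.val for n in p] for p in res])  # [[1, 7], [1, 7, 7]]
```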
As per the requirements, after finding a node with a value of $7$, the search should continue, **thus the `return` statement after recording the solution should be removed**. Figure 13-4 compares the search processes with and without retaining the `return` statement.
As per the requirements, after finding a node with a value of $7$, the search should continue. **As a result, the `return` statement after recording the solution should be removed**. Figure 13-4 compares the search processes with and without retaining the `return` statement.
![Comparison of retaining and removing the return in the search process](backtracking_algorithm.assets/backtrack_remove_return_or_not.png){ class="animation-figure" }
<p align="center"> Figure 13-4 &nbsp; Comparison of retaining and removing the return in the search process </p>
Compared to the implementation based on pre-order traversal, the code implementation based on the backtracking algorithm framework seems verbose, but it has better universality. In fact, **many backtracking problems can be solved within this framework**. We just need to define `state` and `choices` according to the specific problem and implement the methods in the framework.
Compared to the implementation based on pre-order traversal, the code using the backtracking algorithm framework seems verbose. However, it has better universality. In fact, **many backtracking problems can be solved within this framework**. We just need to define `state` and `choices` according to the specific problem and implement the methods in the framework.
## 13.1.4 &nbsp; Common terminology
@ -1122,12 +1122,12 @@ To analyze algorithmic problems more clearly, we summarize the meanings of commo
| Term | Definition | Example Three |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| Solution (solution) | A solution is an answer that satisfies specific conditions of the problem, which may have one or more | All paths from the root node to node $7$ that meet the constraint |
| Constraint (constraint) | Constraints are conditions in the problem that limit the feasibility of solutions, often used for pruning | Paths do not contain node $3$ |
| State (state) | State represents the situation of the problem at a certain moment, including choices made | Current visited node path, i.e., `path` node list |
| Attempt (attempt) | An attempt is the process of exploring the solution space based on available choices, including making choices, updating the state, and checking if it's a solution | Recursively visiting left (right) child nodes, adding nodes to `path`, checking if the node's value is $7$ |
| Backtracking (backtracking) | Backtracking refers to the action of undoing previous choices and returning to the previous state when encountering states that do not meet the constraints | When passing leaf nodes, ending node visits, encountering nodes with a value of $3$, terminating the search, and function return |
| Pruning (pruning) | Pruning is a method to avoid meaningless search paths based on the characteristics and constraints of the problem, which can enhance search efficiency | When encountering a node with a value of $3$, no further search is continued |
| Solution | A solution is an answer that satisfies specific conditions of the problem, which may have one or more | All paths from the root node to node $7$ that meet the constraint |
| Constraint | Constraints are conditions in the problem that limit the feasibility of solutions, often used for pruning | Paths do not contain node $3$ |
| State | State represents the situation of the problem at a certain moment, including choices made | Current visited node path, i.e., `path` node list |
| Trial | A trial is the process of exploring the solution space based on available choices, including making choices, updating the state, and checking if it's a solution | Recursively visiting left (right) child nodes, adding nodes to `path`, checking if the node's value is $7$ |
| Retreat | Retreat refers to the action of undoing previous choices and returning to the previous state when encountering states that do not meet the constraints | When passing leaf nodes, ending node visits, encountering nodes with a value of $3$, terminating the search, and the recursive function returning |
| Prune | Prune is a method to avoid meaningless search paths based on the characteristics and constraints of the problem, which can enhance search efficiency | When encountering a node with a value of $3$, no further search is required |
</div>
@ -1139,14 +1139,14 @@ To analyze algorithmic problems more clearly, we summarize the meanings of commo
The backtracking algorithm is essentially a depth-first search algorithm that attempts all possible solutions until a satisfying solution is found. The advantage of this method is that it can find all possible solutions, and with reasonable pruning operations, it can be highly efficient.
However, when dealing with large-scale or complex problems, **the operational efficiency of backtracking may be difficult to accept**.
However, when dealing with large-scale or complex problems, **the running efficiency of the backtracking algorithm may not be acceptable**.
- **Time**: Backtracking algorithms usually need to traverse all possible states in the state space, which can reach exponential or factorial time complexity.
- **Space**: In recursive calls, it is necessary to save the current state (such as paths, auxiliary variables for pruning, etc.). When the depth is very large, the space requirement may become significant.
- **Time complexity**: Backtracking algorithms usually need to traverse all possible states in the state space, which can reach exponential or factorial time complexity.
- **Space complexity**: In recursive calls, it is necessary to save the current state (such as paths, auxiliary variables for pruning, etc.). When the recursion depth is very large, the space requirement may become significant.
Even so, **backtracking remains the best solution for certain search problems and constraint satisfaction problems**. For these problems, since it is unpredictable which choices can generate valid solutions, we must traverse all possible choices. In this case, **the key is how to optimize efficiency**, with common efficiency optimization methods being two types.
Even so, **backtracking remains the best solution for certain search problems and constraint satisfaction problems**. For these problems, there is no way to predict which choices can generate valid solutions, so we must traverse all possible choices. In this case, **the key is how to optimize efficiency**. There are two common efficiency optimization methods.
- **Pruning**: Avoid searching paths that definitely will not produce a solution, thus saving time and space.
- **Prune**: Avoid searching paths that definitely will not produce a solution, thus saving time and space.
- **Heuristic search**: Introduce some strategies or estimates during the search process to prioritize the paths that are most likely to produce valid solutions.
## 13.1.6 &nbsp; Typical backtracking problems


@ -6,15 +6,15 @@ comments: true
!!! question
According to the rules of chess, a queen can attack pieces in the same row, column, or on a diagonal line. Given $n$ queens and an $n \times n$ chessboard, find arrangements where no two queens can attack each other.
According to the rules of chess, a queen can attack pieces in the same row, column, or diagonal line. Given $n$ queens and an $n \times n$ chessboard, find arrangements where no two queens can attack each other.
As shown in Figure 13-15, when $n = 4$, there are two solutions. From the perspective of the backtracking algorithm, an $n \times n$ chessboard has $n^2$ squares, presenting all possible choices `choices`. The state of the chessboard `state` changes continuously as each queen is placed.
As shown in Figure 13-15, there are two solutions when $n = 4$. From the perspective of the backtracking algorithm, an $n \times n$ chessboard has $n^2$ squares, presenting all possible choices `choices`. The state of the chessboard `state` changes continuously as each queen is placed.
![Solution to the 4 queens problem](n_queens_problem.assets/solution_4_queens.png){ class="animation-figure" }
<p align="center"> Figure 13-15 &nbsp; Solution to the 4 queens problem </p>
Figure 13-16 shows the three constraints of this problem: **multiple queens cannot be on the same row, column, or diagonal**. It is important to note that diagonals are divided into the main diagonal `\` and the secondary diagonal `/`.
Figure 13-16 shows the three constraints of this problem: **multiple queens cannot occupy the same row, column, or diagonal**. It is important to note that diagonals are divided into the main diagonal `\` and the secondary diagonal `/`.
![Constraints of the n queens problem](n_queens_problem.assets/n_queens_constraints.png){ class="animation-figure" }
@ -22,7 +22,7 @@ Figure 13-16 shows the three constraints of this problem: **multiple queens cann
### 1. &nbsp; Row-by-row placing strategy
As the number of queens equals the number of rows on the chessboard, both being $n$, it is easy to conclude: **each row on the chessboard allows and only allows one queen to be placed**.
As the number of queens equals the number of rows on the chessboard, both being $n$, it is easy to conclude that **each row on the chessboard allows and only allows one queen to be placed**.
This means that we can adopt a row-by-row placing strategy: starting from the first row, place one queen per row until the last row is reached.
@ -32,7 +32,7 @@ Figure 13-17 shows the row-by-row placing process for the 4 queens problem. Due
<p align="center"> Figure 13-17 &nbsp; Row-by-row placing strategy </p>
Essentially, **the row-by-row placing strategy serves as a pruning function**, avoiding all search branches that would place multiple queens in the same row.
Essentially, **the row-by-row placing strategy serves as a pruning function**, eliminating all search branches that would place multiple queens in the same row.
### 2. &nbsp; Column and diagonal pruning
@ -40,13 +40,13 @@ To satisfy column constraints, we can use a boolean array `cols` of length $n$ t
!!! tip
Note that the origin of the chessboard is located in the upper left corner, where the row index increases from top to bottom, and the column index increases from left to right.
Note that the origin of the matrix is located in the upper left corner, where the row index increases from top to bottom, and the column index increases from left to right.
How about the diagonal constraints? Let the row and column indices of a cell on the chessboard be $(row, col)$. By selecting a specific main diagonal, we notice that the difference $row - col$ is the same for all cells on that diagonal, **meaning that $row - col$ is a constant value on that diagonal**.
How about the diagonal constraints? Let the row and column indices of a certain cell on the chessboard be $(row, col)$. By selecting a specific main diagonal, we notice that the difference $row - col$ is the same for all cells on that diagonal, **meaning that $row - col$ is a constant value on any given main diagonal**.
Thus, if two cells satisfy $row_1 - col_1 = row_2 - col_2$, they are definitely on the same main diagonal. Using this pattern, we can utilize the array `diags1` shown in Figure 13-18 to track whether a queen is on any main diagonal.
In other words, if two cells satisfy $row_1 - col_1 = row_2 - col_2$, they are definitely on the same main diagonal. Using this pattern, we can utilize the array `diags1` shown in Figure 13-18 to track whether a queen is on any main diagonal.
Similarly, **the sum $row + col$ is a constant value for all cells on a secondary diagonal**. We can also use the array `diags2` to handle secondary diagonal constraints.
Similarly, **the sum $row + col$ is a constant value for all cells on a given secondary diagonal**. We can also use the array `diags2` to handle the secondary diagonal constraints.
![Handling column and diagonal constraints](n_queens_problem.assets/n_queens_cols_diagonals.png){ class="animation-figure" }
@ -54,7 +54,7 @@ Similarly, **the sum $row + col$ is a constant value for all cells on a secondar
### 3. &nbsp; Code implementation
Please note, in an $n$-dimensional matrix, the range of $row - col$ is $[-n + 1, n - 1]$, and the range of $row + col$ is $[0, 2n - 2]$, thus the number of both main and secondary diagonals is $2n - 1$, meaning the length of both arrays `diags1` and `diags2` is $2n - 1$.
Please note, in an $n$-dimensional square matrix, the range of $row - col$ is $[-n + 1, n - 1]$, and the range of $row + col$ is $[0, 2n - 2]$. Consequently, the number of both main and secondary diagonals is $2n - 1$, meaning the length of the arrays `diags1` and `diags2` is $2n - 1$.
=== "Python"
@ -291,6 +291,6 @@ Please note, in an $n$-dimensional matrix, the range of $row - col$ is $[-n + 1,
[class]{}-[func]{nQueens}
```
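The row-by-row placement with `cols`, `diags1`, and `diags2` pruning can be sketched compactly as follows; the diagonal indexing matches the ranges derived above, while the function name `n_queens` and the board encoding are assumptions.

```python
def n_queens(n):
    state = [["#"] * n for _ in range(n)]  # "#" empty, "Q" queen
    cols = [False] * n                     # is each column occupied?
    diags1 = [False] * (2 * n - 1)         # main diagonals: index row - col + n - 1
    diags2 = [False] * (2 * n - 1)         # secondary diagonals: index row + col
    res = []

    def backtrack(row):
        if row == n:  # all n queens placed: record a solution
            res.append(["".join(r) for r in state])
            return
        for col in range(n):
            d1, d2 = row - col + n - 1, row + col
            # Prune: skip occupied columns and diagonals
            if cols[col] or diags1[d1] or diags2[d2]:
                continue
            # Trial: place the queen, update the state
            state[row][col] = "Q"
            cols[col] = diags1[d1] = diags2[d2] = True
            backtrack(row + 1)
            # Retreat: remove the queen, revert to the previous state
            state[row][col] = "#"
            cols[col] = diags1[d1] = diags2[d2] = False

    backtrack(0)
    return res

print(len(n_queens(4)))  # 2
```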
Placing $n$ queens row-by-row, considering column constraints, from the first row to the last row there are $n$, $n-1$, $\dots$, $2$, $1$ choices, using $O(n!)$ time. When recording a solution, it is necessary to copy the matrix `state` and add it to `res`, with the copying operation using $O(n^2)$ time. Therefore, **the overall time complexity is $O(n! \cdot n^2)$**. In practice, pruning based on diagonal constraints can significantly reduce the search space, thus often the search efficiency is better than the above time complexity.
Placing $n$ queens row-by-row, considering column constraints, from the first row to the last row, there are $n$, $n-1$, $\dots$, $2$, $1$ choices, using $O(n!)$ time. When recording a solution, it is necessary to copy the matrix `state` and add it to `res`, with the copying operation using $O(n^2)$ time. Therefore, **the overall time complexity is $O(n! \cdot n^2)$**. In practice, pruning based on diagonal constraints can significantly reduce the search space, thus often the search efficiency is better than the aforementioned time complexity.
Array `state` uses $O(n^2)$ space, and arrays `cols`, `diags1`, and `diags2` each use $O(n)$ space. The maximum recursion depth is $n$, using $O(n)$ stack space. Therefore, **the space complexity is $O(n^2)$**.
Array `state` uses $O(n^2)$ space, and arrays `cols`, `diags1`, and `diags2` each use $O(n)$ space as well. The maximum recursion depth is $n$, using $O(n)$ stack frame space. Therefore, **the space complexity is $O(n^2)$**.


@ -6,22 +6,22 @@ comments: true
### 1. &nbsp; Key review
- The essence of the backtracking algorithm is an exhaustive search method, where the solution space is traversed deeply first to find solutions that meet the criteria. During the search, if a satisfying solution is found, it is recorded, until all solutions are found or the search is completed.
- The search process of the backtracking algorithm includes trying and retreating. It uses depth-first search to explore various choices, and when a choice does not meet the constraint conditions, the previous choice is undone, reverting to the previous state, and other options are then continued to be tried. Trying and retreating are operations in opposite directions.
- Backtracking problems usually contain multiple constraints, which can be used to perform pruning operations. Pruning can terminate unnecessary search branches early, greatly enhancing search efficiency.
- Backtracking algorithms are mainly used to solve search problems and constraint satisfaction problems. Although combinatorial optimization problems can be solved using backtracking, there are often more efficient or effective solutions available.
- The permutation problem aims to search for all possible permutations of a given set of elements. We use an array to record whether each element has been chosen, cutting off branches that repeatedly select the same element, ensuring each element is selected only once.
- The essence of the backtracking algorithm is exhaustive search. It seeks solutions that meet the conditions by performing a depth-first traversal of the solution space. During the search, if a satisfying solution is found, it is recorded, until all solutions are found or the traversal is completed.
- The search process of the backtracking algorithm includes trying and backtracking. It uses depth-first search to explore various choices, and when a choice does not meet the constraints, the previous choice is undone. Then it reverts to the previous state and continues to try other options. Trying and backtracking are operations in opposite directions.
- Backtracking problems usually contain multiple constraints. These constraints can be used to perform pruning operations. Pruning can terminate unnecessary search branches in advance, greatly enhancing search efficiency.
- The backtracking algorithm is mainly used to solve search problems and constraint satisfaction problems. Although combinatorial optimization problems can be solved using backtracking, there are often more efficient or effective solutions available.
- The permutation problem aims to search for all possible permutations of the elements in a given set. We use an array to record whether each element has been chosen, avoiding repeated selection of the same element. This ensures that each element is chosen only once.
- In permutation problems, if the set contains duplicate elements, the final result will include duplicate permutations. We need to restrict that identical elements can only be selected once in each round, which is usually implemented using a hash set.
- The subset-sum problem aims to find all subsets in a given set that sum to a target value. The set does not distinguish the order of elements, but the search process outputs all ordered results, producing duplicate subsets. Before backtracking, we sort the data and set a variable to indicate the starting point of each round of traversal, thereby pruning the search branches that generate duplicate subsets.
- For the subset-sum problem, equal elements in the array can produce duplicate sets. Using the precondition that the array is already sorted, we prune by determining if adjacent elements are equal, thus ensuring equal elements are only selected once per round.
- The $n$ queens problem aims to find schemes to place $n$ queens on an $n \times n$ size chessboard in such a way that no two queens can attack each other. The constraints of the problem include row constraints, column constraints, main diagonal constraints, and secondary diagonal constraints. To meet the row constraint, we adopt a strategy of placing one queen per row, ensuring each row has one queen placed.
- The handling of column constraints and diagonal constraints is similar. For column constraints, we use an array to record whether there is a queen in each column, thereby indicating whether the selected cell is legal. For diagonal constraints, we use two arrays to respectively record the presence of queens on the main and secondary diagonals; the challenge is in identifying the row and column index patterns that satisfy the same primary (secondary) diagonal.
- The subset-sum problem aims to find all subsets in a given set that sum to a target value. The set does not distinguish the order of elements, but the search process may generate duplicate subsets. This occurs because the algorithm explores different element orders as unique paths. Before backtracking, we sort the data and set a variable to indicate the starting point of the traversal for each round. This allows us to prune the search branches that generate duplicate subsets.
- For the subset-sum problem, equal elements in the array can produce duplicate sets. Using the precondition that the array is already sorted, we prune by determining if adjacent elements are equal. This ensures that equal elements are only selected once per round.
- The $n$ queens problem aims to find schemes to place $n$ queens on an $n \times n$ chessboard such that no two queens can attack each other. The constraints of the problem include row constraints, column constraints, and constraints on the main and secondary diagonals. To meet the row constraint, we adopt a strategy of placing one queen per row, ensuring each row has one queen placed.
- The handling of column constraints and diagonal constraints is similar. For column constraints, we use an array to record whether there is a queen in each column, thereby indicating whether the selected cell is legal. For diagonal constraints, we use two arrays to respectively record the presence of queens on the main and secondary diagonals. The challenge is to determine the relationship between row and column indices for cells on the same main or secondary diagonal.
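The selection-array and per-round hash-set pruning described for the permutation problem can be sketched in Python. This is a minimal illustrative version (the function name `permutations_ii` and the use of integer inputs are assumptions, not necessarily the book's code), assuming the elements are hashable:

```python
def permutations_ii(nums: list[int]) -> list[list[int]]:
    """Search all unique permutations of nums (which may contain duplicates)."""
    res: list[list[int]] = []
    state: list[int] = []
    selected = [False] * len(nums)  # whether each index is already in state

    def backtrack() -> None:
        # A full-length state is a complete permutation: record it
        if len(state) == len(nums):
            res.append(list(state))
            return
        duplicated: set[int] = set()  # values already tried in this round
        for i, num in enumerate(nums):
            # Prune: skip chosen indices, and equal values tried this round
            if selected[i] or num in duplicated:
                continue
            duplicated.add(num)
            # Attempt: make the choice
            selected[i] = True
            state.append(num)
            backtrack()
            # Retract: undo the choice and revert to the previous state
            state.pop()
            selected[i] = False

    backtrack()
    return res
```

The `selected` array enforces "each element chosen only once," while the `duplicated` set enforces "equal values tried only once per round," matching the two pruning rules above.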
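For the diagonal constraints, the index relationship is: cells on the same main diagonal share a constant $row - col$, and cells on the same secondary diagonal share a constant $row + col$. A hedged sketch of the row-by-row placement strategy (the offset $n - 1$ merely shifts $row - col$ into a valid array index; the `#`/`Q` board encoding is illustrative):

```python
def n_queens(n: int) -> list[list[str]]:
    """Place n queens, one per row, pruning columns and both diagonals."""
    res: list[list[str]] = []
    state = [["#"] * n for _ in range(n)]
    cols = [False] * n                # queen present in each column?
    diags1 = [False] * (2 * n - 1)    # main diagonals: row - col is constant
    diags2 = [False] * (2 * n - 1)    # secondary diagonals: row + col is constant

    def backtrack(row: int) -> None:
        if row == n:  # all rows filled: record the board
            res.append(["".join(r) for r in state])
            return
        for col in range(n):
            d1, d2 = row - col + n - 1, row + col
            if cols[col] or diags1[d1] or diags2[d2]:
                continue  # prune: cell attacked by an earlier queen
            state[row][col] = "Q"
            cols[col] = diags1[d1] = diags2[d2] = True
            backtrack(row + 1)
            state[row][col] = "#"  # retract the placement
            cols[col] = diags1[d1] = diags2[d2] = False

    backtrack(0)
    return res
```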
### 2. &nbsp; Q & A
**Q**: How can we understand the relationship between backtracking and recursion?
Overall, backtracking is a "strategic algorithm," while recursion is more of a "tool."
Overall, backtracking is an "algorithmic strategy," while recursion is more of a "tool."
- Backtracking algorithms are typically based on recursion. Conversely, backtracking is one of the application scenarios of recursion, specifically in search problems.
- The structure of recursion reflects the "sub-problem decomposition" problem-solving paradigm, commonly used in solving problems involving divide and conquer, backtracking, and dynamic programming (memoized recursion).
- The structure of recursion reflects the problem-solving paradigm of "sub-problem decomposition." It is commonly used in solving problems involving divide and conquer, backtracking, and dynamic programming (memoized recursion).

View File

@ -110,7 +110,7 @@ A <u>binary tree</u> is a non-linear data structure that represents the hierarch
val: number;
left: TreeNode | null;
right: TreeNode | null;
constructor(val?: number, left?: TreeNode | null, right?: TreeNode | null) {
this.val = val === undefined ? 0 : val; // Node value
this.left = left === undefined ? null : left; // Reference to left child node
@ -637,7 +637,7 @@ As shown in Figure 7-4, in a <u>perfect binary tree</u>, all levels are complete
### 2. &nbsp; Complete binary tree
As shown in Figure 7-5, a <u>complete binary tree</u> is a binary tree where only the nodes in the bottom level are not completely filled, and the nodes in the bottom level are filled from left to right as much as possible. Please note that a perfect binary tree is also a complete binary tree.
As shown in Figure 7-5, a <u>complete binary tree</u> is a binary tree where only the bottom level is possibly not completely filled, and nodes at the bottom level must be filled continuously from left to right. Note that a perfect binary tree is also a complete binary tree.
![Complete binary tree](binary_tree.assets/complete_binary_tree.png){ class="animation-figure" }
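The "filled continuously from left to right" rule can be verified with a level-order traversal: once a missing child is encountered, no real node may appear afterward. A sketch under assumed names (this minimal `TreeNode` and `is_complete` are for illustration, not the chapter's implementation):

```python
from collections import deque

class TreeNode:
    def __init__(self, val: int = 0):
        self.val = val
        self.left: "TreeNode | None" = None
        self.right: "TreeNode | None" = None

def is_complete(root: "TreeNode | None") -> bool:
    """BFS check: after the first None child, no non-None node may follow."""
    queue: deque = deque([root])
    seen_gap = False
    while queue:
        node = queue.popleft()
        if node is None:
            seen_gap = True
        else:
            if seen_gap:  # a node after a gap violates left-to-right filling
                return False
            queue.append(node.left)
            queue.append(node.right)
    return True
```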

View File

@ -2388,7 +2388,7 @@ comments: true
## 9.2.3 &nbsp; Efficiency comparison
Suppose a graph has $n$ vertices and $m$ edges; Table 9-2 compares the time and space efficiency of the adjacency matrix and the adjacency list.
Suppose a graph has $n$ vertices and $m$ edges; Table 9-2 compares the time and space efficiency of the adjacency matrix and the adjacency list. Note that the adjacency list (linked list) corresponds to the implementation in this text, while the adjacency list (hash table) refers specifically to the implementation in which every linked list is replaced with a hash table.
<p align="center"> Table 9-2 &nbsp; Comparison of adjacency matrix and adjacency list </p>
@ -2396,9 +2396,9 @@ comments: true
| | Adjacency matrix | Adjacency list (linked list) | Adjacency list (hash table) |
| ------------ | -------- | -------------- | ---------------- |
| Check adjacency | $O(1)$ | $O(m)$ | $O(1)$ |
| Check adjacency | $O(1)$ | $O(n)$ | $O(1)$ |
| Add edge | $O(1)$ | $O(1)$ | $O(1)$ |
| Remove edge | $O(1)$ | $O(m)$ | $O(1)$ |
| Remove edge | $O(1)$ | $O(n)$ | $O(1)$ |
| Add vertex | $O(n)$ | $O(1)$ | $O(1)$ |
| Remove vertex | $O(n^2)$ | $O(n + m)$ | $O(n)$ |
| Memory usage | $O(n^2)$ | $O(n + m)$ | $O(n + m)$ |
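The $O(1)$ columns for the hash-table variant follow from storing each vertex's neighbors in a set, so edge lookup and deletion avoid the linear scan of a linked list. A hypothetical minimal sketch (not the book's `GraphAdjList` implementation; "O(1)" here means expected time for hash operations):

```python
class GraphAdjSet:
    """Undirected graph; each vertex maps to a set of its neighbors."""

    def __init__(self) -> None:
        self.adj: dict[int, set[int]] = {}

    def add_vertex(self, v: int) -> None:
        self.adj.setdefault(v, set())          # O(1)

    def add_edge(self, u: int, v: int) -> None:
        self.adj[u].add(v)                     # O(1) expected
        self.adj[v].add(u)

    def remove_edge(self, u: int, v: int) -> None:
        self.adj[u].discard(v)                 # O(1) expected
        self.adj[v].discard(u)

    def is_adjacent(self, u: int, v: int) -> bool:
        # O(1) expected set lookup, versus O(n) to walk a linked list
        return v in self.adj[u]
```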

View File

@ -669,7 +669,7 @@ comments: true
### 2. &nbsp; Complete binary tree
As shown in Figure 7-5, in a <u>complete binary tree</u>, only the nodes in the bottom level are not completely filled, and the bottom-level nodes are filled toward the left as much as possible. Note that a perfect binary tree is also a complete binary tree.
As shown in Figure 7-5, in a <u>complete binary tree</u>, only the bottom level is allowed to be not completely filled, and the bottom-level nodes must be filled continuously from left to right. Note that a perfect binary tree is also a complete binary tree.
![Complete binary tree](binary_tree.assets/complete_binary_tree.png){ class="animation-figure" }