Mirror of https://github.com/krahets/hello-algo.git (synced 2025-07-05 21:19:41 +08:00)
Bug fixes and improvements (#1348)

* Add "reference" for EN version. Bug fixes.
* Unify the figure reference as "the figure below" and "the figure above". Bug fixes.
* Format the EN markdown files.
* Replace "" with <u></u> for EN version and bug fixes
* Fix biary_tree_dfs.png
* Fix biary_tree_dfs.png
* Fix zh-hant/biary_tree_dfs.png
* Fix heap_sort_step1.png
* Sync zh and zh-hant versions.
* Bug fixes
* Fix EN figures
* Bug fixes
* Fix the figure labels for EN version

@@ -4,7 +4,7 @@ In algorithms, the repeated execution of a task is quite common and is closely r

## Iteration

- "Iteration" is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met until this condition is no longer satisfied.
+ <u>Iteration</u> is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met until this condition is no longer satisfied.

### For loops

@@ -16,11 +16,11 @@ The following function uses a `for` loop to perform a summation of $1 + 2 + \dot

[file]{iteration}-[class]{}-[func]{for_loop}
```
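
For readers viewing this diff without the built site, the `[file]{...}` placeholder above expands to per-language listings. A minimal Python sketch of the same function might look like this (an illustration, not the repository's exact listing):

```python
def for_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n using a for loop (illustrative sketch)"""
    res = 0
    for i in range(1, n + 1):  # i takes the values 1, 2, ..., n
        res += i
    return res
```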

- The flowchart below represents this sum function.
+ The figure below represents this sum function.

![Flowchart of the sum function](iteration_and_recursion.assets/iteration.png)

- The number of operations in this summation function is proportional to the size of the input data $n$, or in other words, it has a "linear relationship." This "linear relationship" is what time complexity describes. This topic will be discussed in more detail in the next section.
+ The number of operations in this summation function is proportional to the size of the input data $n$, or in other words, it has a linear relationship. **This "linear relationship" is what time complexity describes**. This topic will be discussed in more detail in the next section.

### While loops

@@ -32,7 +32,7 @@ Below we use a `while` loop to implement the sum $1 + 2 + \dots + n$.

[file]{iteration}-[class]{}-[func]{while_loop}
```
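
Likewise, a hedged Python sketch of the `while`-loop version (illustrative, not the canonical listing):

```python
def while_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n using a while loop (illustrative sketch)"""
    res = 0
    i = 1  # initialize the condition variable
    while i <= n:
        res += i
        i += 1  # update the condition variable
    return res
```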

- **`While` loops provide more flexibility than `for` loops**, especially since they allow for custom initialization and modification of the condition variable at each step.
+ **`while` loops provide more flexibility than `for` loops**, especially since they allow for custom initialization and modification of the condition variable at each step.

For example, in the following code, the condition variable $i$ is updated twice each round, which would be inconvenient to implement with a `for` loop.
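
A sketch of such a loop in Python; the particular update rule (`i += 1` followed by `i *= 2`) is an assumption chosen for illustration:

```python
def while_loop_ii(n: int) -> int:
    """Sum with the condition variable updated twice per round (sketch)"""
    res = 0
    i = 1
    while i <= n:
        res += i
        i += 1  # first update
        i *= 2  # second update
    return res
```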

@@ -50,7 +50,7 @@ We can nest one loop structure within another. Below is an example using `for` l

[file]{iteration}-[class]{}-[func]{nested_for_loop}
```
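
A minimal Python sketch of a nested loop (illustrative):

```python
def nested_for_loop(n: int) -> str:
    """Collect all pairs (i, j) with two nested loops (illustrative sketch)"""
    res = ""
    for i in range(1, n + 1):      # outer loop runs n times
        for j in range(1, n + 1):  # inner loop runs n times per outer round
            res += f"({i}, {j}), "
    return res
```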

- The flowchart below represents this nested loop.
+ The figure below represents this nested loop.

![Flowchart of the nested loop](iteration_and_recursion.assets/nested_iteration.png)

@@ -60,7 +60,7 @@ We can further increase the complexity by adding more nested loops, each level o

## Recursion

- "Recursion" is an algorithmic strategy where a function solves a problem by calling itself. It primarily involves two phases:
+ <u>Recursion</u> is an algorithmic strategy where a function solves a problem by calling itself. It primarily involves two phases:

1. **Calling**: This is where the program repeatedly calls itself, often with progressively smaller or simpler arguments, moving towards the "termination condition."
2. **Returning**: Upon triggering the "termination condition," the program begins to return from the deepest recursive function, aggregating the results of each layer.
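
To make the two phases concrete, here is a hedged Python sketch of recursive summation (an illustration of the pattern, not the repository's exact listing):

```python
def recur(n: int) -> int:
    """Recursive sum 1 + 2 + ... + n (illustrative sketch)"""
    if n == 1:          # termination condition
        return 1
    res = recur(n - 1)  # calling: descend one level
    return n + res      # returning: aggregate results on the way back
```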

@@ -106,7 +106,7 @@ In practice, the depth of recursion allowed by programming languages is usually

### Tail recursion

- Interestingly, **if a function performs its recursive call as the very last step before returning,** it can be optimized by the compiler or interpreter to be as space-efficient as iteration. This scenario is known as "tail recursion."
+ Interestingly, **if a function performs its recursive call as the very last step before returning,** it can be optimized by the compiler or interpreter to be as space-efficient as iteration. This scenario is known as <u>tail recursion</u>.

- **Regular recursion**: In standard recursion, when the function returns to the previous level, it continues to execute more code, requiring the system to save the context of the previous call.
- **Tail recursion**: Here, the recursive call is the final operation before the function returns. This means that upon returning to the previous level, no further actions are needed, so the system does not need to save the context of the previous level.

@@ -117,7 +117,7 @@ For example, in calculating $1 + 2 + \dots + n$, we can make the result variable

[file]{recursion}-[class]{}-[func]{tail_recur}
```
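
A hedged Python sketch of the tail-recursive form; note that CPython itself does not perform tail-call optimization, so the space saving applies in languages and runtimes that do:

```python
def tail_recur(n: int, res: int = 0) -> int:
    """Tail-recursive sum: the recursive call is the last operation (sketch)"""
    if n == 0:                         # termination condition
        return res
    return tail_recur(n - 1, res + n)  # summation happens before the call
```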

- The execution process of tail recursion is shown in the following figure. Comparing regular recursion and tail recursion, the point of the summation operation is different.
+ The execution process of tail recursion is shown in the figure below. Comparing regular recursion and tail recursion, the point of the summation operation is different.

- **Regular recursion**: The summation operation occurs during the "returning" phase, requiring another summation after each layer returns.
- **Tail recursion**: The summation operation occurs during the "calling" phase, and the "returning" phase only involves returning through each layer.

@@ -147,7 +147,7 @@ Using the recursive relation, and considering the first two numbers as terminati

[file]{recursion}-[class]{}-[func]{fib}
```
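
A hedged Python sketch of the doubly recursive Fibonacci function, assuming the convention $f(1) = 0$, $f(2) = 1$:

```python
def fib(n: int) -> int:
    """Fibonacci via naive double recursion, O(2^n) calls (sketch)"""
    if n == 1 or n == 2:            # termination: f(1) = 0, f(2) = 1
        return n - 1
    return fib(n - 1) + fib(n - 2)  # one call branches into two
```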

- Observing the above code, we see that it recursively calls two functions within itself, **meaning that one call generates two branching calls**. As illustrated below, this continuous recursive calling eventually creates a "recursion tree" with a depth of $n$.
+ Observing the above code, we see that it recursively calls two functions within itself, **meaning that one call generates two branching calls**. As illustrated in the figure below, this continuous recursive calling eventually creates a <u>recursion tree</u> with a depth of $n$.

![Fibonacci sequence recursion tree](iteration_and_recursion.assets/recursion_tree.png)

@@ -24,11 +24,11 @@ On the other hand, **conducting a full test is very resource-intensive**. As the

## Theoretical estimation

- Due to the significant limitations of actual testing, we can consider evaluating algorithm efficiency solely through calculations. This estimation method is known as "asymptotic complexity analysis," or simply "complexity analysis."
+ Due to the significant limitations of actual testing, we can consider evaluating algorithm efficiency solely through calculations. This estimation method is known as <u>asymptotic complexity analysis</u>, or simply <u>complexity analysis</u>.

Complexity analysis reflects the relationship between the time and space resources required for algorithm execution and the size of the input data. **It describes the trend of growth in the time and space required by the algorithm as the size of the input data increases**. This definition might sound complex, but we can break it down into three key points to understand it better.

- - "Time and space resources" correspond to "time complexity" and "space complexity," respectively.
+ - "Time and space resources" correspond to <u>time complexity</u> and <u>space complexity</u>, respectively.
- "As the size of input data increases" means that complexity reflects the relationship between algorithm efficiency and the volume of input data.
- "The trend of growth in time and space" indicates that complexity analysis focuses not on the specific values of runtime or space occupied but on the "rate" at which time or space grows.

@@ -1,6 +1,6 @@

# Space complexity

- "Space complexity" is used to measure the growth trend of the memory space occupied by an algorithm as the amount of data increases. This concept is very similar to time complexity, except that "running time" is replaced with "occupied memory space".
+ <u>Space complexity</u> is used to measure the growth trend of the memory space occupied by an algorithm as the amount of data increases. This concept is very similar to time complexity, except that "running time" is replaced with "occupied memory space".

## Space related to algorithms

@@ -725,12 +725,12 @@ The time complexity of both `loop()` and `recur()` functions is $O(n)$, but thei

## Common types

- Let the size of the input data be $n$, the following chart displays common types of space complexities (arranged from low to high).
+ Let the size of the input data be $n$, the figure below displays common types of space complexities (arranged from low to high).

$$
\begin{aligned}
- O(1) < O(\log n) < O(n) < O(n^2) < O(2^n) \newline
- \text{Constant Order} < \text{Logarithmic Order} < \text{Linear Order} < \text{Quadratic Order} < \text{Exponential Order}
+ & O(1) < O(\log n) < O(n) < O(n^2) < O(2^n) \newline
+ & \text{Constant} < \text{Logarithmic} < \text{Linear} < \text{Quadratic} < \text{Exponential}
\end{aligned}
$$

@@ -754,7 +754,7 @@ Linear order is common in arrays, linked lists, stacks, queues, etc., where the

[file]{space_complexity}-[class]{}-[func]{linear}
```
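
A minimal Python sketch of linear-order space usage (illustrative):

```python
def linear(n: int) -> None:
    """O(n) space: structure sizes grow linearly with n (sketch)"""
    nums = [0] * n                           # list with n elements
    mapping = {i: str(i) for i in range(n)}  # hash map with n entries
```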

- As shown below, this function's recursive depth is $n$, meaning there are $n$ instances of unreturned `linear_recur()` function, using $O(n)$ size of stack frame space:
+ As shown in the figure below, this function's recursive depth is $n$, meaning there are $n$ instances of unreturned `linear_recur()` function, using $O(n)$ size of stack frame space:
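
Ahead of the language-specific listing below, a hedged Python sketch of such a function:

```python
def linear_recur(n: int) -> int:
    """O(n) space: recursion depth n keeps n stack frames alive at once (sketch)"""
    if n == 1:
        return 1
    return linear_recur(n - 1) + 1
```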

```src
[file]{space_complexity}-[class]{}-[func]{linear_recur}

@@ -770,7 +770,7 @@ Quadratic order is common in matrices and graphs, where the number of elements i

[file]{space_complexity}-[class]{}-[func]{quadratic}
```
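
A minimal Python sketch of quadratic-order space usage (illustrative):

```python
def quadratic(n: int) -> list[list[int]]:
    """O(n^2) space: an n x n matrix holds n * n elements (sketch)"""
    return [[0] * n for _ in range(n)]
```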

- As shown below, the recursive depth of this function is $n$, and in each recursive call, an array is initialized with lengths $n$, $n-1$, $\dots$, $2$, $1$, averaging $n/2$, thus overall occupying $O(n^2)$ space:
+ As shown in the figure below, the recursive depth of this function is $n$, and in each recursive call, an array is initialized with lengths $n$, $n-1$, $\dots$, $2$, $1$, averaging $n/2$, thus overall occupying $O(n^2)$ space:
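
Ahead of the listing below, a hedged Python sketch of this pattern:

```python
def quadratic_recur(n: int) -> int:
    """O(n^2) space: depth n, and each frame allocates an array of length n (sketch)"""
    if n <= 0:
        return 0
    nums = [0] * n  # arrays of lengths n, n-1, ..., 1 coexist on the stack
    return quadratic_recur(n - 1)
```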

```src
[file]{space_complexity}-[class]{}-[func]{quadratic_recur}

@@ -780,7 +780,7 @@ As shown below, the recursive depth of this function is $n$, and in each recursi

### Exponential order $O(2^n)$

- Exponential order is common in binary trees. Observe the below image, a "full binary tree" with $n$ levels has $2^n - 1$ nodes, occupying $O(2^n)$ space:
+ Exponential order is common in binary trees. Observe the figure below, a "full binary tree" with $n$ levels has $2^n - 1$ nodes, occupying $O(2^n)$ space:
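
Ahead of the listing below, a hedged Python sketch of building such a tree (the `TreeNode` class is assumed for illustration):

```python
class TreeNode:
    """Minimal binary tree node (assumed for this sketch)"""
    def __init__(self, val: int = 0):
        self.val = val
        self.left = None
        self.right = None

def build_tree(n: int) -> TreeNode | None:
    """O(2^n) space: a full binary tree of depth n has 2^n - 1 nodes (sketch)"""
    if n == 0:
        return None
    root = TreeNode(0)
    root.left = build_tree(n - 1)
    root.right = build_tree(n - 1)
    return root
```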

```src
[file]{space_complexity}-[class]{}-[func]{build_tree}

@@ -11,7 +11,7 @@

**Time Complexity**

- Time complexity measures the trend of an algorithm's running time with the increase in data volume, effectively assessing algorithm efficiency. However, it can fail in certain cases, such as with small input data volumes or when time complexities are the same, making it challenging to precisely compare the efficiency of algorithms.
- - Worst-case time complexity is denoted using big O notation, representing the asymptotic upper bound, reflecting the growth level of the number of operations $T(n)$ as $n$ approaches infinity.
+ - Worst-case time complexity is denoted using big-$O$ notation, representing the asymptotic upper bound, reflecting the growth level of the number of operations $T(n)$ as $n$ approaches infinity.
- Calculating time complexity involves two steps: first counting the number of operations, then determining the asymptotic upper bound.
- Common time complexities, arranged from low to high, include $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$, $O(2^n)$, and $O(n!)$, among others.
- The time complexity of some algorithms is not fixed and depends on the distribution of input data. Time complexities are divided into worst, best, and average cases. The best case is rarely used because input data generally needs to meet strict conditions to achieve the best case.

@@ -32,7 +32,7 @@ Theoretically, the space complexity of a tail-recursive function can be optimize

**Q**: What is the difference between the terms "function" and "method"?

- A "function" can be executed independently, with all parameters passed explicitly. A "method" is associated with an object and is implicitly passed to the object calling it, able to operate on the data contained within an instance of a class.
+ A <u>function</u> can be executed independently, with all parameters passed explicitly. A <u>method</u> is associated with an object and is implicitly passed to the object calling it, able to operate on the data contained within an instance of a class.

Here are some examples from common programming languages:
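
For instance, a hedged Python sketch (the names are illustrative, not from the original examples):

```python
def greet(name: str) -> str:      # a function: invoked independently
    return f"Hello, {name}!"

class Greeter:
    def __init__(self, name: str):
        self.name = name

    def greet(self) -> str:            # a method: bound to an instance
        return f"Hello, {self.name}!"  # operates on the instance's data
```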

@@ -464,7 +464,7 @@ Let's understand this concept of "time growth trend" with an example. Assume the

}
```

- The following figure shows the time complexities of these three algorithms.
+ The figure below shows the time complexities of these three algorithms.

- Algorithm `A` has just one print operation, and its run time does not grow with $n$. Its time complexity is considered "constant order."
- Algorithm `B` involves a print operation looping $n$ times, and its run time grows linearly with $n$. Its time complexity is "linear order."

@@ -661,7 +661,7 @@ $$

T(n) = 3 + 2n
$$

- Since $T(n)$ is a linear function, its growth trend is linear, and therefore, its time complexity is of linear order, denoted as $O(n)$. This mathematical notation, known as "big-O notation," represents the "asymptotic upper bound" of the function $T(n)$.
+ Since $T(n)$ is a linear function, its growth trend is linear, and therefore, its time complexity is of linear order, denoted as $O(n)$. This mathematical notation, known as <u>big-O notation</u>, represents the <u>asymptotic upper bound</u> of the function $T(n)$.

In essence, time complexity analysis is about finding the asymptotic upper bound of the "number of operations $T(n)$". It has a precise mathematical definition.

@@ -669,7 +669,7 @@ In essence, time complexity analysis is about finding the asymptotic upper bound

If there exist positive real numbers $c$ and $n_0$ such that for all $n > n_0$, $T(n) \leq c \cdot f(n)$, then $f(n)$ is considered an asymptotic upper bound of $T(n)$, denoted as $T(n) = O(f(n))$.
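
As a worked check against the earlier example $T(n) = 3 + 2n$ (the choice of constants here is one of many valid options):

$$
T(n) = 3 + 2n \leq 3n = c \cdot f(n) \quad \text{for all } n > n_0 = 3, \text{ with } c = 3,\ f(n) = n
$$

so $T(n) = O(n)$.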

- As illustrated below, calculating the asymptotic upper bound involves finding a function $f(n)$ such that, as $n$ approaches infinity, $T(n)$ and $f(n)$ have the same growth order, differing only by a constant factor $c$.
+ As shown in the figure below, calculating the asymptotic upper bound involves finding a function $f(n)$ such that, as $n$ approaches infinity, $T(n)$ and $f(n)$ have the same growth order, differing only by a constant factor $c$.

![Asymptotic upper bound of a function](time_complexity.assets/asymptotic_upper_bound.png)

@@ -951,12 +951,12 @@ The following table illustrates examples of different operation counts and their

## Common types of time complexity

- Let's consider the input data size as $n$. The common types of time complexities are illustrated below, arranged from lowest to highest:
+ Let's consider the input data size as $n$. The common types of time complexities are shown in the figure below, arranged from lowest to highest:

$$
\begin{aligned}
- O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n) < O(n!) \newline
- \text{Constant Order} < \text{Logarithmic Order} < \text{Linear Order} < \text{Linear-Logarithmic Order} < \text{Quadratic Order} < \text{Exponential Order} < \text{Factorial Order}
+ & O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n) < O(n!) \newline
+ & \text{Constant} < \text{Log} < \text{Linear} < \text{Linear-Log} < \text{Quadratic} < \text{Exp} < \text{Factorial}
\end{aligned}
$$

@@ -994,7 +994,7 @@ Quadratic order means the number of operations grows quadratically with the inpu

[file]{time_complexity}-[class]{}-[func]{quadratic}
```
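
A minimal Python sketch of quadratic-order running time (illustrative):

```python
def quadratic(n: int) -> int:
    """O(n^2) time: nested loops perform n * n operations (sketch)"""
    count = 0
    for i in range(n):      # outer loop: n rounds
        for j in range(n):  # inner loop: n operations per round
            count += 1
    return count            # n^2 operations in total
```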

- The following image compares constant order, linear order, and quadratic order time complexities.
+ The figure below compares constant order, linear order, and quadratic order time complexities.

![Constant, linear, and quadratic order time complexities](time_complexity.assets/time_complexity_constant_linear_quadratic.png)

@@ -1008,7 +1008,7 @@ For instance, in bubble sort, the outer loop runs $n - 1$ times, and the inner l

Biological "cell division" is a classic example of exponential order growth: starting with one cell, it becomes two after one division, four after two divisions, and so on, resulting in $2^n$ cells after $n$ divisions.

- The following image and code simulate the cell division process, with a time complexity of $O(2^n)$:
+ The figure below and code simulate the cell division process, with a time complexity of $O(2^n)$:
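
Ahead of the listing below, a hedged Python sketch of the simulation:

```python
def exponential(n: int) -> int:
    """Cell division: the cell count doubles each round, O(2^n) growth (sketch)"""
    count, base = 0, 1
    for _ in range(n):        # each round, every existing cell splits into two
        for _ in range(base):
            count += 1
        base *= 2
    return count              # 2^n - 1 cells counted after n rounds
```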

```src
[file]{time_complexity}-[class]{}-[func]{exponential}

@@ -1028,7 +1028,7 @@ Exponential order growth is extremely rapid and is commonly seen in exhaustive s

In contrast to exponential order, logarithmic order reflects situations where "the size is halved each round." Given an input data size $n$, since the size is halved each round, the number of iterations is $\log_2 n$, the inverse function of $2^n$.

- The following image and code simulate the "halving each round" process, with a time complexity of $O(\log_2 n)$, commonly abbreviated as $O(\log n)$:
+ The figure below and code simulate the "halving each round" process, with a time complexity of $O(\log_2 n)$, commonly abbreviated as $O(\log n)$:
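
Ahead of the listing below, a hedged Python sketch of the halving process:

```python
def logarithmic(n: int) -> int:
    """Halve n each round: about log2(n) iterations, O(log n) time (sketch)"""
    count = 0
    while n > 1:
        n = n / 2   # the remaining size is halved each round
        count += 1
    return count
```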

```src
[file]{time_complexity}-[class]{}-[func]{logarithmic}

@@ -1062,7 +1062,7 @@ Linear-logarithmic order often appears in nested loops, with the complexities of

[file]{time_complexity}-[class]{}-[func]{linear_log_recur}
```
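
A hedged Python sketch of this pattern, combining halving recursion with a linear pass at each level:

```python
def linear_log_recur(n: int) -> int:
    """O(n log n): log2(n) halving levels, with O(n) work per level (sketch)"""
    if n <= 1:
        return 1
    # split into two halves (the halving contributes the log n factor)
    count = linear_log_recur(n // 2) + linear_log_recur(n // 2)
    for _ in range(n):  # linear work at the current level
        count += 1
    return count
```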

- The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$.
+ The figure below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$.

![Linear-logarithmic order time complexity](time_complexity.assets/time_complexity_logarithmic_linear.png)

@@ -1076,7 +1076,7 @@ $$

n! = n \times (n - 1) \times (n - 2) \times \dots \times 2 \times 1
$$

- Factorials are typically implemented using recursion. As shown in the image and code below, the first level splits into $n$ branches, the second level into $n - 1$ branches, and so on, stopping after the $n$th level:
+ Factorials are typically implemented using recursion. As shown in the code and the figure below, the first level splits into $n$ branches, the second level into $n - 1$ branches, and so on, stopping after the $n$th level:
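
Ahead of the listing below, a hedged Python sketch of factorial-order branching:

```python
def factorial_recur(n: int) -> int:
    """O(n!): the first level splits into n branches, the next into n - 1, ... (sketch)"""
    if n == 0:  # termination condition
        return 1
    count = 0
    for _ in range(n):                   # split into n branches at this level
        count += factorial_recur(n - 1)
    return count
```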

```src
[file]{time_complexity}-[class]{}-[func]{factorial_recur}