mirror of https://github.com/krahets/hello-algo.git (synced 2025-11-02 04:31:55 +08:00)

translation: Capitalize all the headers, list headers and figure captions (#1206)

* Capitalize all the headers, list headers and figure captions
* Fix the term "LRU"
* Fix the names of source code link in avl_tree.md
* Capitalize only first letter for nav trees in mkdocs.yml
* Update code comments
* Update linked_list.md
# Complexity analysis

!!! abstract

# Iteration and recursion

In algorithms, the repeated execution of a task is quite common and is closely related to the analysis of complexity. Therefore, before delving into the concepts of time complexity and space complexity, let's first explore how to implement repetitive tasks in programming. This involves understanding two fundamental programming control structures: iteration and recursion.
"Iteration" is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met, stopping once the condition is no longer satisfied.

### For loops

The `for` loop is one of the most common forms of iteration, and **it's particularly suitable when the number of iterations is known in advance**.

The following function uses a `for` loop to perform a summation of $1 + 2 + \dots + n$. The flowchart below represents this sum function.

![Flowchart of the sum function](iteration_and_recursion.assets/iteration.png)

The number of operations in this summation function is proportional to the size of the input data $n$, or in other words, it has a "linear relationship." This "linear relationship" is what time complexity describes. This topic will be discussed in more detail in the next section.
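A minimal Python sketch of such a loop-based sum function (the name `for_loop_sum` is illustrative, not the book's identifier):

```python
def for_loop_sum(n: int) -> int:
    """Sum 1 + 2 + ... + n with a for loop."""
    res = 0
    for i in range(1, n + 1):  # i takes the values 1, 2, ..., n
        res += i               # one addition per iteration -> about n operations in total
    return res
```

Each pass through the loop performs one addition, which is why the operation count grows linearly with $n$.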
### While loops

Similar to `for` loops, `while` loops are another approach for implementing iteration. In a `while` loop, the program checks a condition at the beginning of each iteration; if the condition is true, execution continues; otherwise, the loop ends.
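The same summation can be written with a `while` loop; a minimal sketch (illustrative name `while_loop_sum`) is:

```python
def while_loop_sum(n: int) -> int:
    """Sum 1 + 2 + ... + n with a while loop."""
    res = 0
    i = 1              # initialize the condition variable
    while i <= n:      # keep looping while the condition holds
        res += i
        i += 1         # the condition variable must be updated explicitly
    return res
```

Unlike a `for` loop, the initialization and update of the condition variable are written out by hand, which is part of what makes `while` loops more flexible.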
Overall, **`for` loops are more concise, while `while` loops are more flexible**. Both can implement iterative structures. Which one to use should be determined based on the specific requirements of the problem.

### Nested loops

We can nest one loop structure within another. Below is an example using `for` loops:
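A minimal sketch of two nested `for` loops (illustrative name `nested_for_loop`):

```python
def nested_for_loop(n: int) -> str:
    """Traverse all pairs (i, j) with two nested for loops."""
    res = ""
    for i in range(1, n + 1):       # the outer loop runs n times
        for j in range(1, n + 1):   # the inner loop runs n times per outer pass
            res += f"({i}, {j}), "  # the body therefore executes n * n times
    return res
```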
The flowchart below represents this nested loop.

![Flowchart of the nested loop](iteration_and_recursion.assets/nested_iteration.png)

In such cases, the number of operations of the function is proportional to $n^2$, meaning the algorithm's runtime and the size of the input data $n$ have a "quadratic relationship."
Observe the following code, where simply calling the function `recur(n)` computes the sum of $1 + 2 + \dots + n$.

The figure below shows the recursive process of this function.

![Recursive process of the sum function](iteration_and_recursion.assets/recursion_sum.png)

Although iteration and recursion can achieve the same results from a computational standpoint, **they represent two entirely different paradigms of thinking and problem-solving**.

Let's take the earlier example of the summation function, defined as $f(n) = 1 + 2 + \dots + n$.

- **Iteration**: In this approach, we simulate the summation process within a loop. Starting from $1$ and traversing to $n$, we perform the summation operation in each iteration to eventually compute $f(n)$.
- **Recursion**: Here, the problem is broken down into a sub-problem: $f(n) = n + f(n-1)$. This decomposition continues recursively until reaching the base case, $f(1) = 1$, at which point the recursion terminates. A summation following this decomposition is sketched after this list.
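A minimal sketch of such a recursive summation (the name `recur_sum` is illustrative):

```python
def recur_sum(n: int) -> int:
    """Compute f(n) = n + f(n - 1) with base case f(1) = 1."""
    if n == 1:              # base case terminates the recursion
        return 1
    res = recur_sum(n - 1)  # "calling" phase: break the problem into a smaller one
    return n + res          # "returning" phase: combine the sub-result
```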
### Call stack

Every time a recursive function calls itself, the system allocates memory for the newly initiated function to store local variables, the return address, and other relevant information. This leads to two primary outcomes.

As shown in the figure below, there are $n$ unreturned recursive functions before triggering the termination condition, indicating a **recursion depth of $n$**.

![Recursion call depth](iteration_and_recursion.assets/recursion_sum_depth.png)

In practice, the depth of recursion allowed by programming languages is usually limited, and excessively deep recursion can lead to stack overflow errors.

### Tail recursion

Interestingly, **if a function performs its recursive call as the very last step before returning**, it can be optimized by the compiler or interpreter to be as space-efficient as iteration. This scenario is known as "tail recursion."

- **Regular recursion**: In standard recursion, when the function returns to the previous level, it continues to execute more code, requiring the system to save the context of the previous call.
- **Tail recursion**: Here, the recursive call is the final operation before the function returns. This means that upon returning to the previous level, no further actions are needed, so the system does not need to save the context of the previous level.

For example, in calculating $1 + 2 + \dots + n$, we can make the result variable `res` a parameter of the function, thereby achieving tail recursion:
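A sketch of the idea, with `res` carried as a parameter (illustrative name `tail_recur_sum`):

```python
def tail_recur_sum(n: int, res: int = 0) -> int:
    """Tail-recursive summation: the recursive call is the last operation."""
    if n == 0:                             # every term has been accumulated into res
        return res
    return tail_recur_sum(n - 1, res + n)  # add before the call; nothing happens after it returns
```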
The execution process of tail recursion is shown in the following figure. Comparing regular recursion and tail recursion, the point of the summation operation is different.

- **Regular recursion**: The summation operation occurs during the "returning" phase, requiring another summation after each layer returns.
- **Tail recursion**: The summation operation occurs during the "calling" phase, and the "returning" phase only involves returning through each layer.

![Tail recursion process](iteration_and_recursion.assets/tail_recursion_sum.png)

!!! tip

    Note that many compilers or interpreters do not support tail recursion optimization. For example, Python does not support tail recursion optimization by default, so even if the function is in the form of tail recursion, it may still encounter stack overflow issues.

### Recursion tree

When dealing with algorithms related to "divide and conquer", recursion often offers a more intuitive approach and more readable code than iteration. Take the "Fibonacci sequence" as an example.
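Using the relation $f(n) = f(n - 1) + f(n - 2)$ with $f(1) = 0$ and $f(2) = 1$, a recursive Fibonacci function can be sketched as follows (a standard formulation, not necessarily the book's exact code):

```python
def fib(n: int) -> int:
    """Return the n-th Fibonacci number via fib(n) = fib(n - 1) + fib(n - 2)."""
    if n == 1 or n == 2:            # the first two numbers serve as termination conditions
        return n - 1                # fib(1) = 0, fib(2) = 1
    return fib(n - 1) + fib(n - 2)  # each call branches into two further calls
```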
Using the recursive relation, and considering the first two numbers as termination conditions, we obtain the recursive code.

Observing the above code, we see that it recursively calls two functions within itself, **meaning that one call generates two branching calls**. As illustrated below, this continuous recursive calling eventually creates a "recursion tree" with a depth of $n$.

![Fibonacci sequence recursion tree](iteration_and_recursion.assets/recursion_tree.png)

Fundamentally, recursion embodies the paradigm of "breaking down a problem into smaller sub-problems." This divide-and-conquer strategy is crucial.
Summarizing the above content, the following table shows the differences between iteration and recursion in terms of implementation, performance, and applicability.

<p align="center"> Table: Comparison of iteration and recursion characteristics </p>

|                    | Iteration | Recursion |
| ------------------ | --------- | --------- |
# Algorithm efficiency assessment

In algorithm design, we pursue the following two objectives in sequence.
In other words, under the premise of being able to solve the problem, algorithm efficiency has become the main criterion for evaluating the merits of an algorithm, which includes the following two dimensions.

- **Time efficiency**: The speed at which an algorithm runs.
- **Space efficiency**: The size of the memory space occupied by an algorithm.

In short, **our goal is to design data structures and algorithms that are both fast and memory-efficient**. Effectively assessing algorithm efficiency is crucial because only then can we compare various algorithms and guide the process of algorithm design and optimization.

There are mainly two methods of efficiency assessment: actual testing and theoretical estimation.

## Actual testing

Suppose we have algorithms `A` and `B`, both capable of solving the same problem, and we need to compare their efficiencies. The most direct method is to run both algorithms on a computer and to monitor and record their runtime and memory usage. This assessment method reflects the actual situation but has significant limitations.
On one hand, **it's difficult to eliminate interference from the testing environment**.

On the other hand, **conducting a full test is very resource-intensive**. As the volume of input data changes, the efficiency of the algorithms may vary. For example, with smaller data volumes, algorithm `A` might run faster than `B`, but the opposite might be true with larger data volumes. Therefore, to draw convincing conclusions, we need to test a wide range of input data sizes, which requires significant computational resources.

## Theoretical estimation

Due to the significant limitations of actual testing, we can consider evaluating algorithm efficiency solely through calculations. This estimation method is known as "asymptotic complexity analysis," or simply "complexity analysis."
# Space complexity

"Space complexity" is used to measure the growth trend of the memory space occupied by an algorithm as the amount of data increases. This concept is very similar to time complexity, except that "running time" is replaced with "occupied memory space".

## Space related to algorithms

The memory space used by an algorithm during its execution mainly includes the following types.

- **Input space**: Used to store the input data of the algorithm.
- **Temporary space**: Used to store variables, objects, function contexts, and other data during the algorithm's execution.
- **Output space**: Used to store the output data of the algorithm.

Generally, the scope of space complexity statistics includes both "Temporary Space" and "Output Space".

Temporary space can be further divided into three parts.

- **Temporary data**: Used to save various constants, variables, objects, etc., during the algorithm's execution.
- **Stack frame space**: Used to save the context data of the called function. The system creates a stack frame at the top of the stack each time a function is called, and the stack frame space is released after the function returns.
- **Instruction space**: Used to store compiled program instructions, which are usually negligible in actual statistics.

When analyzing the space complexity of a program, **we typically count the Temporary Data, Stack Frame Space, and Output Data**, as shown in the figure below.

![Space types used in algorithms](space_complexity.assets/space_types.png)

The relevant code is as follows:
```python title=""
class Node:
    """Classes"""
    def __init__(self, x: int):
        self.val: int = x              # node value
        self.next: Node | None = None  # reference to the next node

def function() -> int:
    """Functions"""
    # Perform certain operations...
    return 0
```
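For illustration only, a hypothetical `algorithm()` function (it does not appear in the listing above) could use the class and function just defined, touching each kind of space; the parameter `n` occupies input space:

```python
def algorithm(n: int) -> int:
    """Hypothetical example tying the pieces above together."""
    a = 0                    # temporary data: a constant-size variable
    node = Node(0)           # temporary data: an object
    c = function()           # the call occupies stack frame space until it returns
    return a + node.val + c  # the value handed back to the caller is output data
```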
```rust title=""
    next: Option<Rc<RefCell<Node>>>,
}

/* Constructor */
impl Node {
    fn new(val: i32) -> Self {
        Self { val: val, next: None }
```
## Calculation method

The method for calculating space complexity is roughly similar to that of time complexity, with the only change being the shift of the statistical object from "number of operations" to "size of used space".
Consider the following code; the term "worst-case" in worst-case space complexity has two meanings.

```rust title=""
fn algorithm(n: i32) {
    let a = 0;                          // O(1)
    let b = [0; 10000];                 // O(1)
    if n > 10 {
        let nums = vec![0; n as usize]; // O(n)
    }
}
```
```python title=""
    return 0

def loop(n: int):
    """Loop O(1)"""
    for _ in range(n):
        function()

def recur(n: int):
    """Recursion O(n)"""
    if n == 1:
        return
    return recur(n - 1)
```
The time complexity of both `loop()` and `recur()` functions is $O(n)$, but their space complexities differ.

- The `loop()` function calls `function()` $n$ times in a loop, where each iteration's `function()` returns and releases its stack frame space, so the space complexity remains $O(1)$.
- The recursive function `recur()` will have $n$ instances of unreturned `recur()` existing simultaneously during its execution, thus occupying $O(n)$ stack frame space.

## Common types

Let the size of the input data be $n$; the following chart displays common types of space complexities, arranged from low to high.
$$
O(1) < O(\log n) < O(n) < O(n^2) < O(2^n)
$$

![Common types of space complexity](space_complexity.assets/space_complexity_common_types.png)

### Constant order $O(1)$

Constant order is common in constants, variables, and objects that are independent of the size of the input data $n$.
Note that memory occupied by initializing variables or calling functions in a loop is released once each iteration or call ends, so it does not accumulate and the space complexity remains $O(1)$:

```
[file]{space_complexity}-[class]{}-[func]{constant}
```

### Linear order $O(n)$

Linear order is common in arrays, linked lists, stacks, queues, etc., where the number of elements is proportional to $n$:
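A minimal sketch of such linear-order structures (illustrative, not the book's exact example):

```python
def linear_space(n: int):
    """Allocate structures whose size grows linearly with n."""
    nums = [0] * n                           # a list of length n: O(n)
    mapping = {i: str(i) for i in range(n)}  # a hash map with n entries: O(n)
    return nums, mapping
```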
As shown below, this function's recursive depth is $n$, meaning there are $n$ instances of unreturned `linear_recur()` at the same time, occupying $O(n)$ stack frame space:

```
[file]{space_complexity}-[class]{}-[func]{linear_recur}
```

![Recursive function generating linear order space complexity](space_complexity.assets/space_complexity_recursive_linear.png)

### Quadratic order $O(n^2)$

Quadratic order is common in matrices and graphs, where the number of elements is quadratic to $n$:
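For instance, an $n \times n$ matrix occupies $O(n^2)$ space (illustrative sketch):

```python
def quadratic_space(n: int) -> list[list[int]]:
    """Allocate an n x n matrix: n rows of n elements each."""
    matrix = [[0] * n for _ in range(n)]  # n * n elements in total -> O(n^2)
    return matrix
```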
As shown below, the recursive depth of this function is $n$, and an array is allocated in each recursive call, so the total space occupied is $O(n^2)$:

```
[file]{space_complexity}-[class]{}-[func]{quadratic_recur}
```

![Recursive function generating quadratic order space complexity](space_complexity.assets/space_complexity_recursive_quadratic.png)

### Exponential order $O(2^n)$

Exponential order is common in binary trees. As shown in the figure below, a "full binary tree" with $n$ levels has $2^n - 1$ nodes, occupying $O(2^n)$ space:
```
[file]{space_complexity}-[class]{}-[func]{build_tree}
```

![Full binary tree generating exponential order space complexity](space_complexity.assets/space_complexity_exponential.png)

### Logarithmic order $O(\log n)$

Logarithmic order is common in divide-and-conquer algorithms. For example, in merge sort, an array of length $n$ is recursively divided in half each round, forming a recursion tree of height $\log n$ and using $O(\log n)$ stack frame space.

Another example is converting a number to a string. Given a positive integer $n$, its number of digits is $\log_{10} n + 1$, corresponding to the length of the string, thus the space complexity is $O(\log_{10} n + 1) = O(\log n)$.

## Balancing time and space

Ideally, we aim for both time complexity and space complexity to be optimal. However, in practice, optimizing both simultaneously is often difficult.
# Summary

### Key review

**Algorithm Efficiency Assessment**
# Time complexity

Time complexity is a concept used to measure how the run time of an algorithm increases with the size of the input data. Understanding time complexity is crucial for accurately assessing the efficiency of an algorithm.
However, in practice, **counting the run time of an algorithm is neither practical nor reasonable**. First, we don't want to tie the estimated time to the running platform, as algorithms need to run on various platforms. Second, it's challenging to know the run time for each type of operation, making the estimation process difficult.

## Assessing time growth trend

Time complexity analysis does not count the algorithm's run time, **but rather the growth trend of the run time as the data volume increases**.
The following figure shows the time complexities of these three algorithms.

- Algorithm `B` involves a print operation looping $n$ times, and its run time grows linearly with $n$. Its time complexity is "linear order."
- Algorithm `C` has a print operation looping 1,000,000 times. Although it takes a long time, it is independent of the input data size $n$. Therefore, the time complexity of `C` is the same as `A`, which is "constant order."

![Time growth trend of algorithms A, B, and C](time_complexity.assets/time_complexity_simple_example.png)

Compared to directly counting the run time of an algorithm, what are the characteristics of time complexity analysis?

- **Time complexity analysis is more straightforward**. Obviously, the running platform and the types of computational operations are irrelevant to the trend of run time growth. Therefore, in time complexity analysis, we can simply treat the execution time of all computational operations as the same "unit time," simplifying the "computational operation run time count" to a "computational operation count." This significantly reduces the complexity of estimation.
- **Time complexity has its limitations**. For example, although algorithms `A` and `C` have the same time complexity, their actual run times can be quite different. Similarly, even though algorithm `B` has a higher time complexity than `C`, it is clearly superior when the input data size $n$ is small. In these cases, it's difficult to judge the efficiency of algorithms based solely on time complexity. Nonetheless, despite these issues, complexity analysis remains the most effective and commonly used method for evaluating algorithm efficiency.

## Asymptotic upper bound

Consider a function with an input size of $n$:

In essence, time complexity analysis is about finding the asymptotic upper bound of the operation count $T(n)$.

As illustrated below, calculating the asymptotic upper bound involves finding a function $f(n)$ such that, as $n$ approaches infinity, $T(n)$ and $f(n)$ have the same growth order, differing only by a constant factor $c$.
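Stated with the usual textbook notation, this is the definition of big-$O$:

$$
T(n) = O(f(n)) \iff \exists\, c > 0,\ n_0 > 0 \ \text{such that} \ T(n) \leq c \cdot f(n) \ \text{for all} \ n > n_0
$$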
![Asymptotic upper bound of a function](time_complexity.assets/asymptotic_upper_bound.png)

## Calculation method

While the concept of asymptotic upper bound might seem mathematically dense, you don't need to fully grasp it right away. Let's first understand the method of calculation, which can be practiced and comprehended over time.

Once $f(n)$ is determined, we obtain the time complexity $O(f(n))$. But how do we determine the asymptotic upper bound $f(n)$? This process generally involves two steps: counting the number of operations and determining the asymptotic upper bound.

### Step 1: counting the number of operations

This step involves going through the code line by line. However, due to the presence of the constant $c$ in $c \cdot f(n)$, **all coefficients and constant terms in $T(n)$ can be ignored**. This principle allows for simplification techniques in counting operations.
$$
\begin{aligned}
T(n) & = n^2 + n & \text{Simplified Count (o.O)}
\end{aligned}
$$

### Step 2: determining the asymptotic upper bound

**The time complexity is determined by the highest order term in $T(n)$**. This is because, as $n$ approaches infinity, the highest order term dominates, rendering the influence of other terms negligible.

The following table illustrates examples of different operation counts and their corresponding time complexities. Some exaggerated values are used to emphasize that coefficients cannot alter the order of growth. When $n$ becomes very large, these constants become insignificant.

<p align="center"> Table: Time complexity for different operation counts </p>

| Operation Count $T(n)$ | Time Complexity $O(f(n))$ |
| ---------------------- | ------------------------- |
| $n^3 + 10000n^2$ | $O(n^3)$ |
| $2^n + 10000n^{10000}$ | $O(2^n)$ |

## Common types of time complexity

Let's consider the input data size as $n$. The common types of time complexities are illustrated below, arranged from lowest to highest:
$$
O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n) < O(n!)
$$

![Common types of time complexity](time_complexity.assets/time_complexity_common_types.png)

### Constant order $O(1)$

Constant order means the number of operations is independent of the input data size $n$. In the following function, although the number of operations `size` might be large, the time complexity remains $O(1)$ as it's unrelated to $n$:
```
[file]{time_complexity}-[class]{}-[func]{constant}
```

### Linear order $O(n)$

Linear order indicates the number of operations grows linearly with the input data size $n$. Linear order commonly appears in single-loop structures:
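A minimal sketch of such a single-loop function (illustrative name `linear`):

```python
def linear(n: int) -> int:
    """The loop body executes n times, so the operation count grows linearly with n."""
    count = 0
    for _ in range(n):
        count += 1
    return count
```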
Operations like array traversal and linked list traversal have a time complexity of $O(n)$, where $n$ is the length of the array or linked list:

It's important to note that **the input data size $n$ should be determined based on the type of input data**. For example, in the first example, $n$ represents the input data size, while in the second example, $n$ is the length of the array and thus the data size.

### Quadratic order $O(n^2)$

Quadratic order means the number of operations grows quadratically with the input data size $n$. Quadratic order typically appears in nested loops, where both the outer and inner loops have a time complexity of $O(n)$, resulting in an overall complexity of $O(n^2)$:

The following image compares constant order, linear order, and quadratic order time complexities.

![Constant, linear, and quadratic order time complexities](time_complexity.assets/time_complexity_constant_linear_quadratic.png)

For instance, in bubble sort, the outer loop runs $n - 1$ times, and the inner loop runs $n-1$, $n-2$, $\dots$, $2$, $1$ times, averaging $n / 2$ times, resulting in a time complexity of $O((n - 1) n / 2) = O(n^2)$:
```
[file]{time_complexity}-[class]{}-[func]{bubble_sort}
```
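As a concrete sketch of the loop structure just described (close in spirit to a textbook bubble sort, though not necessarily identical to the book's version):

```python
def bubble_sort(nums: list[int]) -> int:
    """Sort nums in place and return an operation count of roughly n^2 / 2."""
    count = 0
    for i in range(len(nums) - 1, 0, -1):  # outer loop: the unsorted range is [0, i]
        for j in range(i):                 # inner loop: bubble the largest element of [0, i] to index i
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
                count += 3                 # count the swap as three unit operations
    return count
```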
### Exponential order $O(2^n)$

Biological "cell division" is a classic example of exponential order growth: starting with one cell, it becomes two after one division, four after two divisions, and so on, resulting in $2^n$ cells after $n$ divisions.

The following image and code simulate the cell division process, with a time complexity of $O(2^n)$:

```
[file]{time_complexity}-[class]{}-[func]{exponential}
```

![Exponential order time complexity](time_complexity.assets/time_complexity_exponential.png)

In practice, exponential order often appears in recursive functions. For example, in the code below, the problem is recursively split into two halves, stopping after $n$ divisions:
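A sketch of such a function (illustrative name `exp_recur`):

```python
def exp_recur(n: int) -> int:
    """Each call spawns two further calls, so the call count grows as O(2^n)."""
    if n == 1:
        return 1
    return exp_recur(n - 1) + exp_recur(n - 1) + 1
```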
Exponential order growth is extremely rapid and is commonly seen in exhaustive search methods (brute force, backtracking, etc.). For large-scale problems, exponential order is unacceptable, often requiring dynamic programming or greedy algorithms as solutions.

### Logarithmic order $O(\log n)$

In contrast to exponential order, logarithmic order reflects situations where "the size is halved each round." Given an input data size $n$, since the size is halved each round, the number of iterations is $\log_2 n$, the inverse function of $2^n$.

The following image and code simulate the "halving each round" process, with a time complexity of $O(\log_2 n) = O(\log n)$:

```
[file]{time_complexity}-[class]{}-[func]{logarithmic}
```

![Logarithmic order time complexity](time_complexity.assets/time_complexity_logarithmic.png)

Like exponential order, logarithmic order also frequently appears in recursive functions. The code below forms a recursive tree of height $\log_2 n$:
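A sketch (illustrative name `log_recur`; integer division keeps the argument an `int`):

```python
def log_recur(n: int) -> int:
    """Halve the problem size in each call; the recursion depth is about log2(n)."""
    if n <= 1:
        return 0
    return log_recur(n // 2) + 1
```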
Logarithmic order is typical in algorithms based on the divide-and-conquer strategy.

This means the base $m$ can be changed without affecting the complexity. Therefore, we often omit the base $m$ and simply denote logarithmic order as $O(\log n)$.

### Linear-logarithmic order $O(n \log n)$

Linear-logarithmic order often appears in nested loops, with the complexities of the two loops being $O(\log n)$ and $O(n)$ respectively. The related code is as follows:
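A sketch that combines halving with a linear pass (illustrative name `linear_log_recur`):

```python
def linear_log_recur(n: int) -> int:
    """Split in half (log n levels) and do O(n) work per level, giving O(n log n)."""
    if n <= 1:
        return 1
    count = linear_log_recur(n // 2) + linear_log_recur(n // 2)  # two half-sized sub-problems
    for _ in range(n):  # linear work on the current level
        count += 1
    return count
```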
The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$.

![Linear-logarithmic order time complexity](time_complexity.assets/time_complexity_logarithmic_linear.png)

Mainstream sorting algorithms typically have a time complexity of $O(n \log n)$, such as quicksort, mergesort, and heapsort.

### Factorial order $O(n!)$

Factorial order corresponds to the mathematical problem of "full permutation." Given $n$ distinct elements, the total number of possible permutations is:
$$
n! = n \times (n - 1) \times (n - 2) \times \dots \times 2 \times 1
$$

Factorials are typically implemented using recursion. As shown in the image and code below, the first level splits into $n$ branches, the second level into $n - 1$ branches, and so on, stopping after the $n$th level:

```
[file]{time_complexity}-[class]{}-[func]{factorial_recur}
```
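A sketch of the branching structure described above (not necessarily the book's exact code):

```python
def factorial_recur(n: int) -> int:
    """Each call branches into n further calls, so the call count grows as O(n!)."""
    if n == 0:
        return 1
    count = 0
    for _ in range(n):                   # the current level splits into n branches
        count += factorial_recur(n - 1)
    return count
```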
![Factorial order time complexity](time_complexity.assets/time_complexity_factorial.png)

Note that factorial order grows even faster than exponential order; it's unacceptable for larger $n$ values.

## Worst, best, and average time complexities

**The time efficiency of an algorithm is often not fixed but depends on the distribution of the input data**. Assume we have an array `nums` of length $n$, consisting of numbers from $1$ to $n$, each appearing only once, but in a randomly shuffled order. The task is to return the index of the element $1$. We can draw the following conclusions:
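To make the setup concrete, here is a sketch of the shuffled array and the search (illustrative names `random_numbers` and `find_one`):

```python
import random

def random_numbers(n: int) -> list[int]:
    """Return the numbers 1..n in a randomly shuffled order."""
    nums = list(range(1, n + 1))
    random.shuffle(nums)
    return nums

def find_one(nums: list[int]) -> int:
    """Return the index of the element 1 in nums."""
    for i, num in enumerate(nums):
        # Best case: 1 sits at the head and the loop exits after one check.
        # Worst case: 1 sits at the tail and the whole array is traversed.
        if num == 1:
            return i
    return -1
```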