Several bug fixes and improvements (#1178)
* Update pythontutor block with the latest code
* Move docs-en to en/docs
* Move mkdocs.yml and README to en folder
* Fix en/mkdocs.yml
* Update the landing page
* Fix the glossary
* Reduce the font size of the code block tabs
* Add Kotlin blocks to en/docs
* Fix the code link in en/.../deque.md
* Fix the EN README link
en/docs/chapter_computational_complexity/index.md
@@ -0,0 +1,13 @@
# Complexity Analysis

<div class="center-table" markdown>

![complexity_analysis](index.assets/complexity_analysis.jpg)

</div>

!!! abstract

    Complexity analysis is like a space-time navigator in the vast universe of algorithms.

    It guides us in exploring deeper within the dimensions of time and space, seeking more elegant solutions.
@@ -0,0 +1,194 @@
# Iteration and Recursion

In algorithms, the repeated execution of a task is quite common and is closely related to the analysis of complexity. Therefore, before delving into the concepts of time complexity and space complexity, let's first explore how to implement repetitive tasks in programming. This involves understanding two fundamental programming control structures: iteration and recursion.

## Iteration

"Iteration" is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met, stopping once that condition is no longer satisfied.

### For Loops

The `for` loop is one of the most common forms of iteration, and **it's particularly suitable when the number of iterations is known in advance**.

The following function uses a `for` loop to perform a summation of $1 + 2 + \dots + n$, with the sum being stored in the variable `res`. It's important to note that in Python, `range(a, b)` creates an interval that is inclusive of `a` but exclusive of `b`, meaning it iterates over the range from $a$ up to $b - 1$.

```src
[file]{iteration}-[class]{}-[func]{for_loop}
```
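
The `src` tag above is expanded into the book's full implementation at build time. As a rough, self-contained Python sketch of what `for_loop` describes (illustrative, not necessarily the book's exact code):

```python
def for_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n using a for loop"""
    res = 0
    # range(1, n + 1) covers 1 through n inclusive
    for i in range(1, n + 1):
        res += i
    return res
```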

The flowchart below represents this sum function.

![Flowchart of the Sum Function](iteration_and_recursion.assets/iteration.png)

The number of operations in this summation function is proportional to the size of the input data $n$, or in other words, it has a "linear relationship." This "linear relationship" is what time complexity describes. This topic will be discussed in more detail in the next section.

### While Loops

Similar to `for` loops, `while` loops are another approach for implementing iteration. In a `while` loop, the program checks a condition at the beginning of each iteration; if the condition is true, the execution continues, otherwise, the loop ends.

Below we use a `while` loop to implement the sum $1 + 2 + \dots + n$.

```src
[file]{iteration}-[class]{}-[func]{while_loop}
```
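
Again as a hedged sketch of the included code, the same summation phrased with a `while` loop:

```python
def while_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n using a while loop"""
    res = 0
    i = 1  # initialize the condition variable
    while i <= n:
        res += i
        i += 1  # update the condition variable
    return res
```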

**`While` loops provide more flexibility than `for` loops**, especially since they allow for custom initialization and modification of the condition variable at each step.

For example, in the following code, the condition variable $i$ is updated twice each round, which would be inconvenient to implement with a `for` loop.

```src
[file]{iteration}-[class]{}-[func]{while_loop_ii}
```
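
A rough Python rendering of such a loop, assuming the two updates are an increment followed by a doubling (the book's source defines the exact rule):

```python
def while_loop_ii(n: int) -> int:
    """Sum with the condition variable updated twice per round"""
    res = 0
    i = 1
    while i <= n:
        res += i
        # two updates to the condition variable in one round
        i += 1
        i *= 2
    return res
```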

Overall, **`for` loops are more concise, while `while` loops are more flexible**. Both can implement iterative structures. Which one to use should be determined based on the specific requirements of the problem.

### Nested Loops

We can nest one loop structure within another. Below is an example using `for` loops:

```src
[file]{iteration}-[class]{}-[func]{nested_for_loop}
```
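
As an illustrative Python sketch of a nested loop, collecting all ordered pairs $(i, j)$ (the output formatting is an assumption, not the book's exact code):

```python
def nested_for_loop(n: int) -> str:
    """Traverse all ordered pairs (i, j) with two nested for loops"""
    res = ""
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            res += f"({i}, {j}), "
    return res
```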

The flowchart below represents this nested loop.

![Flowchart of the Nested Loop](iteration_and_recursion.assets/nested_iteration.png)

In such cases, the number of operations of the function is proportional to $n^2$, meaning the algorithm's runtime and the size of the input data $n$ have a "quadratic relationship."

We can further increase the complexity by adding more nested loops, with each level of nesting effectively "increasing the dimension," which raises the time complexity to "cubic," "quartic," and so on.

## Recursion

"Recursion" is an algorithmic strategy where a function solves a problem by calling itself. It primarily involves two phases:

1. **Calling**: This is where the program repeatedly calls itself, often with progressively smaller or simpler arguments, moving towards the "termination condition."
2. **Returning**: Upon triggering the "termination condition," the program begins to return from the deepest recursive function, aggregating the results of each layer.

From an implementation perspective, recursive code mainly includes three elements.

1. **Termination Condition**: Determines when to switch from "calling" to "returning."
2. **Recursive Call**: Corresponds to "calling," where the function calls itself, usually with smaller or more simplified parameters.
3. **Return Result**: Corresponds to "returning," where the result of the current recursion level is returned to the previous layer.

Observe the following code, where simply calling the function `recur(n)` can compute the sum of $1 + 2 + \dots + n$:

```src
[file]{recursion}-[class]{}-[func]{recur}
```
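
A minimal Python sketch matching the three elements above (termination condition, recursive call, return result):

```python
def recur(n: int) -> int:
    """Sum 1 + 2 + ... + n via recursion"""
    # termination condition
    if n == 1:
        return 1
    # recursive call: the "calling" phase
    res = recur(n - 1)
    # return result: the "returning" phase
    return n + res
```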

The figure below shows the recursive process of this function.

![Recursive Process of the Sum Function](iteration_and_recursion.assets/recursion_sum.png)

Although iteration and recursion can achieve the same results from a computational standpoint, **they represent two entirely different paradigms of thinking and problem-solving**.

- **Iteration**: Solves problems "from the bottom up." It starts with the most basic steps, and then repeatedly adds or accumulates these steps until the task is complete.
- **Recursion**: Solves problems "from the top down." It breaks down the original problem into smaller sub-problems, each of which has the same form as the original problem. These sub-problems are then further decomposed into even smaller sub-problems, stopping at the base case whose solution is known.

Let's take the earlier example of the summation function, defined as $f(n) = 1 + 2 + \dots + n$.

- **Iteration**: In this approach, we simulate the summation process within a loop. Starting from $1$ and traversing to $n$, we perform the summation operation in each iteration to eventually compute $f(n)$.
- **Recursion**: Here, the problem is broken down into a sub-problem: $f(n) = n + f(n-1)$. This decomposition continues recursively until reaching the base case, $f(1) = 1$, at which point the recursion terminates.

### Call Stack

Every time a recursive function calls itself, the system allocates memory for the newly initiated function to store local variables, the return address, and other relevant information. This leads to two primary outcomes.

- The function's context data is stored in a memory area called "stack frame space" and is only released after the function returns. Therefore, **recursion generally consumes more memory space than iteration**.
- Recursive calls introduce additional overhead. **Hence, recursion is usually less time-efficient than loops.**

As shown in the figure below, there are $n$ unreturned recursive functions before triggering the termination condition, indicating a **recursion depth of $n$**.

![Recursion Call Depth](iteration_and_recursion.assets/recursion_sum_depth.png)

In practice, the depth of recursion allowed by programming languages is usually limited, and excessively deep recursion can lead to stack overflow errors.

### Tail Recursion

Interestingly, **if a function performs its recursive call as the very last step before returning,** it can be optimized by the compiler or interpreter to be as space-efficient as iteration. This scenario is known as "tail recursion."

- **Regular Recursion**: In standard recursion, when the function returns to the previous level, it continues to execute more code, requiring the system to save the context of the previous call.
- **Tail Recursion**: Here, the recursive call is the final operation before the function returns. This means that upon returning to the previous level, no further actions are needed, so the system does not need to save the context of the previous level.

For example, in calculating $1 + 2 + \dots + n$, we can make the result variable `res` a parameter of the function, thereby achieving tail recursion:

```src
[file]{recursion}-[class]{}-[func]{tail_recur}
```
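
As a hedged Python sketch (the accumulator's default value is an assumption made here for self-containedness):

```python
def tail_recur(n: int, res: int = 0) -> int:
    """Tail-recursive sum: res accumulates during the calling phase"""
    # termination condition: return the accumulated result
    if n == 0:
        return res
    # the recursive call is the very last operation
    return tail_recur(n - 1, res + n)
```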

The execution process of tail recursion is shown in the following figure. Comparing regular recursion and tail recursion, the summation operation takes place at a different point.

- **Regular Recursion**: The summation operation occurs during the "returning" phase, requiring another summation after each layer returns.
- **Tail Recursion**: The summation operation occurs during the "calling" phase, and the "returning" phase only involves returning through each layer.

![Tail Recursion Process](iteration_and_recursion.assets/tail_recursion_sum.png)

!!! tip

    Note that many compilers or interpreters do not support tail recursion optimization. For example, Python does not support tail recursion optimization by default, so even if the function is in the form of tail recursion, it may still encounter stack overflow issues.

### Recursion Tree

When dealing with algorithms related to "divide and conquer", recursion often offers a more intuitive approach and more readable code than iteration. Take the "Fibonacci sequence" as an example.

!!! question

    Given a Fibonacci sequence $0, 1, 1, 2, 3, 5, 8, 13, \dots$, find the $n$th number in the sequence.

Let the $n$th number of the Fibonacci sequence be $f(n)$. It's easy to deduce two conclusions:

- The first two numbers of the sequence are $f(1) = 0$ and $f(2) = 1$.
- Each number in the sequence is the sum of the two preceding ones, that is, $f(n) = f(n - 1) + f(n - 2)$.

Using the recursive relation, and considering the first two numbers as termination conditions, we can write the recursive code. Calling `fib(n)` will yield the $n$th number of the Fibonacci sequence:

```src
[file]{recursion}-[class]{}-[func]{fib}
```
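
A short Python sketch consistent with the two conclusions above ($f(1) = 0$, $f(2) = 1$):

```python
def fib(n: int) -> int:
    """Fibonacci: f(1) = 0, f(2) = 1, f(n) = f(n-1) + f(n-2)"""
    # termination conditions: the first two numbers
    if n == 1 or n == 2:
        return n - 1
    # one call spawns two branching calls
    return fib(n - 1) + fib(n - 2)
```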

Observing the above code, we see that it recursively calls two functions within itself, **meaning that one call generates two branching calls**. As illustrated below, this continuous recursive calling eventually creates a "recursion tree" with a depth of $n$.

![Fibonacci Sequence Recursion Tree](iteration_and_recursion.assets/recursion_tree.png)

Fundamentally, recursion embodies the paradigm of "breaking down a problem into smaller sub-problems." This divide-and-conquer strategy is crucial.

- From an algorithmic perspective, many important strategies like searching, sorting, backtracking, divide-and-conquer, and dynamic programming directly or indirectly use this way of thinking.
- From a data structure perspective, recursion is naturally suited for dealing with linked lists, trees, and graphs, as they are well suited for analysis using the divide-and-conquer approach.

## Comparison

Summarizing the above content, the following table shows the differences between iteration and recursion in terms of implementation, performance, and applicability.

<p align="center"> Table: Comparison of Iteration and Recursion Characteristics </p>

|                   | Iteration                                                   | Recursion                                                                                                                        |
| ----------------- | ----------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| Approach          | Loop structure                                              | Function calls itself                                                                                                            |
| Time Efficiency   | Generally higher efficiency, no function call overhead      | Each function call generates overhead                                                                                            |
| Memory Usage      | Typically uses a fixed size of memory space                 | Accumulated function calls can use a substantial amount of stack frame space                                                     |
| Suitable Problems | Suitable for simple loop tasks, intuitive and readable code | Suitable for problem decomposition, like trees, graphs, divide-and-conquer, backtracking, etc., concise and clear code structure |

!!! tip

    If you find the following content difficult to understand, consider revisiting it after reading the "Stack" chapter.

So, what is the intrinsic connection between iteration and recursion? Taking the above recursive function as an example, the summation operation occurs during the recursion's "return" phase. This means that the initially called function is the last to complete its summation operation, **mirroring the "last in, first out" principle of a stack**.

Recursive terms like "call stack" and "stack frame space" hint at the close relationship between recursion and stacks.

1. **Calling**: When a function is called, the system allocates a new stack frame on the "call stack" for that function, storing local variables, parameters, return addresses, and other data.
2. **Returning**: When a function completes execution and returns, the corresponding stack frame is removed from the "call stack," restoring the execution environment of the previous function.

Therefore, **we can use an explicit stack to simulate the behavior of the call stack**, thus transforming recursion into an iterative form:

```src
[file]{recursion}-[class]{}-[func]{for_loop_recur}
```
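
A possible Python sketch of this transformation, using a plain list as the explicit stack:

```python
def for_loop_recur(n: int) -> int:
    """Simulate the call stack to compute 1 + 2 + ... + n iteratively"""
    stack = []
    res = 0
    # "calling" phase: push a frame for each recursion level
    for i in range(n, 0, -1):
        stack.append(i)
    # "returning" phase: pop frames and accumulate, last in first out
    while stack:
        res += stack.pop()
    return res
```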

Observing the above code, when recursion is transformed into iteration, the code becomes more complex. Although iteration and recursion can often be transformed into each other, it's not always advisable to do so for two reasons:

- The transformed code may become more challenging to understand and less readable.
- For some complex problems, simulating the behavior of the system's call stack can be quite challenging.

In conclusion, **whether to choose iteration or recursion depends on the specific nature of the problem**. In programming practice, it's crucial to weigh the pros and cons of both and choose the most suitable approach for the situation at hand.
@@ -0,0 +1,48 @@
# Algorithm Efficiency Assessment

In algorithm design, we pursue the following two objectives in sequence.

1. **Finding a Solution to the Problem**: The algorithm should reliably find the correct solution within the stipulated range of inputs.
2. **Seeking the Optimal Solution**: For the same problem, multiple solutions might exist, and we aim to find the most efficient algorithm possible.

In other words, under the premise of being able to solve the problem, algorithm efficiency has become the main criterion for evaluating the merits of an algorithm, which includes the following two dimensions.

- **Time Efficiency**: The speed at which an algorithm runs.
- **Space Efficiency**: The size of the memory space occupied by an algorithm.

In short, **our goal is to design data structures and algorithms that are both fast and memory-efficient**. Effectively assessing algorithm efficiency is crucial because only then can we compare various algorithms and guide the process of algorithm design and optimization.

There are mainly two methods of efficiency assessment: actual testing and theoretical estimation.

## Actual Testing

Suppose we have algorithms `A` and `B`, both capable of solving the same problem, and we need to compare their efficiencies. The most direct method is to use a computer to run these two algorithms and monitor and record their runtime and memory usage. This assessment method reflects the actual situation but has significant limitations.

On one hand, **it's difficult to eliminate interference from the testing environment**. Hardware configurations can affect algorithm performance. For example, algorithm `A` might run faster than `B` on one computer, but the opposite result may occur on another computer with different configurations. This means we would need to test on a variety of machines to calculate average efficiency, which is impractical.

On the other hand, **conducting a full test is very resource-intensive**. As the volume of input data changes, the efficiency of the algorithms may vary. For example, with smaller data volumes, algorithm `A` might run faster than `B`, but the opposite might be true with larger data volumes. Therefore, to draw convincing conclusions, we need to test a wide range of input data sizes, which requires significant computational resources.
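
As a concrete (and deliberately simple) illustration of such a test, the following Python sketch times two hypothetical implementations with the standard `timeit` module; `algorithm_a` and `algorithm_b` are placeholders, not algorithms from the book:

```python
import timeit

def algorithm_a(nums: list[int]) -> int:
    return sum(nums)  # built-in summation

def algorithm_b(nums: list[int]) -> int:
    res = 0
    for x in nums:  # manual summation loop
        res += x
    return res

data = list(range(10_000))
# total runtime of 100 runs, in seconds; results vary by machine
t_a = timeit.timeit(lambda: algorithm_a(data), number=100)
t_b = timeit.timeit(lambda: algorithm_b(data), number=100)
print(f"A: {t_a:.4f}s  B: {t_b:.4f}s")
```

Such numbers only hold for the machine, interpreter, and input sizes actually tested, which is precisely the limitation described above.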

## Theoretical Estimation

Due to the significant limitations of actual testing, we can consider evaluating algorithm efficiency solely through calculations. This estimation method is known as "asymptotic complexity analysis," or simply "complexity analysis."

Complexity analysis reflects the relationship between the time and space resources required for algorithm execution and the size of the input data. **It describes the trend of growth in the time and space required by the algorithm as the size of the input data increases**. This definition might sound complex, but we can break it down into three key points to understand it better.

- "Time and space resources" correspond to "time complexity" and "space complexity," respectively.
- "As the size of input data increases" means that complexity reflects the relationship between algorithm efficiency and the volume of input data.
- "The trend of growth in time and space" indicates that complexity analysis focuses not on the specific values of runtime or space occupied but on the "rate" at which time or space grows.

**Complexity analysis overcomes the disadvantages of actual testing methods**, reflected in the following aspects:

- It is independent of the testing environment and applicable to all operating platforms.
- It can reflect algorithm efficiency under different data volumes, especially the performance of algorithms with large data volumes.

!!! tip

    If you're still confused about the concept of complexity, don't worry. We will introduce it in detail in subsequent chapters.

Complexity analysis provides us with a "ruler" to measure the time and space resources needed to execute an algorithm and compare the efficiency between different algorithms.

Complexity is a mathematical concept and may be abstract and challenging for beginners. From this perspective, complexity analysis might not be the best content to introduce first. However, when discussing the characteristics of a particular data structure or algorithm, it's hard to avoid analyzing its speed and space usage.

In summary, it's recommended that you establish a preliminary understanding of complexity analysis before diving deep into data structures and algorithms, **so that you can carry out simple complexity analyses of algorithms**.
en/docs/chapter_computational_complexity/space_complexity.md
@@ -0,0 +1,803 @@
# Space Complexity

"Space complexity" is used to measure the growth trend of the memory space occupied by an algorithm as the amount of data increases. This concept is very similar to time complexity, except that "running time" is replaced with "occupied memory space".

## Space Related to Algorithms

The memory space used by an algorithm during its execution mainly includes the following types.

- **Input Space**: Used to store the input data of the algorithm.
- **Temporary Space**: Used to store variables, objects, function contexts, and other data during the algorithm's execution.
- **Output Space**: Used to store the output data of the algorithm.

Generally, the scope of space complexity statistics includes both "Temporary Space" and "Output Space".

Temporary space can be further divided into three parts.

- **Temporary Data**: Used to save various constants, variables, objects, etc., during the algorithm's execution.
- **Stack Frame Space**: Used to save the context data of the called function. The system creates a stack frame at the top of the stack each time a function is called, and the stack frame space is released after the function returns.
- **Instruction Space**: Used to store compiled program instructions, which are usually negligible in actual statistics.

When analyzing the space complexity of a program, **we typically count the Temporary Data, Stack Frame Space, and Output Data**, as shown in the figure below.

![Space Types Used in Algorithms](space_complexity.assets/space_types.png)

The relevant code is as follows:
=== "Python"
|
||||
|
||||
```python title=""
|
||||
class Node:
|
||||
"""Classes""""
|
||||
def __init__(self, x: int):
|
||||
self.val: int = x # node value
|
||||
self.next: Node | None = None # reference to the next node
|
||||
|
||||
def function() -> int:
|
||||
""""Functions"""""
|
||||
# Perform certain operations...
|
||||
return 0
|
||||
|
||||
def algorithm(n) -> int: # input data
|
||||
A = 0 # temporary data (constant, usually in uppercase)
|
||||
b = 0 # temporary data (variable)
|
||||
node = Node(0) # temporary data (object)
|
||||
c = function() # Stack frame space (call function)
|
||||
return A + b + c # output data
|
||||
```

=== "C++"

    ```cpp title=""
    /* Structures */
    struct Node {
        int val;
        Node *next;
        Node(int x) : val(x), next(nullptr) {}
    };

    /* Functions */
    int func() {
        // Perform certain operations...
        return 0;
    }

    int algorithm(int n) {        // input data
        const int a = 0;          // temporary data (constant)
        int b = 0;                // temporary data (variable)
        Node* node = new Node(0); // temporary data (object)
        int c = func();           // stack frame space (call function)
        return a + b + c;         // output data
    }
    ```

=== "Java"

    ```java title=""
    /* Classes */
    class Node {
        int val;
        Node next;
        Node(int x) { val = x; }
    }

    /* Functions */
    int function() {
        // Perform certain operations...
        return 0;
    }

    int algorithm(int n) {       // input data
        final int a = 0;         // temporary data (constant)
        int b = 0;               // temporary data (variable)
        Node node = new Node(0); // temporary data (object)
        int c = function();      // stack frame space (call function)
        return a + b + c;        // output data
    }
    ```

=== "C#"

    ```csharp title=""
    /* Classes */
    class Node {
        int val;
        Node next;
        Node(int x) { val = x; }
    }

    /* Functions */
    int Function() {
        // Perform certain operations...
        return 0;
    }

    int Algorithm(int n) {  // input data
        const int a = 0;    // temporary data (constant)
        int b = 0;          // temporary data (variable)
        Node node = new(0); // temporary data (object)
        int c = Function(); // stack frame space (call function)
        return a + b + c;   // output data
    }
    ```

=== "Go"

    ```go title=""
    /* Structures */
    type node struct {
        val  int
        next *node
    }

    /* Create node structure */
    func newNode(val int) *node {
        return &node{val: val}
    }

    /* Functions */
    func function() int {
        // Perform certain operations...
        return 0
    }

    func algorithm(n int) int { // input data
        const a = 0             // temporary data (constant)
        b := 0                  // temporary data (variable)
        newNode(0)              // temporary data (object)
        c := function()         // stack frame space (call function)
        return a + b + c        // output data
    }
    ```

=== "Swift"

    ```swift title=""
    /* Classes */
    class Node {
        var val: Int
        var next: Node?

        init(x: Int) {
            val = x
        }
    }

    /* Functions */
    func function() -> Int {
        // Perform certain operations...
        return 0
    }

    func algorithm(n: Int) -> Int { // input data
        let a = 0                   // temporary data (constant)
        var b = 0                   // temporary data (variable)
        let node = Node(x: 0)       // temporary data (object)
        let c = function()          // stack frame space (call function)
        return a + b + c            // output data
    }
    ```

=== "JS"

    ```javascript title=""
    /* Classes */
    class Node {
        val;
        next;
        constructor(val) {
            this.val = val === undefined ? 0 : val; // node value
            this.next = null;                       // reference to the next node
        }
    }

    /* Functions */
    function constFunc() {
        // Perform certain operations
        return 0;
    }

    function algorithm(n) {       // input data
        const a = 0;              // temporary data (constant)
        let b = 0;                // temporary data (variable)
        const node = new Node(0); // temporary data (object)
        const c = constFunc();    // stack frame space (calling function)
        return a + b + c;         // output data
    }
    ```

=== "TS"

    ```typescript title=""
    /* Classes */
    class Node {
        val: number;
        next: Node | null;
        constructor(val?: number) {
            this.val = val === undefined ? 0 : val; // node value
            this.next = null;                       // reference to the next node
        }
    }

    /* Functions */
    function constFunc(): number {
        // Perform certain operations
        return 0;
    }

    function algorithm(n: number): number { // input data
        const a = 0;                        // temporary data (constant)
        let b = 0;                          // temporary data (variable)
        const node = new Node(0);           // temporary data (object)
        const c = constFunc();              // stack frame space (calling function)
        return a + b + c;                   // output data
    }
    ```

=== "Dart"

    ```dart title=""
    /* Classes */
    class Node {
        int val;
        Node? next;
        Node(this.val, [this.next]);
    }

    /* Functions */
    int function() {
        // Perform certain operations...
        return 0;
    }

    int algorithm(int n) {   // input data
        const int a = 0;     // temporary data (constant)
        int b = 0;           // temporary data (variable)
        Node node = Node(0); // temporary data (object)
        int c = function();  // stack frame space (call function)
        return a + b + c;    // output data
    }
    ```

=== "Rust"

    ```rust title=""
    use std::rc::Rc;
    use std::cell::RefCell;

    /* Structures */
    struct Node {
        val: i32,
        next: Option<Rc<RefCell<Node>>>,
    }

    /* Creating a Node structure */
    impl Node {
        fn new(val: i32) -> Self {
            Self { val: val, next: None }
        }
    }

    /* Functions */
    fn function() -> i32 {
        // Perform certain operations...
        return 0;
    }

    fn algorithm(n: i32) -> i32 { // input data
        const A: i32 = 0;         // temporary data (constant)
        let mut b = 0;            // temporary data (variable)
        let node = Node::new(0);  // temporary data (object)
        let c = function();       // stack frame space (call function)
        return A + b + c;         // output data
    }
    ```

=== "C"

    ```c title=""
    /* Functions */
    int func() {
        // Perform certain operations...
        return 0;
    }

    int algorithm(int n) { // input data
        const int a = 0;   // temporary data (constant)
        int b = 0;         // temporary data (variable)
        int c = func();    // stack frame space (call function)
        return a + b + c;  // output data
    }
    ```

=== "Kotlin"

    ```kotlin title=""

    ```

=== "Zig"

    ```zig title=""

    ```

## Calculation Method

The method for calculating space complexity is roughly similar to that of time complexity, with the only change being the shift of the statistical object from "number of operations" to "size of used space".

However, unlike time complexity, **we usually only focus on the worst-case space complexity**. This is because memory space is a hard requirement, and we must ensure that there is enough memory space reserved under all input data.

Consider the following code; the term "worst-case" in worst-case space complexity has two meanings.

1. **Based on the worst input data**: When $n < 10$, the space complexity is $O(1)$; but when $n > 10$, the initialized array `nums` occupies $O(n)$ space, thus the worst-case space complexity is $O(n)$.
2. **Based on the peak memory used during the algorithm's execution**: For example, before executing the last line, the program occupies $O(1)$ space; when initializing the array `nums`, the program occupies $O(n)$ space, hence the worst-case space complexity is $O(n)$.
=== "Python"
|
||||
|
||||
```python title=""
|
||||
def algorithm(n: int):
|
||||
a = 0 # O(1)
|
||||
b = [0] * 10000 # O(1)
|
||||
if n > 10:
|
||||
nums = [0] * n # O(n)
|
||||
```

=== "C++"

    ```cpp title=""
    void algorithm(int n) {
        int a = 0;               // O(1)
        vector<int> b(10000);    // O(1)
        if (n > 10)
            vector<int> nums(n); // O(n)
    }
    ```

=== "Java"

    ```java title=""
    void algorithm(int n) {
        int a = 0;                   // O(1)
        int[] b = new int[10000];    // O(1)
        if (n > 10) {
            int[] nums = new int[n]; // O(n)
        }
    }
    ```

=== "C#"

    ```csharp title=""
    void Algorithm(int n) {
        int a = 0;                   // O(1)
        int[] b = new int[10000];    // O(1)
        if (n > 10) {
            int[] nums = new int[n]; // O(n)
        }
    }
    ```

=== "Go"

    ```go title=""
    func algorithm(n int) {
        a := 0                  // O(1)
        b := make([]int, 10000) // O(1)
        var nums []int
        if n > 10 {
            nums = make([]int, n) // O(n)
        }
        fmt.Println(a, b, nums)
    }
    ```

=== "Swift"

    ```swift title=""
    func algorithm(n: Int) {
        let a = 0                                 // O(1)
        let b = Array(repeating: 0, count: 10000) // O(1)
        if n > 10 {
            let nums = Array(repeating: 0, count: n) // O(n)
        }
    }
    ```

=== "JS"

    ```javascript title=""
    function algorithm(n) {
        const a = 0;                // O(1)
        const b = new Array(10000); // O(1)
        if (n > 10) {
            const nums = new Array(n); // O(n)
        }
    }
    ```

=== "TS"

    ```typescript title=""
    function algorithm(n: number): void {
        const a = 0;                // O(1)
        const b = new Array(10000); // O(1)
        if (n > 10) {
            const nums = new Array(n); // O(n)
        }
    }
    ```

=== "Dart"

    ```dart title=""
    void algorithm(int n) {
        int a = 0;                           // O(1)
        List<int> b = List.filled(10000, 0); // O(1)
        if (n > 10) {
            List<int> nums = List.filled(n, 0); // O(n)
        }
    }
    ```

=== "Rust"

    ```rust title=""
    fn algorithm(n: i32) {
        let a = 0;          // O(1)
        let b = [0; 10000]; // O(1)
        if n > 10 {
            let nums = vec![0; n as usize]; // O(n)
        }
    }
    ```

=== "C"

    ```c title=""
    void algorithm(int n) {
        int a = 0;       // O(1)
        int b[10000];    // O(1)
        if (n > 10) {
            int nums[n]; // O(n)
        }
    }
    ```

=== "Kotlin"

    ```kotlin title=""

    ```

=== "Zig"

    ```zig title=""

    ```

**In recursive functions, stack frame space must be taken into account**. Consider the following code:
=== "Python"
|
||||
|
||||
```python title=""
|
||||
def function() -> int:
|
||||
# Perform certain operations
|
||||
return 0
|
||||
|
||||
def loop(n: int):
|
||||
"""Loop O(1)"""""
|
||||
for _ in range(n):
|
||||
function()
|
||||
|
||||
def recur(n: int):
|
||||
"""Recursion O(n)"""""
|
||||
if n == 1:
|
||||
return
|
||||
return recur(n - 1)
|
||||
```

=== "C++"

    ```cpp title=""
    int func() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void loop(int n) {
        for (int i = 0; i < n; i++) {
            func();
        }
    }
    /* Recursion O(n) */
    void recur(int n) {
        if (n == 1) return;
        return recur(n - 1);
    }
    ```

=== "Java"

    ```java title=""
    int function() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void loop(int n) {
        for (int i = 0; i < n; i++) {
            function();
        }
    }
    /* Recursion O(n) */
    void recur(int n) {
        if (n == 1) return;
        recur(n - 1);
    }
    ```

=== "C#"

    ```csharp title=""
    int Function() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void Loop(int n) {
        for (int i = 0; i < n; i++) {
            Function();
        }
    }
    /* Recursion O(n) */
    int Recur(int n) {
        if (n == 1) return 1;
        return Recur(n - 1);
    }
    ```

=== "Go"

    ```go title=""
    func function() int {
        // Perform certain operations
        return 0
    }

    /* Loop O(1) */
    func loop(n int) {
        for i := 0; i < n; i++ {
            function()
        }
    }

    /* Recursion O(n) */
    func recur(n int) {
        if n == 1 {
            return
        }
        recur(n - 1)
    }
    ```

=== "Swift"

    ```swift title=""
    @discardableResult
    func function() -> Int {
        // Perform certain operations
        return 0
    }

    /* Loop O(1) */
    func loop(n: Int) {
        for _ in 0 ..< n {
            function()
        }
    }

    /* Recursion O(n) */
    func recur(n: Int) {
        if n == 1 {
            return
        }
        recur(n: n - 1)
    }
    ```

=== "JS"

    ```javascript title=""
    function constFunc() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    function loop(n) {
        for (let i = 0; i < n; i++) {
            constFunc();
        }
    }
    /* Recursion O(n) */
    function recur(n) {
        if (n === 1) return;
        return recur(n - 1);
    }
    ```

=== "TS"

    ```typescript title=""
    function constFunc(): number {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    function loop(n: number): void {
        for (let i = 0; i < n; i++) {
            constFunc();
        }
    }
    /* Recursion O(n) */
    function recur(n: number): void {
        if (n === 1) return;
        return recur(n - 1);
    }
    ```

=== "Dart"

    ```dart title=""
    int function() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void loop(int n) {
        for (int i = 0; i < n; i++) {
            function();
        }
    }
    /* Recursion O(n) */
    void recur(int n) {
        if (n == 1) return;
        return recur(n - 1);
    }
    ```

=== "Rust"

    ```rust title=""
    fn function() -> i32 {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1); `loop` is a Rust keyword, so the function is named _loop */
    fn _loop(n: i32) {
        for _ in 0..n {
            function();
        }
    }
    /* Recursion O(n) */
    fn recur(n: i32) {
        if n == 1 {
            return;
        }
        recur(n - 1);
    }
    ```

=== "C"

    ```c title=""
    int func() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void loop(int n) {
        for (int i = 0; i < n; i++) {
            func();
        }
    }
    /* Recursion O(n) */
    void recur(int n) {
        if (n == 1) return;
        recur(n - 1);
    }
    ```

=== "Kotlin"

    ```kotlin title=""

    ```

=== "Zig"

    ```zig title=""

    ```

The time complexity of both `loop()` and `recur()` functions is $O(n)$, but their space complexities differ.

- The `loop()` function calls `function()` $n$ times in a loop, where each iteration's `function()` returns and releases its stack frame space, so the space complexity remains $O(1)$.
- The recursive function `recur()` will have $n$ instances of unreturned `recur()` existing simultaneously during its execution, thus occupying $O(n)$ stack frame space.

## Common Types

Let the size of the input data be $n$; the following chart displays common types of space complexities (arranged from low to high).

$$
\begin{aligned}
O(1) < O(\log n) < O(n) < O(n^2) < O(2^n) \newline
\text{Constant Order} < \text{Logarithmic Order} < \text{Linear Order} < \text{Quadratic Order} < \text{Exponential Order}
\end{aligned}
$$

![Common Types of Space Complexity](space_complexity.assets/space_complexity_common_types.png)

### Constant Order $O(1)$ {data-toc-label="Constant Order"}

Constant order is common in constants, variables, and objects whose quantity is independent of the size of the input data $n$.

Note that memory occupied by initializing variables or calling functions in a loop is released upon entering the next iteration, so it does not accumulate, and the space complexity remains $O(1)$:

```src
[file]{space_complexity}-[class]{}-[func]{constant}
```
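
A minimal Python sketch of this pattern (illustrative; the book's version may differ in detail):

```python
def constant(n: int):
    """Constant order O(1): usage does not grow with n"""
    a = 0               # a fixed number of variables...
    nums = [0] * 10000  # ...and fixed-size collections
    # variables created inside the loop are released each iteration
    for _ in range(n):
        c = 0
```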

### Linear Order $O(n)$ {data-toc-label="Linear Order"}

Linear order is common in arrays, linked lists, stacks, queues, etc., where the number of elements is proportional to $n$:

```src
[file]{space_complexity}-[class]{}-[func]{linear}
```
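
For instance, a hedged sketch in Python, allocating collections whose sizes grow linearly with $n$:

```python
def linear(n: int):
    """Linear order O(n): element count proportional to n"""
    nums = [0] * n                           # list of length n
    mapping = {i: str(i) for i in range(n)}  # dict with n entries
```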

As shown below, this function's recursive depth is $n$, meaning there are $n$ instances of unreturned `linear_recur()` functions, using $O(n)$ size of stack frame space:

```src
[file]{space_complexity}-[class]{}-[func]{linear_recur}
```
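
A possible Python rendering, where the stack frames, not the data, account for the $O(n)$ space:

```python
def linear_recur(n: int):
    """Linear order O(n) arising from recursion depth"""
    print(f"recursion level: n = {n}")
    if n == 1:
        return
    linear_recur(n - 1)
```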

![Recursive Function Generating Linear Order Space Complexity](space_complexity.assets/space_complexity_recursive_linear.png)

### Quadratic Order $O(n^2)$ {data-toc-label="Quadratic Order"}

Quadratic order is common in matrices and graphs, where the number of elements is quadratic to $n$:

```src
[file]{space_complexity}-[class]{}-[func]{quadratic}
```
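
As a small illustrative sketch, an $n \times n$ matrix holds $n^2$ elements:

```python
def quadratic(n: int):
    """Quadratic order O(n^2): an n x n matrix"""
    num_matrix = [[0] * n for _ in range(n)]
```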

As shown below, the recursive depth of this function is $n$, and in each recursive call, an array is initialized with lengths $n$, $n-1$, $\dots$, $2$, $1$, averaging $n/2$, thus overall occupying $O(n^2)$ space:

```src
[file]{space_complexity}-[class]{}-[func]{quadratic_recur}
```
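
One way this could look in Python (a sketch consistent with the description above):

```python
def quadratic_recur(n: int) -> int:
    """Quadratic order: each of n recursion levels holds an array"""
    if n <= 0:
        return 0
    nums = [0] * n  # array of length n at this level
    return quadratic_recur(n - 1)
```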

![Recursive Function Generating Quadratic Order Space Complexity](space_complexity.assets/space_complexity_recursive_quadratic.png)

### Exponential Order $O(2^n)$ {data-toc-label="Exponential Order"}

Exponential order is common in binary trees. As shown in the image below, a "full binary tree" with $n$ levels has $2^n - 1$ nodes, occupying $O(2^n)$ space:

```src
[file]{space_complexity}-[class]{}-[func]{build_tree}
```
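
A self-contained Python sketch; the minimal `TreeNode` class here is an assumption made for illustration:

```python
class TreeNode:
    """Binary tree node (assumed minimal definition)"""
    def __init__(self, val: int = 0):
        self.val = val
        self.left: TreeNode | None = None
        self.right: TreeNode | None = None

def build_tree(n: int) -> TreeNode | None:
    """Exponential order O(2^n): a full binary tree with n levels"""
    if n == 0:
        return None
    root = TreeNode(0)
    root.left = build_tree(n - 1)
    root.right = build_tree(n - 1)
    return root
```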

![Full Binary Tree Generating Exponential Order Space Complexity](space_complexity.assets/space_complexity_exponential.png)

### Logarithmic Order $O(\log n)$ {data-toc-label="Logarithmic Order"}

Logarithmic order is common in divide-and-conquer algorithms. For example, in merge sort, an array of length $n$ is recursively divided in half each round, forming a recursion tree of height $\log n$, using $O(\log n)$ stack frame space.

Another example is converting a number to a string. Given a positive integer $n$, its number of digits is $\lfloor \log_{10} n \rfloor + 1$, corresponding to the length of the string, thus the space complexity is $O(\log_{10} n + 1) = O(\log n)$.
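
A quick numeric check of the digit-count claim in Python:

```python
import math

n = 123456
# the decimal string of n has floor(log10(n)) + 1 characters
assert len(str(n)) == math.floor(math.log10(n)) + 1  # 6 digits
```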

## Balancing Time and Space

Ideally, we aim for both time complexity and space complexity to be optimal. However, in practice, optimizing both simultaneously is often difficult.

**Lowering time complexity usually comes at the cost of increased space complexity, and vice versa**. The approach of sacrificing memory space to improve algorithm speed is known as "space-time tradeoff"; the reverse is known as "time-space tradeoff".

The choice depends on which aspect we value more. In most cases, time is more precious than space, so "space-time tradeoff" is often the more common strategy. Of course, controlling space complexity is also very important when dealing with large volumes of data.
en/docs/chapter_computational_complexity/summary.md
@@ -0,0 +1,49 @@
# Summary

### Key Review

**Algorithm Efficiency Assessment**

- Time efficiency and space efficiency are the two main criteria for assessing the merits of an algorithm.
- We can assess algorithm efficiency through actual testing, but it's challenging to eliminate the influence of the test environment, and it consumes substantial computational resources.
- Complexity analysis can overcome the disadvantages of actual testing. Its results are applicable across all operating platforms and can reveal the efficiency of algorithms at different data scales.

**Time Complexity**

- Time complexity measures the trend of an algorithm's running time with the increase in data volume, effectively assessing algorithm efficiency. However, it can fail in certain cases, such as with small input data volumes or when time complexities are the same, making it challenging to precisely compare the efficiency of algorithms.
- Worst-case time complexity is denoted using big O notation, representing the asymptotic upper bound, reflecting the growth level of the number of operations $T(n)$ as $n$ approaches infinity.
- Calculating time complexity involves two steps: first counting the number of operations, then determining the asymptotic upper bound.
- Common time complexities, arranged from low to high, include $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$, $O(2^n)$, and $O(n!)$, among others.
- The time complexity of some algorithms is not fixed and depends on the distribution of input data. Time complexities are divided into worst, best, and average cases. The best case is rarely used because input data generally needs to meet strict conditions to achieve the best case.
- Average time complexity reflects the efficiency of an algorithm under random data inputs, closely resembling the algorithm's performance in actual applications. Calculating average time complexity requires accounting for the distribution of input data and the subsequent mathematical expectation.

**Space Complexity**

- Space complexity, similar to time complexity, measures the trend of memory space occupied by an algorithm with the increase in data volume.
- The relevant memory space used during the algorithm's execution can be divided into input space, temporary space, and output space. Generally, input space is not included in space complexity calculations. Temporary space can be divided into temporary data, stack frame space, and instruction space, where stack frame space usually affects space complexity only in recursive functions.
- We usually focus only on the worst-case space complexity, which means calculating the space complexity of the algorithm under the worst input data and at the worst moment of operation.
- Common space complexities, arranged from low to high, include $O(1)$, $O(\log n)$, $O(n)$, $O(n^2)$, and $O(2^n)$, among others.

### Q & A

**Q**: Is the space complexity of tail recursion $O(1)$?

Theoretically, the space complexity of a tail-recursive function can be optimized to $O(1)$. However, most programming languages (such as Java, Python, C++, Go, C#) do not support automatic optimization of tail recursion, so it's generally considered to have a space complexity of $O(n)$.

**Q**: What is the difference between the terms "function" and "method"?

A "function" can be executed independently, with all parameters passed explicitly. A "method" is associated with an object: the object is passed to it implicitly, allowing it to operate on the data contained within an instance of a class.

Here are some examples from common programming languages:

- C is a procedural programming language without object-oriented concepts, so it only has functions. However, we can simulate object-oriented programming by creating structures (struct), and functions associated with these structures are equivalent to methods in other programming languages.
- Java and C# are object-oriented programming languages where code blocks (methods) are typically part of a class. Static methods behave like functions because they are bound to the class and cannot access specific instance variables.
- C++ and Python support both procedural programming (functions) and object-oriented programming (methods).

**Q**: Does the "Common Types of Space Complexity" figure reflect the absolute size of occupied space?

No, the figure shows space complexities, which reflect growth trends, not the absolute size of the occupied space.

If you take $n = 8$, you might find that the values of each curve don't correspond to their functions. This is because each curve includes a constant term, intended to compress the value range into a visually comfortable range.

In practice, since we usually don't know the "constant term" complexity of each method, it's generally not possible to choose the best solution for $n = 8$ based solely on complexity. However, for $n = 8^5$, it's much easier to choose, as the growth trend becomes dominant.