translation: English translation of the preface (part), introduction, and complexity analysis (part) chapters (#994)

* Translate 1.0.0b6 release with the machine learning translator.
* Update Dockerfile. A few translation improvements.
* Fix a badge logo.
* Fix EN translation of chapter_appendix/terminology.md (#913)
* Update README.md
* translation: Refine the automated translation of README (#932)
* Update index.md
* Update mkdocs-en.yml
* translate: Embellish chapter_computational_complexity/index.md (#940)
* translation: Update chapter_computational_complexity/performance_evaluation.md (#943)
* translation: Update terminology and improve readability in the preface summary (#954)
* Optimize the translation of chapter_introduction/algorithms_are_everywhere.
* Add .gitignore to Java subfolder.
* Update the button assets.
* Fix the callout.
* translation: chapter_computational_complexity/summary to en (#953)
* translation: chapter_introduction/what_is_dsa.md (#962)
* translation: chapter_introduction/summary.md (#963)
* translation: Update README.md (#964)
* translation: update space_complexity.md (#970)
* translation: Update chapter_introduction/index.md (#971)
* translation: Update chapter_data_structure/classification_of_data_structure.md (#980)
* translation: Update chapter_introduction/algorithms_are_everywhere.md (#972)
* Prepare merging into main branch.
* Fix mkdocs-en.yml

Co-authored-by: Yuelin Xin <sc20yx2@leeds.ac.uk>
Co-authored-by: Phoenix Xie <phoenixx0415@gmail.com>
Co-authored-by: Sizhuo Long <longsizhuo@gmail.com>
Co-authored-by: Spark <qizhang94@outlook.com>
Co-authored-by: Thomas <thomasqiu7@gmail.com>
Co-authored-by: ThomasQiu <thomas.qiu@mnfgroup.limited>
Co-authored-by: Yudong Jin <krahets@163.com>
Co-authored-by: K3v123 <123932560+K3v123@users.noreply.github.com>
Co-authored-by: Jin <36914748+yanedie@users.noreply.github.com>
docs-en/chapter_computational_complexity/index.md
# Complexity Analysis

<div class="center-table" markdown>

</div>

!!! abstract

    Complexity analysis is like a space-time guide in the vast universe of algorithms.

    It leads us to explore deeply in the dimensions of time and space, in search of more elegant solutions.
# Iteration vs. Recursion

In data structures and algorithms, we often need to repeat a task, and the number of repetitions is closely related to the complexity of the algorithm. There are two basic program structures for repeating a task: iteration and recursion.

## Iteration

"Iteration" is a control structure that repeats a task. In iteration, a program repeatedly executes a piece of code until the condition is no longer satisfied.
### For Loops

The `for` loop is one of the most common forms of iteration, **suitable when the number of iterations is known in advance**.

The following function implements the summation $1 + 2 + \dots + n$ with a `for` loop, recording the result in the variable `res`. Note that `range(a, b)` in Python corresponds to a "left-closed, right-open" interval, covering the range $a, a + 1, \dots, b - 1$.

```src
[file]{iteration}-[class]{}-[func]{for_loop}
```
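
The `src` tag above is a placeholder that the book's build pipeline replaces with real code. As a plain illustration of the idea, here is a minimal Python sketch (only the function name `for_loop` is taken from the placeholder; the actual code in the repository may differ):

```python
def for_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n using a for loop."""
    res = 0
    # range(1, n + 1) iterates over the left-closed, right-open interval [1, n + 1)
    for i in range(1, n + 1):
        res += i
    return res

print(for_loop(100))  # → 5050
```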
The figure below shows the flowchart of this summation function.

![](https://github.com/krahets/hello-algo/assets/26993056/b8b49b1e-2f94-4a9f-a6ab-23f1e8b57e3b)

The number of operations in this summation function is proportional to the input data size $n$, that is, they are in a "linear relationship". In fact, **time complexity describes exactly this "linear relationship"**, which will be covered in more detail in the next section.
### While Loops

Similar to the `for` loop, the `while` loop is another way to implement iteration. In a `while` loop, the program checks the condition at the start of each round; if the condition is true, execution continues, otherwise the loop ends.

Below, we use a `while` loop to implement the summation $1 + 2 + \dots + n$.

```src
[file]{iteration}-[class]{}-[func]{while_loop}
```
In a `while` loop, the steps of initializing and updating the condition variable are independent of the loop structure, **so it offers more degrees of freedom than a `for` loop**.

For example, in the following code the condition variable $i$ is updated twice per round, which would be inconvenient to implement with a `for` loop.

```src
[file]{iteration}-[class]{}-[func]{while_loop_ii}
```
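
As a hypothetical sketch of such a loop (one possible pair of updates; the actual updates in the book's source may differ):

```python
def while_loop_ii(n: int) -> int:
    """Sum with a while loop that updates the condition variable twice per round."""
    res = 0
    i = 1
    while i <= n:
        res += i
        i += 1  # first update
        i *= 2  # second update
    return res

print(while_loop_ii(6))  # → 5 (only i = 1 and i = 4 are added)
```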
Overall, **the `for` loop is more compact, while the `while` loop is more flexible**; both can implement an iterative structure. Which one to use should be decided by the needs of the particular problem.

### Nested Loops

We can nest one loop structure inside another. Using the `for` loop as an example:

```src
[file]{iteration}-[class]{}-[func]{nested_for_loop}
```
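
For illustration, a possible Python sketch that traverses all ordered pairs $(i, j)$ with two nested loops (an assumed body; the repository's version may differ):

```python
def nested_for_loop(n: int) -> str:
    """Traverse all ordered pairs (i, j) with two nested for loops."""
    res = ""
    for i in range(1, n + 1):
        for j in range(1, n + 1):  # inner loop runs n times per outer round
            res += f"({i}, {j}), "
    return res

print(nested_for_loop(2))  # → (1, 1), (1, 2), (2, 1), (2, 2),
```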
The figure below gives the flowchart of this nested loop.

![](https://github.com/krahets/hello-algo/assets/26993056/6fd1c4b4-b0ae-4f45-a0a6-48e9bb377e29)

In this case, the number of operations of the function is proportional to $n^2$, that is, the algorithm's running time is in a "quadratic relationship" with the input data size $n$.

We can continue adding nested loops; each level of nesting is a "dimension up", raising the time complexity to a "cubic relationship", a "quartic relationship", and so on.
## Recursion

"Recursion" is an algorithmic strategy that solves a problem by having a function call itself. It consists of two main phases.

1. **Calling**: the program calls itself deeper and deeper, usually with smaller or more simplified arguments, until the "termination condition" is reached.
2. **Returning**: after the "termination condition" is triggered, the program returns from the deepest level of the recursive function level by level, aggregating the result of each level.

From an implementation point of view, recursive code contains three main elements.

1. **Termination condition**: decides when to switch from "calling" to "returning".
2. **Recursive call**: corresponds to "calling", where the function calls itself, usually with smaller or more simplified input parameters.
3. **Return result**: corresponds to "returning", passing the result of the current recursion level back to the previous one.

Observe the following code: simply calling the function `recur(n)` completes the computation of $1 + 2 + \dots + n$:

```src
[file]{recursion}-[class]{}-[func]{recur}
```
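
A minimal Python sketch of this recursive summation (only the function name is taken from the placeholder):

```python
def recur(n: int) -> int:
    """Sum 1 + 2 + ... + n recursively."""
    if n == 1:                 # termination condition
        return 1
    return n + recur(n - 1)    # recursive call; aggregation happens while "returning"

print(recur(5))  # → 15
```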
The figure below shows the recursive process of this function.

![](https://github.com/krahets/hello-algo/assets/26993056/2ad14a7a-fb6b-4c2c-9c9a-bbf8d9aa9a2e)

Although iteration and recursion can yield the same results from a computational standpoint, **they represent two completely different paradigms of thinking about and solving problems**.

- **Iteration**: solves the problem "from the bottom up". It starts with the most basic step and repeats or extends it until the task is complete.
- **Recursion**: solves the problem "from the top down". The original problem is decomposed into smaller subproblems that have the same form as the original. The subproblems are then decomposed further until reaching the base case, whose solution is known.

Taking the summation function above as an example, define the problem as $f(n) = 1 + 2 + \dots + n$.

- **Iteration**: simulate the summation process in a loop, iterating from $1$ to $n$ and performing the summation in each round to obtain $f(n)$.
- **Recursion**: decompose the problem into subproblems via $f(n) = n + f(n - 1)$, and keep decomposing (recursively) until the base case $f(1) = 1$ terminates the recursion.
### Call Stack

Each time a recursive function calls itself, the system allocates memory for the newly called function to store its local variables, call address, and other information. This leads to two consequences.

- The context data of a function is stored in a memory area called "stack frame space" and is only freed after the function returns. Therefore, **recursion is usually more memory-intensive than iteration**.
- Recursive calls introduce additional function-call overhead. **Hence recursion is usually less time-efficient than loops**.

As shown in the figure below, before the termination condition is triggered there are $n$ unreturned recursive functions at the same time, **giving a recursion depth of $n$**.

![](https://github.com/krahets/hello-algo/assets/26993056/0bcb1c05-932f-46d3-9b6d-49ea8a0f0d9b)

In practice, the recursion depth permitted by a programming language is usually limited, and excessively deep recursion may cause a stack overflow error.
### Tail Recursion

Interestingly, **if a function makes its recursive call only as the very last step before returning**, it can be optimized by the compiler or interpreter to be as space-efficient as iteration. This case is known as "tail recursion".

- **Ordinary recursion**: after a function returns to the previous level, it still needs to execute more code, so the system must save the context of the previous call.
- **Tail recursion**: the recursive call is the last operation before the function returns, meaning that no further operations are needed after returning to the previous level, so the system does not need to save the previous function's context.

Taking the calculation of $1 + 2 + \dots + n$ as an example, we can implement tail recursion by making the result variable `res` a function parameter.

```src
[file]{recursion}-[class]{}-[func]{tail_recur}
```
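
A Python sketch of the tail-recursive version, with the running result `res` passed as a parameter (note that Python itself does not actually optimize tail calls):

```python
def tail_recur(n: int, res: int = 0) -> int:
    """Tail-recursive summation: the recursive call is the final operation."""
    if n == 0:                         # termination condition
        return res
    return tail_recur(n - 1, res + n)  # summation happens during the "calling" phase

print(tail_recur(5))  # → 15
```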
The execution process of tail recursion is shown in the figure below. Comparing ordinary recursion and tail recursion, the point at which the summation is performed differs.

- **Ordinary recursion**: the summation is performed during the "returning" phase, with another addition executed after each level returns.
- **Tail recursion**: the summation is performed during the "calling" phase, and the "returning" phase simply returns level by level.

![](https://github.com/krahets/hello-algo/assets/26993056/8f992e6a-8f0f-4907-9b42-7e3e41c76e31)

!!! tip

    Note that many compilers and interpreters do not support tail-recursion optimization. For example, Python does not support it by default, so even a tail-recursive function may still run into stack overflow problems.
### Recursion Tree

When dealing with algorithm problems related to "divide and conquer", recursion is often more intuitive to write and more readable than iteration. Take the "Fibonacci sequence" as an example.

!!! question

    Given a Fibonacci sequence $0, 1, 1, 2, 3, 5, 8, 13, \dots$, find the $n$th number of the sequence.

Let the $n$th number of the Fibonacci sequence be $f(n)$. Two conclusions follow easily.

- The first two numbers of the sequence are $f(1) = 0$ and $f(2) = 1$.
- Each number in the sequence is the sum of the previous two, i.e., $f(n) = f(n - 1) + f(n - 2)$.

Making recursive calls according to this recurrence relation, with the first two numbers as termination conditions, we can write the recursive code. Calling `fib(n)` yields the $n$th number of the Fibonacci sequence.

```src
[file]{recursion}-[class]{}-[func]{fib}
```
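
A direct Python sketch of this recurrence, with $f(1) = 0$ and $f(2) = 1$ as termination conditions:

```python
def fib(n: int) -> int:
    """Return the nth Fibonacci number: f(1) = 0, f(2) = 1."""
    if n == 1 or n == 2:            # termination conditions
        return n - 1
    return fib(n - 1) + fib(n - 2)  # one call spawns two branches

print(fib(7))  # → 8 (sequence: 0, 1, 1, 2, 3, 5, 8, ...)
```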
Looking at the code above, the function recursively calls two functions within itself, **which means that one call produces two call branches**. As shown in the figure below, this recursion produces a recursion tree of depth $n$.

![](https://github.com/krahets/hello-algo/assets/26993056/5cc9f8a4-f06e-414f-9296-0a1a1f27d0ad)

Essentially, recursion embodies the paradigm of "breaking a problem down into smaller subproblems", and this divide-and-conquer strategy is essential.

- From an algorithmic point of view, many important strategies such as searching, sorting, backtracking, divide and conquer, and dynamic programming directly or indirectly apply this way of thinking.
- From a data-structure point of view, recursion is naturally suited to problems involving linked lists, trees, and graphs, because these structures lend themselves well to analysis with divide-and-conquer thinking.
## Compare The Two

Summarizing the above, as shown in the table below, iteration and recursion differ in implementation, performance, and applicability.

<p align="center"> Table <id> Comparison of iteration and recursion features </p>

|                   | Iteration                                         | Recursion                                                                                                         |
| ----------------- | ------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| Implementation    | Loop structure                                    | Function calls itself                                                                                             |
| Time efficiency   | Usually higher, no function-call overhead         | Overhead on every function call                                                                                   |
| Memory usage      | Usually a fixed amount of memory                  | Accumulated function calls may occupy a lot of stack frame space                                                  |
| Suitable problems | Simple loop tasks; code is intuitive and readable | Problem decomposition, e.g., trees, graphs, divide and conquer, backtracking; code structure is concise and clear |
!!! tip

    If you find the following content difficult to understand, you can revisit it after reading the "Stack" chapter.

So what is the intrinsic connection between iteration and recursion? In the recursive function above, the summation happens during the "returning" phase of the recursion. This means that the function called first is actually the last to complete its summation, **a mechanism that works just like the stack's "first in, last out" principle**.

In fact, recursion-related terms such as "call stack" and "stack frame space" already hint at the close relationship between recursion and the stack.

1. **Calling**: when a function is called, the system allocates a new stack frame on the "call stack" for it, storing the function's local variables, parameters, return address, and other data.
2. **Returning**: when a function finishes executing and returns, the corresponding stack frame is removed from the "call stack", restoring the previous function's execution environment.

Therefore, **we can use an explicit stack to simulate the behavior of the call stack**, thus transforming recursion into an iterative form:

```src
[file]{recursion}-[class]{}-[func]{for_loop_recur}
```
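
A possible Python sketch that replays the recursion with an explicit stack: the pushes mimic the "calling" phase and the pops mimic the "returning" phase (an assumed body; the repository's version may differ):

```python
def for_loop_recur(n: int) -> int:
    """Simulate the recursive summation with an explicit stack."""
    stack = []
    # "Calling" phase: push each recursion level onto the stack
    for i in range(n, 0, -1):
        stack.append(i)
    res = 0
    # "Returning" phase: pop each level and aggregate the result
    while stack:
        res += stack.pop()
    return res

print(for_loop_recur(5))  # → 15
```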
Observing the code above, the recursion becomes noticeably more complex once converted to iteration. Although iteration and recursion can be converted into each other in many cases, it is not always worth doing so, for two reasons.

- The transformed code may be harder to understand and less readable.
- For some complex problems, simulating the behavior of the system call stack can be very difficult.

In short, **whether to choose iteration or recursion depends on the nature of the particular problem**. In programming practice, it is crucial to weigh the pros and cons of both and choose the appropriate approach for the context.
# Evaluation Of Algorithm Efficiency

In algorithm design, we aim to achieve two goals in succession:

1. **Finding a Solution to the Problem**: The algorithm needs to reliably find the correct solution within the specified input range.
2. **Seeking the Optimal Solution**: There may be multiple ways to solve the same problem, and our goal is to find the most efficient algorithm possible.

In other words, once the ability to solve the problem is established, the efficiency of the algorithm emerges as the main benchmark for assessing its quality, which includes the following two aspects.

- **Time Efficiency**: The speed at which an algorithm runs.
- **Space Efficiency**: The amount of memory space the algorithm consumes.

In short, our goal is to design data structures and algorithms that are both "fast and economical". Effectively evaluating algorithm efficiency is crucial, as it allows for the comparison of different algorithms and guides the design and optimization process.

There are mainly two approaches for assessing efficiency: practical testing and theoretical estimation.

## Practical Testing

Let's consider a scenario where we have two algorithms, `A` and `B`, both capable of solving the same problem. To compare their efficiency, the most direct method is to use a computer to run both algorithms while monitoring and recording their execution time and memory usage. This approach provides a realistic assessment of their performance, but it also has significant limitations.

On one hand, it's challenging to eliminate the interference of the test environment. Hardware configurations can significantly affect the performance of algorithms. For instance, on one computer, Algorithm `A` might run faster than Algorithm `B`, but the results could be the opposite on another computer with different specifications. This means we would need to conduct tests on a variety of machines and calculate an average efficiency, which is impractical.

Furthermore, conducting comprehensive tests is resource-intensive. The efficiency of algorithms can vary with different volumes of input data. For example, with smaller data sets, Algorithm `A` might run faster than Algorithm `B`; however, this could change with larger data sets. Therefore, to reach a convincing conclusion, it's necessary to test a range of data sizes, which requires excessive computational resources.
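
To make this limitation concrete, here is a hypothetical timing experiment using Python's standard `timeit` module (the algorithm names and inputs are illustrative); the absolute numbers it prints depend entirely on the machine and the input size, which is precisely the drawback of practical testing:

```python
import timeit

def algorithm_a(n: int) -> int:
    return sum(range(1, n + 1))    # built-in summation

def algorithm_b(n: int) -> int:
    res = 0
    for i in range(1, n + 1):      # manual loop
        res += i
    return res

# Time each algorithm over repeated runs; the results vary across machines
t_a = timeit.timeit(lambda: algorithm_a(10_000), number=100)
t_b = timeit.timeit(lambda: algorithm_b(10_000), number=100)
print(f"A: {t_a:.4f}s, B: {t_b:.4f}s")
```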
## Theoretical Estimation

Given the significant limitations of practical testing, we can consider assessing algorithm efficiency solely through calculations. This method of estimation is known as "asymptotic complexity analysis", often simply referred to as "complexity analysis".

Complexity analysis illustrates the relationship between the time (and space) resources required by an algorithm and the size of its input data. **It describes the growing trend in the time and space required for the execution of an algorithm as the size of the input data increases**. This definition might sound a bit complex, so let's break it down into three key points for easier understanding.

- In complexity analysis, "time and space" directly relate to "time complexity" and "space complexity", respectively.
- The statement "as the size of the input data increases" highlights that complexity analysis examines the interplay between the size of the input data and the algorithm's efficiency.
- Lastly, the phrase "the growing trend in time and space required" emphasizes that the focus of complexity analysis is not on the specific values of running time or space occupied, but on the rate at which these requirements increase with larger input sizes.

**Complexity analysis overcomes the drawbacks of practical testing in two key ways:**

- It is independent of the testing environment, meaning the analysis results are applicable across all operating platforms.
- It effectively demonstrates the efficiency of algorithms with varying data volumes, particularly highlighting performance in large-scale data scenarios.

!!! tip

    If you're still finding the concept of complexity confusing, don't worry. We will cover it in more detail in the subsequent chapters.

Complexity analysis provides us with a "ruler" for evaluating the efficiency of algorithms, enabling us to measure the time and space resources required to execute a given algorithm and to compare the efficiency of different algorithms.

Complexity is a mathematical concept that might seem abstract and somewhat challenging for beginners. From this perspective, introducing complexity analysis at the very beginning may not be the most suitable approach. However, when discussing the characteristics of a particular data structure or algorithm, analyzing its operational speed and space usage is often inevitable.

Therefore, it is recommended that before diving deeply into data structures and algorithms, **one should first gain a basic understanding of complexity analysis, so as to be able to carry out complexity analysis of simple algorithms**.
docs-en/chapter_computational_complexity/space_complexity.md
# Space Complexity

"Space complexity" measures the growth trend of the memory space an algorithm consumes as the volume of data increases. This concept is analogous to time complexity, with "runtime" replaced by "memory space".

## Memory Space Related to Algorithms

The memory space used by an algorithm during its execution includes the following types.

- **Input Space**: Used to store the input data for the algorithm.
- **Temporary Space**: Used to store variables, objects, function contexts, and other data of the algorithm during runtime.
- **Output Space**: Used to store the output data of the algorithm.

In general, the "Input Space" is excluded from the statistics of space complexity.

The **Temporary Space** can be further divided into three parts.

- **Temporary Data**: Used to store various constants, variables, objects, etc., during the algorithm's execution.
- **Stack Frame Space**: Used to hold the context data of the called function. The system creates a stack frame at the top of the stack each time a function is called, and the stack frame space is freed when the function returns.
- **Instruction Space**: Used to hold compiled program instructions, usually ignored in practical statistics.

When analyzing the space complexity of a program, **three parts are usually taken into account: Temporary Data, Stack Frame Space, and Output Data**.
=== "Python"
|
||||
|
||||
```python title=""
|
||||
class Node:
|
||||
"""Classes""""
|
||||
def __init__(self, x: int):
|
||||
self.val: int = x # node value
|
||||
self.next: Node | None = None # reference to the next node
|
||||
|
||||
def function() -> int:
|
||||
""""Functions"""""
|
||||
# Perform certain operations...
|
||||
return 0
|
||||
|
||||
def algorithm(n) -> int: # input data
|
||||
A = 0 # temporary data (constant, usually in uppercase)
|
||||
b = 0 # temporary data (variable)
|
||||
node = Node(0) # temporary data (object)
|
||||
c = function() # Stack frame space (call function)
|
||||
return A + b + c # output data
|
||||
```
|
||||
|
||||
=== "C++"
|
||||
|
||||
```cpp title=""
|
||||
/* Structures */
|
||||
struct Node {
|
||||
int val;
|
||||
Node *next;
|
||||
Node(int x) : val(x), next(nullptr) {}
|
||||
};
|
||||
|
||||
/* Functions */
|
||||
int func() {
|
||||
// Perform certain operations...
|
||||
return 0;
|
||||
}
|
||||
|
||||
int algorithm(int n) { // input data
|
||||
const int a = 0; // temporary data (constant)
|
||||
int b = 0; // temporary data (variable)
|
||||
Node* node = new Node(0); // temporary data (object)
|
||||
int c = func(); // stack frame space (call function)
|
||||
return a + b + c; // output data
|
||||
}
|
||||
```
|
||||
|
||||
=== "Java"
|
||||
|
||||
```java title=""
|
||||
/* Classes */
|
||||
class Node {
|
||||
int val;
|
||||
Node next;
|
||||
Node(int x) { val = x; }
|
||||
}
|
||||
|
||||
/* Functions */
|
||||
int function() {
|
||||
// Perform certain operations...
|
||||
return 0;
|
||||
}
|
||||
|
||||
int algorithm(int n) { // input data
|
||||
final int a = 0; // temporary data (constant)
|
||||
int b = 0; // temporary data (variable)
|
||||
Node node = new Node(0); // temporary data (object)
|
||||
int c = function(); // stack frame space (call function)
|
||||
return a + b + c; // output data
|
||||
}
|
||||
```
|
||||
|
||||
=== "C#"
|
||||
|
||||
```csharp title=""
|
||||
/* Classes */
|
||||
class Node {
|
||||
int val;
|
||||
Node next;
|
||||
Node(int x) { val = x; }
|
||||
}
|
||||
|
||||
/* Functions */
|
||||
int Function() {
|
||||
// Perform certain operations...
|
||||
return 0;
|
||||
}
|
||||
|
||||
int Algorithm(int n) { // input data
|
||||
const int a = 0; // temporary data (constant)
|
||||
int b = 0; // temporary data (variable)
|
||||
Node node = new(0); // temporary data (object)
|
||||
int c = Function(); // stack frame space (call function)
|
||||
return a + b + c; // output data
|
||||
}
|
||||
```
|
||||
|
||||
=== "Go"
|
||||
|
||||
```go title=""
|
||||
/* Structures */
|
||||
type node struct {
|
||||
val int
|
||||
next *node
|
||||
}
|
||||
|
||||
/* Create node structure */
|
||||
func newNode(val int) *node {
|
||||
return &node{val: val}
|
||||
}
|
||||
|
||||
/* Functions */
|
||||
func function() int {
|
||||
// Perform certain operations...
|
||||
return 0
|
||||
}
|
||||
|
||||
func algorithm(n int) int { // input data
|
||||
const a = 0 // temporary data (constant)
|
||||
b := 0 // temporary storage of data (variable)
|
||||
newNode(0) // temporary data (object)
|
||||
c := function() // stack frame space (call function)
|
||||
return a + b + c // output data
|
||||
}
|
||||
```
|
||||
|
||||
=== "Swift"
|
||||
|
||||
```swift title=""
|
||||
/* Classes */
|
||||
class Node {
|
||||
var val: Int
|
||||
var next: Node?
|
||||
|
||||
init(x: Int) {
|
||||
val = x
|
||||
}
|
||||
}
|
||||
|
||||
/* Functions */
|
||||
func function() -> Int {
|
||||
// Perform certain operations...
|
||||
return 0
|
||||
}
|
||||
|
||||
func algorithm(n: Int) -> Int { // input data
|
||||
let a = 0 // temporary data (constant)
|
||||
var b = 0 // temporary data (variable)
|
||||
let node = Node(x: 0) // temporary data (object)
|
||||
let c = function() // stack frame space (call function)
|
||||
return a + b + c // output data
|
||||
}
|
||||
```
|
||||
|
||||
=== "JS"
|
||||
|
||||
```javascript title=""
|
||||
/* Classes */
|
||||
class Node {
|
||||
val;
|
||||
next;
|
||||
constructor(val) {
|
||||
this.val = val === undefined ? 0 : val; // node value
|
||||
this.next = null; // reference to the next node
|
||||
}
|
||||
}
|
||||
|
||||
/* Functions */
|
||||
function constFunc() {
|
||||
// Perform certain operations
|
||||
return 0;
|
||||
}
|
||||
|
||||
function algorithm(n) { // input data
|
||||
const a = 0; // temporary data (constant)
|
||||
let b = 0; // temporary data (variable)
|
||||
const node = new Node(0); // temporary data (object)
|
||||
const c = constFunc(); // Stack frame space (calling function)
|
||||
return a + b + c; // output data
|
||||
}
|
||||
```
|
||||
|
||||
=== "TS"
|
||||
|
||||
```typescript title=""
|
||||
/* Classes */
|
||||
class Node {
|
||||
val: number;
|
||||
next: Node | null;
|
||||
constructor(val?: number) {
|
||||
this.val = val === undefined ? 0 : val; // node value
|
||||
this.next = null; // reference to the next node
|
||||
}
|
||||
}
|
||||
|
||||
/* Functions */
|
||||
function constFunc(): number {
|
||||
// Perform certain operations
|
||||
return 0;
|
||||
}
|
||||
|
||||
function algorithm(n: number): number { // input data
|
||||
const a = 0; // temporary data (constant)
|
||||
let b = 0; // temporary data (variable)
|
||||
const node = new Node(0); // temporary data (object)
|
||||
const c = constFunc(); // Stack frame space (calling function)
|
||||
return a + b + c; // output data
|
||||
}
|
||||
```
|
||||
|
||||
=== "Dart"

    ```dart title=""
    /* Classes */
    class Node {
      int val;
      Node? next; // nullable reference to the next node
      Node(this.val, [this.next]);
    }

    /* Functions */
    int function() {
      // Perform certain operations...
      return 0;
    }

    int algorithm(int n) {  // input data
      const int a = 0;      // temporary data (constant)
      int b = 0;            // temporary data (variable)
      Node node = Node(0);  // temporary data (object)
      int c = function();   // stack frame space (calling function)
      return a + b + c;     // output data
    }
    ```

=== "Rust"

    ```rust title=""
    use std::rc::Rc;
    use std::cell::RefCell;

    /* Structures */
    struct Node {
        val: i32,
        next: Option<Rc<RefCell<Node>>>,
    }

    /* Creating a Node structure */
    impl Node {
        fn new(val: i32) -> Self {
            Self { val, next: None }
        }
    }

    /* Functions */
    fn function() -> i32 {
        // Perform certain operations...
        return 0;
    }

    fn algorithm(n: i32) -> i32 { // input data
        const A: i32 = 0;         // temporary data (constant)
        let mut b = 0;            // temporary data (variable)
        let node = Node::new(0);  // temporary data (object)
        let c = function();       // stack frame space (calling function)
        return A + b + c;         // output data
    }
    ```

=== "C"

    ```c title=""
    /* Functions */
    int func() {
        // Perform certain operations...
        return 0;
    }

    int algorithm(int n) {  // input data
        const int a = 0;    // temporary data (constant)
        int b = 0;          // temporary data (variable)
        int c = func();     // stack frame space (calling function)
        return a + b + c;   // output data
    }
    ```

=== "Zig"

    ```zig title=""

    ```

## Calculation Method

The method for calculating space complexity is roughly similar to that for time complexity, with the focus shifted from "operation count" to "size of the space used".

However, unlike time complexity, **we usually only focus on the worst-case space complexity**. This is because memory space is a hard requirement: we must ensure that enough memory is reserved for every possibility the input data may incur.

Looking at the following code, the "worst" in worst-case space complexity has two layers of meaning.

1. **Based on the worst-case input data**: when $n < 10$, the space complexity is $O(1)$; however, when $n > 10$, the initialized array `nums` occupies $O(n)$ space; thus the worst-case space complexity is $O(n)$.
2. **Based on the peak memory during algorithm execution**: for example, the program occupies $O(1)$ space until the last line is executed; when the array `nums` is initialized, the program occupies $O(n)$ space; thus the worst-case space complexity is $O(n)$.

=== "Python"

    ```python title=""
    def algorithm(n: int):
        a = 0               # O(1)
        b = [0] * 10000     # O(1)
        if n > 10:
            nums = [0] * n  # O(n)
    ```

=== "C++"

    ```cpp title=""
    void algorithm(int n) {
        int a = 0;               // O(1)
        vector<int> b(10000);    // O(1)
        if (n > 10)
            vector<int> nums(n); // O(n)
    }
    ```

=== "Java"

    ```java title=""
    void algorithm(int n) {
        int a = 0;                   // O(1)
        int[] b = new int[10000];    // O(1)
        if (n > 10) {
            int[] nums = new int[n]; // O(n)
        }
    }
    ```

=== "C#"

    ```csharp title=""
    void Algorithm(int n) {
        int a = 0;                   // O(1)
        int[] b = new int[10000];    // O(1)
        if (n > 10) {
            int[] nums = new int[n]; // O(n)
        }
    }
    ```

=== "Go"

    ```go title=""
    func algorithm(n int) {
        a := 0                    // O(1)
        b := make([]int, 10000)   // O(1)
        var nums []int
        if n > 10 {
            nums = make([]int, n) // O(n)
        }
        fmt.Println(a, b, nums)
    }
    ```

=== "Swift"

    ```swift title=""
    func algorithm(n: Int) {
        let a = 0                                   // O(1)
        let b = Array(repeating: 0, count: 10000)   // O(1)
        if n > 10 {
            let nums = Array(repeating: 0, count: n) // O(n)
        }
    }
    ```

=== "JS"

    ```javascript title=""
    function algorithm(n) {
        const a = 0;               // O(1)
        const b = new Array(10000); // O(1)
        if (n > 10) {
            const nums = new Array(n); // O(n)
        }
    }
    ```

=== "TS"

    ```typescript title=""
    function algorithm(n: number): void {
        const a = 0;                // O(1)
        const b = new Array(10000); // O(1)
        if (n > 10) {
            const nums = new Array(n); // O(n)
        }
    }
    ```

=== "Dart"

    ```dart title=""
    void algorithm(int n) {
      int a = 0;                           // O(1)
      List<int> b = List.filled(10000, 0); // O(1)
      if (n > 10) {
        List<int> nums = List.filled(n, 0); // O(n)
      }
    }
    ```

=== "Rust"

    ```rust title=""
    fn algorithm(n: i32) {
        let a = 0;          // O(1)
        let b = [0; 10000]; // O(1)
        if n > 10 {
            let nums = vec![0; n as usize]; // O(n)
        }
    }
    ```

=== "C"

    ```c title=""
    void algorithm(int n) {
        int a = 0;       // O(1)
        int b[10000];    // O(1)
        if (n > 10) {
            int nums[n]; // O(n), variable-length array
        }
    }
    ```

=== "Zig"

    ```zig title=""

    ```

**In recursive functions, stack frame space must be taken into account**. Consider the following code:

- The function `loop()` calls `function()` $n$ times in a loop, but each call returns and releases its stack frame space before the next begins, so the space complexity remains $O(1)$.
- The recursive function `recur()` has $n$ instances of unreturned `recur()` existing simultaneously during execution, thus occupying $O(n)$ stack frame space.

=== "Python"

    ```python title=""
    def function() -> int:
        # Perform certain operations
        return 0

    def loop(n: int):
        """Loop O(1)"""
        for _ in range(n):
            function()

    def recur(n: int):
        """Recursion O(n)"""
        if n == 1:
            return
        return recur(n - 1)
    ```

=== "C++"

    ```cpp title=""
    int func() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void loop(int n) {
        for (int i = 0; i < n; i++) {
            func();
        }
    }
    /* Recursion O(n) */
    void recur(int n) {
        if (n == 1) return;
        return recur(n - 1);
    }
    ```

=== "Java"

    ```java title=""
    int function() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void loop(int n) {
        for (int i = 0; i < n; i++) {
            function();
        }
    }
    /* Recursion O(n) */
    void recur(int n) {
        if (n == 1) return;
        recur(n - 1);
    }
    ```

=== "C#"

    ```csharp title=""
    int Function() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void Loop(int n) {
        for (int i = 0; i < n; i++) {
            Function();
        }
    }
    /* Recursion O(n) */
    int Recur(int n) {
        if (n == 1) return 1;
        return Recur(n - 1);
    }
    ```

=== "Go"

    ```go title=""
    func function() int {
        // Perform certain operations
        return 0
    }

    /* Loop O(1) */
    func loop(n int) {
        for i := 0; i < n; i++ {
            function()
        }
    }

    /* Recursion O(n) */
    func recur(n int) {
        if n == 1 {
            return
        }
        recur(n - 1)
    }
    ```

=== "Swift"

    ```swift title=""
    @discardableResult
    func function() -> Int {
        // Perform certain operations
        return 0
    }

    /* Loop O(1) */
    func loop(n: Int) {
        for _ in 0 ..< n {
            function()
        }
    }

    /* Recursion O(n) */
    func recur(n: Int) {
        if n == 1 {
            return
        }
        recur(n: n - 1)
    }
    ```

=== "JS"

    ```javascript title=""
    function constFunc() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    function loop(n) {
        for (let i = 0; i < n; i++) {
            constFunc();
        }
    }
    /* Recursion O(n) */
    function recur(n) {
        if (n === 1) return;
        return recur(n - 1);
    }
    ```

=== "TS"

    ```typescript title=""
    function constFunc(): number {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    function loop(n: number): void {
        for (let i = 0; i < n; i++) {
            constFunc();
        }
    }
    /* Recursion O(n) */
    function recur(n: number): void {
        if (n === 1) return;
        return recur(n - 1);
    }
    ```

=== "Dart"

    ```dart title=""
    int function() {
      // Perform certain operations
      return 0;
    }
    /* Loop O(1) */
    void loop(int n) {
      for (int i = 0; i < n; i++) {
        function();
      }
    }
    /* Recursion O(n) */
    void recur(int n) {
      if (n == 1) return;
      return recur(n - 1);
    }
    ```

=== "Rust"

    ```rust title=""
    fn function() -> i32 {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    fn _loop(n: i32) { // `loop` is a reserved keyword in Rust
        for _ in 0..n {
            function();
        }
    }
    /* Recursion O(n) */
    fn recur(n: i32) {
        if n == 1 {
            return;
        }
        recur(n - 1);
    }
    ```

=== "C"

    ```c title=""
    int func() {
        // Perform certain operations
        return 0;
    }
    /* Loop O(1) */
    void loop(int n) {
        for (int i = 0; i < n; i++) {
            func();
        }
    }
    /* Recursion O(n) */
    void recur(int n) {
        if (n == 1) return;
        recur(n - 1);
    }
    ```

=== "Zig"

    ```zig title=""

    ```

## Common Types

Let the input data size be $n$. The figure below illustrates common types of space complexity, ordered from low to high.

$$
\begin{aligned}
O(1) < O(\log n) < O(n) < O(n^2) < O(2^n) \newline
\text{constant order} < \text{logarithmic order} < \text{linear order} < \text{quadratic order} < \text{exponential order}
\end{aligned}
$$

### Constant Order $O(1)$

Constant order is common for constants, variables, and objects whose quantity is unrelated to the input data size $n$.

Note that the memory occupied by initializing a variable or calling a function inside a loop is released once the iteration ends, so the occupied space does not accumulate and the space complexity remains $O(1)$:

```src
[file]{space_complexity}-[class]{}-[func]{constant}
```

### Linear Order $O(n)$

Linear order is common in arrays, linked lists, stacks, queues, and similar structures, where the number of elements is proportional to $n$:

```src
[file]{space_complexity}-[class]{}-[func]{linear}
```

As shown in the figure below, this function's recursion depth is $n$, meaning there are $n$ instances of unreturned `linear_recur()` at the same time, occupying $O(n)$ stack frame space:

```src
[file]{space_complexity}-[class]{}-[func]{linear_recur}
```

### Quadratic Order $O(n^2)$

Quadratic order is common in matrices and graphs, where the number of elements is quadratic in $n$:

```src
[file]{space_complexity}-[class]{}-[func]{quadratic}
```

As shown in the figure below, the recursion depth of this function is $n$, and each recursive call initializes an array, with lengths $n$, $n-1$, $\dots$, $2$, $1$ and an average length of $n / 2$, thus occupying $O(n^2)$ space overall:

```src
[file]{space_complexity}-[class]{}-[func]{quadratic_recur}
```

### Exponential Order $O(2^n)$

Exponential order is common in binary trees. As the figure below shows, a "full binary tree" with $n$ levels has $2^n - 1$ nodes, occupying $O(2^n)$ space:

```src
[file]{space_complexity}-[class]{}-[func]{build_tree}
```

### Logarithmic Order $O(\log n)$

Logarithmic order is common in divide-and-conquer algorithms. For example, in merge sort, given an array of length $n$ as input, each round of recursion splits the array in half at its midpoint, forming a recursion tree of height $\log n$ and using $O(\log n)$ stack frame space.

Another example is converting a number to a string. Given a positive integer $n$, its digit count is $\lfloor \log_{10} n \rfloor + 1$, so the corresponding string length is $\lfloor \log_{10} n \rfloor + 1$ and the space complexity is $O(\log_{10} n + 1) = O(\log n)$.
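
To see this relationship concretely, here is a small Python sketch (an illustrative addition, not code from the book's source files) that checks the digit-count formula against the actual string length:

```python
import math

def digit_count(n: int) -> int:
    """Number of decimal digits of a positive integer n: floor(log10(n)) + 1"""
    return math.floor(math.log10(n)) + 1

# The length of the string representation grows logarithmically with n
for n in [9, 10, 12345, 10**6]:
    assert len(str(n)) == digit_count(n)
```

Because the digit count grows only logarithmically with the value of $n$, so does the space the string occupies.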
## Weighing Time and Space

Ideally, we would like to optimize both the time complexity and the space complexity of an algorithm. In reality, however, optimizing both simultaneously is often difficult.

**Reducing time complexity usually comes at the expense of increasing space complexity, and vice versa**. Sacrificing memory space to improve running speed is known as "trading space for time", while the opposite is called "trading time for space".

The choice between the two depends on which aspect we value more. In most cases, time is more precious than space, so "trading space for time" is the more common strategy. Of course, when data volumes are large, controlling space complexity is also crucial.
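
As a minimal sketch of this trade-off (an illustrative example with hypothetical function names, not code from this chapter), both functions below detect duplicates in an array, one preferring time over space and the other space over time:

```python
def has_duplicate_brute(nums: list[int]) -> bool:
    """Trading time for space: O(1) extra space, O(n^2) time"""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False

def has_duplicate_hash(nums: list[int]) -> bool:
    """Trading space for time: O(n) extra space, O(n) time"""
    seen = set()
    for num in nums:
        if num in seen:
            return True
        seen.add(num)
    return False
```

Both return the same results; which to prefer depends on whether memory or running time is the scarcer resource.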
# Summary

### Highlights

**Evaluation of Algorithm Efficiency**

- Time efficiency and space efficiency are the two main criteria for measuring an algorithm.
- We can evaluate an algorithm's efficiency through real-world testing, but it is hard to eliminate the influence of the testing environment, and such testing consumes substantial computational resources.
- Complexity analysis overcomes the drawbacks of real-world testing. Its results apply to all operating platforms and reveal the algorithm's efficiency under different data scales.

**Time Complexity**

- Time complexity measures the trend of an algorithm's running time as the data size grows, which can effectively evaluate the algorithm's efficiency. However, it may fail in some cases, such as when the input volume is small or the time complexities are similar, making it difficult to precisely compare the efficiency of algorithms.
- The worst-case time complexity is denoted by big-$O$ notation, which corresponds to the asymptotic upper bound of the function, reflecting the growth rate in the number of operations $T(n)$ as $n$ tends to positive infinity.
- Estimating time complexity involves two steps: first counting the number of operations, then determining the asymptotic upper bound.
- Common time complexities, from lowest to highest, are $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$, $O(2^n)$, and $O(n!)$.
- The time complexity of some algorithms is not fixed and depends on the distribution of the input data. Time complexity can be categorized into worst-case, best-case, and average-case. Best-case time complexity is rarely used, because the input data must meet strict conditions to achieve it.
- Average-case time complexity reflects an algorithm's efficiency on random input data, which is closest to its performance in real-world scenarios. Calculating it requires statistical analysis of the input data distribution and a derived mathematical expectation.

**Space Complexity**

- Space complexity serves a purpose analogous to time complexity: it measures the trend of the space an algorithm occupies as the data volume grows.
- The memory associated with running an algorithm can be categorized into input space, temporary space, and output space. Normally, input space is not counted toward the space complexity. Temporary space can be divided into instruction space, data space, and stack frame space; stack frame space usually affects the space complexity only for recursive functions.
- We mainly focus on the worst-case space complexity, meaning the algorithm's space usage given the worst-case input data and at the peak of its execution.
- Common space complexities, from lowest to highest, are $O(1)$, $O(\log n)$, $O(n)$, $O(n^2)$, and $O(2^n)$.

### Q & A

!!! question "Is the space complexity of tail recursion $O(1)$?"

    Theoretically, the space complexity of a tail-recursive function can be optimized to $O(1)$. However, most programming languages (e.g., Java, Python, C++, Go, C#) do not perform automatic tail-call optimization, so the space complexity is usually treated as $O(n)$.
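
As a hedged sketch (the function names are illustrative), the pair below shows a tail-recursive sum and the loop a tail-call-optimizing compiler would effectively turn it into; in Python the recursive version still consumes $O(n)$ stack space:

```python
def sum_tail(n: int, acc: int = 0) -> int:
    """Tail-recursive sum of 1..n: O(n) stack space in Python (no tail-call optimization)"""
    if n == 0:
        return acc
    return sum_tail(n - 1, acc + n)

def sum_iter(n: int) -> int:
    """Equivalent loop: O(1) space, what tail-call optimization would produce"""
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc
```

The two compute the same result; only the space usage differs.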

!!! question "What is the difference between the terms function and method?"

    A *function* can be executed independently, with all arguments passed explicitly. A *method* is associated with an object: it is implicitly passed the object that calls it and can operate on the data contained in an instance of a class.

    A few common programming languages illustrate this.

    - C is a procedural language without object-oriented concepts, so it has only functions. However, we can simulate object-oriented programming with structures (struct), and the functions associated with structures are equivalent to methods in other languages.
    - Java and C# are object-oriented languages, and blocks of code (methods) are typically part of a class. Static methods behave like functions because they are bound to the class and cannot access specific instance variables.
    - C++ and Python support both procedural programming (functions) and object-oriented programming (methods).
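
The distinction can be sketched in Python (the `Counter` class and names here are hypothetical, purely for illustration):

```python
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        """Method: implicitly receives the instance via `self`"""
        self.count += 1

def increment(count: int) -> int:
    """Function: all data is passed explicitly"""
    return count + 1

c = Counter()
c.increment()  # `c` is passed implicitly as `self`
assert c.count == 1
assert increment(0) == 1
```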

!!! question "Does the figure 'Common Types of Space Complexity' reflect the absolute size of the occupied space?"

    No. The figure shows space complexities, which reflect growth trends rather than the absolute size of the occupied space.

    For example, at $n = 8$ the values of the curves do not match their functions, because each curve includes a constant term intended to compress the value range into a visually comfortable one.

    In practice, since we usually do not know an algorithm's "constant term" complexity, it is generally impossible to choose the best solution for $n = 8$ from complexity alone. But for $n = 8^5$ the choice is much easier, as the growth trend already dominates.