mirror of
https://github.com/krahets/hello-algo.git
synced 2025-07-19 12:24:34 +08:00
build
26
en/docs/chapter_computational_complexity/index.md
Normal file

@@ -0,0 +1,26 @@
---
comments: true
icon: material/timer-sand
---

# Chapter 2. Complexity Analysis

<div class="center-table" markdown>

{ class="cover-image" }

</div>

!!! abstract

    Complexity analysis is like a space-time navigator in the vast universe of algorithms.

    It guides us in exploring deeper within the dimensions of time and space, seeking more elegant solutions.

## Chapter Contents

- [2.1 Algorithm Efficiency Assessment](https://www.hello-algo.com/en/chapter_computational_complexity/performance_evaluation/)
- [2.2 Iteration and Recursion](https://www.hello-algo.com/en/chapter_computational_complexity/iteration_and_recursion/)
- [2.3 Time Complexity](https://www.hello-algo.com/en/chapter_computational_complexity/time_complexity/)
- [2.4 Space Complexity](https://www.hello-algo.com/en/chapter_computational_complexity/space_complexity/)
- [2.5 Summary](https://www.hello-algo.com/en/chapter_computational_complexity/summary/)

1798
en/docs/chapter_computational_complexity/iteration_and_recursion.md
Normal file
File diff suppressed because it is too large

@@ -0,0 +1,52 @@
---
comments: true
---

# 2.1 Algorithm Efficiency Assessment

In algorithm design, we pursue the following two objectives in sequence.

1. **Finding a Solution to the Problem**: The algorithm should reliably find the correct solution within the stipulated range of inputs.
2. **Seeking the Optimal Solution**: For the same problem, multiple solutions might exist, and we aim to find the most efficient algorithm possible.

In other words, once an algorithm can solve the problem, its efficiency becomes the main criterion for evaluating its merits, which includes the following two dimensions.

- **Time Efficiency**: The speed at which an algorithm runs.
- **Space Efficiency**: The size of the memory space occupied by an algorithm.

In short, **our goal is to design data structures and algorithms that are both fast and memory-efficient**. Effectively assessing algorithm efficiency is crucial because only then can we compare various algorithms and guide the process of algorithm design and optimization.

There are mainly two methods of efficiency assessment: actual testing and theoretical estimation.

## 2.1.1 Actual Testing

Suppose we have algorithms `A` and `B`, both capable of solving the same problem, and we need to compare their efficiencies. The most direct method is to use a computer to run these two algorithms and monitor and record their runtime and memory usage. This assessment method reflects the actual situation but has significant limitations.
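
As a rough illustration (not part of the book's code), the following Python sketch runs two hypothetical algorithms, `algorithm_A` and `algorithm_B`, on the same input and records their runtime and peak memory with the standard `time` and `tracemalloc` modules. The numbers it prints depend entirely on the machine and the chosen input size, which is exactly the limitation discussed next.

```python
import time
import tracemalloc

def algorithm_A(data: list[int]) -> int:
    """Hypothetical algorithm A: find the maximum in a single pass."""
    best = data[0]
    for x in data:
        if x > best:
            best = x
    return best

def algorithm_B(data: list[int]) -> int:
    """Hypothetical algorithm B: find the maximum by checking every element against all others."""
    for x in data:
        if all(x >= y for y in data):
            return x

def measure(func, data):
    """Record wall-clock time and peak memory of a single run."""
    tracemalloc.start()
    start = time.perf_counter()
    func(data)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

data = list(range(2_000))
for func in (algorithm_A, algorithm_B):
    elapsed, peak = measure(func, data)
    print(f"{func.__name__}: {elapsed:.4f} s, peak extra memory {peak} bytes")
```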

On one hand, **it's difficult to eliminate interference from the testing environment**. Hardware configurations can affect algorithm performance. For example, algorithm `A` might run faster than `B` on one computer, but the opposite result may occur on another computer with different configurations. This means we would need to test on a variety of machines to calculate average efficiency, which is impractical.

On the other hand, **conducting a full test is very resource-intensive**. As the volume of input data changes, the efficiency of the algorithms may vary. For example, with smaller data volumes, algorithm `A` might run faster than `B`, but the opposite might be true with larger data volumes. Therefore, to draw convincing conclusions, we need to test a wide range of input data sizes, which requires significant computational resources.

## 2.1.2 Theoretical Estimation

Due to the significant limitations of actual testing, we can consider evaluating algorithm efficiency solely through calculations. This estimation method is known as "asymptotic complexity analysis," or simply "complexity analysis."

Complexity analysis reflects the relationship between the time and space resources required for algorithm execution and the size of the input data. **It describes the trend of growth in the time and space required by the algorithm as the size of the input data increases**. This definition might sound complex, but we can break it down into three key points to understand it better.

- "Time and space resources" correspond to "time complexity" and "space complexity," respectively.
- "As the size of input data increases" means that complexity reflects the relationship between algorithm efficiency and the volume of input data.
- "The trend of growth in time and space" indicates that complexity analysis focuses not on the specific values of runtime or space occupied but on the "rate" at which time or space grows, as the short sketch after this list illustrates.

**Complexity analysis overcomes the disadvantages of actual testing methods**, as reflected in the following aspects:

- It is independent of the testing environment and applicable to all operating platforms.
- It can reflect algorithm efficiency under different data volumes, especially the performance of algorithms with large data volumes.

!!! tip

    If you're still confused about the concept of complexity, don't worry. We will introduce it in detail in subsequent chapters.

Complexity analysis provides us with a "ruler" to measure the time and space resources needed to execute an algorithm and compare the efficiency between different algorithms.

Complexity is a mathematical concept and may be abstract and challenging for beginners. From this perspective, complexity analysis might not be the best content to introduce first. However, when discussing the characteristics of a particular data structure or algorithm, it's hard to avoid analyzing its speed and space usage.

In summary, it's recommended that you establish a preliminary understanding of complexity analysis before diving deep into data structures and algorithms, **so that you can carry out simple complexity analyses of algorithms**.

2077
en/docs/chapter_computational_complexity/space_complexity.md
Normal file
File diff suppressed because it is too large

53
en/docs/chapter_computational_complexity/summary.md
Normal file
@@ -0,0 +1,53 @@

---
comments: true
---

# 2.5 Summary

### 1. Key Review

**Algorithm Efficiency Assessment**

- Time efficiency and space efficiency are the two main criteria for assessing the merits of an algorithm.
- We can assess algorithm efficiency through actual testing, but it's challenging to eliminate the influence of the test environment, and it consumes substantial computational resources.
- Complexity analysis can overcome the disadvantages of actual testing. Its results are applicable across all operating platforms and can reveal the efficiency of algorithms at different data scales.

**Time Complexity**

- Time complexity measures the trend of an algorithm's running time with the increase in data volume, effectively assessing algorithm efficiency. However, it can fail in certain cases, such as with small input data volumes or when time complexities are the same, making it challenging to precisely compare the efficiency of algorithms.
- Worst-case time complexity is denoted using big O notation, representing the asymptotic upper bound, reflecting the order of growth of the number of operations $T(n)$ as $n$ approaches infinity.
- Calculating time complexity involves two steps: first counting the number of operations, then determining the asymptotic upper bound (see the short sketch after this list).
- Common time complexities, arranged from low to high, include $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$, $O(2^n)$, and $O(n!)$, among others.
- The time complexity of some algorithms is not fixed and depends on the distribution of input data. Time complexities are divided into worst, best, and average cases. The best case is rarely used because input data generally needs to meet strict conditions to achieve it.
- Average time complexity reflects the efficiency of an algorithm under random data inputs, closely resembling the algorithm's performance in actual applications. Calculating average time complexity requires accounting for the distribution of input data and the resulting mathematical expectation.
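
As a reminder of how those two steps play out, here is a minimal illustrative example (not from the book's code): counting the operations of a simple loop gives roughly $T(n) = 2n + 1$ under one plausible counting convention, and keeping only the dominant term yields $O(n)$.

```python
def example(n: int) -> int:
    """Count: 1 initialization, plus n additions and n loop-variable updates."""
    total = 0              # 1 operation
    for i in range(n):     # the loop body runs n times
        total += i         # n additions (plus n loop-variable updates)
    return total

# Step 1: count the operations as a function of n, e.g. T(n) = 2n + 1.
# Step 2: keep only the fastest-growing term and drop its constant factor,
#         so the asymptotic upper bound is O(n).
```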

**Space Complexity**

- Space complexity, similar to time complexity, measures the trend of memory space occupied by an algorithm with the increase in data volume.
- The relevant memory space used during the algorithm's execution can be divided into input space, temporary space, and output space. Generally, input space is not included in space complexity calculations. Temporary space can be divided into temporary data, stack frame space, and instruction space, where stack frame space usually affects space complexity only in recursive functions (see the sketch after this list).
- We usually focus only on the worst-case space complexity, which means calculating the space complexity of the algorithm under the worst input data and at the worst moment of operation.
- Common space complexities, arranged from low to high, include $O(1)$, $O(\log n)$, $O(n)$, $O(n^2)$, and $O(2^n)$, among others.
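
For a rough feel of those categories, the following illustrative sketch (not from the book's code) shows a loop that needs only $O(1)$ temporary space, a function whose temporary data grows as $O(n)$, and a recursive function whose stack frames likewise add up to $O(n)$.

```python
def constant_space(n: int) -> int:
    """Only a few variables are used, so temporary space stays O(1)."""
    total = 0
    for i in range(n):
        total += i
    return total

def linear_space(n: int) -> list[int]:
    """A list of length n is built, so temporary space grows as O(n)."""
    nums = [0] * n
    for i in range(n):
        nums[i] = i
    return nums

def recursive_space(n: int) -> int:
    """Each call adds a stack frame; n nested calls give O(n) space."""
    if n <= 1:
        return 1
    return recursive_space(n - 1) + 1
```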

### 2. Q & A

**Q**: Is the space complexity of tail recursion $O(1)$?

Theoretically, the space complexity of a tail-recursive function can be optimized to $O(1)$. However, most programming languages (such as Java, Python, C++, Go, C#) do not support automatic optimization of tail recursion, so it's generally considered to have a space complexity of $O(n)$.
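
The following sketch (illustrative only) shows the distinction in Python. `sum_tail` is written in tail-recursive form, but because Python performs no tail-call optimization, both versions still build a call stack of depth $n$ and therefore use $O(n)$ space; a language with tail-call optimization could reuse the frame in the tail-recursive version.

```python
def sum_recursive(n: int) -> int:
    """Ordinary recursion: the addition happens after the recursive call returns,
    so all n frames must be kept -> O(n) space."""
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_tail(n: int, acc: int = 0) -> int:
    """Tail recursion: the recursive call is the last action.
    With tail-call optimization this could run in O(1) space,
    but Python keeps all n frames, so it is still O(n) here."""
    if n == 0:
        return acc
    return sum_tail(n - 1, acc + n)

print(sum_recursive(10), sum_tail(10))  # 55 55
```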

**Q**: What is the difference between the terms "function" and "method"?

A "function" can be executed independently, with all parameters passed explicitly. A "method" is associated with an object: it is implicitly passed the object that calls it and can operate on the data contained within an instance of a class.

Here are some examples from common programming languages:

- C is a procedural programming language without object-oriented concepts, so it only has functions. However, we can simulate object-oriented programming by creating structures (`struct`), and functions associated with these structures are equivalent to methods in other programming languages.
- Java and C# are object-oriented programming languages where code blocks (methods) are typically part of a class. Static methods behave like functions because they are bound to the class and cannot access specific instance variables.
- C++ and Python support both procedural programming (functions) and object-oriented programming (methods).
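
As a small Python illustration (hypothetical names), `area` below is a standalone function whose inputs are all passed explicitly, while `Rectangle.area` is a method that implicitly receives the instance it is called on via `self`.

```python
def area(width: float, height: float) -> float:
    """A function: everything it needs is passed in explicitly."""
    return width * height

class Rectangle:
    def __init__(self, width: float, height: float):
        self.width = width
        self.height = height

    def area(self) -> float:
        """A method: the instance is passed implicitly as `self`."""
        return self.width * self.height

print(area(3, 4))              # 12 -- plain function call
print(Rectangle(3, 4).area())  # 12 -- method call on an instance
```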

**Q**: Does the "Common Types of Space Complexity" figure reflect the absolute size of occupied space?

No, the figure shows space complexities, which reflect growth trends, not the absolute size of the occupied space.

If you take $n = 8$, you might find that the values of each curve don't correspond to their functions. This is because each curve includes a constant term, intended to compress the value range into a visually comfortable range.

In practice, since we usually don't know each method's "constant term," it's generally not possible to choose the best solution for $n = 8$ based solely on complexity. However, for $n = 8^5$, it's much easier to choose, as the growth trend becomes dominant.

3426
en/docs/chapter_computational_complexity/time_complexity.md
Normal file
File diff suppressed because it is too large