Bug fixes and improvements (#1472)

* preorder, inorder, postorder -> pre-order, in-order, post-order

* Bug fixes

* Bug fixes

* Update what_is_dsa.md

* Sync zh and zh-hant versions

* Sync zh and zh-hant versions.

* Update performance_evaluation.md and time_complexity.md

* Add @khoaxuantu to the landing page.

* Sync zh and zh-hant versions

* Add @khoaxuantu to the landing page of zh-hant and en versions.

* Sync zh and zh-hant versions.

* Small improvements

* Fix issue #1450: correct the misspelling "obsecure" to "obscure" (#1453)

Co-authored-by: Gaya <kheliligaya@gmail.com>

* Update the definition of "adaptive sorting".

* Update n_queens_problem.md

* Sync zh, zh-hant, and en versions.

---------

Co-authored-by: Gaya-Khelili <50716339+Gaya-Khelili@users.noreply.github.com>
Co-authored-by: Gaya <kheliligaya@gmail.com>
Yudong Jin
2024-07-30 16:56:59 +08:00
committed by GitHub
parent 89a911583d
commit c9041c5c5e
34 changed files with 79 additions and 55 deletions


@@ -24,8 +24,7 @@ The code is shown as follows:
Bucket sort is suitable for handling very large data sets. For example, if the input data includes 1 million elements, and system memory limitations prevent loading all the data at once, you can divide the data into 1,000 buckets and sort each bucket separately before merging the results.
-- **Time complexity is $O(n + k)$**: Assuming the elements are evenly distributed across the buckets, the number of elements in each bucket is $n/k$. Assuming sorting a single bucket takes $O(n/k \log(n/k))$ time, sorting all buckets takes $O(n \log(n/k))$ time. **When the number of buckets $k$ is relatively large, the time complexity tends towards $O(n)$**. Merging the results requires traversing all buckets and elements, taking $O(n + k)$ time.
-- **Adaptive sorting**: In the worst case, all data is distributed into a single bucket, and sorting that bucket takes $O(n^2)$ time.
+- **Time complexity is $O(n + k)$**: Assuming the elements are evenly distributed across the buckets, the number of elements in each bucket is $n/k$. Assuming sorting a single bucket takes $O(n/k \log(n/k))$ time, sorting all buckets takes $O(n \log(n/k))$ time. **When the number of buckets $k$ is relatively large, the time complexity tends towards $O(n)$**. Merging the results requires traversing all buckets and elements, taking $O(n + k)$ time. In the worst case, all data is distributed into a single bucket, and sorting that bucket takes $O(n^2)$ time.
- **Space complexity is $O(n + k)$, non-in-place sorting**: It requires additional space for $k$ buckets and a total of $n$ elements.
- Whether bucket sort is stable depends on whether the algorithm used to sort elements within the buckets is stable.
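As an illustration of the merged bullet above, here is a minimal bucket sort sketch, assuming the inputs are floats in $[0, 1)$ so that `int(num * k)` can serve as the bucketing function (the bucket count `k = 10` is arbitrary):

```python
def bucket_sort(nums: list[float], k: int = 10) -> list[float]:
    """Distribute into k buckets, sort each bucket, then merge in order."""
    buckets: list[list[float]] = [[] for _ in range(k)]
    for num in nums:
        buckets[int(num * k)].append(num)  # assumes 0 <= num < 1
    for bucket in buckets:
        bucket.sort()  # roughly O(n/k log(n/k)) per bucket when data is even
    return [num for bucket in buckets for num in bucket]
```

If every element lands in the same bucket, all work collapses onto a single inner sort, which is the degenerate case the bullet describes.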


@@ -61,7 +61,7 @@ The overall process of quick sort is shown in the figure below.
## Algorithm features
-- **Time complexity of $O(n \log n)$, adaptive sorting**: In average cases, the recursive levels of pivot partitioning are $\log n$, and the total number of loops per level is $n$, using $O(n \log n)$ time overall. In the worst case, each round of pivot partitioning divides an array of length $n$ into two sub-arrays of lengths $0$ and $n - 1$, reaching $n$ recursive levels, and using $O(n^2)$ time overall.
+- **Time complexity of $O(n \log n)$, non-adaptive sorting**: In average cases, the recursive levels of pivot partitioning are $\log n$, and the total number of loops per level is $n$, using $O(n \log n)$ time overall. In the worst case, each round of pivot partitioning divides an array of length $n$ into two sub-arrays of lengths $0$ and $n - 1$, reaching $n$ recursive levels, and using $O(n^2)$ time overall.
- **Space complexity of $O(n)$, in-place sorting**: In completely reversed input arrays, reaching the worst recursion depth of $n$, using $O(n)$ stack frame space. The sorting operation is performed on the original array without the aid of additional arrays.
- **Non-stable sorting**: In the final step of pivot partitioning, the pivot may be swapped to the right of equal elements.
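To make these points concrete, here is a minimal Python sketch of quick sort using the common `nums[left]`-as-pivot convention (an illustrative choice, not necessarily the file's own implementation). The final swap in `partition` is exactly the step that can move the pivot past equal elements, which is why the algorithm is not stable:

```python
def partition(nums: list[int], left: int, right: int) -> int:
    """Partition nums[left..right] around the pivot nums[left]."""
    pivot = nums[left]
    i, j = left, right
    while i < j:
        while i < j and nums[j] >= pivot:
            j -= 1  # scan from the right for an element smaller than the pivot
        while i < j and nums[i] <= pivot:
            i += 1  # scan from the left for an element larger than the pivot
        nums[i], nums[j] = nums[j], nums[i]
    nums[left], nums[i] = nums[i], nums[left]  # may jump over equal elements
    return i

def quick_sort(nums: list[int], left: int, right: int):
    """Average recursion depth is log n; a degenerate split reaches depth n."""
    if left >= right:
        return
    pivot_index = partition(nums, left, right)
    quick_sort(nums, left, pivot_index - 1)
    quick_sort(nums, pivot_index + 1, right)
```

Call it as `quick_sort(nums, 0, len(nums) - 1)`; sorting happens in place, so only the recursion stack costs extra space.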


@@ -35,14 +35,12 @@ Stable sorting is a necessary condition for multi-level sorting scenarios. Suppo
('E', 23)
```
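To illustrate the multi-level sorting point (only `('E', 23)` above comes from the original; the other records are made up for this sketch), Python's built-in sort is stable, so sorting by the secondary key first and then by the primary key produces a correct multi-level ordering:

```python
# Hypothetical (name, age) records; only ('E', 23) appears in the text above.
students = [("D", 19), ("A", 23), ("C", 19), ("B", 23), ("E", 23)]
students.sort(key=lambda s: s[0])  # pass 1: secondary key (name)
students.sort(key=lambda s: s[1])  # pass 2: primary key (age); stability
                                   # keeps the name order within equal ages
print(students)  # [('C', 19), ('D', 19), ('A', 23), ('B', 23), ('E', 23)]
```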
-**Adaptability**: <u>Adaptive sorting</u> has a time complexity that depends on the input data, i.e., the best time complexity, worst time complexity, and average time complexity are not exactly equal.
-Adaptability needs to be assessed according to the specific situation. If the worst time complexity is worse than the average, it suggests that the performance of the sorting algorithm might deteriorate under certain data, hence it is seen as a negative attribute; whereas, if the best time complexity is better than the average, it is considered a positive attribute.
+**Adaptability**: <u>Adaptive sorting</u> leverages existing order information within the input data to reduce computational effort, achieving more optimal time efficiency. The best-case time complexity of adaptive sorting algorithms is typically better than their average-case time complexity.
**Comparison-based**: <u>Comparison-based sorting</u> relies on comparison operators ($<$, $=$, $>$) to determine the relative order of elements and thus sort the entire array, with the theoretical optimal time complexity being $O(n \log n)$. Meanwhile, <u>non-comparison sorting</u> does not use comparison operators and can achieve a time complexity of $O(n)$, but its versatility is relatively poor.
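As a concrete instance of the updated adaptivity definition, insertion sort (used here only as a familiar example, not taken from the diff) exploits existing order: on already-sorted input the inner loop never runs, so the best case is $O(n)$, better than its $O(n^2)$ average:

```python
def insertion_sort(nums: list[int]):
    """Adaptive comparison sort: existing order in the input reduces work."""
    for i in range(1, len(nums)):
        base = nums[i]
        j = i - 1
        while j >= 0 and nums[j] > base:  # never entered if input is sorted
            nums[j + 1] = nums[j]  # shift larger elements one slot right
            j -= 1
        nums[j + 1] = base
```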
## Ideal sorting algorithm
-**Fast execution, in-place, stable, positively adaptive, and versatile**. Clearly, no sorting algorithm that combines all these features has been found to date. Therefore, when selecting a sorting algorithm, it is necessary to decide based on the specific characteristics of the data and the requirements of the problem.
+**Fast execution, in-place, stable, adaptive, and versatile**. Clearly, no sorting algorithm that combines all these features has been found to date. Therefore, when selecting a sorting algorithm, it is necessary to decide based on the specific characteristics of the data and the requirements of the problem.
Next, we will learn about various sorting algorithms together and analyze the advantages and disadvantages of each based on the above evaluation dimensions.

Binary file not shown (image updated: 61 KiB before → 58 KiB after).


@@ -9,7 +9,7 @@
- Bucket sort consists of three steps: data bucketing, sorting within buckets, and merging results. It also embodies the divide-and-conquer strategy, suitable for very large datasets. The key to bucket sort is the even distribution of data.
- Counting sort is a special case of bucket sort, which sorts by counting the occurrences of each data point. Counting sort is suitable for large datasets with a limited range of data and requires that data can be converted to positive integers.
- Radix sort sorts data by sorting digit by digit, requiring data to be represented as fixed-length numbers.
-- Overall, we hope to find a sorting algorithm that has high efficiency, stability, in-place operation, and positive adaptability. However, like other data structures and algorithms, no sorting algorithm can meet all these conditions simultaneously. In practical applications, we need to choose the appropriate sorting algorithm based on the characteristics of the data.
+- Overall, we hope to find a sorting algorithm that has high efficiency, stability, in-place operation, and adaptability. However, like other data structures and algorithms, no sorting algorithm can meet all these conditions simultaneously. In practical applications, we need to choose the appropriate sorting algorithm based on the characteristics of the data.
- The figure below compares mainstream sorting algorithms in terms of efficiency, stability, in-place nature, and adaptability.
![Sorting Algorithm Comparison](summary.assets/sorting_algorithms_comparison.png)
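Since counting sort is singled out above as the special case of bucket sort, a minimal sketch may help; it assumes a non-empty list of non-negative integers, with `k` equal to the maximum value:

```python
def counting_sort(nums: list[int]) -> list[int]:
    """Non-comparison sort in O(n + k): count occurrences, emit in order."""
    counter = [0] * (max(nums) + 1)  # index range covers 0..k
    for num in nums:
        counter[num] += 1
    res: list[int] = []
    for num, count in enumerate(counter):
        res.extend([num] * count)  # emit each value `count` times
    return res

print(counting_sort([3, 1, 2, 3, 0]))  # [0, 1, 2, 3, 3]
```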