This commit is contained in:
krahets
2023-12-27 00:48:00 +08:00
parent e56cb78f28
commit 8d49c46234
18 changed files with 5405 additions and 37 deletions

File diff suppressed because it is too large

View File

@@ -0,0 +1,22 @@
---
comments: true
icon: material/view-list-outline
---
# Chapter 4. &nbsp; Arrays and Linked Lists
![Arrays and Linked Lists](../assets/covers/chapter_array_and_linkedlist.jpg){ class="cover-image" }
!!! abstract
The world of data structures is like a solid brick wall.
The bricks of an array are neatly arranged, each closely connected to the next. In contrast, the bricks of a linked list are scattered, with vines of connections freely weaving through the gaps between bricks.
## Chapter Contents
- [4.1 &nbsp; Array](https://www.hello-algo.com/chapter_array_and_linkedlist/array/)
- [4.2 &nbsp; Linked List](https://www.hello-algo.com/chapter_array_and_linkedlist/linked_list/)
- [4.3 &nbsp; List](https://www.hello-algo.com/chapter_array_and_linkedlist/list/)
- [4.4 &nbsp; Memory and Cache](https://www.hello-algo.com/chapter_array_and_linkedlist/ram_and_cache/)
- [4.5 &nbsp; Summary](https://www.hello-algo.com/chapter_array_and_linkedlist/summary/)

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,83 @@
---
comments: true
---
# 4.4   Memory and Cache *
In the first two sections of this chapter, we explored arrays and linked lists, two fundamental and important data structures, representing "continuous storage" and "dispersed storage" respectively.
In fact, **the physical structure largely determines the efficiency of a program's use of memory and cache**, which in turn affects the overall performance of the algorithm.
## 4.4.1   Computer Storage Devices
There are three types of storage devices in computers: "hard disk," "random-access memory (RAM)," and "cache memory." The following table shows their different roles and performance characteristics in computer systems.
<p align="center"> Table 4-2 &nbsp; Computer Storage Devices </p>
<div class="center-table" markdown>
| | Hard Disk | Memory | Cache |
| ---------- | -------------------------------------------------------------- | ------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------- |
| Usage | Long-term storage of data, including OS, programs, files, etc. | Temporary storage of currently running programs and data being processed | Stores frequently accessed data and instructions, reducing the number of CPU accesses to memory |
| Volatility | Data is not lost after power off | Data is lost after power off | Data is lost after power off |
| Capacity | Larger, TB level | Smaller, GB level | Very small, MB level |
| Speed | Slower, several hundred to thousands MB/s | Faster, several tens of GB/s | Very fast, several tens to hundreds of GB/s |
| Price | Cheaper, several cents to yuan / GB | More expensive, tens to hundreds of yuan / GB | Very expensive, priced with CPU |
</div>
We can imagine the computer storage system as the pyramid structure shown in Figure 4-9. The storage devices closer to the top of the pyramid are faster, have smaller capacities, and are more costly. This multi-level design is not accidental, but the result of careful consideration by computer scientists and engineers.
- **Hard disks are difficult to replace with memory**. Firstly, data in memory is lost after power off, making it unsuitable for long-term data storage; secondly, memory costs dozens of times more than hard disks per unit of capacity, which makes it hard to adopt widely in the consumer market.
- **It is difficult for caches to have both large capacity and high speed**. As the capacity of L1, L2, L3 caches gradually increases, their physical size becomes larger, increasing the physical distance from the CPU core, leading to increased data transfer time and higher element access latency. Under current technology, a multi-level cache structure is the best balance between capacity, speed, and cost.
![Computer Storage System](ram_and_cache.assets/storage_pyramid.png){ class="animation-figure" }
<p align="center"> Figure 4-9 &nbsp; Computer Storage System </p>
!!! note
The storage hierarchy of computers reflects a delicate balance between speed, capacity, and cost. In fact, this kind of trade-off is common in all industrial fields, requiring us to find the best balance between different advantages and limitations.
Overall, **hard disks are used for long-term storage of large amounts of data, memory is used for temporary storage of data being processed during program execution, and cache is used to store frequently accessed data and instructions** to improve program execution efficiency. Together, they ensure the efficient operation of computer systems.
As shown in Figure 4-10, during program execution, data is read from the hard disk into memory for CPU computation. The cache can be considered a part of the CPU, **smartly loading data from memory** to provide fast data access to the CPU, significantly enhancing program execution efficiency and reducing reliance on slower memory.
![Data Flow Between Hard Disk, Memory, and Cache](ram_and_cache.assets/computer_storage_devices.png){ class="animation-figure" }
<p align="center"> Figure 4-10 &nbsp; Data Flow Between Hard Disk, Memory, and Cache </p>
## 4.4.2 &nbsp; Memory Efficiency of Data Structures
In terms of memory space utilization, arrays and linked lists have their advantages and limitations.
On one hand, **memory is limited, and a given block of memory cannot be used by two programs at the same time**, so we hope that data structures use space as efficiently as possible. The elements of an array are tightly packed and need no extra space for the references (pointers) that connect linked list nodes, making arrays more space-efficient. However, arrays require allocating sufficient contiguous memory space at once, which may lead to memory waste, and expanding an array incurs additional time and space costs. In contrast, linked lists allocate and reclaim memory dynamically on a per-node basis, providing greater flexibility.
On the other hand, during program execution, **as memory is repeatedly allocated and released, the degree of fragmentation of free memory becomes higher**, leading to reduced memory utilization efficiency. Arrays, due to their continuous storage method, are relatively less likely to cause memory fragmentation. In contrast, the elements of a linked list are dispersedly stored, and frequent insertion and deletion operations make memory fragmentation more likely.
## 4.4.3 &nbsp; Cache Efficiency of Data Structures
Although caches are much smaller in capacity than memory, they are much faster and play a crucial role in program execution speed. Since a cache's capacity is limited and can only hold a small portion of frequently accessed data, when the CPU tries to access data not in the cache, a "cache miss" occurs, forcing the CPU to load the needed data from slower memory.
Clearly, **the fewer the cache misses, the higher the CPU's data read-write efficiency**, and the better the program performance. The proportion of successful data retrieval from the cache by the CPU is called the "cache hit rate," a metric often used to measure cache efficiency.
To achieve higher efficiency, caches adopt the following data loading mechanisms.
- **Cache Lines**: Caches don't store and load data byte by byte but in units of cache lines. Compared to byte-by-byte transfer, the transmission of cache lines is more efficient.
- **Prefetch Mechanism**: Processors try to predict data access patterns (such as sequential access, fixed stride jumping access, etc.) and load data into the cache according to specific patterns to improve the hit rate.
- **Spatial Locality**: If data is accessed, data nearby is likely to be accessed in the near future. Therefore, when loading certain data, the cache also loads nearby data to improve the hit rate.
- **Temporal Locality**: If data is accessed, it's likely to be accessed again in the near future. Caches use this principle to retain recently accessed data to improve the hit rate.
In fact, **arrays and linked lists have different cache utilization efficiencies**, mainly reflected in the following aspects.
- **Occupied Space**: Linked list elements occupy more space than array elements, resulting in less effective data volume in the cache.
- **Cache Lines**: Linked list data is scattered throughout memory, and since caches load "by line," the proportion of loading invalid data is higher.
- **Prefetch Mechanism**: The data access pattern of arrays is more "predictable" than that of linked lists, meaning the system is more likely to guess which data will be loaded next.
- **Spatial Locality**: Arrays are stored in concentrated memory spaces, so the data near the loaded data is more likely to be accessed next.
Overall, **arrays have a higher cache hit rate and are generally more efficient in operation than linked lists**. This makes data structures based on arrays more popular in solving algorithmic problems.
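To get a rough feel for this effect, we can compare row-major and column-major traversal of the same two-dimensional list. The following is only a suggestive Python sketch: CPython's interpreter and indexing overhead dominate the runtime, so the measured gap reflects locality only loosely, and absolute numbers vary by machine.

```python
import time

n = 2000
matrix = [[1] * n for _ in range(n)]

def sum_row_major(m):
    # Inner loop walks one row's elements in their storage order
    return sum(m[i][j] for i in range(n) for j in range(n))

def sum_col_major(m):
    # Inner loop jumps to a different row on every step
    return sum(m[i][j] for j in range(n) for i in range(n))

for f in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    f(matrix)
    print(f.__name__, f"{time.perf_counter() - start:.3f} s")
```

On typical hardware the row-major version often finishes noticeably faster, consistent with the spatial locality argument above.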
It should be noted that **high cache efficiency does not mean that arrays are always better than linked lists**. Which data structure to choose in actual applications should be based on specific requirements. For example, both arrays and linked lists can implement the "stack" data structure (which will be detailed in the next chapter), but they are suitable for different scenarios.
- In algorithm problems, we tend to choose stacks based on arrays because they provide higher operational efficiency and random access capabilities, with the only cost being the need to pre-allocate a certain amount of memory space for the array.
- If the data volume is very large, highly dynamic, and the expected size of the stack is difficult to estimate, then a stack based on a linked list is more appropriate. Linked lists can disperse a large amount of data in different parts of the memory and avoid the additional overhead of array expansion.

View File

@@ -0,0 +1,85 @@
---
comments: true
---
# 4.5 &nbsp; Summary
### 1. &nbsp; Key Review
- Arrays and linked lists are two fundamental data structures, representing two storage methods in computer memory: continuous space storage and dispersed space storage. Their characteristics complement each other.
- Arrays support random access and use less memory; however, they are inefficient in inserting and deleting elements and have a fixed length after initialization.
- Linked lists implement efficient node insertion and deletion through changing references (pointers) and can flexibly adjust their length; however, they have lower node access efficiency and use more memory.
- Common types of linked lists include singly linked lists, circular linked lists, and doubly linked lists, each with its own application scenarios.
- Lists are ordered collections of elements that support addition, deletion, and modification, typically implemented based on dynamic arrays, retaining the advantages of arrays while allowing flexible length adjustment.
- The advent of lists significantly enhanced the practicality of arrays but may lead to some memory space wastage.
- During program execution, data is mainly stored in memory. Arrays provide higher memory space efficiency, while linked lists are more flexible in memory usage.
- Caches provide fast data access to CPUs through mechanisms like cache lines, prefetching, spatial locality, and temporal locality, significantly enhancing program execution efficiency.
- Due to higher cache hit rates, arrays are generally more efficient than linked lists. When choosing a data structure, the appropriate choice should be made based on specific needs and scenarios.
### 2. &nbsp; Q & A
!!! question "Does storing arrays on the stack versus the heap affect time and space efficiency?"
Arrays stored on both the stack and heap are stored in continuous memory spaces, and data operation efficiency is essentially the same. However, stacks and heaps have their own characteristics, leading to the following differences.
1. Allocation and release efficiency: The stack is a smaller block of memory, allocated automatically by the compiler; the heap is relatively larger, is allocated dynamically in code, and is more prone to fragmentation. Therefore, allocation and release operations on the heap are generally slower than on the stack.
2. Size limitation: Stack memory is relatively small, while the heap size is generally limited by available memory. Therefore, the heap is more suitable for storing large arrays.
3. Flexibility: The size of arrays on the stack needs to be determined at compile-time, while the size of arrays on the heap can be dynamically determined at runtime.
!!! question "Why do arrays require elements of the same type, while linked lists do not emphasize same-type elements?"
Linked lists consist of nodes connected by references (pointers), and each node can store data of different types, such as int, double, string, object, etc.
In contrast, array elements must be of the same type, allowing the calculation of offsets to access the corresponding element positions. For example, an array containing both int and long types, with single elements occupying 4 bytes and 8 bytes respectively, cannot use the following formula to calculate offsets, as the array contains elements of two different lengths.
```shell
# Element memory address = Array memory address + Element length * Element index
```
!!! question "After deleting a node, is it necessary to set `P.next` to `None`?"
Not modifying `P.next` is also acceptable. From the perspective of the linked list, traversing from the head node to the tail node will no longer encounter `P`. This means that node `P` has been effectively removed from the list, and where `P` points no longer affects the list.
From a garbage collection perspective, for languages with automatic garbage collection mechanisms like Java, Python, and Go, whether node `P` is collected depends on whether there are still references pointing to it, not on the value of `P.next`. In languages like C and C++, we need to manually free the node's memory.
!!! question "In linked lists, the time complexity for insertion and deletion operations is `O(1)`. But searching for the element before insertion or deletion takes `O(n)` time, so why isn't the time complexity `O(n)`?"
If an element is searched first and then deleted, the time complexity is indeed `O(n)`. However, the `O(1)` advantage of linked lists in insertion and deletion can be realized in other applications. For example, in the implementation of double-ended queues using linked lists, we maintain pointers always pointing to the head and tail nodes, making each insertion and deletion operation `O(1)`.
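For example, a minimal sketch of a linked list deque (the class and method names here are illustrative, not from the book's codebase) keeps permanent head and tail references, so pushing and popping at either end never requires traversal:

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.prev = None  # reference to the previous node
        self.next = None  # reference to the next node

class LinkedListDeque:
    """Doubly linked list deque: each operation below is O(1)."""
    def __init__(self):
        self.head = None  # always points to the front node
        self.tail = None  # always points to the back node

    def push_back(self, val):
        node = Node(val)
        if self.tail is None:  # empty deque
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node

    def pop_front(self):
        # Assumes the deque is non-empty
        node = self.head
        self.head = node.next
        if self.head is None:  # the deque became empty
            self.tail = None
        else:
            self.head.prev = None
        return node.val
```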
!!! question "In the image 'Linked List Definition and Storage Method', do the light blue storage nodes occupy a single memory address, or do they share half with the node value?"
The diagram is just a qualitative representation; quantitative analysis depends on specific situations.
- Different types of node values occupy different amounts of space, such as int, long, double, and object instances.
- The memory space occupied by pointer variables depends on the operating system and compilation environment used, usually 8 bytes or 4 bytes.
!!! question "Is adding elements to the end of a list always `O(1)`?"
If the addition exceeds the list's current capacity, the list needs to be expanded first. The system will request a new memory block and move all elements of the original list over, in which case the time complexity becomes `O(n)`.
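In CPython, this over-allocation can be observed with `sys.getsizeof`; the exact growth points are an implementation detail that varies across versions, so treat this sketch as illustrative:

```python
import sys

lst = []
last_size = sys.getsizeof(lst)
for i in range(32):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last_size:  # a reallocation (expansion) just happened
        print(f"len = {len(lst)}, allocated bytes = {size}")
        last_size = size
```

Appends between two growth points reuse the spare capacity, which is why the amortized cost stays `O(1)`.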
!!! question "The statement 'The emergence of lists greatly improves the practicality of arrays, but may lead to some memory space wastage' - does this refer to the memory occupied by additional variables like capacity, length, and expansion multiplier?"
The space wastage here mainly refers to two aspects: on the one hand, lists are set with an initial length, which we may not always need; on the other hand, to prevent frequent expansion, expansion usually multiplies by a coefficient, such as $\times 1.5$. This results in many empty slots, which we typically cannot fully fill.
!!! question "In Python, after initializing `n = [1, 2, 3]`, the addresses of these 3 elements are contiguous, but initializing `m = [2, 1, 3]` shows that each element's `id` is not consecutive but identical to those in `n`. If the addresses of these elements are not contiguous, is `m` still an array?"
If we replace list elements with linked list nodes `n = [n1, n2, n3, n4, n5]`, these 5 node objects are also typically dispersed throughout memory. However, given a list index, we can still access the node's memory address in `O(1)` time, thereby accessing the corresponding node. This is because the array stores references to the nodes, not the nodes themselves.
Unlike many languages, in Python, numbers are also wrapped as objects, and lists store references to these numbers, not the numbers themselves. Therefore, we find that the same number in two arrays has the same `id`, and these numbers' memory addresses need not be contiguous.
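This is easy to check in CPython, which caches small integers (roughly $[-5, 256]$) and reuses the same objects; note that this sharing is an implementation detail, not a language guarantee:

```python
n = [1, 2, 3]
m = [2, 1, 3]
print([id(x) for x in n])
print([id(x) for x in m])
print(id(n[0]) == id(m[1]))  # True: both lists reference the same int object 1
```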
!!! question "The `std::list` in C++ STL has already implemented a doubly linked list, but it seems that some algorithm books don't directly use it. Is there any limitation?"
On the one hand, we often prefer to use arrays to implement algorithms, only using linked lists when necessary, mainly for two reasons.
- Space overhead: Since each element requires two additional pointers (one for the previous element and one for the next), `std::list` usually occupies more space than `std::vector`.
- Cache unfriendly: As the data is not stored continuously, `std::list` has a lower cache utilization rate. Generally, `std::vector` performs better.
On the other hand, linked lists are mainly needed for implementing binary trees and graphs. Stacks and queues are often implemented using the programming language's `stack` and `queue` classes, rather than linked lists directly.
!!! question "Does initializing a list `res = [0] * self.size()` result in each element of `res` referencing the same address?"
No. However, this issue does arise with two-dimensional arrays: initializing a two-dimensional list as `res = [[0] * self.size()] * self.size()` repeats a reference to the same inner list `[0] * self.size()` multiple times.
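A short sketch of the pitfall and the usual fix:

```python
n = 3
aliased = [[0] * n] * n  # the outer list repeats one inner list object n times
aliased[0][0] = 1
print(aliased)  # [[1, 0, 0], [1, 0, 0], [1, 0, 0]]

independent = [[0] * n for _ in range(n)]  # builds a fresh row each time
independent[0][0] = 1
print(independent)  # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```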
!!! question "In deleting a node, is it necessary to break the reference to its successor node?"
From the perspective of data structures and algorithms (problem-solving), it's okay not to break the link, as long as the program's logic is correct. From the perspective of standard libraries, breaking the link is safer and more logically clear. If the link is not broken, and the deleted node is not properly recycled, it could affect the recycling of the successor node's memory.

View File

@@ -19,7 +19,7 @@ icon: material/timer-sand
## Chapter Contents
- [2.1 &nbsp; Algorithm Efficiency Assessment](https://www.hello-algo.com/chapter_computational_complexity/performance_evaluation/)
- [2.2 &nbsp; Iteration and Recursion](https://www.hello-algo.com/chapter_computational_complexity/iteration_and_recursion/)
- [2.3 &nbsp; Time Complexity](https://www.hello-algo.com/chapter_computational_complexity/time_complexity/)
- [2.4 &nbsp; Space Complexity](https://www.hello-algo.com/chapter_computational_complexity/space_complexity/)

View File

@@ -0,0 +1,172 @@
---
comments: true
---
# 3.2 &nbsp; Fundamental Data Types
When we think of data in computers, we imagine various forms like text, images, videos, voice, 3D models, etc. Despite their different organizational forms, they are all composed of various fundamental data types.
**Fundamental data types are those that the CPU can directly operate on** and are directly used in algorithms, mainly including the following.
- Integer types: `byte`, `short`, `int`, `long`.
- Floating-point types: `float`, `double`, used to represent decimals.
- Character type: `char`, used to represent letters, punctuation, and even emojis in various languages.
- Boolean type: `bool`, used for "yes" or "no" decisions.
**Fundamental data types are stored in computers in binary form**. One binary digit is equal to 1 bit. In most modern operating systems, 1 byte consists of 8 bits.
The range of values for fundamental data types depends on the size of the space they occupy. Below, we take Java as an example.
- The integer type `byte` occupies 1 byte = 8 bits and can represent $2^8$ numbers.
- The integer type `int` occupies 4 bytes = 32 bits and can represent $2^{32}$ numbers.
The following table lists the space occupied, value range, and default values of various fundamental data types in Java. This table does not need to be memorized, but understood roughly and referred to when needed.
<p align="center"> Table 3-1 &nbsp; Space Occupied and Value Range of Fundamental Data Types </p>
<div class="center-table" markdown>
| Type | Symbol | Space Occupied | Minimum Value | Maximum Value | Default Value |
| ------- | -------- | -------------- | ------------------------ | ----------------------- | -------------- |
| Integer | `byte` | 1 byte | $-2^7$ ($-128$) | $2^7 - 1$ ($127$) | 0 |
| | `short` | 2 bytes | $-2^{15}$ | $2^{15} - 1$ | 0 |
| | `int` | 4 bytes | $-2^{31}$ | $2^{31} - 1$ | 0 |
| | `long` | 8 bytes | $-2^{63}$ | $2^{63} - 1$ | 0 |
| Float | `float` | 4 bytes | $1.175 \times 10^{-38}$ | $3.403 \times 10^{38}$ | $0.0\text{f}$ |
| | `double` | 8 bytes | $2.225 \times 10^{-308}$ | $1.798 \times 10^{308}$ | 0.0 |
| Char | `char` | 2 bytes | 0 | $2^{16} - 1$ | 0 |
| Boolean | `bool` | 1 byte | $\text{false}$ | $\text{true}$ | $\text{false}$ |
</div>
Please note that the above table is specific to Java's fundamental data types. Each programming language has its own data type definitions, and their space occupied, value ranges, and default values may differ.
- In Python, the integer type `int` can be of any size, limited only by available memory; the floating-point type `float` is double-precision 64-bit; there is no `char` type, as a single character is actually a string `str` of length 1 (see the sketch after this list).
- C and C++ do not specify the size of fundamental data types, which varies with implementation and platform. The above table follows the LP64 [data model](https://en.cppreference.com/w/cpp/language/types#Properties), used for Unix 64-bit operating systems including Linux and macOS.
- The size of `char` in C and C++ is 1 byte, while in most programming languages, it depends on the specific character encoding method, as detailed in the "Character Encoding" chapter.
- Even though representing a boolean only requires 1 bit (0 or 1), it is usually stored in memory as 1 byte. This is because modern computer CPUs typically use 1 byte as the smallest addressable memory unit.
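The Python-specific points above are easy to verify interactively; the following small sketch assumes CPython:

```python
import sys

print(2 ** 100)                 # int has arbitrary precision, limited only by memory
print(sys.float_info.mant_dig)  # 53: float is an IEEE 754 double-precision number
c = 'a'
print(type(c), len(c))          # <class 'str'> 1: a "character" is a str of length 1
```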
So, what is the connection between fundamental data types and data structures? We know that data structures are ways to organize and store data in computers. The focus here is on "structure" rather than "data".
If we want to represent "a row of numbers", we naturally think of using an array. This is because the linear structure of an array can represent the adjacency and order of numbers, but whether the stored content is an integer `int`, a decimal `float`, or a character `char`, is irrelevant to the "data structure".
In other words, **fundamental data types provide the "content type" of data, while data structures provide the "way of organizing" data**. For example, in the following code, we use the same data structure (array) to store and represent different fundamental data types, including `int`, `float`, `char`, `bool`, etc.
=== "Python"
```python title=""
# Using various fundamental data types to initialize arrays
numbers: list[int] = [0] * 5
decimals: list[float] = [0.0] * 5
# Python's characters are actually strings of length 1
characters: list[str] = ['0'] * 5
bools: list[bool] = [False] * 5
# Python's lists can freely store various fundamental data types and object references
data = [0, 0.0, 'a', False, ListNode(0)]
```
=== "C++"
```cpp title=""
// Using various fundamental data types to initialize arrays
int numbers[5];
float decimals[5];
char characters[5];
bool bools[5];
```
=== "Java"
```java title=""
// Using various fundamental data types to initialize arrays
int[] numbers = new int[5];
float[] decimals = new float[5];
char[] characters = new char[5];
boolean[] bools = new boolean[5];
```
=== "C#"
```csharp title=""
// Using various fundamental data types to initialize arrays
int[] numbers = new int[5];
float[] decimals = new float[5];
char[] characters = new char[5];
bool[] bools = new bool[5];
```
=== "Go"
```go title=""
// Using various fundamental data types to initialize arrays
var numbers = [5]int{}
var decimals = [5]float64{}
var characters = [5]byte{}
var bools = [5]bool{}
```
=== "Swift"
```swift title=""
// Using various fundamental data types to initialize arrays
let numbers = Array(repeating: Int(), count: 5)
let decimals = Array(repeating: Double(), count: 5)
let characters = Array(repeating: Character("a"), count: 5)
let bools = Array(repeating: Bool(), count: 5)
```
=== "JS"
```javascript title=""
// JavaScript's arrays can freely store various fundamental data types and objects
const array = [0, 0.0, 'a', false];
```
=== "TS"
```typescript title=""
// Using various fundamental data types to initialize arrays
const numbers: number[] = [];
const characters: string[] = [];
const bools: boolean[] = [];
```
=== "Dart"
```dart title=""
// Using various fundamental data types to initialize arrays
List<int> numbers = List.filled(5, 0);
List<double> decimals = List.filled(5, 0.0);
List<String> characters = List.filled(5, 'a');
List<bool> bools = List.filled(5, false);
```
=== "Rust"
```rust title=""
// Using various fundamental data types to initialize arrays
let numbers: Vec<i32> = vec![0; 5];
let decimals: Vec<f32> = vec![0.0; 5];
let characters: Vec<char> = vec!['0'; 5];
let bools: Vec<bool> = vec![false; 5];
```
=== "C"
```c title=""
#include <stdbool.h>  // bool requires this header prior to C23

// Using various fundamental data types to initialize arrays
int numbers[10];
float decimals[10];
char characters[10];
bool bools[10];
```
=== "Zig"
```zig title=""
// Using various fundamental data types to initialize arrays
var numbers: [5]i32 = undefined;
var decimals: [5]f32 = undefined;
var characters: [5]u8 = undefined;
var bools: [5]bool = undefined;
```

View File

@@ -0,0 +1,97 @@
---
comments: true
---
# 3.4 &nbsp; Character Encoding *
In computers, all data is stored in binary form, and the character `char` is no exception. To represent characters, we need to establish a "character set" that defines a one-to-one correspondence between each character and binary numbers. With a character set, computers can convert binary numbers to characters by looking up a table.
## 3.4.1 &nbsp; ASCII Character Set
The "ASCII code" is one of the earliest character sets, officially known as the American Standard Code for Information Interchange. It uses 7 binary digits (the lower 7 bits of a byte) to represent a character, allowing for a maximum of 128 different characters. As shown in Figure 3-6, ASCII includes uppercase and lowercase English letters, the digits 0 ~ 9, some punctuation marks, and some control characters (such as newline and tab).
![ASCII Code](character_encoding.assets/ascii_table.png){ class="animation-figure" }
<p align="center"> Figure 3-6 &nbsp; ASCII Code </p>
However, **ASCII can only represent English characters**. With the globalization of computers, a character set called "EASCII" was developed to represent more languages. It expands on the 7-bit basis of ASCII to 8 bits, enabling the representation of 256 different characters.
Globally, a series of EASCII character sets for different regions emerged. The first 128 characters of these sets are uniformly ASCII, while the remaining 128 characters are defined differently to cater to various language requirements.
## 3.4.2 &nbsp; GBK Character Set
Later, it was found that **EASCII still could not meet the character requirements of many languages**. For instance, there are nearly a hundred thousand Chinese characters, with several thousand used in everyday life. In 1980, China's National Standards Bureau released the "GB2312" character set, which included 6763 Chinese characters, essentially meeting the computer processing needs for Chinese.
However, GB2312 could not handle some rare and traditional characters. The "GBK" character set, an expansion of GB2312, includes a total of 21886 Chinese characters. In the GBK encoding scheme, ASCII characters are represented with one byte, while Chinese characters use two bytes.
## 3.4.3 &nbsp; Unicode Character Set
With the rapid development of computer technology and a plethora of character sets and encoding standards, numerous problems arose. On one hand, these character sets generally only defined characters for specific languages and could not function properly in multilingual environments. On the other hand, the existence of multiple character set standards for the same language caused garbled text when information was exchanged between computers using different encoding standards.
Researchers of that era thought: **What if we introduced a comprehensive character set that included all languages and symbols worldwide, wouldn't that solve the problems of cross-language environments and garbled text?** Driven by this idea, the extensive character set, Unicode, was born.
The Chinese name for "Unicode" is "统一码" (Unified Code), theoretically capable of accommodating over a million characters. It aims to incorporate characters from all over the world into a single set, providing a universal character set for processing and displaying various languages and reducing the issues of garbled text due to different encoding standards.
Since its release in 1991, Unicode has continually expanded to include new languages and characters. As of September 2022, Unicode contains 149,186 characters, including characters, symbols, and even emojis from various languages. In the vast Unicode character set, commonly used characters occupy 2 bytes, while some rare characters take up 3 or even 4 bytes.
Unicode is a universal character set that assigns a number (called a "code point") to each character, **but it does not specify how these character code points should be stored in a computer**. One might ask: When Unicode code points of varying lengths appear in a text, how does the system parse the characters? For example, given a 2-byte code, how does the system determine if it represents a single 2-byte character or two 1-byte characters?
A straightforward solution to this problem is to store all characters with equal-length encodings. As shown in Figure 3-7, each character in "Hello" occupies 1 byte, while each character in "算法" (algorithm) occupies 2 bytes. We could encode all characters in "Hello 算法" as 2 bytes by padding the higher bits with zeros. This way, the system can parse a character every 2 bytes, recovering the content of the phrase.
![Unicode Encoding Example](character_encoding.assets/unicode_hello_algo.png){ class="animation-figure" }
<p align="center"> Figure 3-7 &nbsp; Unicode Encoding Example </p>
However, as ASCII has shown us, encoding English only requires 1 byte. Using the above approach would double the space occupied by English text compared to ASCII encoding, which is a waste of memory space. Therefore, a more efficient Unicode encoding method is needed.
## 3.4.4 &nbsp; UTF-8 Encoding
Currently, UTF-8 has become the most widely used Unicode encoding method internationally. **It is a variable-length encoding**, using 1 to 4 bytes to represent a character, depending on the complexity of the character. ASCII characters need only 1 byte, Latin and Greek letters require 2 bytes, commonly used Chinese characters need 3 bytes, and some other rare characters need 4 bytes.
The encoding rules for UTF-8 are not complex and can be divided into two cases:
- For 1-byte characters, set the highest bit to $0$, and the remaining 7 bits to the Unicode code point. Notably, ASCII characters occupy the first 128 code points in the Unicode set, which means that **UTF-8 encoding is backward compatible with ASCII** and can parse legacy ASCII text directly.
- For characters of length $n$ bytes (where $n > 1$), set the highest $n$ bits of the first byte to $1$, and the $(n + 1)^{\text{th}}$ bit to $0$; starting from the second byte, set the highest 2 bits of each byte to $10$; the rest of the bits are used to fill the Unicode code point.
Figure 3-8 shows the UTF-8 encoding for "Hello算法". It can be observed that, since the highest $n$ bits are set to $1$, the system can determine a character's length $n$ by counting the number of leading bits set to $1$.
But why set the highest 2 bits of the remaining bytes to $10$? Actually, this $10$ serves as a kind of checksum. If the system starts parsing text from an incorrect byte, the $10$ at the beginning of the byte can help the system quickly detect an anomaly.
The reason for using $10$ as a checksum is that, under UTF-8 encoding rules, it's impossible for the highest two bits of a character to be $10$. This can be proven by contradiction: If the highest two bits of a character are $10$, it indicates that the character's length is $1$, corresponding to ASCII. However, the highest bit of an ASCII character should be $0$, contradicting the assumption.
![UTF-8 Encoding Example](character_encoding.assets/utf-8_hello_algo.png){ class="animation-figure" }
<p align="center"> Figure 3-8 &nbsp; UTF-8 Encoding Example </p>
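These bit patterns can be inspected directly in Python, where `str.encode("utf-8")` returns a character's UTF-8 bytes:

```python
for ch in "Hello算法":
    utf8_bytes = ch.encode("utf-8")
    print(ch, [f"{b:08b}" for b in utf8_bytes])
# 'H' -> ['01001000']                            1 byte, leading 0
# '算' -> ['11100111', '10101110', '10010111']   3 bytes: 1110..., then two 10... bytes
```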
Apart from UTF-8, other common encoding methods include:
- **UTF-16 Encoding**: Uses 2 or 4 bytes to represent a character. All ASCII characters and commonly used non-English characters are represented with 2 bytes; a few characters require 4 bytes. For 2-byte characters, the UTF-16 encoding is equal to the Unicode code point.
- **UTF-32 Encoding**: Every character uses 4 bytes. This means UTF-32 occupies more space than UTF-8 and UTF-16, especially for texts with a high proportion of ASCII characters.
From the perspective of storage space, UTF-8 is highly efficient for representing English characters, requiring only 1 byte; UTF-16 might be more efficient for encoding some non-English characters (like Chinese), as it requires only 2 bytes, while UTF-8 might need 3 bytes.
From a compatibility standpoint, UTF-8 is the most versatile, with many tools and libraries supporting UTF-8 as a priority.
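This space trade-off is easy to measure in Python. Note that the `utf-16` and `utf-32` codecs prepend a 2- or 4-byte byte order mark (BOM), which slightly inflates the totals:

```python
english, chinese = "Hello" * 100, "算法" * 100  # 500 and 200 characters
for codec in ("utf-8", "utf-16", "utf-32"):
    print(codec, len(english.encode(codec)), len(chinese.encode(codec)))
# utf-8 uses 1 byte per English letter but 3 per Chinese character;
# utf-16 uses 2 bytes per character here, beating utf-8 on the Chinese text
```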
## 3.4.5 &nbsp; Character Encoding in Programming Languages
In many classic programming languages, strings during program execution are encoded using fixed-length encodings like UTF-16 or UTF-32. This allows strings to be treated as arrays, offering several advantages:
- **Random Access**: Strings encoded in UTF-16 can be accessed randomly with ease. For UTF-8, which is a variable-length encoding, locating the $i^{th}$ character requires traversing the string from the start to the $i^{th}$ position, taking $O(n)$ time.
- **Character Counting**: Similar to random access, counting the number of characters in a UTF-16 encoded string is an $O(1)$ operation. However, counting characters in a UTF-8 encoded string requires traversing the entire string.
- **String Operations**: Many string operations like splitting, concatenating, inserting, and deleting are easier on UTF-16 encoded strings. These operations generally require additional computation on UTF-8 encoded strings to ensure the validity of the UTF-8 encoding.
The design of character encoding schemes in programming languages is an interesting topic involving various factors:
- Java's `String` type uses UTF-16 encoding, with each character occupying 2 bytes. This was based on the initial belief that 16 bits were sufficient to represent all possible characters, a judgment later proven incorrect. As the Unicode standard expanded beyond 16 bits, characters in Java may now be represented by a pair of 16-bit values, known as "surrogate pairs."
- JavaScript and TypeScript use UTF-16 encoding for similar reasons as Java. When JavaScript was first introduced by Netscape in 1995, Unicode was still in its early stages, and 16-bit encoding was sufficient to represent all Unicode characters.
- C# uses UTF-16 encoding, largely because the .NET platform, designed by Microsoft, and many Microsoft technologies, including the Windows operating system, extensively use UTF-16 encoding.
Due to the underestimation of character counts, these languages had to resort to using "surrogate pairs" to represent Unicode characters exceeding 16 bits. This approach has its drawbacks: strings containing surrogate pairs may have characters occupying 2 or 4 bytes, losing the advantage of fixed-length encoding, and handling surrogate pairs adds to the complexity and debugging difficulty of programming.
Owing to these reasons, some programming languages have adopted different encoding schemes:
- Python's `str` type uses Unicode encoding with a flexible representation, where the storage length per character depends on the largest Unicode code point in the string. If all characters are ASCII, each occupies 1 byte; if any character exceeds ASCII but stays within the Basic Multilingual Plane (BMP), each occupies 2 bytes; beyond the BMP, each occupies 4 bytes (see the sketch after this list).
- Go's `string` type internally uses UTF-8 encoding. Go also provides the `rune` type for representing individual Unicode code points.
- Rust's `str` and `String` types use UTF-8 encoding internally. Rust also offers the `char` type for individual Unicode code points.
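A sketch of Python's flexible representation in action; the per-character sizes below are CPython implementation details:

```python
import sys

samples = ["aa", "算算", "😀😀"]  # ASCII, BMP, and beyond-BMP characters
for s in samples:
    # Appending one more character of the same kind grows the string
    # by exactly that kind's storage width
    width = sys.getsizeof(s + s[0]) - sys.getsizeof(s)
    print(repr(s), "bytes per character:", width)  # 1, 2, 4
```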
It's important to note that the above discussion pertains to how strings are stored in programming languages, **which is a different issue from how strings are stored in files or transmitted over networks**. For file storage or network transmission, strings are usually encoded in UTF-8 format for optimal compatibility and space efficiency.

View File

@@ -1,49 +1,58 @@
---
comments: true
---
# 3.1 &nbsp; Classification of Data Structures
Common data structures include arrays, linked lists, stacks, queues, hash tables, trees, heaps, and graphs. They can be classified along two dimensions: "logical structure" and "physical structure".
## 3.1.1 &nbsp; Logical Structure: Linear and Non-Linear
**The logical structure reveals the logical relationships between data elements**. In arrays and linked lists, data is arranged in a certain order, reflecting a linear relationship between elements. In trees, data is arranged from top to bottom in layers, showing an "ancestor-descendant" hierarchical relationship. Graphs, consisting of nodes and edges, represent complex network relationships.
As shown in Figure 3-1, logical structures can be divided into two major categories: "linear" and "non-linear". Linear structures are more intuitive, indicating that data is arranged linearly in terms of logical relationships; non-linear structures, conversely, are arranged non-linearly.
- **Linear Data Structures**: Arrays, Linked Lists, Stacks, Queues, Hash Tables.
- **Non-Linear Data Structures**: Trees, Heaps, Graphs, Hash Tables.
![Linear and Non-Linear Data Structures](classification_of_data_structure.assets/classification_logic_structure.png){ class="animation-figure" }
<p align="center"> Figure 3-1 &nbsp; Linear and Non-Linear Data Structures </p>
Non-linear data structures can be further divided into tree structures and network structures.
- **Tree Structures**: Trees, Heaps, Hash Tables, where elements have one-to-many relationships.
- **Network Structures**: Graphs, where elements have many-to-many relationships.
## 3.1.2 &nbsp; Physical Structure: Contiguous and Dispersed
**When an algorithm program runs, the data being processed is mainly stored in memory**. The following figure shows a computer memory stick, where each black block is a memory space. We can imagine memory as a huge Excel spreadsheet, where each cell can store a certain amount of data.
**The system accesses data at the target location through memory addresses**. As shown in Figure 3-2, the computer assigns a number to each cell in the table according to specific rules, ensuring each memory space has a unique memory address. With these addresses, programs can access data in memory.
![Memory Stick, Memory Spaces, Memory Addresses](classification_of_data_structure.assets/computer_memory_location.png){ class="animation-figure" }
<p align="center"> Figure 3-2 &nbsp; Memory Stick, Memory Spaces, Memory Addresses </p>
!!! tip
It's worth noting that comparing memory to an Excel spreadsheet is a simplified analogy. The actual working mechanism of memory is more complex, involving concepts like address space, memory management, cache mechanisms, virtual memory, and physical memory.
Memory is a shared resource for all programs. When a block of memory is occupied by one program, it cannot be used by others simultaneously. **Therefore, memory resources are an important consideration in the design of data structures and algorithms**. For example, the peak memory usage of an algorithm should not exceed the system's remaining free memory. If there is a lack of contiguous large memory spaces, the chosen data structure must be able to store data in dispersed memory spaces.
As shown in Figure 3-3, **the physical structure reflects how data is stored in computer memory**, which can be divided into contiguous space storage (arrays) and dispersed space storage (linked lists). The physical structure determines, at the lowest level, how data is accessed, updated, added, or deleted. The two types of physical structure exhibit complementary characteristics in terms of time efficiency and space efficiency.
![Contiguous Space Storage and Dispersed Space Storage](classification_of_data_structure.assets/classification_phisical_structure.png){ class="animation-figure" }
<p align="center"> Figure 3-3 &nbsp; Contiguous Space Storage and Dispersed Space Storage </p>
It's important to note that **all data structures are implemented based on arrays, linked lists, or a combination of both**. For example, stacks and queues can be implemented using either arrays or linked lists; while hash tables may include both arrays and linked lists.
- **Array-based Implementations**: Stacks, Queues, Hash Tables, Trees, Heaps, Graphs, Matrices, Tensors (arrays with dimensions $\geq 3$).
- **Linked List-based Implementations**: Stacks, Queues, Hash Tables, Trees, Heaps, Graphs, etc.
Data structures implemented based on arrays are also called “Static Data Structures,” meaning their length cannot be changed after initialization. Conversely, those based on linked lists are called “Dynamic Data Structures,” which can still adjust their size during program execution.
!!! tip
If you find it difficult to understand the physical structure, it's recommended to read the next chapter first and then revisit this section.

View File

@@ -1,13 +1,26 @@
---
comments: true
icon: material/shape-outline
---
# Chapter 3. &nbsp; Data Structures
<div class="center-table" markdown>
![Data Structures](../assets/covers/chapter_data_structure.jpg){ class="cover-image" }
</div>
!!! abstract
Data structures serve as a robust and diverse framework.
They offer a blueprint for the orderly organization of data, upon which algorithms come to life.
## Chapter Contents
- [3.1 &nbsp; Classification of Data Structures](https://www.hello-algo.com/chapter_data_structure/classification_of_data_structure/)
- [3.2 &nbsp; Fundamental Data Types](https://www.hello-algo.com/chapter_data_structure/basic_data_types/)
- [3.3 &nbsp; Number Encoding *](https://www.hello-algo.com/chapter_data_structure/number_encoding/)
- [3.4 &nbsp; Character Encoding *](https://www.hello-algo.com/chapter_data_structure/character_encoding/)
- [3.5 &nbsp; Summary](https://www.hello-algo.com/chapter_data_structure/summary/)

View File

@@ -0,0 +1,162 @@
---
comments: true
---
# 3.3 &nbsp; Number Encoding *
!!! note
In this book, chapters marked with an * symbol are optional reads. If you are short on time or find them challenging, you may skip these initially and return to them after completing the essential chapters.
## 3.3.1 &nbsp; Integer Encoding
In the table from the previous section, we noticed that all integer types can represent one more negative number than positive numbers, such as the `byte` range of $[-128, 127]$. This phenomenon, somewhat counterintuitive, is rooted in the concepts of sign-magnitude, one's complement, and two's complement encoding.
Firstly, it's important to note that **numbers are stored in computers using the two's complement form**. Before analyzing why this is the case, let's define these three encoding methods:
- **Sign-magnitude**: The highest bit of a binary representation of a number is considered the sign bit, where $0$ represents a positive number and $1$ represents a negative number. The remaining bits represent the value of the number.
- **One's complement**: The one's complement of a positive number is the same as its sign-magnitude. For negative numbers, it's obtained by inverting all bits except the sign bit.
- **Two's complement**: The two's complement of a positive number is the same as its sign-magnitude. For negative numbers, it's obtained by adding $1$ to their one's complement.
The following diagram illustrates the conversions among sign-magnitude, one's complement, and two's complement:
![Conversions between Sign-Magnitude, One's Complement, and Two's Complement](number_encoding.assets/1s_2s_complement.png){ class="animation-figure" }
<p align="center"> Figure 3-4 &nbsp; Conversions between Sign-Magnitude, One's Complement, and Two's Complement </p>
Although sign-magnitude is the most intuitive, it has limitations. For one, **negative numbers in sign-magnitude cannot be directly used in calculations**. For example, in sign-magnitude, calculating $1 + (-2)$ results in $-3$, which is incorrect.
$$
\begin{aligned}
& 1 + (-2) \newline
& \rightarrow 0000 \; 0001 + 1000 \; 0010 \newline
& = 1000 \; 0011 \newline
& \rightarrow -3
\end{aligned}
$$
To address this, computers introduced the **one's complement**. If we convert to one's complement and calculate $1 + (-2)$, then convert the result back to sign-magnitude, we get the correct result of $-1$.
$$
\begin{aligned}
& 1 + (-2) \newline
& \rightarrow 0000 \; 0001 \; \text{(Sign-magnitude)} + 1000 \; 0010 \; \text{(Sign-magnitude)} \newline
& = 0000 \; 0001 \; \text{(One's complement)} + 1111 \; 1101 \; \text{(One's complement)} \newline
& = 1111 \; 1110 \; \text{(One's complement)} \newline
& = 1000 \; 0001 \; \text{(Sign-magnitude)} \newline
& \rightarrow -1
\end{aligned}
$$
Additionally, **there are two representations of zero in sign-magnitude**: $+0$ and $-0$. This means two different binary encodings for zero, which could lead to ambiguity. For example, in conditional checks, not differentiating between positive and negative zero might result in incorrect outcomes. Addressing this ambiguity would require additional checks, potentially reducing computational efficiency.
$$
\begin{aligned}
+0 & \rightarrow 0000 \; 0000 \newline
-0 & \rightarrow 1000 \; 0000
\end{aligned}
$$
Like sign-magnitude, one's complement also suffers from the positive and negative zero ambiguity. Therefore, computers further introduced the **two's complement**. Let's observe the conversion process for negative zero in sign-magnitude, one's complement, and two's complement:
$$
\begin{aligned}
-0 \rightarrow \; & 1000 \; 0000 \; \text{(Sign-magnitude)} \newline
= \; & 1111 \; 1111 \; \text{(One's complement)} \newline
= 1 \; & 0000 \; 0000 \; \text{(Two's complement)} \newline
\end{aligned}
$$
Adding $1$ to the one's complement of negative zero produces a carry, but with `byte` length being only 8 bits, the carried-over $1$ to the 9th bit is discarded. Therefore, **the two's complement of negative zero is $0000 \; 0000$**, the same as positive zero, thus resolving the ambiguity.
One last puzzle is the $[-128, 127]$ range for `byte`, with an additional negative number, $-128$. We observe that for the interval $[-127, +127]$, all integers have corresponding sign-magnitude, one's complement, and two's complement, and these can be converted between each other.
However, **the two's complement $1000 \; 0000$ is an exception without a corresponding sign-magnitude**. According to the conversion method, its sign-magnitude would be $0000 \; 0000$, which is a contradiction, since this represents zero and its two's complement should be itself. Computers designate this special two's complement $1000 \; 0000$ as representing $-128$. In fact, the calculation of $(-127) + (-1)$ in two's complement results in $-128$.
$$
\begin{aligned}
& (-127) + (-1) \newline
& \rightarrow 1111 \; 1111 \; \text{(Sign-magnitude)} + 1000 \; 0001 \; \text{(Sign-magnitude)} \newline
& = 1000 \; 0000 \; \text{(One's complement)} + 1111 \; 1110 \; \text{(One's complement)} \newline
& = 1000 \; 0001 \; \text{(Two's complement)} + 1111 \; 1111 \; \text{(Two's complement)} \newline
& = 1000 \; 0000 \; \text{(Two's complement)} \newline
& \rightarrow -128
\end{aligned}
$$
As you might have noticed, all these calculations are additions, hinting at an important fact: **computers' internal hardware circuits are primarily designed around addition operations**. This is because addition is simpler to implement in hardware compared to other operations like multiplication, division, and subtraction, allowing for easier parallelization and faster computation.
It's important to note that this doesn't mean computers can only perform addition. **By combining addition with basic logical operations, computers can execute a variety of other mathematical operations**. For example, the subtraction $a - b$ can be translated into $a + (-b)$; multiplication and division can be translated into multiple additions or subtractions.
We can now summarize the reason for using two's complement in computers: with two's complement representation, computers can use the same circuits and operations to handle both positive and negative number addition, eliminating the need for special hardware circuits for subtraction and avoiding the ambiguity of positive and negative zero. This greatly simplifies hardware design and enhances computational efficiency.
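We can imitate such a circuit in Python by masking every sum to 8 bits and interpreting the resulting bit pattern as two's complement; the helper names below are our own:

```python
def to_signed8(b: int) -> int:
    """Interpret an 8-bit pattern as a two's complement value."""
    return b - 256 if b >= 128 else b

def add8(x: int, y: int) -> int:
    """Add on a simulated 8-bit adder; the carry out of bit 8 is discarded."""
    return to_signed8((x + y) & 0xFF)

print(add8(1, -2))     # -1
print(add8(-127, -1))  # -128
print(add8(127, 1))    # -128: overflow wraps around
```

The same masked addition handles positive and negative operands alike, which is precisely the simplification two's complement buys the hardware.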
The design of two's complement is quite ingenious, and due to space constraints, we'll stop here. Interested readers are encouraged to explore further.
## 3.3.2 &nbsp; Floating-Point Number Encoding
You might have noticed something intriguing: despite having the same length of 4 bytes, why does a `float` have a much larger range of values compared to an `int`? This seems counterintuitive, as one would expect the range to shrink for `float` since it needs to represent fractions.
In fact, **this is due to the different representation method used by floating-point numbers (`float`)**. Let's consider a 32-bit binary number as:
$$
b_{31} b_{30} b_{29} \ldots b_2 b_1 b_0
$$
According to the IEEE 754 standard, a 32-bit `float` consists of the following three parts:
- Sign bit $\mathrm{S}$: Occupies 1 bit, corresponding to $b_{31}$.
- Exponent bit $\mathrm{E}$: Occupies 8 bits, corresponding to $b_{30} b_{29} \ldots b_{23}$.
- Fraction bit $\mathrm{N}$: Occupies 23 bits, corresponding to $b_{22} b_{21} \ldots b_0$.
The value of a binary `float` number is calculated as:
$$
\text{val} = (-1)^{b_{31}} \times 2^{\left(b_{30} b_{29} \ldots b_{23}\right)_2 - 127} \times \left(1 . b_{22} b_{21} \ldots b_0\right)_2
$$
Converted to a decimal formula, this becomes:
$$
\text{val} = (-1)^{\mathrm{S}} \times 2^{\mathrm{E} - 127} \times (1 + \mathrm{N})
$$
The range of each component is:
$$
\begin{aligned}
\mathrm{S} \in & \{ 0, 1\}, \quad \mathrm{E} \in \{ 1, 2, \dots, 254 \} \newline
(1 + \mathrm{N}) = & \left(1 + \sum_{i=1}^{23} b_{23-i} \times 2^{-i}\right) \in [1, 2 - 2^{-23}]
\end{aligned}
$$
![Example Calculation of a float in IEEE 754 Standard](number_encoding.assets/ieee_754_float.png){ class="animation-figure" }
<p align="center"> Figure 3-5 &nbsp; Example Calculation of a float in IEEE 754 Standard </p>
As the figure shows, taking $\mathrm{S} = 0$, $\mathrm{E} = 124$, and $\mathrm{N} = 2^{-2} + 2^{-3} = 0.375$ as example data, we have:
$$
\text{val} = (-1)^0 \times 2^{124 - 127} \times (1 + 0.375) = 0.171875
$$
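To verify this decomposition, here is a short Python sketch of ours that extracts $\mathrm{S}$, $\mathrm{E}$, and $\mathrm{N}$ from the raw bits of a 32-bit float using the standard `struct` module and re-evaluates the formula:

```python
import struct

def float32_parts(x: float) -> tuple[int, int, float]:
    """Split a value's 32-bit IEEE 754 pattern into (S, E, N)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    s = bits >> 31                   # sign bit b31
    e = (bits >> 23) & 0xFF          # exponent bits b30..b23
    n = (bits & 0x7FFFFF) / 2**23    # fraction bits b22..b0 as a real in [0, 1)
    return s, e, n

s, e, n = float32_parts(0.171875)
print(s, e, n)                           # 0 124 0.375
print((-1)**s * 2**(e - 127) * (1 + n))  # 0.171875
```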
Now we can answer the initial question: **The representation of `float` includes an exponent bit, leading to a much larger range than `int`**. Based on the above calculation, the maximum positive number representable by `float` is approximately $2^{254 - 127} \times (2 - 2^{-23}) \approx 3.4 \times 10^{38}$, and the minimum negative number is obtained by switching the sign bit.
**However, the trade-off for `float`'s expanded range is a sacrifice in precision**. The integer type `int` devotes all 32 bits to the value, so representable integers are evenly spaced; because of the exponent, the larger a `float` grows, the wider the gap between adjacent representable numbers.
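To observe the widening gaps directly, this quick check (ours; it uses Python's built-in 64-bit float, where the effect is the same but with 52 fraction bits) prints the distance to the next representable value at several magnitudes:

```python
import math

# The spacing between adjacent representable floats grows with magnitude
for x in (1.0, 1e8, 1e16):
    print(x, math.nextafter(x, math.inf) - x)
# 1.0    2.220446049250313e-16
# 1e8    1.4901161193847656e-08
# 1e16   2.0
```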
As shown in Table 3-2, the exponent values $\mathrm{E} = 0$ and $\mathrm{E} = 255$ have special meanings, **used to represent zero, infinity, $\mathrm{NaN}$, and so on**.
<p align="center"> Table 3-2 &nbsp; Meaning of Exponent Bits </p>
<div class="center-table" markdown>
| Exponent Bit E | Fraction Bit $\mathrm{N} = 0$ | Fraction Bit $\mathrm{N} \ne 0$ | Calculation Formula |
| ------------------ | ----------------------------- | ------------------------------- | ---------------------------------------------------------------------- |
| $0$ | $\pm 0$ | Subnormal Numbers | $(-1)^{\mathrm{S}} \times 2^{-126} \times (0.\mathrm{N})$ |
| $1, 2, \dots, 254$ | Normal Numbers | Normal Numbers | $(-1)^{\mathrm{S}} \times 2^{(\mathrm{E} -127)} \times (1.\mathrm{N})$ |
| $255$ | $\pm \infty$ | $\mathrm{NaN}$ | |
</div>
It's worth noting that subnormal numbers significantly improve the precision of floating-point numbers. The smallest positive normal number is $2^{-126}$, and the smallest positive subnormal number is $2^{-126} \times 2^{-23}$.
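As a sanity check on these two boundary values, the following sketch of ours builds the bit patterns directly and decodes them with `struct`:

```python
import struct

# 0x00000001: E = 0, N = 2**-23  -> smallest positive subnormal
# 0x00800000: E = 1, N = 0       -> smallest positive normal
smallest_subnormal = struct.unpack(">f", (0x00000001).to_bytes(4, "big"))[0]
smallest_normal = struct.unpack(">f", (0x00800000).to_bytes(4, "big"))[0]
print(smallest_subnormal == 2**-126 * 2**-23)  # True
print(smallest_normal == 2**-126)              # True
```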
The double-precision `double` type uses a representation similar to that of `float`; we omit the details for brevity.

View File

@ -0,0 +1,37 @@
---
comments: true
---
# 3.5 &nbsp; Summary
### 1. &nbsp; Key Review
- Data structures can be categorized from two perspectives: logical structure and physical structure. Logical structure describes the logical relationships between data elements, while physical structure describes how data is stored in computer memory.
- Common logical structures include linear, tree-like, and network structures. We generally classify data structures into linear (arrays, linked lists, stacks, queues) and non-linear (trees, graphs, heaps) based on their logical structure. The implementation of hash tables may involve both linear and non-linear data structures.
- When a program runs, data is stored in computer memory. Each memory space has a corresponding memory address, and the program accesses data through these addresses.
- Physical structures are primarily divided into contiguous space storage (arrays) and dispersed space storage (linked lists). All data structures are implemented using arrays, linked lists, or a combination of both.
- Basic data types in computers include integers (`byte`, `short`, `int`, `long`), floating-point numbers (`float`, `double`), characters (`char`), and booleans (`boolean`). Their range depends on the size of the space occupied and the representation method.
- Sign-magnitude, one's complement, and two's complement are three methods of encoding numbers in computers, and they can be converted into one another. The most significant bit of a sign-magnitude integer is the sign bit, and the remaining bits represent the value of the number.
- Integers are stored in computers in two's complement form. In this representation, the computer can treat the addition of positive and negative numbers uniformly, requires no special hardware circuits for subtraction, and avoids the ambiguity of positive and negative zero.
- The encoding of a 32-bit floating-point number consists of 1 sign bit, 8 exponent bits, and 23 fraction bits. Thanks to the exponent, the range of floating-point numbers is much greater than that of integers, at the cost of precision.
- ASCII is the earliest English character set, 1 byte in length, comprising 128 characters. GBK is a commonly used Chinese character set containing over 20,000 Chinese characters. Unicode strives to provide a complete character-set standard covering the world's languages, thereby solving the garbled-character problems caused by inconsistent encodings.
- UTF-8 is the most popular Unicode encoding and enjoys excellent universality. It is a variable-length encoding with good scalability that uses space efficiently. UTF-16 and UTF-32 are fixed-length encodings. When encoding Chinese characters, UTF-16 occupies less space than UTF-8, as the short check after this list illustrates. Programming languages like Java and C# use UTF-16 encoding by default.
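The following quick check (our own lines, not from the book) encodes a mixed ASCII/CJK string both ways:

```python
s = "Hello 算法"
print(len(s.encode("utf-8")))      # 12: 1 byte per ASCII char, 3 per CJK char
print(len(s.encode("utf-16-le")))  # 16: 2 bytes per BMP character
```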
### 2. &nbsp; Q & A
!!! question "Why does a hash table contain both linear and non-linear data structures?"
The underlying structure of a hash table is an array. To resolve hash collisions, we may use "chaining": each bucket in the array points to a linked list, which, when exceeding a certain threshold, might be transformed into a tree (usually a red-black tree).
From a storage perspective, the foundation of a hash table is an array, where each bucket slot might contain a value, a linked list, or a tree. Therefore, hash tables may contain both linear data structures (arrays, linked lists) and non-linear data structures (trees).
!!! question "Is the length of the `char` type 1 byte?"
The length of the `char` type is determined by the encoding method the programming language uses. For example, Java, JavaScript, TypeScript, and C# all use UTF-16 (storing Unicode code points), so `char` occupies 2 bytes.
!!! question "Is it ambiguous to call array-based data structures 'static data structures,' given that operations like push and pop on stacks are 'dynamic'?"

While stacks indeed allow dynamic data operations, the data structure itself remains "static" (its length cannot change). Even though array-based data structures can dynamically add or remove elements, their capacity is fixed: if the data volume exceeds the pre-allocated capacity, a new, larger array must be created and the old array's contents copied into it, as sketched below.
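A minimal sketch of that expansion mechanism, assuming nothing beyond plain Python lists used as fixed-size buffers:

```python
class SimpleList:
    """Toy dynamic array: a fixed-capacity buffer, replaced by a larger one when full."""

    def __init__(self, capacity: int = 10):
        self._buf = [None] * capacity  # pre-allocated, fixed-size storage
        self._size = 0

    def append(self, item) -> None:
        if self._size == len(self._buf):       # buffer full: expand
            new_buf = [None] * (2 * len(self._buf))
            new_buf[:self._size] = self._buf   # copy old contents over
            self._buf = new_buf
        self._buf[self._size] = item
        self._size += 1
```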
!!! question "When building stacks (queues) without specifying their size, why are they considered 'static data structures'?"
In high-level programming languages, we don't need to manually specify the initial capacity of stacks (or queues); this task is handled automatically inside the class. For example, the initial capacity of Java's `ArrayList` is usually 10, and the expansion operation is likewise automatic. See the subsequent "List" chapter for details.

View File

@ -19,6 +19,6 @@ icon: material/calculator-variant-outline
## Chapter Contents
- [1.1 &nbsp; Algorithms Are Everywhere](https://www.hello-algo.com/chapter_introduction/algorithms_are_everywhere/)
- [1.2 &nbsp; What Is Algorithms](https://www.hello-algo.com/chapter_introduction/what_is_dsa/)
- [1.1 &nbsp; Algorithms are Everywhere](https://www.hello-algo.com/chapter_introduction/algorithms_are_everywhere/)
- [1.2 &nbsp; What is an Algorithm](https://www.hello-algo.com/chapter_introduction/what_is_dsa/)
- [1.3 &nbsp; Summary](https://www.hello-algo.com/chapter_introduction/summary/)

View File

@ -19,6 +19,6 @@ icon: material/book-open-outline
## Chapter Contents
- [0.1 &nbsp; The Book](https://www.hello-algo.com/chapter_preface/about_the_book/)
- [0.1 &nbsp; About This Book](https://www.hello-algo.com/chapter_preface/about_the_book/)
- [0.2 &nbsp; How to Read](https://www.hello-algo.com/chapter_preface/suggestions/)
- [0.3 &nbsp; Summary](https://www.hello-algo.com/chapter_preface/summary/)

View File

@ -2,7 +2,7 @@
comments: true
---
# 0.2 &nbsp; How To Read
# 0.2 &nbsp; How to Read
!!! tip

View File

@ -92,6 +92,6 @@ UTF-8's encoding rules are not complicated and fall into the following two cases.
- Python's `str` uses Unicode encoding with a flexible string representation, in which the stored character width depends on the largest Unicode code point in the string. If all characters are ASCII, each occupies 1 byte; if some characters exceed ASCII but all lie within the Basic Multilingual Plane (BMP), each occupies 2 bytes; if any character lies beyond the BMP, each occupies 4 bytes.
- Go's `string` type uses UTF-8 encoding internally. Go also provides the `rune` type for representing a single Unicode code point.
- Rust's str and String types use UTF-8 encoding internally. Rust also provides the `char` type for representing a single Unicode code point.
- Rust's `str` and `String` types use UTF-8 encoding internally. Rust also provides the `char` type for representing a single Unicode code point.

Note that the above concerns how strings are stored within programming languages, **which is a separate question from how strings are stored in files or transmitted over networks**. For file storage or network transmission, we usually encode strings in UTF-8 to achieve optimal compatibility and space efficiency.

View File

@ -569,7 +569,7 @@ comments: true
if nums[m as usize] < target { // This means target is in the interval [m+1, j)
    i = m + 1;
} else if nums[m as usize] > target { // This means target is in the interval [i, m)
    j = m - 1;
    j = m;
} else { // Found the target element; return its index
return m;
}