<h1id="22-iteration-and-recursion">2.2 Iteration and recursion<aclass="headerlink"href="#22-iteration-and-recursion"title="Permanent link">¶</a></h1>
<p>In algorithms, the repeated execution of a task is quite common and is closely related to the analysis of complexity. Therefore, before delving into the concepts of time complexity and space complexity, let's first explore how to implement repetitive tasks in programming. This involves understanding two fundamental programming control structures: iteration and recursion.</p>
<p>"Iteration" is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met until this condition is no longer satisfied.</p>
<p><u>Iteration</u> is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met until this condition is no longer satisfied.</p>
<h3id="1-for-loops">1. For loops<aclass="headerlink"href="#1-for-loops"title="Permanent link">¶</a></h3>
<p>The <code>for</code> loop is one of the most common forms of iteration, and <strong>it's particularly suitable when the number of iterations is known in advance</strong>.</p>
<p>The following function uses a <code>for</code> loop to perform a summation of <span class="arithmatex">\(1 + 2 + \dots + n\)</span>, with the sum being stored in the variable <code>res</code>. It's important to note that in Python, <code>range(a, b)</code> creates an interval that is inclusive of <code>a</code> but exclusive of <code>b</code>, meaning it iterates over the range from <span class="arithmatex">\(a\)</span> up to <span class="arithmatex">\(b - 1\)</span>.</p>
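<p>The original listing is not included in this excerpt; here is a minimal Python sketch of such a function (the name <code>for_loop</code> is illustrative):</p>
<pre><code class="language-python">def for_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n using a for loop."""
    res = 0
    # range(1, n + 1) iterates over 1, 2, ..., n
    for i in range(1, n + 1):
        res += i
    return res
</code></pre>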
<p><aclass="glightbox"href="../iteration_and_recursion.assets/iteration.png"data-type="image"data-width="100%"data-height="auto"data-desc-position="bottom"><imgalt="Flowchart of the sum function"class="animation-figure"src="../iteration_and_recursion.assets/iteration.png"/></a></p>
<palign="center"> Figure 2-1 Flowchart of the sum function </p>
<p>The number of operations in this summation function is proportional to the size of the input data <span class="arithmatex">\(n\)</span>, or in other words, it has a linear relationship. <strong>This "linear relationship" is what time complexity describes</strong>. This topic will be discussed in more detail in the next section.</p>
<h3id="2-while-loops">2. While loops<aclass="headerlink"href="#2-while-loops"title="Permanent link">¶</a></h3>
<p>Similar to <code>for</code> loops, <code>while</code> loops are another approach for implementing iteration. In a <code>while</code> loop, the program checks the condition at the beginning of each iteration; if the condition is true, execution continues; otherwise, the loop ends.</p>
<p>Below we use a <code>while</code> loop to implement the sum <span class="arithmatex">\(1 + 2 + \dots + n\)</span>.</p>
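<p>The listing itself is not part of this excerpt; a minimal Python sketch (the name <code>while_loop</code> is illustrative):</p>
<pre><code class="language-python">def while_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n using a while loop."""
    res = 0
    i = 1  # Initialize the condition variable
    while i <= n:
        res += i
        i += 1  # Update the condition variable
    return res
</code></pre>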
<p><strong><code>while</code> loops provide more flexibility than <code>for</code> loops</strong>, especially since they allow for custom initialization and modification of the condition variable at each step.</p>
<p>For example, in the following code, the condition variable <span class="arithmatex">\(i\)</span> is updated twice each round, which would be inconvenient to implement with a <code>for</code> loop.</p>
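<p>The code in question is not reproduced in this excerpt; a minimal Python sketch, assuming the two updates are <code>i += 1</code> and <code>i *= 2</code>:</p>
<pre><code class="language-python">def while_loop_ii(n: int) -> int:
    """Summation loop whose condition variable is updated twice per round."""
    res = 0
    i = 1
    while i <= n:
        res += i
        # The condition variable i is updated twice each round
        i += 1
        i *= 2
    return res
</code></pre>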
<p>When loops are nested, as in the sketch below, the number of operations of the function is proportional to <span class="arithmatex">\(n^2\)</span>, meaning the algorithm's runtime and the size of the input data <span class="arithmatex">\(n\)</span> have a "quadratic relationship."</p>
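<p>For concreteness, here is a minimal sketch of such nested loops (the name <code>nested_for_loop</code> is illustrative); the inner loop runs <span class="arithmatex">\(n\)</span> times for each of the <span class="arithmatex">\(n\)</span> outer rounds:</p>
<pre><code class="language-python">def nested_for_loop(n: int) -> str:
    """Two nested for loops: roughly n * n operations."""
    res = ""
    for i in range(1, n + 1):      # Outer loop
        for j in range(1, n + 1):  # Inner loop
            res += f"({i}, {j}), "
    return res
</code></pre>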
<p>We can further increase the complexity by adding more nested loops, each level of nesting effectively "increasing the dimension," which raises the time complexity to "cubic," "quartic," and so on.</p>
<p>"Recursion" is an algorithmic strategy where a function solves a problem by calling itself. It primarily involves two phases:</p>
<p><u>Recursion</u> is an algorithmic strategy where a function solves a problem by calling itself. It primarily involves two phases:</p>
<ol>
<li><strong>Calling</strong>: This is where the program repeatedly calls itself, often with progressively smaller or simpler arguments, moving towards the "termination condition."</li>
<li><strong>Returning</strong>: Upon triggering the "termination condition," the program begins to return from the deepest recursive function, aggregating the results of each layer.</li>
</ol>
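<p>A minimal Python sketch of recursive summation (the name <code>recur</code> is illustrative), with the two phases marked:</p>
<pre><code class="language-python">def recur(n: int) -> int:
    """Sum 1 + 2 + ... + n via recursion."""
    # Termination condition
    if n == 1:
        return 1
    # Calling phase: recurse on the smaller problem n - 1
    res = recur(n - 1)
    # Returning phase: aggregate this layer's result
    return n + res
</code></pre>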
<p>In practice, the depth of recursion allowed by programming languages is usually limited, and excessively deep recursion can lead to stack overflow errors.</p>
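<p>In Python, for instance, the depth limit can be inspected and adjusted (a sketch; the default value varies by interpreter):</p>
<pre><code class="language-python">import sys

print(sys.getrecursionlimit())  # Default recursion depth limit, typically 1000
sys.setrecursionlimit(10000)    # Raise the limit (use with care)
</code></pre>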
<p>Interestingly, <strong>if a function performs its recursive call as the very last step before returning,</strong> it can be optimized by the compiler or interpreter to be as space-efficient as iteration. This scenario is known as <u>tail recursion</u>.</p>
<ul>
<li><strong>Regular recursion</strong>: In standard recursion, when the function returns to the previous level, it continues to execute more code, requiring the system to save the context of the previous call.</li>
<li><strong>Tail recursion</strong>: Here, the recursive call is the final operation before the function returns. This means that upon returning to the previous level, no further actions are needed, so the system does not need to save the context of the previous level (see the sketch after this list).</li>
</ul>
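<p>A minimal Python sketch of tail recursion, assuming the running sum is carried in an extra parameter <code>res</code> (note that Python itself does not perform this optimization, as discussed later in this section):</p>
<pre><code class="language-python">def tail_recur(n: int, res: int = 0) -> int:
    """Tail-recursive sum: the recursive call is the last action."""
    # Termination condition: return the accumulated result
    if n == 0:
        return res
    # The recursive call is the final operation of the function;
    # no work remains in this frame after it returns
    return tail_recur(n - 1, res + n)
</code></pre>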
<p>Observing the Fibonacci sketch below, we see that it makes two recursive calls within itself, <strong>meaning that one call generates two branching calls</strong>. This continuous recursive calling eventually creates a <u>recursion tree</u> with a depth of <span class="arithmatex">\(n\)</span>.</p>
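<p>The original listing is not included in this excerpt; a minimal Fibonacci sketch in Python, assuming the convention <span class="arithmatex">\(f(1) = 0, f(2) = 1\)</span>:</p>
<pre><code class="language-python">def fib(n: int) -> int:
    """Fibonacci number via branching recursion."""
    # Termination condition: f(1) = 0, f(2) = 1
    if n == 1 or n == 2:
        return n - 1
    # One call generates two branching calls
    return fib(n - 1) + fib(n - 2)
</code></pre>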
<p>On one hand, <strong>it's difficult to eliminate interference from the testing environment</strong>. Hardware configurations can affect algorithm performance. For example, algorithm <code>A</code> might run faster than <code>B</code> on one computer, but the opposite result may occur on another computer with different configurations. This means we would need to test on a variety of machines to calculate average efficiency, which is impractical.</p>
<p>On the other hand, <strong>conducting a full test is very resource-intensive</strong>. As the volume of input data changes, the efficiency of the algorithms may vary. For example, with smaller data volumes, algorithm <code>A</code> might run faster than <code>B</code>, but the opposite might be true with larger data volumes. Therefore, to draw convincing conclusions, we need to test a wide range of input data sizes, which requires significant computational resources.</p>
<p>Due to the significant limitations of actual testing, we can consider evaluating algorithm efficiency solely through calculations. This estimation method is known as <u>asymptotic complexity analysis</u>, or simply <u>complexity analysis</u>.</p>
<p>Complexity analysis reflects the relationship between the time and space resources required for algorithm execution and the size of the input data. <strong>It describes the trend of growth in the time and space required by the algorithm as the size of the input data increases</strong>. This definition might sound complex, but we can break it down into three key points to understand it better.</p>
<ul>
<li>"Time and space resources" correspond to "time complexity" and "space complexity," respectively.</li>
<li>"Time and space resources" correspond to <u>time complexity</u> and <u>space complexity</u>, respectively.</li>
<li>"As the size of input data increases" means that complexity reflects the relationship between algorithm efficiency and the volume of input data.</li>
<li>"The trend of growth in time and space" indicates that complexity analysis focuses not on the specific values of runtime or space occupied but on the "rate" at which time or space grows.</li>
<h1id="24-space-complexity">2.4 Space complexity<aclass="headerlink"href="#24-space-complexity"title="Permanent link">¶</a></h1>
<p>"Space complexity" is used to measure the growth trend of the memory space occupied by an algorithm as the amount of data increases. This concept is very similar to time complexity, except that "running time" is replaced with "occupied memory space".</p>
<p><u>Space complexity</u> is used to measure the growth trend of the memory space occupied by an algorithm as the amount of data increases. This concept is very similar to time complexity, except that "running time" is replaced with "occupied memory space".</p>
<h2id="241-space-related-to-algorithms">2.4.1 Space related to algorithms<aclass="headerlink"href="#241-space-related-to-algorithms"title="Permanent link">¶</a></h2>
<p>The memory space used by an algorithm during its execution mainly includes the following types.</p>
<ul>
<li>Time complexity measures the trend of an algorithm's running time with the increase in data volume, effectively assessing algorithm efficiency. However, it can fail in certain cases, such as with small input data volumes or when time complexities are the same, making it challenging to precisely compare the efficiency of algorithms.</li>
<li>Worst-case time complexity is denoted using big-<span class="arithmatex">\(O\)</span> notation, representing the asymptotic upper bound, reflecting the growth level of the number of operations <span class="arithmatex">\(T(n)\)</span> as <span class="arithmatex">\(n\)</span> approaches infinity.</li>
<li>Calculating time complexity involves two steps: first counting the number of operations, then determining the asymptotic upper bound.</li>
<li>Common time complexities, arranged from low to high, include <span class="arithmatex">\(O(1)\)</span>, <span class="arithmatex">\(O(\log n)\)</span>, <span class="arithmatex">\(O(n)\)</span>, <span class="arithmatex">\(O(n \log n)\)</span>, <span class="arithmatex">\(O(n^2)\)</span>, <span class="arithmatex">\(O(2^n)\)</span>, and <span class="arithmatex">\(O(n!)\)</span>, among others.</li>
<li>The time complexity of some algorithms is not fixed and depends on the distribution of input data. Time complexities are divided into worst, best, and average cases. The best case is rarely used because input data generally needs to meet strict conditions to achieve the best case.</li>
</ul>
<p><strong>Q</strong>: Is the space complexity of tail recursion <span class="arithmatex">\(O(1)\)</span>?</p>
<p>Theoretically, the space complexity of a tail-recursive function can be optimized to <span class="arithmatex">\(O(1)\)</span>. However, most programming languages (such as Java, Python, C++, Go, C#) do not support automatic optimization of tail recursion, so it's generally considered to have a space complexity of <span class="arithmatex">\(O(n)\)</span>.</p>
<p><strong>Q</strong>: What is the difference between the terms "function" and "method"?</p>
<p>A "function" can be executed independently, with all parameters passed explicitly. A "method" is associated with an object and is implicitly passed to the object calling it, able to operate on the data contained within an instance of a class.</p>
<p>A <u>function</u> can be executed independently, with all parameters passed explicitly. A <u>method</u> is associated with an object and is implicitly passed to the object calling it, able to operate on the data contained within an instance of a class.</p>
<p>Here are some examples from common programming languages:</p>
<ul>
<li>C is a procedural programming language without object-oriented concepts, so it only has functions. However, we can simulate object-oriented programming by creating structures (<code>struct</code>), and functions associated with these structures are equivalent to methods in other programming languages.</li>
</ul>
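<p>A minimal Python sketch of the distinction (the names <code>Counter</code>, <code>increment</code>, and <code>add</code> are illustrative):</p>
<pre><code class="language-python">def add(a: int, b: int) -> int:
    """Function: all parameters are passed explicitly."""
    return a + b

class Counter:
    """A class whose method operates on instance data."""
    def __init__(self):
        self.count = 0

    def increment(self):
        # Method: the instance is passed implicitly as self
        self.count += 1

c = Counter()
c.increment()     # Equivalent to Counter.increment(c)
print(add(1, 2))  # 3
print(c.count)    # 1
</code></pre>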
<p>Since <spanclass="arithmatex">\(T(n)\)</span> is a linear function, its growth trend is linear, and therefore, its time complexity is of linear order, denoted as <spanclass="arithmatex">\(O(n)\)</span>. This mathematical notation, known as "big-O notation," represents the "asymptotic upper bound" of the function <spanclass="arithmatex">\(T(n)\)</span>.</p>
<p>Since <spanclass="arithmatex">\(T(n)\)</span> is a linear function, its growth trend is linear, and therefore, its time complexity is of linear order, denoted as <spanclass="arithmatex">\(O(n)\)</span>. This mathematical notation, known as <u>big-O notation</u>, represents the <u>asymptotic upper bound</u> of the function <spanclass="arithmatex">\(T(n)\)</span>.</p>
<p>In essence, time complexity analysis is about finding the asymptotic upper bound of the "number of operations <span class="arithmatex">\(T(n)\)</span>". It has a precise mathematical definition.</p>
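<p>For reference, the standard formulation of that definition (stated here independently of the original text): if there exist positive real numbers <span class="arithmatex">\(c\)</span> and <span class="arithmatex">\(n_0\)</span> such that</p>
<div class="arithmatex">\[
T(n) \le c \cdot f(n) \quad \text{for all } n > n_0 ,
\]</div>
<p>then <span class="arithmatex">\(f(n)\)</span> is an asymptotic upper bound of <span class="arithmatex">\(T(n)\)</span>, written <span class="arithmatex">\(T(n) = O(f(n))\)</span>.</p>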