## Complexity and Asymptotes

A friend pointed out to me that the claim in the previous post, that the final divide-and-conquer Fibonacci algorithm runs in $O(\log N)$ time, was a bit misleading. The objection was that I had assumed the cost of a matrix multiplication does not depend on $N$, but this is in fact not the case at all: once the numbers involved get large, neither the additions nor the multiplications inside the matrix multiplication can be assumed to take constant time.

In fact, the earlier analysis of Fibonacci showed that the value of $\text{fib}(N)$ is bounded above and below by $2^{N/2} < \text{fib}(N) < 2^{N}$, so the numbers involved have on the order of $N$ bits. Each of the roughly $\log N$ matrix multiplications amounts to 8 multiplication operations and 4 addition operations, each of which we can bound above by the cost of a multiplication or addition on order-of-$N$-bit numbers. By contrast, the iterative algorithm, which I had claimed was linear-time $O(N)$, requires only one addition at each step, resulting in on the order of $N$ addition operations on order-of-$N$-bit numbers.
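To make the accounting concrete, here is a sketch of the divide-and-conquer algorithm as exponentiation by squaring of the matrix $\begin{pmatrix}1&1\\1&0\end{pmatrix}$ (the function and variable names are my own, not from the previous post). Each call to `mat_mult` below performs exactly 8 big-integer multiplications and 4 big-integer additions:

```python
def mat_mult(a, b):
    """Multiply two 2x2 matrices stored as nested tuples:
    8 multiplications and 4 additions on arbitrary-precision ints."""
    return (
        (a[0][0] * b[0][0] + a[0][1] * b[1][0],
         a[0][0] * b[0][1] + a[0][1] * b[1][1]),
        (a[1][0] * b[0][0] + a[1][1] * b[1][0],
         a[1][0] * b[0][1] + a[1][1] * b[1][1]),
    )

def fib_matrix(n):
    """fib(n) via exponentiation by squaring of [[1, 1], [1, 0]].
    Performs O(log n) matrix multiplications."""
    result = ((1, 0), (0, 1))  # identity matrix
    base = ((1, 1), (1, 0))
    while n > 0:
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    return result[0][1]
```

Note that the loop runs at most $\lceil \log_2 n \rceil$ times but may perform two matrix multiplications per iteration (one squaring plus one accumulation), which only changes the constant factor in the analysis.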

If we consider the complexity of the arithmetic operations, the addition of two $N$-bit numbers can be completed in $O(N)$ time, so the revised complexity of the iterative algorithm becomes $O(N) \times O(N) = O(N^2)$.
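For reference, the iterative algorithm under discussion can be sketched as follows (again, the names are mine). The single `a + b` per step is the addition whose cost grows linearly with the bit length of the operands:

```python
def fib_iter(n):
    """Iterative fib(n): n steps, each performing one big-integer
    addition on numbers of up to O(n) bits, hence O(n^2) bit operations."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```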

For the divide-and-conquer algorithm, we have to consider the complexity of both the multiplication operations and the addition operations. Suppose we opt for the Schönhage–Strassen algorithm, whose complexity for multiplying two $N$-bit numbers is $O(N \log N \log \log N)$. In this case, the multiplication operations dominate the addition ones, and bounding each of the $\log N$ matrix multiplications by this cost gives a revised overall complexity of $O(N \log^2 N \log \log N)$, which is still asymptotically faster than the $O(N^2)$ iterative algorithm. However, I wonder if there is an intermediate region, before that asymptotic behavior takes effect, in which the iterative approach completes faster.
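One way to probe for that crossover empirically is a quick timing comparison. A caveat: CPython's built-in big-integer multiplication uses Karatsuba rather than Schönhage–Strassen, so the observed behavior will not match the analysis above exactly, but it should still reveal whether the iterative version wins at small $N$. This is a rough sketch, with compact re-definitions of both algorithms so it runs standalone:

```python
import timeit

def fib_iter(n):
    # One big-integer addition per step.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_matrix(n):
    # 2x2 matrices flattened to 4-tuples (row-major).
    def mul(a, b):
        return (a[0]*b[0] + a[1]*b[2], a[0]*b[1] + a[1]*b[3],
                a[2]*b[0] + a[3]*b[2], a[2]*b[1] + a[3]*b[3])
    result, base = (1, 0, 0, 1), (1, 1, 1, 0)  # identity, [[1,1],[1,0]]
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[1]  # the F(n) entry

if __name__ == "__main__":
    for n in (1_000, 10_000, 50_000):
        t_iter = timeit.timeit(lambda: fib_iter(n), number=3)
        t_mat = timeit.timeit(lambda: fib_matrix(n), number=3)
        print(f"n={n}: iterative {t_iter:.4f}s, matrix {t_mat:.4f}s")
```

The exact crossover point, if any, will depend on the interpreter version and hardware, so the printed timings are only indicative.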