## Code Monkey

During my years as a student, I sometimes encountered a disdain in others for writing code. In some cases, the term “code monkey” would get used against someone who enjoyed writing code. Something never quite felt right about that term. Recently, I went to a talk by Donald Knuth, who said something that resonated with me. I am paraphrasing it here: “Some people say that you truly understand something when you can explain it to a child. I’d say that you truly understand something when you can explain it to a computer.”

I took a programming class in high school during which we had to write a program to compute the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, etc. The next number in the sequence is the sum of the previous two. In my mind, which had just digested the idea of recursion, this was relatively simple to explain to a computer:

int fib(int N) {
  if (N == 1) return 1;
  if (N == 2) return 1;
  return fib(N - 1) + fib(N - 2);
}

There was only one problem with my explanation: the code would take the computer time exponential in $N$ to run, i.e. $O(2^N)$. One way to see this is to notice that every call with $N > 2$ spawns two further calls, so the calls form a binary tree whose leaves sit at depths between $\frac{N}{2}$ and $N$. Another way to see it is to compile the code, use it to try to compute $\text{fib}(50)$, and then wait… and wait… and wait.

The Fibonacci sequence grows exponentially, and the naive recursion effectively counts up to $\text{fib}(N)$ one base case at a time, so its running time grows exponentially as well.
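One way to watch this blow-up directly (a quick sketch I've added, with a hypothetical global counter) is to instrument the recursion; the total number of calls works out to $2\,\text{fib}(N) - 1$, which grows just as fast as the sequence itself:

```cpp
#include <cassert>

// Global counter for how many times fib() is invoked
// (instrumentation for illustration only).
long long calls = 0;

int fib(int N) {
  ++calls;
  if (N == 1) return 1;
  if (N == 2) return 1;
  return fib(N - 1) + fib(N - 2);
}
```

Resetting `calls` and running `fib(10)` gives 109 calls; `fib(30)` already makes 1,664,079.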

I got tired of waiting, and after studying computer science a while longer, I learned how to explain Fibonacci to a computer in a way that takes linear time, i.e. $O(N)$:

int fib(int N) {
  int a = 0;
  int b = 1;
  for (int i = 1; i < N; ++i) {
    int temp = b;
    b += a;
    a = temp;
  }
  return b;
}

Now I didn’t have to wait for $\text{fib}(50)$, but I would for $\text{fib}(2^{50})$. This improved my understanding of the Fibonacci sequence, but at the time, my understanding was quite limited: I hadn’t taken a linear algebra class yet. Once I had that and some signal processing classes under my belt, I learned that the Fibonacci sequence could be solved in closed-form with a z-transform. Alternatively, one could arrive at the closed-form solution by recognizing that the Fibonacci sequence could be represented as the product of an exponentiated matrix with a vector, and all one had to do was decompose that matrix into its eigenvalues/eigenvectors:
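One quick way to convince oneself of the matrix identity below (a sketch I've added, not in the original) is to multiply the vector by the matrix repeatedly and watch the Fibonacci numbers fall out:

```cpp
#include <cstdint>

// One multiplication by the Fibonacci matrix [[1, 1], [1, 0]]:
// (hi, lo) -> (hi + lo, hi).
void step(std::int64_t& hi, std::int64_t& lo) {
  std::int64_t new_hi = hi + lo;
  lo = hi;
  hi = new_hi;
}

// After N multiplications starting from (1, 0), the vector holds
// (fib(N+1), fib(N)).
std::int64_t fib_matrix(int N) {
  std::int64_t hi = 1, lo = 0;
  for (int i = 0; i < N; ++i) step(hi, lo);
  return lo;
}
```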

$\left(\begin{array}{c} \text{fib}(N+1) \\ \text{fib}(N) \end{array}\right) = \left(\begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array}\right)^N \left(\begin{array}{c} 1 \\ 0 \end{array}\right)$

There was still a problem, though. The eigenvalues of the Fibonacci matrix are irrational, and the closed-form solution contains irrational terms, so explaining this to a computer could result in floating-point errors:

$\text{fib}(N) = \frac{(\frac{1 + \sqrt{5}}{2})^N - (\frac{1 - \sqrt{5}}{2})^N}{\sqrt{5}}$
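To make the floating-point concern concrete, here is a sketch of the closed-form (Binet) formula in double precision; the helper name is mine. It agrees with the exact sequence for small $N$, but rounding error in `pow` and `sqrt` eventually breaks it for large enough $N$:

```cpp
#include <cmath>
#include <cstdint>

// Binet's formula evaluated in double precision (illustrative sketch).
// Exact only while the rounding error stays below 0.5.
std::int64_t fib_binet(int N) {
  const double sqrt5 = std::sqrt(5.0);
  const double phi = (1.0 + sqrt5) / 2.0;
  const double psi = (1.0 - sqrt5) / 2.0;
  return std::llround((std::pow(phi, N) - std::pow(psi, N)) / sqrt5);
}
```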

By this time, I was well versed in MATLAB, and this wonderful vector-matrix notation could be fed right in, but perhaps I didn’t understand what was going on as well as I thought.

Some years went by before I took a closer look at the above expression and noticed something: the computation of the closed-form expression involves two quantities that need to be exponentiated, which can be done using a divide-and-conquer algorithm in $O(\log N)$, i.e. logarithmic time. Is there a way to do this that doesn’t involve floating-point arithmetic?
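The divide-and-conquer exponentiation in question is repeated squaring; for a scalar base it can be sketched as:

```cpp
// Computes base^N in O(log N) multiplications by repeated squaring:
// each iteration consumes one bit of N.
double power(double base, int N) {
  double result = 1.0;
  while (N > 0) {
    if (N % 2 == 1) result *= base;  // odd: peel off one factor
    base *= base;                    // square for the next bit
    N /= 2;
  }
  return result;
}
```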

It turns out the approach is pretty simple: one can apply the same divide-and-conquer technique for exponentiating a number to exponentiating a matrix. The divide step is easy to see when $N$ is even:

$\left(\begin{array}{c} \text{fib}(N+1) \\ \text{fib}(N) \end{array}\right) = \left(\left(\begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array}\right)^2\right)^{N/2} \left(\begin{array}{c} 1 \\ 0 \end{array}\right)$

If $N$ is odd, the same idea holds:

$\left(\begin{array}{c} \text{fib}(N+1) \\ \text{fib}(N) \end{array}\right) = \left(\begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array}\right)\left(\left(\begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array}\right)^2\right)^{(N-1)/2} \left(\begin{array}{c} 1 \\ 0 \end{array}\right)$

Given this, it isn’t too hard to explain to a computer how to compute the Fibonacci sequence in $O(\log N)$ time without resorting to floating-point arithmetic:

int fib(int N) {
  Matrix<int> m(1, 1, 1, 0);
  Vector<int> v(1, 0);
  // result holds (fib(N+1), fib(N)).
  Vector<int> result = fib_helper(N, m, v);
  return result[1];
}

Vector<int> fib_helper(int N, Matrix<int> m, Vector<int> v) {
  // Base case
  if (N == 1) {
    // Multiplies the matrix and the vector.
    return MultiplyMatrixVector(m, v);
  }
  // Even case
  if (N % 2 == 0) {
    // Squares the matrix.
    return fib_helper(N / 2, SquareMatrix(m), v);
  }
  // Odd case
  return MultiplyMatrixVector(
      m, fib_helper(N - 1, m, v));
}
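The `Matrix`/`Vector` types and helpers above are assumed utilities. As a self-contained variant (my sketch, not the original code), the matrix power can also be computed iteratively by repeated squaring. Powers of the symmetric Fibonacci matrix stay symmetric, so three integers suffice to store each one:

```cpp
#include <cstdint>

// 2x2 symmetric matrix [[a, b], [b, c]].
struct Mat {
  std::int64_t a, b, c;
};

// Product of two symmetric powers of the Fibonacci matrix
// (such products are again symmetric, since the factors commute).
Mat multiply(const Mat& x, const Mat& y) {
  return {x.a * y.a + x.b * y.b,
          x.a * y.b + x.b * y.c,
          x.b * y.b + x.c * y.c};
}

// Computes [[1, 1], [1, 0]]^N by repeated squaring; the off-diagonal
// entry of the result is fib(N). Valid while fib(N) fits in 64 bits
// (N <= 92).
std::int64_t fib_log(int N) {
  Mat base = {1, 1, 0};     // the Fibonacci matrix
  Mat result = {1, 0, 1};   // identity
  while (N > 0) {
    if (N % 2 == 1) result = multiply(result, base);
    base = multiply(base, base);
    N /= 2;
  }
  return result.b;
}
```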

Are there any cases in which programming has helped you understand a concept better? Are there any cases in which understanding a concept has made you a better programmer?

