July 2019, rev. August 2019

Many algorithms aren't easy to understand. They're designed for machines, not humans. Their metric of success is computational complexity like O(time) and O(space), not cognitive complexity. There's no O(human).

If I want to make a small change to the algorithm, will I be able to? Does the algorithm have to be this big? Is the recursion necessary (isn't iteration faster than recursion)? How bad is the recursion? Will I have to trace something backwards? Why can't there be an algorithm that runs forward? Can the algorithm be broken into multiple steps without changing O(time), instead of doing it all in one shot? Can I save more data in each step? Does each iteration destroy data? How will I debug it? Can I debug the whole thing in less than 30 minutes? If I add debugging statements in a couple of loops, will I figure out how the algorithm works, or will I get more than 50 lines of output that are hard for the eyes to parse? Does it reduce the number of thinking steps a human has to do to a minimum? And the cardinal sin: does the description explain why each inductive step in the algorithm must exist, and with its current parameters instead of some other parameters, along with all the other options that wouldn't work for such a step?

The answers to these questions can define O(human).
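
Here's a toy sketch of what a few of those questions look like in code. This is my own illustration, not from any particular paper: the same computation written recursively, which you trace backwards through a call stack, and iteratively, which runs forward, keeps every intermediate value, and can be debugged with one short print.

    # A toy illustration: the same computation two ways. The function
    # names and the debug flag are mine, chosen for the example.

    def fib_recursive(n):
        # To follow fib_recursive(5) you unwind a tree of calls in
        # your head, backwards from the base case.
        if n < 2:
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    def fib_iterative(n, debug=False):
        # Runs forward. Every intermediate value survives in `steps`,
        # so no iteration destroys data, and debugging is one print
        # per step: a handful of lines, easy on the eyes.
        steps = [0, 1]
        for i in range(2, n + 1):
            steps.append(steps[i - 1] + steps[i - 2])
            if debug:
                print("step %d: %d" % (i, steps[i]))
        return steps[n]

The iterative version also happens to be faster here, but the point is the O(human) difference: it runs forward, each step is inspectable, and there's nothing to unwind.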

I used to feel embarrassed when I couldn't grasp an algorithm quickly, as if I were missing the part of the brain that learns the algorithm. I want to be better at algorithms. Now that I know more about algorithms, I see that many algorithms could be simpler. As if they're missing the part of the design that justifies the algorithm. I want there to be better algorithms. Sometimes when I didn't understand an algorithm, either some step wasn't explained clearly, or my test cases found a bug in the algorithm, or the algorithm didn't have working source code. Many times the algorithm was big, ugly, or pointless enough that it couldn't hold my interest.

Here are some examples. The complexity of Paxos compared to the simplicity of Raft (can't consensus be shorter?). The NP-complete graph-coloring algorithm for register allocation compared to the simplicity of linear-scan allocation (isn't scanning an array simpler than scanning a graph?), or even just incrementing a counter; there may also be a hardware-only solution. The bug in the SSA paper for building a control flow graph compared to the workaround of using reaching definitions.
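
To make the register-allocation example concrete, here is a minimal sketch of linear scan in the spirit of Poletto and Sarkar's algorithm, assuming live intervals are already computed. The names (`Interval`, `num_regs`, the `"spill"` marker) are mine, and a real allocator handles much more.

    from collections import namedtuple

    # An interval says: variable `name` is live from `start` to `end`.
    Interval = namedtuple("Interval", "name start end")

    def linear_scan(intervals, num_regs):
        # One forward pass over the intervals, sorted by start point.
        intervals = sorted(intervals, key=lambda iv: iv.start)
        free = ["r%d" % i for i in range(num_regs)]
        active = []        # intervals currently holding a register
        assignment = {}
        for iv in intervals:
            # Expire intervals that ended before this one starts,
            # returning their registers to the free pool.
            for old in [a for a in active if a.end < iv.start]:
                active.remove(old)
                free.append(assignment[old.name])
            if free:
                assignment[iv.name] = free.pop()
                active.append(iv)
            else:
                # Out of registers: spill whichever live interval
                # ends last (the classic heuristic).
                victim = max(active, key=lambda a: a.end)
                if victim.end > iv.end:
                    assignment[iv.name] = assignment[victim.name]
                    assignment[victim.name] = "spill"
                    active.remove(victim)
                    active.append(iv)
                else:
                    assignment[iv.name] = "spill"
        return assignment

The whole thing is a loop over a sorted array that you can trace forward and print at each step. Compare that with building an interference graph and searching for a coloring, which you mostly have to hold in your head.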

So if you're designing a better algorithm, one you also mean for humans to understand, and it's nowhere near O(human) territory, take a step back and revise the design.