CSC323 2010S Software Design

Beautiful Code: The Most Beautiful Code I Never Wrote

Bentley, Jon (2007). The Most Beautiful Code I Never Wrote. Chapter 3 (pp. 29-40) in Oram & Wilson (eds.), Beautiful Code. Sebastopol, CA: O'Reilly.

Generally speaking, where is the line between useful conciseness, which is good, and "clever" code, which is (from what I've read in other sources) bad?

It's clearly not a hard line.

In Bentley's Quicksort, he selects the pivot randomly, whereas in most other implementations I have seen, the pivot is selected by taking the median of three values (low, mid, high). Other than more concise code and no need to compare the three values, is there any significant difference in performance between the two implementations? Why would someone prefer one method over the other?

I'm used to Bentley's approach, so we'll talk a little bit about the two alternatives.
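For concreteness, here are the two pivot rules side by side, sketched in Python (Bentley's code is in C; the function names here are mine, not his):

```python
import random

def random_pivot(a, lo, hi):
    # Bentley's choice: every position is equally likely, so no fixed
    # input ordering is consistently bad (expected O(n log n) on every input).
    return random.randint(lo, hi)

def median_of_three(a, lo, hi):
    # The common engineering alternative: deterministic, sidesteps the
    # worst case on already-sorted input, and tends to pick a pivot
    # nearer the true median, so it usually saves a few comparisons.
    mid = (lo + hi) // 2
    x, y, z = a[lo], a[mid], a[hi]
    if x <= y <= z or z <= y <= x:
        return mid
    if y <= x <= z or z <= x <= y:
        return lo
    return hi
```

The random pivot buys a probabilistic guarantee: no particular input is bad every time you sort it. Median-of-three instead targets the common bad cases (sorted and reverse-sorted input) deterministically, at the cost of a little extra code.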

Could we review why/how we jumped from example 3-6 to 3-7 on page 33?
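The jump rests on one observation: once we only count comparisons, the array contents no longer matter. A random pivot ends up at a uniformly random rank, and partitioning n elements always costs n - 1 comparisons, so the whole experiment can be rerun with subproblem sizes alone. A Python sketch of that reduced, size-only counter (Bentley's version is in C):

```python
import random

def comps(n):
    """Count the comparisons quicksort would make on n elements, using
    only sizes: the pivot's final rank m is uniform in 1..n, and the
    partition itself costs n - 1 comparisons."""
    if n <= 1:
        return 0
    m = random.randint(1, n)
    return (n - 1) + comps(m - 1) + comps(n - m)
```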


I am not quite sure I understand the step from example 3-7 to the one with dynamic programming. Can we go over that?
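The dynamic-programming step replaces the random experiment with its expectation. Averaging over the n equally likely pivot ranks gives the recurrence C(n) = (n - 1) + (2/n)(C(0) + ... + C(n-1)), and the dynamic program simply fills in that table from 0 upward. A Python sketch:

```python
def avg_comps(n):
    """Expected quicksort comparisons via dynamic programming:
    c[k] = (k - 1) + (2/k) * (c[0] + ... + c[k-1]), with c[0] = c[1] = 0."""
    c = [0.0] * (n + 1)
    total = 0.0                 # running sum c[0] + ... + c[k-1]
    for k in range(1, n + 1):
        c[k] = (k - 1) + 2.0 * total / k
        total += c[k]
    return c[n]
```

The two recursive calls collapse into one sum because, by symmetry, the left and right subproblems have the same expected cost.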


Can you explain some of the mathematical representations of Quicksort (p. 37) in more layperson-friendly terms? I understand that the author is just reiterating previous information in a different way, but it would be nice to better comprehend the underlying mathematics.
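In rough terms: the math on that page solves the comparison-count recurrence exactly, giving C(n) = 2(n+1)H_n - 4n, where H_n = 1 + 1/2 + ... + 1/n is the nth harmonic number. Since H_n is approximately ln n, this says randomized quicksort averages about 2n ln n, or roughly 1.39 n lg n, comparisons: only about 39% above the theoretical minimum for comparison sorting. A quick Python check of the closed form against the small cases:

```python
def expected_comparisons(n):
    # Exact average-case count from the recurrence's closed form:
    # C(n) = 2(n + 1) * H_n - 4n, with H_n the nth harmonic number.
    h_n = sum(1.0 / i for i in range(1, n + 1))
    return 2 * (n + 1) * h_n - 4 * n
```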


How does viewing software as a "soap bubble" affect how computer scientists work? Is this view shared by the majority?

Wouldn't it be better testing practice to test quicksort with actual arrays of random length instead of creating programs that count the number of calculations?

We're not testing Quicksort. We're analyzing its running time. Presumably, in example 3-2, the assumption is that we generate the array randomly before calling the code.
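That is essentially what Example 3-2 does. A rough Python analogue of the instrumented program (random pivot, Lomuto-style partition; this follows the structure, not the letter, of Bentley's C code):

```python
import random

def quicksort_comps(a, lo, hi):
    """Sort a[lo..hi] in place with a random pivot; return the number
    of element comparisons performed (the quantity being analyzed)."""
    if lo >= hi:
        return 0
    m = random.randint(lo, hi)
    a[lo], a[m] = a[m], a[lo]        # move the pivot to the front
    p, i = a[lo], lo
    comps = 0
    for j in range(lo + 1, hi + 1):
        comps += 1                   # one comparison against the pivot
        if a[j] < p:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[lo], a[i] = a[i], a[lo]        # pivot into its final position
    comps += quicksort_comps(a, lo, i - 1)
    comps += quicksort_comps(a, i + 1, hi)
    return comps
```

Generating the array randomly before the call, running many trials, and averaging the returned counts reproduces the same numbers the analytic programs compute much more quickly.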

Is concise code actually "faster" than code that is more verbose and easy to read? I was under the impression that if two code segments were functionally the same, but expressed differently, the compiler/interpreter would optimize them both in such a way that their performance was equivalent. If that is true, is the author's ideal of "conciseness" something that is always desirable? If additional verbosity would lead to more easily maintainable code, it seems that it would not be. Is this contradiction an artifact of taking the author's point too literally?

It depends on what you mean by functionally the same. But yes, if the two follow the same steps, they should end up being more or less the same compiled code. The difference here is that, while computing the same result, the different chunks of code are using different processes.

Also, the author conjectures that "data structures are frozen algorithms," citing binary search trees as frozen Quicksort. What other data structures might this sort of analogy apply to?
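One concrete way to see the Quicksort/BST connection: inserting a sequence of distinct values into an unbalanced binary search tree performs exactly the same comparisons as quicksort using the first element as the pivot, because the first element splits the remaining values just as the root does. A sketch (in Python; the helper names are mine):

```python
import random

def qs_comps(a):
    """Comparisons made by quicksort when the first element is the pivot."""
    if len(a) <= 1:
        return 0
    p = a[0]
    less = [x for x in a[1:] if x < p]
    more = [x for x in a[1:] if x > p]
    return (len(a) - 1) + qs_comps(less) + qs_comps(more)

def bst_comps(a):
    """Comparisons made while inserting a's (distinct) elements, in order,
    into a plain binary search tree."""
    root, comps = None, 0
    for x in a:
        if root is None:
            root = [x, None, None]   # [value, left, right]
            continue
        node = root
        while True:
            comps += 1               # one comparison per node visited
            side = 1 if x < node[0] else 2
            if node[side] is None:
                node[side] = [x, None, None]
                break
            node = node[side]
    return comps
```

Heaps and heapsort pair up in a similar way, which suggests the analogy runs through most structures that were, historically, pulled out of a sorting or searching algorithm.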

Jon Bentley throws out a lot of different runtime and space computations for the experimental analysis of Quicksort. What would count as good runtime and space usage for a program of this type?

You'd like your tests/analysis programs to be relatively fast. As a computer scientist, one of the first questions you ask yourself about any algorithm should be "Can I make this more efficient?"

When counting comparisons, why does he feel the need to reduce the code to a skeletal form? Although I doubt the removed "swap" function was computationally intensive, if it had been, it could have affected the analysis of each comparison.

Sorry, this one didn't make sense to me. He's analyzing abstractly, rather than in terms of wall-clock time.

This chapter's underlying concept made a lot of sense to me, since good writing is very much about what you leave out (whether words or plot points). Is code runtime more often analyzed with worst-case scenarios or average-case scenarios (or does it vary depending on the kind of code)?

Worst-case analyses are usually easier to do. Average-case analyses are more complicated for a number of reasons, but can be important to do (as in this case).

Disclaimer: I usually create these pages on the fly, which means that I rarely proofread them and they may contain bad grammar and incorrect details. It also means that I tend to update them regularly (see the history for more details). Feel free to contact me with any suggestions for changes.

This document was generated by Siteweaver on Thu Mar 4 13:13:19 2010.
The source to the document was last modified on Thu Mar 4 13:13:17 2010.

Samuel A. Rebelsky

Copyright © 2010 Samuel A. Rebelsky. This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 License. To view a copy of this license, visit or send a letter to Creative Commons, 543 Howard Street, 5th Floor, San Francisco, California, 94105, USA.