Programming Languages (CS302 2006S)

Algorithm = Logic + Control

Comments on:

Kowalski, R. (1979). Algorithm = Logic + Control. Communications of the ACM 22 (7), pp. 424-436.

Comments from: Peter Brown, Michael Claveria, Davis Hart, Alex Leach, Angeline Namai, Mark Nettling, Dimitar Tasev, Dave Ventresca.

No comments from: Brad Miller (x).


General Questions

How does the split command work and what does it do?

split isn't a command; it's a predicate. split(x, z, x') is another way to write x union { z } = x'.
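
Here's a rough sketch of that reading in a Prolog-like notation (my own encoding, not the paper's), representing the set x as a list and ignoring details like duplicate elements:

    % Read split(X, Z, X1) as "adding the element Z to the set X yields X1",
    % i.e., X union {Z} = X1.  (A simplification: it just prepends Z.)
    split(X, Z, [Z|X]).

    % Run "forwards":  ?- split([1,3], 2, X1).     X1 = [2,1,3]
    % Run "backwards": ?- split(X, Z, [2,1,3]).    X = [1,3], Z = 2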

The thesis in the paper is that the logic component needs to be specified before the control component. I assume this means that the two components are not independent of each other. What goes wrong when the control component is specified first?

We usually lose efficiency or add complexity (or both).

In the section entitled "The Procedural Interpretation of Horn Clauses", I was confused as to why the statements would have subscripts from 1 to n and then state that n >= 0. For example, "B <-- A_1, ..., A_n (n >= 0)". Are we supposed to assume that if n = 0, then there are no A's?

Yes, if n = 0, then there are no A's; the clause is just B <--, an unconditional assertion that B holds (a fact).

I just couldn't figure out one thing. When the author is referring to mixing data with procedures, what does he mean?

He means that the representation of the data is apparent from the algorithm itself (e.g., code that manipulates the data directly with cons and cdr reveals that it is stored as a list).

One of the biggest questions I had about the reading is what a Horn clause is and how it relates to the broader discussion.

We'll go over it in class, but Horn clauses are, in essence, a simpler way to write predicate definitions. You can have at most one consequent per clause, and the antecedent is always a conjunction (an "and") of atomic formulae.
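
As a rough, made-up example (not one from the paper), here is how such clauses look in a Prolog-like notation, where ":-" is the backwards arrow and the comma means "and":

    % "X is a grandparent of Z if X is a parent of Y and Y is a parent of Z":
    % one consequent (the head) and a conjunction of atomic formulae (the body).
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    % Clauses with no antecedents (n = 0) are unconditional assertions (facts):
    parent(ann, bob).
    parent(bob, carol).

    % ?- grandparent(ann, Who).     Who = carol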

Kowalski claims that programmers combine data structures and procedures in order to improve run-time efficiency (430). However, he mentions on page 431 that the separation of data structures from procedures makes it possible to clearly distinguish the "what" from the "how," and therefore makes it easier for the programmer to prove correctness. Wouldn't separation also decrease the complexity of the program, and consequently be more in favor of efficiency than combination?

Not necessarily. Think about choosing a list vs. an array for an algorithm. Which would you use? It depends, in part, on knowledge of the algorithm. We can also phrase it the other way: the quicksort we write for arrays is very different from the quicksort we write for lists.
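
For instance, here is a rough sketch (mine, not text from the paper) of quicksort written against lists in a Prolog-like notation. The list representation shows through in how we take the input apart and put the results back together; an array version, partitioning in place around indices, would look quite different.

    % Quicksort over lists: split the tail around the pivot, sort the pieces,
    % and append.  The [Head|Tail] structure is visible throughout.
    quicksort([], []).
    quicksort([Pivot|Rest], Sorted) :-
        part(Rest, Pivot, Smaller, Larger),
        quicksort(Smaller, SortedSmaller),
        quicksort(Larger, SortedLarger),
        append(SortedSmaller, [Pivot|SortedLarger], Sorted).

    part([], _, [], []).
    part([X|Xs], Pivot, [X|Smaller], Larger) :-
        X =< Pivot, part(Xs, Pivot, Smaller, Larger).
    part([X|Xs], Pivot, Smaller, [X|Larger]) :-
        X > Pivot, part(Xs, Pivot, Smaller, Larger).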

The author claims that "the efficiency of an algorithm can be improved . . . either by improving the logic component or by leaving the logic component unchanged and improving the control over its use" (429). This seems to recall the declarative/imperative contrast, since the first approach focuses more on logic/problem while the other focuses on control/solution. I also understand the statement to mean that logic and control have equal influence on efficiency, so, if I am correct in my assumption, why are imperative languages still more efficient than declarative ones?

Well, most logic languages don't permit programmers to affect the control, and imperative languages do. Some improvements can be made by changing the logic (e.g., from the general sort to quicksort), while others do require a change to the control.
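
As a small, made-up illustration: in a Prolog-like language, reordering the goals in a clause body leaves the logic unchanged but changes the control, because goals are tried left to right; that alone can change how much searching is done.

    % Same logic ("X is a grandparent of Z"), different control.
    % With many parent/2 facts, which goal runs first determines how much
    % search happens when Z is known but X is not (or vice versa).
    grandparent_a(X, Z) :- parent(X, Y), parent(Y, Z).
    grandparent_b(X, Z) :- parent(Y, Z), parent(X, Y).

    % Some parent/2 facts to try it on (same as the earlier sketch):
    parent(ann, bob).
    parent(bob, carol).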

For Class

What is the difference between bottom-up and top-down control, and why is top-down control usually more efficient?

Top-down control is like recursion - you expand clauses. Bottom-up is more like iteration - you build new knowledge from old. It is easier to ensure that you are making progress toward a goal using top-down control.
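
To make the recursion/iteration analogy concrete, here is a rough sketch (my own, not Kowalski's formulation) of the same Fibonacci logic computed under the two kinds of control, in a Prolog-like notation:

    % Top-down: expand the goal fib_td(N, F) into subgoals until the base
    % facts are reached (intended use: ?- fib_td(10, F).).
    fib_td(0, 0).
    fib_td(1, 1).
    fib_td(N, F) :-
        N > 1, N1 is N - 1, N2 is N - 2,
        fib_td(N1, F1), fib_td(N2, F2),
        F is F1 + F2.

    % Bottom-up: start from fib(0) and fib(1) and build new values from old,
    % much like a loop.
    fib_bu(N, F) :- fib_loop(N, 0, 0, 1, F).
    fib_loop(N, N, F, _, F).
    fib_loop(N, I, FI, FNext, F) :-
        I < N, I1 is I + 1, Sum is FI + FNext,
        fib_loop(N, I1, FNext, Sum, F).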

The top-down interpretation of Horn clauses differs from procedure invocation in conventional programming languages. What does it mean that the input-output arguments are not fixed?

Both input and output take the form of substitutions that make clauses match facts. In different contexts, different substitutions may be made.
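
A standard illustration (not specific to the paper) is append: the same two Horn clauses serve many input/output patterns, with unification supplying whatever substitutions make the clauses match.

    % my_append(Xs, Ys, Zs): Zs is Xs followed by Ys.
    my_append([], Ys, Ys).
    my_append([X|Xs], Ys, [X|Zs]) :- my_append(Xs, Ys, Zs).

    % "Forwards" (first two arguments as input):
    %   ?- my_append([1,2], [3], Z).      Z = [1,2,3]
    % "Backwards" (third argument as input):
    %   ?- my_append(X, Y, [1,2,3]).      enumerates every split of [1,2,3]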

Can we go over an algorithm/function in Scheme or C and break it down into the logic component and the control component and then discuss possible ways to make it better by changing the control component? I think that a concrete example of this method would be beneficial.

His Fibonacci and graph-search examples seem good to me, but I'll think about it. Note also that he doesn't want you to start with an algorithm and work backwards to logic and control; he wants you to start with a logical specification and move forward.

I had some trouble understanding the Two Analyses of Path-Finding Algorithms section. I think I'm having some sort of mental block that is inhibiting me from understanding the notation on the control component of the second strategy.

We'll go over it.

Short Comments

A rather comprehensive reading. It helped me understand the functionality of Prolog-like languages.

Every time I see predicate logic, the notation is different. I find this really frustrating. Even so, the notation here is fairly simple and easy to comprehend.

Remember, mathematics is invariant under changes of notation.

One thing I didn't like about this reading is that it argues at one point for separating data structures and procedures, acknowledges that programmers sacrifice this ideal for run-time efficiency, but doesn't really go into macroexpansion. I would have liked to see more about it.

Macros are not really the focus of his paper (or even close). If you care about them, you could make them your presentation topic.

Statements

The message of this seems to be: describe the goals of your program, then determine the best way to achieve those goals. I liken the logic part of an algorithm to an HLD (high-level design) of a project, which states the aim of the project and its sub-parts. The control seems to be the programmer's discretion about how to meet the goal with the greatest level of efficiency. Although I found the Horn clauses to be confusing, I did notice that Horn procedure calls may be executed in any order (increasing the possibilities for improved efficiency) and this rule's likeness to the procedure calls of R5RS.

How well does this approach scale? The cost of effectively separating logic from control on a large scale seems quite high. However, I think he gives an option to those attempting such a feat by allowing data structures to be defined in the logic component, thereby allowing the high-level logic designer to make some real choices about how the algorithm will work. In a large project, the data structures may provide cohesion between various complementary or subordinate components.

I enjoyed reading this paper. It made me think about organization and program/algorithm design a lot. Are there other papers on this subject you'd recommend, Sam?

You might enjoy the Dijkstra papers on structured programming. You might enjoy re-reading Gries.

I found the rest of the reading fairly interesting, although a little hard to understand at points. I did think it was interesting that the author spent a lot of time talking about how there should be a break in languages between logic and control; I feel as though this difference already exists in modern languages. Rather than one language taking part in both ingredients, however, different languages attack various sides of the issue, and as such the ultimate breakdown is a personal one, in which the programmer gets to decide whether they care more about the logic or the control.

Disclaimer: I usually create these pages on the fly, which means that I rarely proofread them and they may contain bad grammar and incorrect details. It also means that I tend to update them regularly (see the history for more details). Feel free to contact me with any suggestions for changes.

This document was generated by Siteweaver on Wed May 10 09:03:10 2006.
The source to the document was last modified on Mon Apr 17 09:51:46 2006.
This document may be found at http://www.cs.grinnell.edu/~rebelsky/Courses/CS302/2006S/Readings/alc.html.


Samuel A. Rebelsky, rebelsky@grinnell.edu