Programming Languages (CS302 2006S)

Prolog

Comments on:

Davis, R.E. (1985). Logic Programming and Prolog: A Tutorial. IEEE Software 2 (5), 53-62.

Comments from: Peter Brown, Michael Claveria (*), Davis Hart, Alex Leach, Brad Miller, Angeline Namai, Mark Nettling, Dimitar Tasev, Dave Ventresca.


General Questions

I understand that, as a relational programming language, Prolog needs to keep a relational database. For the sake of reducing memory usage, the database is kept to a minimum. Does this not affect the execution time significantly? Is there a roundabout way of reducing the number of steps required for the evaluation of a single statement?

Um ... Prolog does not need to keep a relational database (it is, however, one way in which some folks implement relational databases). Keeping the rules minimal is a way to make the program clearer. You are correct that there is a time-space tradeoff, and it is up to the programmer to decide how to deal with that tradeoff.

I thought the section on the need for introspection was interesting; however, it seemed to me that all modern languages have some form of introspection, and I was wondering if you knew of any languages that do not support such a technique.

Arguably, C does not support introspection. For example, given a pointer, you can't even determine what kind of thing it points to. (Others might argue that C supports an incredibly pure form of introspection, in that it gives you access to the underlying representation. They would, however, be wrong.) Java's introspection is also limited in that you cannot introspect code.

Also, the author mentions the need for introspection. How would introspection work with Prolog or other logic programming languages, where the code consists of assertions and conditions rather than functions or objects?

You could make assertions about predicates.
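For instance, standard Prolog can examine and extend its own program at run time. The sketch below is illustrative (the parent/2 facts are invented); it uses the built-in clause/2 to retrieve the clauses that define a predicate:

:- dynamic(parent/2).   % declared dynamic so that clause/2 may inspect it

parent(alice, bob).
parent(bob, carol).

% The program asking about its own definitions:
% ?- clause(parent(X, Y), true).
% X = alice, Y = bob ;
% X = bob, Y = carol.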

Could you please explain negation by failure?

It's difficult to prove that a statement does not hold. So, in Prolog, to prove not(X), the system tries to prove X and, if that attempt fails, treats not(X) as true.
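In fact, negation by failure can be written in Prolog itself using cut and fail (a standard sketch; most systems provide this built in as \+/1):

not(Goal) :- call(Goal), !, fail.   % if Goal succeeds, commit and fail
not(_).                             % otherwise, succeed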

Is there a type of programmer that focuses on how machines execute a definition (in contrast to a logic programmer)?

Certainly. We might call them hackers. Consider the legendary piece, The Story of Mel. (Interestingly, although that version suggests that the story is real, a separate piece calls it fictional.)

Can separation of concerns ever be detrimental to programming in logic?

I think that Kowalski suggests that it is reasonable to consider both the logic and the control and even, at times, to think about the relationship between the two.

I don't understand the definition of a unifier as a substitution that, when applied to both the call and the clause head, makes them syntactically identical. What does this look like in code form?

Suppose you have goals of foo(alpha,X) and baz(X) and a fact of foo(Y,beta). The unifier would substitute alpha for Y and beta for X, thereby proving the first goal. The prover would then have to prove baz(beta).
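In code form, that exchange might look like the following sketch (the baz fact is an added assumption so that the second goal can succeed):

foo(Y, beta).   % a fact; Y is a variable
baz(beta).      % assumed so the query can finish

% ?- foo(alpha, X), baz(X).
% Unifying the call foo(alpha, X) with the head foo(Y, beta)
% substitutes alpha for Y and beta for X; both become
% foo(alpha, beta). The remaining goal is baz(beta).
% X = beta.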

What is the Backward-chaining Horn clause used for?

The clauses aren't backward-chaining; the resolution (proof) procedure is. In essence, it permits us to try potentially useless proofs and, when they fail, back out of those attempts.
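A small sketch (with invented parent/2 facts) of what backing out looks like:

parent(ann, ben).
parent(ben, cal).
grandparent(G, C) :- parent(G, P), parent(P, C).

% ?- grandparent(ann, Who).
% Prolog first picks P = ben by matching parent(ann, ben), then
% tries parent(ben, Who), giving Who = cal. If the first choice
% of P had led nowhere, Prolog would backtrack and try another
% parent/2 fact before giving up.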

Could you explain why cut is dangerous?

Because it inserts control into what should be pure logic.

The paper mentions that "enforcing commitment to computational choices that cannot be reversed later" is a way that Prolog loses the flexibility of logic programs. Cut is an example of such a way. However, why would using a cut in a program make it less flexible?

Programmers who use cut often intend a particular use of their predicates (in some cases, to answer specific questions, like What is the sorted version of this list?; in others, to answer more general questions, like Are there any great grandparents?). When cut is used, it becomes difficult to use the predicates to answer other kinds of questions.
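A small sketch of the problem, using the textbook cut-based max/3:

max(X, Y, X) :- X >= Y, !.   % red cut: commits to the first clause
max(_, Y, Y).

% ?- max(5, 3, M).   answers M = 5, as intended.
% ?- max(5, 3, 3).   also succeeds, which is wrong; the cut's
%                    shortcut is safe only when the third argument
%                    is unbound, so the predicate can no longer be
%                    used to check a claimed maximum.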

I found the invertibility of logic programming to be really neat. The article, though, mentions that Prolog does not support this fully (the factorial example), and I was wondering if there was a language that did, and how one would rewrite the factorial program to support invertibility. I am also curious as to how invertibility would be implemented. Would one use something similar to overloading, where the process called depends on the givens, or is there a more elegant solution, and am I thinking too much like a C or Scheme programmer?

You would probably need a more clever "proof" algorithm, although I suppose that the pure version of that is probably equivalent to the halting problem. For the particular case of the factorial example, you might be able to do so using your own representation of numbers. For example, suppose we represent numbers as lists of 1's. Here's a definition of addition (add/3) on that representation:

% add(X, Y, Z): with numbers represented as lists of 1's,
% Z is the sum of X and Y.
add([],X,X).
add([1|X],Y,[1|Z]) :- add(X,Y,Z).
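Because add/3 is pure, the same definition runs in several modes. For example (answer order as in a typical Prolog system):

% ?- add([1], [1,1], Z).    Z = [1,1,1]          (1 + 2 = 3)
% ?- add(X, Y, [1,1]).      X = [],    Y = [1,1] ;
%                           X = [1],   Y = [1]   ;
%                           X = [1,1], Y = [].   (all splits of 2)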

From the little I understood, one major shortcoming of logic programming languages is that they are first-order, and according to the author, a possible solution is to combine relational programming with functional programming. However, Davis provides a slightly different argument in favor of higher-order programming than the more traditional ones we covered in class. While we focused on reusability, conciseness of code, and elegant abstraction, she emphasizes introspection. I somewhat understand her point, but I am not sure I can clearly explain to another CS student how the apply function in Scheme, for example, enables "[the system] to reason about its own problem-solving techniques" (501). Do you mind clarifying this point? Is she talking about the same kind of introspection that we saw during the class on Java reflection? I am not sure I see the connection.

Java reflection is only partial introspection. She's talking more about the ability of Scheme (and Scheme-like languages) to rewrite their own interpreters on the fly and then to execute the program using the new interpreter.

Comments and Statements

From the readings for Wednesday and the last class, I feel the same way about relational languages as I did about LISP: namely, that the users of these languages will defend their ease and strength to the death; however, the languages have never gone mainstream, primarily due to a perceived lack of control over the inner workings of the languages.

The modern Prolog community seems smaller and less vocal than the modern Scheme/Functional community. I also believe that once you understand Scheme, you do, in fact, have control over inner workings. I find it interesting that some of these articles focus on the need to support massively parallel computer systems, but that modern computing has, in general, not really moved in that direction.

The idea of pure logic programming, particularly that one can answer a variety of questions with the same set of assertions (e.g., grandparent(bob,X) or grandparent(X,Y)), is very slick. Prolog seems to take a step away from that purity by introducing control mechanisms like the cut operator and by fixing the order in which conditions are evaluated.

Prolog is actually quite fascinating once I understood the syntactic techniques used to solve problems. I like the idea of being able to describe the solution to a problem without having to come up with the exact procedure to follow to solve the problem. It seems analogous to solving a mathematics problem by handing over the proper equations and allowing the problem to solve itself, instead of manually writing out each step of each equation. It seems like this would make programming tasks more efficient, allowing the programmer to think about how to solve a problem and not waste time thinking about each step the solution needs to take.

Unfortunately, not all programmers think well logically. And, as both texts suggested, sometimes a pure logical specification is inefficient.

Disclaimer: I usually create these pages on the fly, which means that I rarely proofread them and they may contain bad grammar and incorrect details. It also means that I tend to update them regularly (see the history for more details). Feel free to contact me with any suggestions for changes.


Samuel A. Rebelsky, rebelsky@grinnell.edu