Programming Languages (CS302 2006S)

Notes on Continuations Readings

Comments on:

Graham, P. (1993). Scheme Continuations. Section 20.1 of On Lisp, pp. 258-266. Prentice-Hall.

Kelsey, R., Clinger, W., and Rees, J. (Eds). (1998). (call-with-current-continuation proc). In Section 6 (Standard Procedures) of Revised5 Report on the Algorithmic Language Scheme.

Comments from: Peter Brown, Michael Claveria, Davis Hart, Angeline Namai (*), Dave Ventresca.

No Comments from: Alex Leach (x), Brad Miller (x), Mark Nettling (x), Dimitar Tasev (x).

The idea of a continuation seems translucent in its meaning. Perhaps I'm not understanding it in full, but it does not appear to me that it is a unique characteristic of Scheme. I think that it would be helpful to see a few examples of LISP and Scheme next to each other to fully illustrate the differences between the two in terms of continuations. I do feel that I missed something in the readings, because I don't feel puzzled by either of them. Maybe that's just me doubting myself, but I doubt it.

We don't really need to compare continuations in the two; we just need to make sure you understand their power. Their power comes in their use, so we'll consider a variety of applications. (Some with code, some without.)
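For instance, here is a minimal sketch of one common application, an early escape from a computation (list-product is an illustrative name, not from the readings): if the list contains a 0, we invoke the continuation to return 0 immediately rather than finishing the remaining multiplications.

```scheme
;; Sketch: call/cc as an early exit.  exit is the continuation of the
;; whole call-with-current-continuation expression.
(define (list-product lst)
  (call-with-current-continuation
   (lambda (exit)
     (let loop ((rest lst))
       (cond ((null? rest) 1)
             ((zero? (car rest)) (exit 0))   ; bail out of all pending (*)s
             (else (* (car rest) (loop (cdr rest)))))))))

(list-product '(1 2 3 4))   ; => 24
(list-product '(1 2 0 4))   ; => 0, without multiplying
```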

After reading both Paul Graham's document and R5RS, I had gained no new knowledge. That is not to say that I already knew and understood continuations, but rather that the readings were confusing, to say the least. I did a quick Google search and came across an article on 'SchemeWiki' which was much clearer and helped me gain an understanding of the concept. However, I still don't understand how one can save the state of computation and return to that point later on, particularly with respect to Web programming. I'd also like to see other real-world examples of this concept.

We will look at the issue with regards to Web programming briefly in the first class on continuations and, optionally, in more depth in the second class.
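As a tiny preview of "saving the state of a computation," here is a sketch (saved-k and test are illustrative names): we stash the continuation of a pending addition in a variable, so that we can re-enter that addition later with new values.

```scheme
;; Sketch: storing a continuation for later use.
(define saved-k #f)

(define (test)
  (+ 1 (call-with-current-continuation
        (lambda (k)
          (set! saved-k k)   ; remember "the rest of the addition"
          1))))

(test)         ; => 2
(saved-k 10)   ; re-enters the pending (+ 1 _) => 11
```

A Web server can do something analogous: capture the continuation when it sends a form to the browser, and invoke it when the response comes back.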

I guess, once again, I don't understand the full power of continuations. It seems to me that much of the usefulness can be replicated through temporary variables, loop control statements and recursion. Perhaps that list of things itself is the reason for continuations (to simplify all of that), but it seems to be an additional level of abstraction that is beyond the rest of the code.

There are things that you can do easily with continuations that are more difficult to do with the others, including exiting nested control structures, coroutining, and exceptions (which I guess is really just exiting nested control structures in a different way).
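For instance, here is a sketch of the first of these, escaping from nested iteration (find-pair is an illustrative name): as soon as a pair summing to the target is found, we jump out of both for-each loops at once.

```scheme
;; Sketch: escaping from doubly nested iteration with one continuation.
(define (find-pair lst target)
  (call-with-current-continuation
   (lambda (return)
     (for-each
      (lambda (a)
        (for-each
         (lambda (b)
           (if (= (+ a b) target)
               (return (list a b))))   ; exits both loops immediately
         lst))
      lst)
     #f)))                             ; fell through: no such pair

(find-pair '(1 2 3 4) 7)   ; => (3 4)
```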

While continuations may look simpler, I predict Hoare and Wirth would be opposed to them, as they like having a consistent level of abstraction in their languages.

One might claim that continuations are consistent with the general abstraction level of functional programming - everything is a function.

However, you are correct that, in some ways, continuations are a lower-level operation than the other functional operations.

In (call-with-current-continuation proc), why must proc be a procedure of only one argument?

Because it receives only one argument, the current continuation.
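A minimal sketch: call-with-current-continuation always applies proc to exactly one value, the current continuation, so proc must be written to accept exactly one argument (whether or not it ever uses it).

```scheme
;; proc need not invoke the continuation at all ...
(call-with-current-continuation
 (lambda (k) 42))           ; k unused; the expression is just 42

;; ... but if it does, control escapes immediately.
(call-with-current-continuation
 (lambda (k) (k 7) 99))     ; => 7; the 99 is never reached
```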

So invoking the escape procedure produced by (call-with-current-continuation proc) essentially replaces the current continuation with the continuation that was in effect when the escape procedure was created. Is the catch construct mentioned in the Revised Report on the Algorithmic Language Scheme the same one that we learned as the try/catch statement in Java?

It seems like call-with-current-continuation is essentially a try/catch block, where the catch runs a different procedure when a certain state is reached.

It's a little more general than that, but you seem to have the basic idea.
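To make the analogy concrete, here is a rough throw/catch-style sketch built on call/cc (try, throw!, and *handler* are illustrative names, not from the report; a robust version would also save and restore the previous handler, e.g. with dynamic-wind):

```scheme
;; Sketch: an exception-like escape mechanism from call/cc.
(define *handler* #f)

(define (try thunk)
  (call-with-current-continuation
   (lambda (k)
     (set! *handler* k)   ; remember where to escape to
     (thunk))))

(define (throw! v)
  (*handler* v))          ; abandon the computation, return v from try

(try (lambda () (+ 1 (throw! 'oops))))   ; => oops; the (+ 1 _) is discarded
```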

What is the setq function in LISP?
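setq is the classic Lisp assignment special form, which Graham uses to update a variable in place; its closest Scheme counterpart is set!. For instance:

```scheme
;; Common Lisp would write: (setq counter (+ counter 1))
;; The Scheme equivalent uses set! on an existing binding:
(define counter 0)
(set! counter (+ counter 1))
counter   ; => 1
```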


Continuations are very important in maintaining one's place in a tree or list. How is this idea used in the implementation of Prolog?

I'm not sure.

I find the "saving our place" idea in Graham's paper remarkably similar to the "running total" idea in tail recursion, and I therefore understand the discussion on p. 265 to mean that a continuation is a form of tail recursion that allows multiple instances of the same function to run simultaneously. I also conclude that the primary application of continuations is in non-deterministic computations, since Graham's examination of the depth-first search algorithm implies that a continuation operates like a non-deterministic automaton. However, doesn't a continuation serve the same purpose as a thread, in the sense that they both support the parallel running of instances? I may have misread some of the details, but I am not sure why Graham would go into such an elaborate discussion if continuations did not have some unique feature.

Continuations permit co-routining, which is a form of threading. However, as we'll see in class, they permit much more. Continuations, in themselves, are not particularly non-deterministic.
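As a sketch of the coroutine-like flavor (make-generator is an illustrative name; Graham builds similar machinery for walking trees), here is a generator that remembers its place in a list between calls. Each call captures the caller's continuation in return, and each yield captures the generator's own continuation in resume, so the two sides can hand control back and forth.

```scheme
;; Sketch: a list generator built from saved continuations.
(define (make-generator lst)
  (define return #f)                        ; where to deliver the next value
  (define (resume)                          ; where to pick up the traversal
    (for-each
     (lambda (x)
       (call-with-current-continuation
        (lambda (k)
          (set! resume (lambda () (k #f)))  ; remember our place in the list
          (return x))))                     ; hand x back to the caller
     lst)
    (return 'done))
  (lambda ()
    (call-with-current-continuation
     (lambda (k)
       (set! return k)
       (resume)))))

(define next (make-generator '(a b c)))
(next)   ; => a
(next)   ; => b
(next)   ; => c
(next)   ; => done
```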

Disclaimer: I usually create these pages on the fly, which means that I rarely proofread them and they may contain bad grammar and incorrect details. It also means that I tend to update them regularly (see the history for more details). Feel free to contact me with any suggestions for changes.

This document was generated by Siteweaver on Wed May 10 09:03:13 2006.
The source to the document was last modified on Wed Feb 15 09:38:36 2006.

Samuel A. Rebelsky