Programming Languages (CS302 2006S)

A Gentle Introduction to Haskell

Comments on:

Hudak, P., Peterson, J., & Fasel, J.H. (1999, October). A Gentle Introduction to Haskell. Web resource, available at

Comments from: Peter Brown, Michael Claveria, Davis Hart, Angeline Namai, Mark Nettling, Dave Ventresca.

No comments from:

Alex Leach, Brad Miller, Dimitar Tasev.

General Questions

Do Type Synonyms overwrite the names of old types and if so, couldn't this possibly screw up the language system?

Type synonyms just declare new names for existing types; they don't change the old types. Haskell does not allow you to redefine a type name (or shouldn't).
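A small sketch of the idea (the names here are illustrative):

```haskell
-- A type synonym introduces a new name for an existing type;
-- the original type is unchanged, and the two names are interchangeable.
type Name  = String
type Point = (Double, Double)

greet :: Name -> String
greet n = "Hello, " ++ n

origin :: Point
origin = (0.0, 0.0)
```

Because `Name` and `String` denote the same type, `greet` happily accepts any ordinary string.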

How many different languages use nested comments besides Haskell?

Well, C lets you use a few kinds of comments. I'm not sure what other languages pay attention to nesting in multi-line comments.

Haskell has a special syntax for arithmetic sequences, but does it also easily support geometric and other types of sequences?

Not really.
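There is special syntax only for arithmetic sequences, but library functions such as `iterate` make geometric sequences nearly as convenient; a sketch:

```haskell
-- Arithmetic sequences get special syntax:
arith :: [Integer]
arith = [1,3..11]               -- [1,3,5,7,9,11]

-- Geometric sequences need an explicit generator, e.g. iterate:
geom :: [Integer]
geom = take 6 (iterate (*2) 1)  -- [1,2,4,8,16,32]
```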

Is there an important difference (not counting the number of parameters) between the curried and uncurried add x y vs. add (x,y) example in chapter 3?

It's harder to section the latter version.
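A minimal sketch of the contrast (the function names are illustrative):

```haskell
-- Curried: takes its arguments one at a time.
addC :: Integer -> Integer -> Integer
addC x y = x + y

-- Uncurried: takes a single pair.
addU :: (Integer, Integer) -> Integer
addU (x, y) = x + y

-- The curried version sections naturally:
increment :: Integer -> Integer
increment = addC 1

-- The uncurried version needs a wrapper to do the same:
increment' :: Integer -> Integer
increment' y = addU (1, y)
```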

The authors mentioned type constructors a number of times and left me confused about the distinction between data constructors as we normally use them (to create and assign objects or variables) and their fancy type equivalents.

The distinction is primarily that type constructors need not specify the types of their arguments; they build different types depending on the types that their arguments are given.

For example, does the definition for point on pg 5 also partially define its structure by denoting the types of two of the fields? If so, doesn't it blur the line between types and object structures?

The blurring you indicate appears to be intentional and, from my perspective, is similar to that of a polymorphic function. Just as, for a polymorphic function, the type of the result is based on the type of the arguments, so, for a polymorphic constructor, does the type of the constructed object depend on the types of the parameters to the constructor.

I am still confused about curried functions - it seems like they behave similar to a continuation in that (add 1) defines the remaining computation.

No! Currying is simply one way of interpreting a multi-parameter function. Partially applied curried functions (which is what you seem to be discussing) are simply a way of building new functions by supplying some of the parameters, a more natural way of sectioning, as it were. These partially parameterized functions do not include a call stack, which is a key aspect of continuations.

Finally, I think that lazy evaluation could present some interesting debugging problems especially if there is a bug in a rarely executed piece of code. I imagine that the language has a good way of dealing with it, though.

That seems to be a task for the programmer (to exercise all of his or her code), rather than the language.

It is clear to me that the main features of Haskell are lazy/non-strict evaluation, layout, currying and pattern matching. The authors provide an elaborate explanation of these concepts, and on the footnote on p.12, they refer to Haskell as a purely functional language, although they do not explicitly say what comprises a purely functional language. You also briefly mentioned purely functional languages in a previous class, but I am not sure you went into a detailed discussion. Would you mind explaining the notion of a purely functional language, and possibly clarifying the relationship between purely functional languages and lazy evaluation? Is lazy evaluation a feature of all purely functional languages?

A pure functional language is simply one in which functions have no side effects: applying the same function to the same parameters always yields the same value.

According to the authors, non-strict evaluation is an advantage, as it ensures that computationally-expensive operations are executed by need, and therefore decreases the likelihood of nonterminating operations. They also claim that lazy evaluation relieves the programmer of concerns about the ordering of assignments. However, I am not sure how the programmer would determine the asymptotic complexity of a lazy program. Wouldn't it be difficult to analyze performance, since it is not apparent which expressions are evaluated, and when and how often they are evaluated? Moreover, how do you debug the program if you are not certain of the order of evaluation?

You are correct that asymptotic analysis and debugging are harder in a lazy language. However, for many algorithms, the number of steps executed lazily is no different than the number of steps executed eagerly. Debugging probably benefits from good unit testing.

The notation for declaring a function seems simplistic enough, although it seems very easy to forget to write the typing bit, since it is separate. On rereading though, perhaps I misunderstood. Do you have to declare both? Or just one of them?

You need not declare the type of a function; Haskell can determine it. However, it is often safer to declare that type.
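A small sketch of the two styles (names illustrative):

```haskell
-- With no signature, Haskell infers the most general type
-- (here, Num a => a -> a):
double x = x + x

-- A declared signature documents intent and can catch mistakes:
triple :: Integer -> Integer
triple x = 3 * x
```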

Is xs just a sequence of x's?

It depends on the context. But when you see (x:xs), you can assume xs is just a list: the rest of the list after its first element x.
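A small sketch of the convention (function names illustrative):

```haskell
-- In (x:xs), the pattern x matches the first element and xs matches
-- the rest of the list; "xs" is just a conventional name (the plural of x).
len :: [a] -> Int
len []     = 0
len (x:xs) = 1 + len xs    -- xs is everything after x

safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x    -- here the tail is ignored entirely
```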

If we wanted to define add in such a way that it would accept both ints and floats would it be:

       add :: Integer -> Integer -> Integer || Float -> Float -> Float  ??

Probably not; that isn't legal Haskell syntax.
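The usual route to a function that works on several numeric types is a type class constraint; a minimal sketch:

```haskell
-- There is no "or" of types in a signature; instead, overloading goes
-- through type classes.  Num covers Integer and Float (among others):
add :: Num a => a -> a -> a
add x y = x + y

-- add (2 :: Integer) 3    evaluates to 5
-- add (2.5 :: Float) 0.5  evaluates to 3.0
```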

Haskell is remarkably similar to Scheme. I am wondering though what the author means when he says that Scheme is a semi-functional language. Could you explain?

Scheme supports sequencing of operations with begin blocks (and the bodies of many forms) and supports side effects (see above). Both aspects make it less than pure.

Questions for Class

Section 4.3 was fairly confusing; could you go over the example they give regarding streams?


Can we go over the syntax of the Fibonacci example in 3.4?
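As a starting point for that discussion: the example builds the Fibonacci numbers as a lazy, self-referential list. Here is a sketch of the idea using `zipWith` (which may differ slightly from the tutorial's exact formulation):

```haskell
-- The list is defined in terms of itself: each element beyond the
-- first two is the sum of the corresponding elements of the list
-- and its own tail.  Laziness makes this well-defined.
fibs :: [Integer]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

-- take 8 fibs  evaluates to [1,1,2,3,5,8,13,21]
```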



I really liked the fact that the user can define types as simply as tuples though. Haskell has a very short and concise syntax.

I found the article very helpful and I actually thought the section on polymorphism vs overloading helped explain the difference very well.

I am confused about why they even mentioned lambda abstractions at all, as they mention them and then move on, never to bring them up again (it seems a little advanced for such terse treatment).

As they note in the introduction, they assume that their readers are already familiar with the functional paradigm. Lambdas are a natural part of that paradigm.

I really like the lazy aspect of Haskell, specifically the fact that infinite lists are easy to deal with.
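A small illustration of that convenience (names illustrative):

```haskell
-- An infinite list: laziness means only the demanded prefix is built.
evens :: [Integer]
evens = [0,2..]

firstFive :: [Integer]
firstFive = take 5 evens    -- [0,2,4,6,8]
```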

I think the idea of paying attention to whitespace and layout is a pretty stupid idea: while it makes things more consistently readable, it must make actually writing the code much more tedious.

Well ... Python is pretty popular, and it has the same restriction. I'll agree with you that meaningful formatting has had many problems over the years, but it may be one of those things that aids the average programmer. (I'm aware of no formal studies.)

Disclaimer: I usually create these pages on the fly, which means that I rarely proofread them and they may contain bad grammar and incorrect details. It also means that I tend to update them regularly (see the history for more details). Feel free to contact me with any suggestions for changes.

This document was generated by Siteweaver on Wed May 10 09:03:15 2006.
The source to the document was last modified on Mon Apr 24 09:22:05 2006.
This document may be found at


Samuel A. Rebelsky,