Programming Languages (CS302 2006S)

On Understanding Types, Data Abstraction, and Polymorphism

Comments on:

Cardelli, L. and Wegner, P. (1985). On Understanding Types, Data Abstraction, and Polymorphism. ACM Computing Surveys 17(4), December 1985.

Comments from: Peter Brown, Michael Claveria, Davis Hart, Alex Leach, Brad Miller, Mark Nettling, Angeline Namai, Dave Ventresca.

No comments from:

Dimitar Tasev (x).


Questions Answered Here

Why does it say on 476 that the + operator could have four overloaded meanings? Aren't 3.0 + 4 and 3 + 4.0 the same?

In that first interpretation, in which there is no coercion, there are four kinds of plus, which we might represent as having the types int × int → int, int × real → real, real × int → real, and real × real → real. So 3.0 + 4 and 3 + 4.0 may produce the same value, but they invoke different versions of +.
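To make that concrete, here is a sketch in Java (with an invented method name; Java resolves overloading at compile time from the static types of the arguments):

// Four distinct "plus" operations that share one name.
// The compiler selects a variant from the operand types.
class Plus {
    static int    plus(int a,    int b)    { return a + b; }
    static double plus(int a,    double b) { return a + b; }
    static double plus(double a, int b)    { return a + b; }
    static double plus(double a, double b) { return a + b; }

    public static void main(String[] args) {
        System.out.println(plus(3, 4));     // int x int -> int
        System.out.println(plus(3.0, 4));   // real x int -> real
        System.out.println(plus(3, 4.0));   // int x real -> real
        System.out.println(plus(3.0, 4.0)); // real x real -> real
    }
}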

Why are string and bytes primitive modes in Algol 68? Wouldn't char and bits be enough?

"Enough" and what language designers consider appropriate need not match. Sometimes there's a benefit to adding other types. An advantage of making string primitive is that the programmer doesn't need to worry about the underlying representation (e.g., is a string simply an array of chars, a linked list of chars, a contiguous block of memory that ends with a 0 byte, or ...?).
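As a rough illustration (a Java sketch with invented names, not how any particular Algol 68 implementation works), a primitive or abstract string type hides its representation behind a set of operations, so clients cannot tell an array-backed string from a list-backed one:

// Clients see only the operations, never the representation.
interface Str {
    int length();
    char charAt(int i);
}

// One representation: a contiguous array of chars.
class ArrayStr implements Str {
    private final char[] chars;
    ArrayStr(char[] chars) { this.chars = chars; }
    public int length() { return chars.length; }
    public char charAt(int i) { return chars[i]; }
}

// Another representation: a linked list of chars.
class ListStr implements Str {
    private static class Node {
        char c; Node next;
        Node(char c, Node next) { this.c = c; this.next = next; }
    }
    private final Node head;
    private final int len;
    ListStr(char[] chars) {
        Node h = null;
        for (int i = chars.length - 1; i >= 0; i--) h = new Node(chars[i], h);
        head = h;
        len = chars.length;
    }
    public int length() { return len; }
    public char charAt(int i) {
        Node n = head;
        while (i-- > 0) n = n.next;
        return n.c;
    }
}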

In section 1.5, are the sublanguages explicitly built into languages, or is this more of a theoretical mechanism for representing a large set of usable types?

Well, most modern languages are specified with a BNF grammar and a type system, so some aspects of the system are explicitly built in. However, I would say that few are built in with the precision and analysis that Cardelli and Wegner want.

The distinction that the authors make between strong and static typing is still unclear to me. Could you elaborate further on what they mean by "all expressions are guaranteed to be type consistent although the type itself may be statically unknown"? (474) Don't we need to know the type before we can determine consistency?

No, there are certainly cases in which we can determine consistency without knowing types. The following code is type consistent, provided a and b have the same type (whatever that type may be).

tmp := a;
a := b;
b := tmp;
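Java's generics make the same point (a sketch): the body of swap below is checked for consistency once, even though the element type T is statically unknown; all that matters is that both slots have the same type T.

// Type consistent for every T, though T itself is unknown here.
class Swap {
    static <T> void swap(T[] values, int i, int j) {
        T tmp = values[i];
        values[i] = values[j];
        values[j] = tmp;
    }

    public static void main(String[] args) {
        Integer[] nums = { 1, 2 };
        String[] words = { "hello", "world" };
        swap(nums, 0, 1);   // T = Integer
        swap(words, 0, 1);  // T = String
        System.out.println(nums[0] + " " + words[0]); // 2 world
    }
}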

When we discussed Java earlier in the semester, I believe we concluded that Java is statically typed. You also mentioned that some programmers do not consider Java to be strongly typed because type checking is done at run-time rather than at compile-time. However, the authors suggest that, in a statically typed language, "all variables and expressions are bound to a type at compile-time" (474). Wikipedia also gives a similar definition, but to my surprise, still classifies Java as statically typed. I would expect that Wikipedia would not classify Java as such, given the run-time type checking. My question therefore is whether Java is really a statically typed language. Moreover, since Cardelli and Wegner state on page 474 that {statically typed languages} is a proper subset of {strongly typed languages}, if Java is in fact statically typed, wouldn't it also be strongly typed? It seems to me that there are conflicting definitions of static and strong typing among computer scientists.

Java does its type-checking at compile time, not run time, so it is statically typed. Java does permit some type coercions, and it permits programmers to check types at run time using instanceof, which some might say makes it less strong. So ... it is statically typed in that it does its type checking at compile time, but it does not verify the type safety of all operations, so you might say that it is not really strongly typed.
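A small Java sketch of that gap: the cast below satisfies the compile-time type checker, but its safety is only verified at run time, where it fails.

class Unsafe {
    public static void main(String[] args) {
        Object o = "a string";       // static type Object, run-time type String

        if (o instanceof String) {   // run-time type test
            System.out.println("o holds a String");
        }

        Integer i = (Integer) o;     // compiles, but throws
                                     // ClassCastException at run time
        System.out.println(i);
    }
}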

If types are a set of clothes, then is there any reason to remove types and operate on naked representations?

Efficiency is one good reason. For example, you might find it convenient to do multiplication by bit-shifting. Similarly, you might find it useful to pack a lot of Booleans into an int (as C programmers usually do). Although it's not strictly operating on the naked representation, it is useful for the programmer to know what the limitations of that representation are (which usually means knowing the representation: signed-magnitude integers, IEEE 64-bit reals, or whatever).
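Two small Java sketches of such representation-aware tricks (they assume the usual two's-complement integers):

class Tricks {
    // Multiplication by a power of two via bit-shifting.
    static int timesEight(int n) {
        return n << 3;               // same as n * 8, barring overflow
    }

    // Packing several Booleans into one int, one bit per flag.
    static final int BOLD      = 1 << 0;
    static final int ITALIC    = 1 << 1;
    static final int UNDERLINE = 1 << 2;

    public static void main(String[] args) {
        System.out.println(timesEight(5));        // 40

        int style = BOLD | UNDERLINE;             // set two flags
        boolean italic = (style & ITALIC) != 0;   // test one flag
        System.out.println(italic);               // false
    }
}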

What is a complete partial order?

Context would help. A partial order is an order in which not all values need be related. In the case of the type lattice, we might say that there's no relationship between a function type and a record type. A complete partial order additionally has a least element ("bottom") that is below every other element, and every ascending chain of elements has a least upper bound in the order. [I got that last bit of info from Wikipedia.]
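In symbols (a sketch of the standard domain-theoretic definition, writing ⊑ for the order on a set D and ⊥ for the least element):

\[
  \forall d \in D.\ \bot \sqsubseteq d,
  \qquad
  d_0 \sqsubseteq d_1 \sqsubseteq d_2 \sqsubseteq \cdots
  \;\Longrightarrow\;
  \bigsqcup_{n \ge 0} d_n \ \text{exists in}\ D .
\]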

I was left wondering why Java, a polymorphic language, has overloading, a weaker typing rule.

Because C has overloading (and coercion and ...), and one of the design goals of Java was to be a natural evolution for C and C++ programmers.


Questions To Be Answered In Class

Could you explain inclusion polymorphism and how it works?

Yes, in class.

What are some examples of technical properties that must be obeyed so that subsets of a universe V (ideals) are legal types?

We'll consider this issue in class.


Questions Sam Can't Completely Answer

Could you give us examples of other models, besides the "ideal model" described on 491?

Cardelli and Wegner mention retract models, but I have no idea beyond that. I like lattices.

Although types arise naturally from untyped universes, isn't there a danger in thinking of untyped universes as if they were typed? What are the advantages and disadvantages of doing so? How were some untyped universes, like naive set theory, found to be inconsistent, and what were their inconsistencies?

Sorry, that math is beyond me.

What languages are strictly monomorphic, and how do these languages differ from those, like Pascal and Ada, that have a way of relaxing strict monomorphism?

Sorry, I have no experience with such languages.

Lastly (I only just started thinking about it, and mention it here just because it seems to fit), I wonder whether anyone has successfully implemented a language that uses both static and dynamic types. That is, a language that allows dynamic typing if no type is specified at declaration or first use, but that will also type check, at compile time, those variables that have been given a type.

Well, most newer languages that do compile-time typing and that permit programmers to skip types also do some compile-time type inference (due, in part, to the model in this paper, but more to earlier work by Scott and Strachey). For example, if you see x being added to something, you can assume that x is a number. But I don't know of any languages that meet your general strategy, or the modified one.

In practice, would integers be coerced to reals, risking a loss in precision, especially when the result would have to be coerced back to an integer?

Most folks don't think about the loss of precision in coercing integers to reals (after all, we're taught that integers are a subset of reals). However, I don't know of a language that coerces all integers to reals before doing computation. (That doesn't mean that no such language exists; simply that I don't know of one.)
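Still, even a single coercion can lose information. A Java sketch (float has only a 24-bit significand, so not every int survives the round trip):

class Precision {
    public static void main(String[] args) {
        int big = 16777217;          // 2^24 + 1
        float f = big;               // implicit int -> float coercion
        int back = (int) f;          // coerce back to an integer
        System.out.println(big);     // 16777217
        System.out.println(back);    // 16777216 -- one unit lost
    }
}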


Comments

This reading was fairly straightforward, so I don't have many questions; however, it was a good overview of polymorphism (both its kinds and why they matter when considering types in a language), the history of language typing, the key distinction between static and strong typing, and strategies for designing types. I thought the analogy of types as "armor" protecting the underlying data corresponded (at least partially) to our in-class assessment of what a type is.

I found the descriptions of the various types of polymorphism very helpful in understanding how polymorphism works and is implemented. The paper had the most concise definition I've seen yet of object-oriented programming languages (object-oriented = data abstractions + object types + type inheritance).

I found the reading relatively easy to understand, although I did get a little confused during the section on the hierarchy of polymorphisms, specifically the difference between true polymorphism and the forms that are not truly polymorphic. Section 3, which presents types as subsets of the set of all values, made a lot of sense to me. I think I had come to a similar understanding myself, but seeing it explained helped me get a much better grasp of things.

The article began with an interesting narrative of the “evolution” of types. By highlighting the fact that programmers will group data in a single-type language or attempt similar tricks with statically typed languages, it explained/foreshadowed Tate’s complaint that programmers of strongly typed languages like Java sometimes demand concessions which allow them to circumvent the strictness of the type system. I had a hard time following some of the definitional stuff.

This document says that we should strive for strong and static typing whenever possible. Although I believe strong typing is a good thing, I think that dynamic typing can have uses that static typing prevents. Also, the clarification of the confusion between coercion and overloading was helpful, as the example is a common occurrence. Overall, the writing style is simple yet explanatory, and helpful to the understanding of typing in programming. It is also very thorough in its explanations.

Disclaimer: I usually create these pages on the fly, which means that I rarely proofread them and they may contain bad grammar and incorrect details. It also means that I tend to update them regularly (see the history for more details). Feel free to contact me with any suggestions for changes.


Samuel A. Rebelsky, rebelsky@grinnell.edu