Evolution of Technology (TEC 154 2014S) : Readings
Norman, Donald A. (2013). The Design of Everyday Things, Revised Edition. Basic Books.
Chapter 1: The Psychopathology of Everyday Things. Chapter 2: The Psychology of Everyday Actions.
But what is it about the field of human-centered design that makes it distinct from the process humans have used in the past to make technologies? As an example, could stone tools be considered "human-centered design?" [Very good. Contextualizes the reading well, and asks us to think more deeply about a concept in a clever way. Of course, Whittaker did suggest to us that the uses of a stone tool are not always obvious, and that we learn about some by trying them in different situations.]
In the summary within the preface of the book, Norman talks about affordances defining what actions are possible (xv), but within the chapter summary Norman states that some affordances are perceivable, and that makes them signifiers (19). What allows an affordance to be a signifier? [Good]
Norman points to misleading signifiers being "accidental or purposeful" (18); are we able to determine when they fall under either category? [Good. Asking us to come up with some examples of each kind of signifier might be good.]
In chapter 2, Norman lists out “The Seven Stages of Action: Seven Fundamental Design Principles” claiming that they are created in the name of feedforward. If the conceptual model is used “to create a good conceptual model of the system, leading to understanding and a feeling of control (72).” Does this insinuate that after every feedback (so before every feedforward) they create a new model that adapts based on the feedback? Norman fails to explain whether or not they adapt or recreate…. [Vague. Who is "they"? The users? The designers? The items? And "insinuate" is a really strange word to use.]
Norman states on page 6 that, "the problem with the designs of most engineers is that they are too logical. We have to accept human behavior the way it is, not the way we would wish it to be." Should engineers be expected to assume nothing about society during the design process? Is this a little unrealistic? Doesn't designing technology for social norms make it more efficient in the long run? [Confusing. It sounds like you're saying the same things as Norman - that we should design for how people behave, not for how we'd like them to behave.]
Norman suggests that designers should be more aware of human nature and the psychology of a certain technology. Does this suggest that we as the consumers should not trust the capability of a given designer? Should we assume all designers have yet to provide society with a human nature-driven product? [Fair. Phrasing is awkward - "a certain technology" implies that you're talking about one technology, but you don't specify which one. And there's a huge leap between "should be more aware" and not trusting capability, and between the implication "should be more aware" and the suggestion that there are no products that take human nature into account.]
What are some other fields that the concepts mentioned in Design of Everyday Things can be applied to? For example, affordances have been adopted by level and environmental designers in the game industry. [Mixed. It's pretty clear that these concepts can be applied in any design field. It would be more useful to ask what fields we've seen them applied to.]
Norman spends 3 pages (118-120) exploring the various alternatives to the design of the projector remote as an example of cultural differences, but never explicitly notes how Chinese and Japanese texts are read (likely sources of his equipment): top to bottom. Is it possible for someone to create "well-designed" products for other cultures, given that the designer might be as unaware of the cultural idiosyncrasies as the natives themselves? [Mixed. A good question, but we haven't reached chapter 3 yet.]
How much does psychological research (not directly pertaining to technology) help shape the path and kinds of technologies that develop? Does technology rely heavily on observable consequences of specific technologies instead of human tendencies illuminated by psychology? [Mixed. A bit broad, and may not be questions we can answer.]
In chapter one Norman discusses the design of controls for a technology. He says that controls should be visible and that there really should not be more actions possible than controls, but this seems unrealistic. The computer keyboard is an example: every key does something, and combinations of keys do something else entirely. Does the class think that the number of controls and the number of actions possible should always be equal? [Mixed. Asking about the relationship between visible controls and number of features is useful, but it feels like the claim about Norman is overstated. He does not ask so much for 1:1 between controls and actions, but more that the availability of options be clear.]
In chapter 2 Norman discusses how designers of technology should consider errors in their technology. My question is whether the class thinks this is too much responsibility to put on the designers of the technology and not enough on the users. [Good. We need the occasional "do you accept the author's thesis?" question.]
Can you think of any other examples in which bad technological designs have led to disastrous events (other than the door example Norman explains on pages 59-60)? If so, what could be done to fix such issues? Norman also makes the argument that most problems with technology are the fault of the designer rather than the user. Who do you think is to blame in these kinds of circumstances? [Mixed. At least in my copy of the book, pages 59-60 seem to be about the thermostat. The only disastrous event I can recall from Norman is Three-Mile Island (and that's very disastrous). It also seems Norman answers your first question (and, the second). But I'll accept this as a "What relative blame should be put on designers and users in these circumstances?"]
In chapter 2, Norman talks about the expectations we associate with the outcomes of certain actions, and how these should be an important consideration in the design process. What is an example of a technology where we expect it to do one thing, but it does something else? That is, what is an example of a poorly designed technology where our expectations are not met? What did we expect and why? How could the technology be redesigned to avoid such confusion in the future?
According to Norman, why do we tend to point out “bad" design but take “good” design for granted? [Mixed. Asks us to make sure that we understand a key point of the book. But it's a fairly straightforward point, so it would have been nicer to see you focus on something a bit deeper.]
Do you think it was necessary for Norman to include a preface about the changes he made to the new edition of his book? [Good. Asks about the design of the book.]
On page 5 Norman discusses the limited capacity of machines, mainly that they will continue to exhibit illogical behavior if there is a small malfunction. Does this relate to the fear of robotics by authors like Joy? Does this fear stem from the possibility of machinery performing an illogical behavior over and over on a mass scale? [Mixed. I'm not sure that the behaviors are "illogical" so much as "unexpected". I'd say that Joy's concerns are clearly broader, but it's probably worth discussing.]
On page 5 Norman states, “…the machine does what it is told, no matter how insensible and illogical.” This statement makes me think about Joy’s article, which we read last week. If this is how machines work, doesn't this provide support for Joy’s skepticism about the future of technology? This is extremely evident in the example of robots going bad due to their lack of knowledge of right and wrong. Does Norman’s description of machines affect your opinion(s) concerning the future of technology? [Mixed. Some parts I'm not sure about, such as the relationship between "the machine does what it's told" and "their lack of knowledge of right and wrong". And the last question is mostly yes/no, but I'm interpreting it more as "How does Norman's description affect …?", which is good.]
Copyright (c) 2014 Samuel A. Rebelsky.
This work is licensed under a Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit
or send a letter to Creative Commons, 543 Howard Street, 5th Floor,
San Francisco, California, 94105, USA.