Fundamentals of Computer Science I (CS151.01 2006F)

Usability

Summary: We consider the concept of usability and different approaches for evaluating the usability of computer software or other tools.

Contents:

* Usability
* Evaluating Usability
* Conducting a Think-Aloud Usability Study
* Usability Testing in Context
* Ethics and Human Subjects
* Sources Cited

Usability

How does this thing work? What is that supposed to mean? What am I supposed to do now? There must be a better way to do this.

If you've ever found yourself saying these things while using a Web site, computer software, or some other tool, you may have experienced poor usability. Usability is a property of tools that roughly corresponds to the idea of being user-friendly or easy to use. Jakob Nielsen (2003), a well-known advocate for Web site usability, identifies five components of usability, which we paraphrase below:

* Learnability: how easy it is for users to accomplish basic tasks the first time they encounter the design.
* Efficiency: how quickly users can perform tasks once they have learned the design.
* Memorability: how easily users can regain proficiency when they return to the design after a period of not using it.
* Errors: how many errors users make, how severe those errors are, and how easily users can recover from them.
* Satisfaction: how pleasant the design is to use.

All of these aspects of usability are relative to some population of users. For example, a Web site might be very usable for adults, but boring or difficult for kids without the same patience and reading ability. A camera might be very usable for a trained photographer, but problematic for novices. A voting system might be easy for many adults to use, but full of errors and frustration for the vertically challenged or those with limited vision or dexterity. And different aspects of usability are most important in different contexts: a system to be used by cashiers at the grocery store must be very efficient, but a Web-based shopping cart must be easy to learn how to use, or frustrated customers will take their business elsewhere.

November 14, 2006 is World Usability Day (Usability Professionals' Association 2006), an event intended to raise awareness of the importance of usability in our everyday lives and in supporting values such as democracy, fairness, and safety. Of course, more usable tools can make our lives more pleasant as well. In honor of World Usability Day, we'll do a laboratory exercise in which you will evaluate the usability of a common household tool.

Evaluating Usability

There are many different ways of evaluating usability, depending on the designers' goals for the tool.

If our focus is on the efficiency of the tool, one approach is to recruit a number of representative users (such as expert or novice photographers), give them some tasks to complete with the tool, and measure how long it takes them to complete the tasks. The less time, the more efficient the design. Another approach is to use psychological models to predict how long it will take users to complete low-level tasks. For example, one model (the keystroke-level model) predicts the amount of time required to enter a command given the exact sequence of keystrokes; another (Fitts's law) predicts how long it takes to use a mouse to click on a button or icon, depending on how large the target is and how far away it is. A common goal is for frequently performed tasks (such as changing the style of some text in a word processor) to be faster than tasks that are performed less often (such as changing the page margins).
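
As a rough illustration of the second kind of model, the sketch below uses Fitts's law, which predicts pointing time from the distance to a target and its size. This is a minimal sketch only: the constants a and b are made-up values that would normally be fit to timing measurements for a particular device.

    # A minimal sketch of predicting pointing time with Fitts's law:
    #   T = a + b * log2(D / W + 1)
    # where D is the distance to the target and W is its width.
    # The constants a and b below are illustrative placeholders; in
    # practice they are estimated from timing data for a real device.
    import math

    def pointing_time(distance, width, a=0.1, b=0.15):
        """Estimate the time (in seconds) to move to and click a target."""
        return a + b * math.log2(distance / width + 1)

    # A large, nearby button is predicted to be faster to hit
    # than a small, distant one.
    print(pointing_time(distance=100, width=50))   # about 0.34 s
    print(pointing_time(distance=800, width=20))   # about 0.90 s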

If our focus is on satisfaction, then of course we should ask users how they like using the tool. But if we are concerned about learnability or errors, then it is not enough to ask users whether the tool seems easy to learn or whether they think they make many mistakes. People have a difficult time remembering and verbalizing what they have done in the past, and an even harder time predicting what they will do in the future!

As with efficiency, we have both theoretical and experimental approaches to evaluating learnability and errors. In one more theoretically-oriented approach, called heuristic evaluation, usability experts assess the tool with respect to some rules of thumb, or heuristics. Heuristics such as "make system status visible" and "recognition rather than recall" (Nielsen, n.d.) are intended to capture some properties of systems that make them easier to learn and less error-prone. Several experts review the tool looking for violations of these heuristics, which are potential usability problems, and rank these violations according to their severity.
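
To make the bookkeeping concrete, here is a small sketch of how the violations reported by several evaluators might be pooled and ordered by average severity. The problems, heuristics, and ratings are invented for the example; the severity scale (0 for "not a problem" through 4 for "usability catastrophe") follows Nielsen's usual convention.

    # A small sketch of aggregating heuristic-evaluation findings.
    # Each tuple is (problem description, heuristic violated, severity 0-4)
    # as reported by one evaluator; the data here are invented.
    from collections import defaultdict

    reports = [
        ("no feedback while a photo uploads", "visibility of system status", 3),
        ("no feedback while a photo uploads", "visibility of system status", 4),
        ("users must memorize option codes", "recognition rather than recall", 2),
    ]

    ratings = defaultdict(list)
    for problem, heuristic, severity in reports:
        ratings[(problem, heuristic)].append(severity)

    # Rank the problems by average severity, worst first.
    for (problem, heuristic), scores in sorted(
            ratings.items(),
            key=lambda item: sum(item[1]) / len(item[1]),
            reverse=True):
        print(f"{problem} [{heuristic}]: "
              f"average severity {sum(scores) / len(scores):.1f}")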

Usability testing is a more experimental approach. As in measuring efficiency, we recruit representative users and give them tasks to complete. However, rather than timing the tasks, we observe the users' behavior. In some studies, particularly studies of Web sites, computer software is used to record the sequence of mouse clicks and keystrokes made by the user. Later, usability experts analyze this data for errors and moments of hesitation in order to identify aspects of the user interface that are particularly problematic. In other studies, representative users are asked to think aloud as they perform the tasks. Think-aloud studies are useful for understanding why users hesitate or make mistakes, and therefore are particularly good for understanding the causes of usability problems.
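
As a rough sketch of the kind of log analysis described above, the fragment below scans a made-up list of time-stamped user events and flags long pauses as possible moments of hesitation. Both the event log and the five-second threshold are invented for illustration.

    # A rough sketch of flagging possible hesitations in a logged session:
    # any gap between consecutive events longer than a chosen threshold is
    # reported for a usability evaluator to review.  The event log and the
    # 5-second threshold are invented for illustration.
    events = [
        (0.0, "click 'Add to cart'"),
        (1.2, "click 'Checkout'"),
        (9.8, "click 'Help'"),      # long pause before this event
        (11.0, "click 'Back'"),
    ]

    THRESHOLD = 5.0  # seconds

    for (previous_time, _), (current_time, action) in zip(events, events[1:]):
        pause = current_time - previous_time
        if pause > THRESHOLD:
            print(f"{pause:.1f}-second pause before: {action}")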

Conducting a Think-Aloud Usability Study

Think-aloud studies are often the most informative and the easiest to plan and perform. A remarkable amount can be learned from conducting think-aloud studies with just 3-5 participants (Nielsen, 2000).
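
The argument in that article rests on a simple model: if each participant reveals, on average, a proportion L of the usability problems (about 31% in Nielsen's data), then n participants together reveal roughly 1 - (1 - L)^n of them. The short sketch below simply evaluates that formula; the 31% figure comes from the cited article, and everything else is for illustration.

    # Nielsen's problem-discovery model: with n test participants and a
    # per-participant discovery rate L (about 0.31 in Nielsen's data), the
    # expected fraction of usability problems found is 1 - (1 - L) ** n.
    L = 0.31

    for n in (1, 3, 5, 15):
        fraction_found = 1 - (1 - L) ** n
        print(f"{n:2d} participants: about {fraction_found:.0%} of problems found")

With five participants, this model predicts that roughly 85% of the problems will be found, which is why small think-aloud studies are so cost-effective.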

In planning a usability study, we must ask ourselves several questions: What is the tool or system being studied? What criteria will we use to identify representative users? (Is there more than one type of representative user?) How will we recruit representative users? What tasks should the study address? Once a system, representative users, and tasks are identified, it is important to practice the study to work out software bugs, as well as problems with the tasks or the study procedure.

In conducting the usability study, there are several key roles (McCracken and Wolfe, 2004).

After the study is over, the evaluators compile a list of all the usability problems, particularly noting problems encountered by more than one participant. They often rank the severity of the problems, as in a heuristic evaluation, and may make recommendations about which problems to fix first and how to fix them.

Usability Testing in Context

Designing software and Web sites is an iterative process: We come up with a design, test it to find out what the problems are, fix those problems in a new design, and so forth until we are satisfied (or run out of time or money). Think-aloud studies and other forms of evaluation can be used at many different stages of this process, from early prototypes to the finished system.

Usability testing might also be used to support a legal point. For example, one might find that a government Web site is not accessible to hearing-impaired or visually-impaired users and therefore discriminates against them. A careful usability study might also identify other ways in which the site is inaccessible to some protected class of users.

Ethics and Human Subjects

Several of these approaches to evaluating usability involve human beings as research subjects. The Belmont Report (The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979) defines principles and guidelines for the ethical treatment of human subjects. An important issue for every study is informed consent: participants must understand the benefits and risks of the study and must be free, without coercion, to agree to participate, to decline, or to end their participation once the study is underway. Usually, each participant is informed of the risks and benefits of the research through an informed consent form, which the participant signs to indicate consent.

Other important ethical issues include the anonymity and confidentiality of participants, any deception of participants as part of the study, the nature of physical and psychological risks for participants, efforts made to guard against those risks, and the tradeoff of those risks with anticipated benefits. Institutions that conduct research, including Grinnell College, must have an Institutional Review Board to supervise human subjects research with respect to these ethical concerns.

Usability studies typically do not involve deception and have minimal risks. However, informed consent and confidentiality are still significant concerns.

Since our laboratory exercises have education rather than research as their purpose, you do not need to sign informed consent forms, and we do not need approval for our exercises from the Institutional Review Board. However, we should still respect confidentiality by handling the data we generate carefully and by not associating specific problems we observe with specific people in the class.

Sources Cited

McCracken, Daniel D. & Wolfe, Rosalee J. (2004). User-Centered Web Site Development: A Human-Computer Interaction Approach. Pearson Education, Upper Saddle River, NJ.

The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979). The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects. Accessed Sunday, 12 November 2006 at http://ohsr.od.nih.gov/guidelines/belmont.html

Nielsen, Jakob (n.d.). Ten Usability Heuristics. Accessed Sunday, 12 November 2006 at http://www.useit.com/papers/heuristic/heuristic_list.html.

Nielsen, Jakob (2000). Why You Only Need to Test with 5 Users. Alertbox, 19 March 2000. Accessed Sunday, 12 November 2006 at http://www.useit.com/alertbox/20000319.html.

Nielsen, Jakob (2003). Usability 101: Introduction to Usability. Alertbox, 25 August 2003. Accessed Sunday, 12 November 2006 at http://www.useit.com/alertbox/20030825.html.

Usability Professionals' Association (2006). About World Usability Day. Accessed Sunday, 12 November 2006 at http://www.worldusabilityday.org/about.

 

Samuel A. Rebelsky, rebelsky@grinnell.edu

Copyright © 2006 Samuel A. Rebelsky. This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/2.5/ or send a letter to Creative Commons, 543 Howard Street, 5th Floor, San Francisco, California, 94105, USA.