...making Linux just a little more fun!
Software developers are good at making programs that work. We're less good at making programs that do what the users want, or are convenient to use. That's because developers often misunderstand what users need, even when they try hard to "think like a user". My development strategy is a common one: (1) get requirements from management, (2) draw out a design, (3) code it, (4) debug, (5) beta test, (6) rollout, (7) DONE! (Except documentation and periodic maintenance, of course.) But for the past few months I've been fortunate to work with a woman who's getting a PhD in Cognitive Science and leads workshops in user-centered design and usability testing. I've been amazed at how well these techniques identify missing features and misfeatures in programs, things developers would not have considered. I've also been fortunate to have an enlightened management team willing to try these techniques in our projects. So I wanted to share a bit about these techniques.
Usability testing is based on the premise that it will happen whether planned or not. Either the developers will do it in their labs, or the customers will do it after they've bought the product. The latter leads to frustrated and angry users, and an expensive redesign down the road. Either you build a prototype and test it with typical users, or your version 1.0 is a de facto prototype, a beta disguised as a final product.
It may surprise people that many of these principles were already known in the 1970s. Surprising because they still haven't trickled very far into programming practice twenty-five years later, as anybody who has sworn at a perfectly-running but incorrigible program -- or wanted to throw a version 1.0 off a cliff -- can attest. One of the seminal research papers is Designing for Usability: Key Principles and What Designers Think (PDF, 1.6 MB), written in 1985 by John D Gould and Clayton Lewis, two IBM researchers. The paper discusses three principles of user-centered design: (A) early focus on users and tasks, (B) empirical measurement, and (C) iterative design. It shows how designers often don't follow these even when they think they are, and then presents a case where the principles were successful. The paper has amusing 1980s assumptions; e.g., Wang word processors are common, computers are mostly "terminals", managers don't have computers, the Apple Lisa was recent, etc. The "case" was a project to build a dictation system with a telephone interface. That is, a manager would call the service, press a few buttons, and record his letter; later a secretary would type up a paper copy and deliver it.
The first thing the developers did was to think about the users, as discussed below. They then invited users into the design process, to tell the developers what they wanted. The developers built prototypes and devised usability tests to verify they were going in the right direction. A usability test is an empirical measurement; e.g., can 80% of users perform a specific task correctly in X minutes with only Y type of help? What mistakes do they make? Do these mistakes suggest defects in the design? This process is iterative, meaning feedback leads to changes in the prototype, which leads to more feedback, which leads to more changes, etc. Eventually the suggestions become fewer and more trivial, meaning you are close to completion.
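A usability goal of this form can be checked directly against test data. The following is a minimal sketch, not anything from the paper; the function name, threshold, and timings are all hypothetical examples.

```python
# A sketch of turning a usability goal into a measurable check.
# meets_goal(), the 80% threshold, and the timings below are hypothetical.

def meets_goal(completion_times, time_limit, pass_rate=0.80):
    """True if enough users finished the task within the time limit.
    completion_times: minutes each user took; None means the user gave up."""
    passed = sum(1 for t in completion_times if t is not None and t <= time_limit)
    return passed / len(completion_times) >= pass_rate

# Seven of ten test users finished within 5 minutes -> 70%, goal not met.
times = [3.2, 4.5, 2.1, None, 6.0, 4.9, 3.8, None, 4.0, 2.7]
print(meets_goal(times, time_limit=5))  # False
```

Running the same check after each design iteration gives the empirical, repeatable measurement the paper calls for, instead of a vague sense that "users seem happier now".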
The prototype had a table-driven user interface so changes could be made easily. Keys were linked to API functions via a keymap, and some keys led to other keymaps (submenus). Output messages were kept in another table. This allowed them to reorganize the user interface based on user feedback without touching the underlying code, and they could also add additional user interfaces for different types of users. Another advantage was that when the prototype was finally deemed acceptable, the work was done: the prototype was the final product. ("How long will it take you to implement this for real? ---No time.")
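The table-driven idea can be sketched in a few lines of Python. This is my own illustration, not the paper's code; the handler functions, keys, and messages are all made up.

```python
# A minimal sketch of a table-driven interface. The handlers, keys, and
# messages are hypothetical; the real system mapped telephone keys to API calls.

def record_message():
    print("Recording...")

def play_messages():
    print("Playing back messages...")

# Output messages live in their own table, so wording can change without code changes.
MESSAGES = {
    "unknown": "Unrecognized key.",
}

# A keymap links keys to handler functions; a key may instead lead to
# another keymap, which acts as a submenu.
ADMIN_MENU = {
    "1": ("delete all messages", lambda: print("Deleted.")),
}

MAIN_MENU = {
    "1": ("record a message", record_message),
    "2": ("play messages", play_messages),
    "9": ("admin submenu", ADMIN_MENU),
}

def dispatch(keymap, key):
    """Look up a key: call its handler, or descend into a submenu.
    Returns the keymap that is now active."""
    entry = keymap.get(key)
    if entry is None:
        print(MESSAGES["unknown"])
        return keymap
    label, action = entry
    if isinstance(action, dict):   # submenu: switch to the other keymap
        return action
    action()                       # handler: call the API function
    return keymap

menu = dispatch(MAIN_MENU, "2")    # runs play_messages()
```

Reorganizing the interface after a round of user feedback then means editing the tables, not the dispatch code, which is what made the prototype cheap to iterate on.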
This whole process led to major benefits which would not have been possible otherwise. Early usability tests showed that a "dictation system" was not really worthwhile: the users didn't take to it and it was inefficient. What the users did like was the ability to listen to messages they or others had recorded. This was an unintended side effect but it became the primary feature. So a "dictation system" became a "voice message" system in an era when voice mail was unknown. This would never have happened in a linear development process where the design was fixed at an early stage, as in my step 2 above. Even if the users were asked at the beginning what features they wanted, they could not have said. They had to actually play with the product before they knew what they wanted. But by the time they saw the product in the beta test it would have been too late, especially if I had not had the foresight to make the user interface code isolated and flexible.
User-centered design doesn't replace the linear development model; it goes on top of it. Every stage includes user analysis and usability testing, and occasionally discoveries require looping back to an earlier stage. But you're still going in the same general direction. Just expect a lot of build-test-refine-test-refine cycles.
The rest of this article is a cookbook of ways to evaluate the usability of your product. These are just a few ideas, the tip of the iceberg. If you know a usability expert, I would highly recommend sitting in on a design session or hiring their services, because they have many ingenious ideas up their sleeves that I am only beginning to explore.
When recording user feedback or brainstorming, it's helpful to write in two columns. On the left put observations; on the right put implications for the product. For instance, "several users thought they were supposed to do X" is an observation. "Have a popup window that explains not to do X" is an implication. Just list implications for later analysis; don't dwell on them now. Dwelling on implications takes time away from getting feedback, and may distort the quality of the feedback.
If you're concerned about how long this will take, remember that usability research can be done simultaneously with other development, especially if more than one developer is available. This initial stage may take longer, but the results will be seen in the quality of the product, and you may even be able to make up time by avoiding mistakes.
Also, keep in mind that it's OK to defer some features to "phase 2". At some point you have to get a working product out the door. Maybe production use will uncover flaws more significant than the phase 2 features. Or maybe the users will decide those features aren't that important after all, and they'd rather have something else in phase 2.
Once you've identified some questions and ideas that work, make them into a checklist for future projects. That will make ongoing design that much easier.
Beware of know-it-all supervisors or buyers. If they aren't the main users themselves, they probably understand less than they think they do about what system would be the most productive and satisfying for the users. Sometimes this is difficult because the supervisors are your clients and won't allow you to talk to the users. In this case, try to impress on them how essential interaction with users and usability testing are for the quality of the product. Or maybe it's a sign not to take that job, since the clients will no doubt blame you for any shortcomings in the product.
If you can't observe the users in action, you'll have to use another means such as interviews, written answers, secondary users, or managers. But be aware this information will likely be lower quality.
Sharing your impressions with the users and managers will often elicit further suggestions. It'll also impress the managers with your thoroughness and insight.
Beware of underestimating the diversity of users. Also beware of how difficult a "simple" task can appear to a user, who doesn't have the developer's background knowledge.
E.g., in a doctor's office, a Patient object has attributes for name, phone, insurance ID, and medical history. A Drug object has the drug's name, principal effect, side effects, indications (who should use it), contraindications (who should not use it), interactions with other drugs, a list of manufacturers/brands/prices, and the doctor's personal notes about the drug's effectiveness. The doctor logs in, finds the patient and his chart, writes notes in the chart, browses the drug selection, and writes a prescription.
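The object model above can be written down directly; here is one sketch using Python dataclasses. The field names follow the text, but the types and the sample values are my assumptions.

```python
# A sketch of the doctor's-office objects described above. Field names come
# from the text; types, defaults, and sample data are assumptions.
from dataclasses import dataclass, field

@dataclass
class Patient:
    name: str
    phone: str
    insurance_id: str
    medical_history: list = field(default_factory=list)  # chart entries

@dataclass
class Drug:
    name: str
    principal_effect: str
    side_effects: list = field(default_factory=list)
    indications: list = field(default_factory=list)       # who should use it
    contraindications: list = field(default_factory=list) # who should not
    interactions: list = field(default_factory=list)      # other drugs
    brands: list = field(default_factory=list)            # (manufacturer, brand, price) tuples
    doctor_notes: str = ""                                 # personal notes on effectiveness

# The doctor's workflow: find the patient, write in the chart.
p = Patient("Jane Doe", "555-0100", "INS-42")
p.medical_history.append("2004-05-01: annual checkup")
```

Sketching the data model this early is itself a user-analysis tool: showing these fields to a doctor quickly reveals which attributes are missing or named wrong from the user's point of view.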
Decide on a test that will measurably show whether the product is getting closer to its goal.
For general open-source products like KOffice, where there are thousands of users worldwide, it may make sense to continue the practice of releasing alphas and betas and having a bug-tracking database, but also to send out more focused surveys on what is good and bad about the product.
I hope this gives you a taste of what is possible, and perhaps piques your interest in learning more about usability research. There are several good textbooks available from your friendly college bookstore, including Human Aspects of Computing edited by Henry Ledgard, Usability Testing and Research by Carol M Barnum, User and Task Analysis for Interface Design by JT Hackos and JC Redish, and others. These are the ones I consulted for the checklist above. Good luck and happy designing.
Mike is a Contributing Editor at Linux Gazette. He has been a
Linux enthusiast since 1991, a Debian user since 1995, and now Gentoo.
His favorite tool for programming is Python. Non-computer interests include
martial arts, wrestling, ska and oi! and ambient music, and the international
language Esperanto. He's been known to listen to Dvorak, Schubert,
Mendelssohn, and Khachaturian too.