7 Methods for Discovering Usability Problems

Finding and fixing usability problems in an interface leads to a better user experience. We often think of usability testing as the only method for evaluating the usability of a website or application. There are, however, other methods that can help uncover usability problems. These methods can be broken down into empirical methods (usability testing, surveys, and analytics) and inspection methods (expert review, heuristic evaluation, cognitive walkthrough, and guideline review). Each has its pros and cons, but all can be part of an organization’s approach to uncovering problems and creating a better user experience.

1.    Usability Testing

Usability testing is the most popular method for measuring the user experience. Having a representative set of users attempt realistic tasks uncovers problems and generates valuable metrics. You can collect metrics at the task level (e.g., the Single Ease Question, or SEQ) and at the end of the study (e.g., the System Usability Scale, or SUS, and the SUPR-Q) that serve as benchmarks for future improvements and as symptoms of problems (when the metrics are low). In moderated studies, a facilitator can probe interactions to understand the root causes of behaviors. The actions, utterances, and metrics together provide evidence for what the problems are and what needs fixing. Cons: It can be difficult to find the right participants or the right product versions (especially with B2B software) and to set up the testing. Even basic usability tests cost money and take time to plan, facilitate, and analyze. Pros: Watching just one user encounter a problem is often sufficient evidence for stakeholders to make immediate improvements.
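
As an illustration of how one of these questionnaire metrics becomes a benchmark, here is a minimal sketch in Python of the standard SUS scoring rule; the example responses are invented.

    # Minimal sketch: converting one participant's raw SUS responses into the
    # familiar 0-100 score. Standard scoring: odd-numbered (positively worded)
    # items contribute (response - 1), even-numbered (negatively worded) items
    # contribute (5 - response), and the summed contributions are multiplied
    # by 2.5. The example ratings below are invented.

    def sus_score(responses):
        """responses: list of ten ratings (1-5) for SUS items 1-10."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten ratings between 1 and 5")
        contributions = [
            (r - 1) if i % 2 == 0 else (5 - r)  # indexes 0, 2, ... are items 1, 3, ...
            for i, r in enumerate(responses)
        ]
        return sum(contributions) * 2.5

    # One hypothetical participant's ratings for items 1-10:
    print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0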

2.    Expert Review

Having a trained expert, ideally one versed in both human-computer interaction and the domain of the application, systematically inspect an interface for problems is known as an expert review. The more an evaluator has observed usability tests and knows the goals and mindset of the target users, the more problems that evaluator is likely to uncover. The expert review is the most general type of inspection method available to researchers. Cons: There is well-documented evidence that independent evaluators reviewing the same interface (or even watching videos of the same users) will identify different problems. What’s more, there is a sense that expert reviews can identify problems that aren’t really problems (false positives); after all, if a user didn’t encounter a problem, is it really a problem? It can also be difficult to find more than one expert to conduct the review. Pros: Expert reviews are relatively quick to execute and can uncover around a third of the problems found in a usability test. And despite the low overlap in the problems identified, when one reviewer doesn’t identify a problem that another reviewer did, they usually agree [pdf] that the problem is legitimate and that addressing it will lead to a better experience.

3.    Heuristic Evaluation

Heuristics are general principles or rules of thumb. A Heuristic Evaluation, another inspection method, involves having a trained expert inspect an interface against a set of heuristics (often Nielsen’s ten). Like an expert review, a Heuristic Evaluation is best done with multiple evaluators working independently. The original intent of the Heuristic Evaluation is that the evaluator should identify problems only through the lens of the heuristics; they should not make up heuristics as they go along or identify problems that fall outside the heuristics. Cons: In practice, most Heuristic Evaluations end up as some form of expert review, with or without the heuristics, and it’s unclear whether following heuristics actually improves the quality of the review. It can also be difficult to find more than one expert to conduct the evaluation. Pros: A Heuristic Evaluation, like an expert review, is generally quick to conduct, and the heuristics can help focus the evaluation on known problem areas.
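
To show how findings from several independent evaluators might be combined, here is a minimal sketch of one possible way to log and tally Heuristic Evaluation findings; the structure and example findings are invented, and only the heuristic names come from Nielsen's list.

    # Minimal sketch of logging Heuristic Evaluation findings so that results
    # from multiple independent evaluators can be combined. The findings and
    # severity ratings below are invented for illustration.
    from collections import Counter

    NIELSEN_HEURISTICS = [
        "Visibility of system status",
        "Match between system and the real world",
        "User control and freedom",
        "Consistency and standards",
        "Error prevention",
        "Recognition rather than recall",
        "Flexibility and efficiency of use",
        "Aesthetic and minimalist design",
        "Help users recognize, diagnose, and recover from errors",
        "Help and documentation",
    ]

    # Each finding: (evaluator, heuristic violated, description, severity 0-4)
    findings = [
        ("evaluator_1", "Error prevention", "No confirmation before deleting a payee", 3),
        ("evaluator_2", "Visibility of system status", "No progress shown while the report loads", 2),
        ("evaluator_2", "Error prevention", "No confirmation before deleting a payee", 4),
    ]

    # Tally how often each heuristic is violated across evaluators.
    violations = Counter(heuristic for _, heuristic, _, _ in findings)
    for heuristic in NIELSEN_HEURISTICS:
        print(f"{violations[heuristic]:2d}  {heuristic}")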

4.    Cognitive Walkthrough

A cognitive walkthrough, also an inspection method, puts the emphasis on how users would accomplish tasks. The idea is to identify users’ goals and how they attempt to reach those goals in the interface. An evaluator then meticulously identifies the problems users would have as they learn to use the interface, answering a series of questions for each action a user takes to complete the task. Cons: The original method developed by Polson et al. in 1990 [pdf] involved answering eight questions at each step in the task (what the user’s first action should be, how the user will access the action, how the user will execute it, and so on), which can make a cognitive walkthrough tedious and time consuming. Pros: Thinking about tasks helps the evaluator “think” like a user and can uncover problems that a more general inspection might miss. An updated version of the method, the Streamlined Cognitive Walkthrough [pdf], reduces the set of questions per user action to two: Will the user know what to do at this step? And if the user does the right thing, will they know they’re making progress toward the goal?
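
Here is a minimal sketch of how the two streamlined questions might be recorded for each action in a task; the task, steps, and answers are invented for illustration.

    # Minimal sketch of recording a Streamlined Cognitive Walkthrough: for each
    # user action, the evaluator answers the two streamlined questions, and any
    # "no" flags a likely problem. The task and answers below are invented.

    task = "Add a new payee to an online account"
    walkthrough = [
        # (action, will the user know what to do?, will the user know they're making progress?)
        ("Open the Payments menu", True, True),
        ("Choose 'Manage payees'", False, True),                  # label doesn't match user vocabulary
        ("Click 'Add payee' and fill in the form", True, False),  # no feedback after saving
    ]

    print(f"Task: {task}")
    for action, knows_what_to_do, sees_progress in walkthrough:
        if not (knows_what_to_do and sees_progress):
            print(f"Likely problem at step: {action}")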

5.    Guideline Review

The guideline review, another type of inspection method, involves having an evaluator review an interface against a set of guidelines and best practices. Guidelines are more detailed than heuristics and have been around for quite some time; for example, the tome by Smith & Mosier contains almost 1,000 guidelines. Guidelines should be based on best practices and principles and grounded in data, not opinion. Cons: Guideline documents have a tendency to grow over time in an attempt to cover every aspect of an interface. They can be tedious and time consuming to use, or worse, become irrelevant as technology changes. Even with all that detail, guidelines likely don’t cover everything in an interface and still require interpretation. Pros: The detailed nature of guidelines means an evaluator can be more thorough in a review. Like other inspection methods, guideline reviews are relatively quick to execute and can be especially helpful for products that aren’t usability tested frequently because of budget constraints or testing logistics.

6.    Usability Survey

Using a combination of standardized usability questionnaires (like the SUS and SUPR-Q) along with other questions in a survey provides a good indication of how usable or unusable a system is. The questionnaires themselves usually aren’t detailed enough to tell you what the specific problems are or what to fix. However, asking participants in a survey to reflect on their experience and report the problems they encountered is an effective way to uncover many of the top-of-mind issues. For example, I have no problem listing one or two issues from my most recent experience with my online health savings account (I can’t easily update my contribution, despite trying twice, and adding a new payee always takes too long). Cons: A survey is unlikely to uncover detailed issues and relies on self-reported behavior rather than observation. Pros: A usability survey is a quick way both to uncover the major issues and to obtain a systematic benchmark for many products. It can be a very effective first step in an enterprise where there are dozens or hundreds of products that need to be measured.
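
As a sketch of how responses like these might be summarized, the snippet below computes a mean SUS score and tallies the problems respondents reported; all of the response data is invented.

    # Minimal sketch of summarizing a usability survey: a benchmark score per
    # respondent (here SUS) plus coded answers to an open-ended question about
    # problems encountered. All response data below is invented.
    from collections import Counter
    from statistics import mean

    responses = [
        {"sus": 62.5, "problems": ["can't update contribution", "adding a payee is slow"]},
        {"sus": 80.0, "problems": []},
        {"sus": 55.0, "problems": ["adding a payee is slow"]},
    ]

    print(f"Mean SUS: {mean(r['sus'] for r in responses):.1f}")

    # Most frequently reported (top-of-mind) problems.
    problem_counts = Counter(p for r in responses for p in r["problems"])
    for problem, count in problem_counts.most_common():
        print(f"{count} x {problem}")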

7.    Analytics Software

Software such as Google Analytics can easily track the usage and behavior of users on websites and web apps, and similar tools exist for desktop applications. Analytics software can tell you which functions people are using (or not using), which links people are or are not clicking, and where people come from and go to. If you know what to look for, you can detect problems, or symptoms of problems, in real time (see the sketch at the end of this post). Cons: It can be difficult to know why users are visiting pages, spending too little time on a page, or not using a function. With all the data, you’ll need some experience sifting through the noise to find problems, and the software may require a technical specialist to set up and administer. Pros: Once set up, analytics software collects a lot of data for every user of your website or software. Because the data comes from actual users, there’s no need to worry about unrepresentative users or evaluators.

These methods can, and should, be used together. Think AND instead of OR when finding usability problems.
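
Below is the sketch referenced in the analytics section: one possible way to scan exported page-level data for symptoms of problems, such as a high exit rate or very little time on a page. The page data and thresholds are invented; a real analysis would pull these numbers from your analytics tool's export or API.

    # Minimal sketch of scanning page-level analytics data for symptoms of
    # problems. The pages, metrics, and cutoffs below are invented; tune the
    # thresholds to your own baselines.

    pages = [
        {"page": "/checkout/payment", "pageviews": 5400, "exit_rate": 0.62, "avg_time_s": 14},
        {"page": "/pricing",          "pageviews": 9100, "exit_rate": 0.35, "avg_time_s": 48},
        {"page": "/help/contact",     "pageviews":  800, "exit_rate": 0.71, "avg_time_s": 9},
    ]

    EXIT_RATE_THRESHOLD = 0.5  # assumed cutoff
    MIN_TIME_ON_PAGE_S = 15    # assumed cutoff, in seconds

    for p in pages:
        symptoms = []
        if p["exit_rate"] > EXIT_RATE_THRESHOLD:
            symptoms.append(f"high exit rate ({p['exit_rate']:.0%})")
        if p["avg_time_s"] < MIN_TIME_ON_PAGE_S:
            symptoms.append(f"low time on page ({p['avg_time_s']}s)")
        if symptoms:
            print(f"{p['page']}: " + ", ".join(symptoms))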