HEURISTIC EVALUATION
What is Heuristic Evaluation?
Heuristic evaluation is a thorough assessment of a product’s user interface. Its purpose is to detect usability issues that may occur when users interact with a product and to identify ways to resolve them. As such, heuristic evaluation is a key part of designing a product that users can easily engage with and find value in. The Nielsen-Molich heuristics state that a system should:
- Keep users informed about its status.
- Present information in ways that match how the real world operates, and in the users’ language.
- Offer users control and let them undo errors easily.
- Be consistent so users aren’t confused over what different words, icons, etc. mean.
- Prevent errors wherever possible.
- Favor recognition over recall, minimizing what users must remember.
- Be flexible so experienced users find faster ways to attain goals.
- Have no clutter, containing only relevant information for current tasks.
- Provide plain-language help regarding errors and solutions.
- List concise steps in lean, searchable documentation for overcoming problems.
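When running an evaluation, it helps to keep the heuristics in a structured checklist so that evaluators tag findings consistently. A minimal sketch in Python (the short codes and function name are illustrative, not part of any standard):

```python
# Illustrative checklist of the ten Nielsen-Molich heuristics,
# keyed by a short code evaluators can use to tag findings.
HEURISTICS = {
    "H1": "Visibility of system status",
    "H2": "Match between system and the real world",
    "H3": "User control and freedom",
    "H4": "Consistency and standards",
    "H5": "Error prevention",
    "H6": "Recognition rather than recall",
    "H7": "Flexibility and efficiency of use",
    "H8": "Aesthetic and minimalist design",
    "H9": "Help users recognize, diagnose, and recover from errors",
    "H10": "Help and documentation",
}

def describe(code: str) -> str:
    """Return the full name of a heuristic from its short code."""
    return HEURISTICS[code]

print(describe("H6"))  # -> Recognition rather than recall
```

Tagging each reported issue with a code like "H6" makes it easy to group findings by heuristic during the debrief.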
How to Conduct a Heuristic Evaluation
To conduct a heuristic evaluation, you can follow these steps:
- Know what to test and how — Whether it’s the entire product or one procedure, clearly define the parameters of what to test and the objective.
- Know your users and have clear definitions of the target audience’s goals, contexts, etc. User personas can help evaluators see things from the users’ perspectives.
- Select 3–5 evaluators, ensuring they have expertise in usability and in the relevant industry.
- Define the heuristics (around 5–10) — This will depend on the nature of the system/product/design. Consider adopting or adapting the Nielsen-Molich heuristics, or defining others suited to your domain.
- Brief evaluators on what to cover in a selection of tasks, and agree on a scale of severity codes (e.g., from cosmetic to critical) for flagging issues.
- 1st Walkthrough — Have evaluators use the product freely so they can identify elements to analyze.
- 2nd Walkthrough — Evaluators scrutinize individual elements according to the heuristics. They also examine how these fit into the overall design, clearly recording all issues encountered.
- Debrief evaluators in a session so they can collate results for analysis and suggest fixes.
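The debrief step above amounts to collating each evaluator's issues and ranking them by severity so the team can prioritize fixes. A minimal sketch, assuming a simple 0–4 severity scale (0 = not a problem, 4 = usability catastrophe) and illustrative sample data:

```python
from collections import defaultdict

# Each finding: (evaluator, heuristic_code, severity 0-4, description).
# The data below is invented purely for illustration.
findings = [
    ("eval1", "H1", 3, "No progress indicator during upload"),
    ("eval2", "H1", 2, "No progress indicator during upload"),
    ("eval1", "H5", 4, "Delete has no confirmation step"),
    ("eval3", "H9", 1, "Error message uses internal codes"),
]

# Group findings by the heuristic they violate.
by_heuristic = defaultdict(list)
for evaluator, code, severity, desc in findings:
    by_heuristic[code].append((severity, desc, evaluator))

# Rank heuristics by their worst reported issue so critical areas surface first.
ranked = sorted(by_heuristic.items(),
                key=lambda kv: max(s for s, _, _ in kv[1]),
                reverse=True)
for code, issues in ranked:
    worst = max(s for s, _, _ in issues)
    print(f"{code}: worst severity {worst}, {len(issues)} report(s)")
```

Ranking by worst severity (rather than by report count) keeps a single catastrophic issue from being buried under many cosmetic ones; teams may weigh both.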