HEURISTIC EVALUATION

Rebecca Adisa
2 min read · Mar 14, 2021

What is Heuristic Evaluation?

Heuristic evaluation is a thorough assessment of a product’s user interface. Its purpose is to detect usability issues that may occur when users interact with a product and to identify ways to resolve them. It is a key part of designing a great product that users can easily engage with and find value in. The Nielsen-Molich heuristics state that a system should:

  1. Keep users informed about its status.
  2. Show information in ways users understand from how the real world operates, and in the users’ language.
  3. Offer users control and let them undo errors easily.
  4. Be consistent so users aren’t confused over what different words, icons, etc. mean.
  5. Prevent errors.
  6. Support recognition rather than recall, so users don’t have to remember information from one part of the interface to another.
  7. Be flexible so experienced users find faster ways to attain goals.
  8. Have no clutter, containing only relevant information for current tasks.
  9. Provide plain-language help regarding errors and solutions.
  10. List concise steps in lean, searchable documentation for overcoming problems.
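
If your team logs findings digitally, one lightweight option is to encode the ten heuristics above as named constants so every evaluator labels issues the same way. The sketch below is illustrative only; it uses Python and hypothetical names.

```python
from enum import Enum

class Heuristic(Enum):
    """The ten Nielsen-Molich heuristics, numbered as in the list above."""
    VISIBILITY_OF_SYSTEM_STATUS = 1
    MATCH_WITH_THE_REAL_WORLD = 2
    USER_CONTROL_AND_FREEDOM = 3
    CONSISTENCY_AND_STANDARDS = 4
    ERROR_PREVENTION = 5
    RECOGNITION_RATHER_THAN_RECALL = 6
    FLEXIBILITY_AND_EFFICIENCY = 7
    AESTHETIC_AND_MINIMALIST_DESIGN = 8
    HELP_USERS_RECOVER_FROM_ERRORS = 9
    HELP_AND_DOCUMENTATION = 10
```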

How to Conduct a Heuristic Evaluation

To conduct a heuristic evaluation, you can follow these steps:

  1. Know what to test and how — Whether it’s the entire product or one procedure, clearly define the parameters of what to test and the objective.
  2. Know your users and have clear definitions of the target audience’s goals, contexts, etc. User personas can help evaluators see things from the users’ perspectives.
  3. Select 3–5 evaluators, ensuring their expertise in usability and the relevant industry.
  4. Define the heuristics (around 5–10) — the right set depends on the nature of the system or product. Consider adopting or adapting the Nielsen-Molich heuristics, and define others if the product calls for them.
  5. Brief evaluators on what to cover in a selection of tasks, and suggest a severity scale (for example, from cosmetic to critical) for flagging issues; one way to record these ratings is sketched after this list.
  6. 1st Walkthrough — Have evaluators use the product freely so they can identify elements to analyze.
  7. 2nd Walkthrough — Evaluators scrutinize individual elements according to the heuristics. They also examine how these fit into the overall design, clearly recording all issues encountered.
  8. Debrief evaluators in a session so they can collate results for analysis and suggest fixes.
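
As a companion to steps 5 to 8, here is a minimal sketch of how findings and severity ratings might be recorded during the walkthroughs and then collated for the debrief. It is again Python with hypothetical names, and it assumes the Heuristic enum from the earlier sketch.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    """One usability issue recorded by an evaluator during a walkthrough."""
    evaluator: str        # who spotted the issue
    element: str          # screen or UI element where it was observed
    heuristic: Heuristic  # violated heuristic, from the enum sketched earlier
    severity: int         # e.g. 0 = cosmetic ... 4 = critical
    note: str             # what happened, plus a suggested fix

def collate(findings: list[Finding]) -> dict[Heuristic, list[Finding]]:
    """Group findings by the heuristic they violate and put the most
    severe issues first, ready for the debrief session."""
    grouped: dict[Heuristic, list[Finding]] = defaultdict(list)
    for finding in findings:
        grouped[finding.heuristic].append(finding)
    for issues in grouped.values():
        issues.sort(key=lambda f: f.severity, reverse=True)
    return dict(grouped)

# Example: one entry from an evaluator's log.
log = [
    Finding("Evaluator A", "Checkout button",
            Heuristic.VISIBILITY_OF_SYSTEM_STATUS, severity=3,
            note="No feedback after tapping Pay, so users tap repeatedly."),
]
report = collate(log)
```

Grouping by heuristic makes duplicate reports from different evaluators easy to spot, and sorting by severity keeps the debrief focused on the most critical fixes first.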

Rebecca Adisa

A product designer who is enthusiastic about designing user-friendly products through strategic research