Evaluating the Design Without Users

The Three Approaches to Evaluating an Interface Without the User

Introduction

As part of an interface design team you will often be required to evaluate your designs without users.

Your users' time is valuable and must be used sparingly.

Only after you have fully thought out your system should you ask users to participate in testing it.

The following techniques will help you to think your system through fully.

 

Cognitive Walkthrough

The cognitive walkthrough is a formalised way of imagining people's thoughts and actions when they use an interface for the first time.

Key to this technique is telling a believable story about an action a real user may take when using your interface.

Walkthroughs focus on problems that users will have when they first use an interface, WITHOUT TRAINING!

 

Cognitive Walkthrough cont...

In order to do a successful walkthrough you need to have a clear understanding of the people who will be using your system and an insight into what their actions may be.

One thing to remember is that a walkthrough is a tool for developing the interface, NOT for validating it.

You should do a walkthrough to find things that can be improved.* Ex1

 

Mistakes in Doing a Walkthrough

People generally make two types of mistakes when attempting a walkthrough:

First, the people performing the walkthrough do not know how to perform the task themselves.

To help prevent this, have a list of the individual actions in hand before starting the walkthrough.

Second, some do not realise that the walkthrough does not test REAL users.

You can imagine many more potential types of users than you can actually test and, therefore, should be able to identify many more problems.

 

What You Look For in the Walkthrough

In doing the walkthrough, you try to tell a story about why the user would select each action in the list of correct actions.

Then you critique the story to make sure it’s believable.

Always keep the following four questions in mind:

 

What You Look For in the Walkthrough cont...

Will users be trying to produce whatever effect the action has?

Will users see the control (button, menu, switch, etc.) for the action?

Once users find the control, will they recognise that it produces the effect that they want?

After the action is taken, will users understand the feedback they get, so they can go on to the next action with confidence? * Ex. 4.1.3
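The four questions above can be captured as a simple checklist record, one per action in the walkthrough. This is only an illustrative sketch; the class and field names are assumptions, not part of any standard walkthrough tool.

```python
# A minimal record for one walkthrough step: the four questions
# answered for a single correct action. Names are illustrative.
from dataclasses import dataclass

@dataclass
class WalkthroughStep:
    action: str
    wants_effect: bool          # will users be trying to produce this effect?
    sees_control: bool          # will users see the control?
    recognises_control: bool    # will they connect the control to the effect?
    understands_feedback: bool  # will the feedback make sense to them?

    def problems(self):
        """Return a story-critique note for each question answered 'no'."""
        checks = {
            "users may not want this effect": self.wants_effect,
            "control may not be visible": self.sees_control,
            "control-effect link unclear": self.recognises_control,
            "feedback may confuse": self.understands_feedback,
        }
        return [msg for msg, ok in checks.items() if not ok]

# Example step: the control is visible but its purpose is not obvious.
step = WalkthroughStep("open the File menu", True, True, False, True)
print(step.problems())  # prints ['control-effect link unclear']
```

Walking through a task then amounts to building one such record per correct action and collecting every non-empty problem list.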

 

What do you do with the Results

FIX THE INTERFACE!

What do you do with the Results cont...

Many of the fixes will be obvious and not difficult to implement.

Some, however, will be harder to fix.

For example: problems where the user has no reason to think that an action needs to be performed at all. The solution is either to eliminate the action or to prompt the user to take it.

 

Action Analysis

Action Analysis

Action analysis is an evaluation procedure that forces you to take a close look at the sequence of actions a user has to perform to complete a task with an interface.

There are two types of action analysis that we will look at:

Formal and Back of the Envelope

 

Formal Action Analysis

Formal action analysis is often called "keystroke-level analysis" because it is characterised by the extreme detail of the evaluation.

The detail is so fine that those conducting the analysis can often predict the time to complete tasks to within a 20% margin of error.
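As an illustration of keystroke-level analysis, the sketch below sums commonly cited per-operator times (keystroke, pointing, homing, mental preparation) over a sequence of actions. The exact figures vary between sources, so treat them as rough assumptions rather than fixed constants.

```python
# Back-of-the-keyboard keystroke-level model (KLM) estimate.
# Operator times are the commonly cited averages; they are
# assumptions for illustration, not measurements of your users.
OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point with the mouse at a target
    "H": 0.40,  # move a hand between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_time(actions):
    """Sum operator times for a sequence such as 'MHPKK' (seconds)."""
    return sum(OPERATOR_TIMES[a] for a in actions)

# Example: think, home to mouse, point at menu, click, point at item, click
print(round(predict_time("MHPKPK"), 2))  # prints 4.51
```

Comparing such estimates for two candidate designs of the same task is the typical use: the design with the smaller predicted time wins, all else being equal.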

 

Back of the Envelope Action Analysis

This approach does not provide detailed predictions of completion times.

It is used to reveal large-scale problems that might otherwise get lost in the forest of details the designer must deal with.

Both types have two fundamental phases:

The first is to decide what mental steps a user will perform.

The second is to analyse those steps looking for problems. *Ex 4.2.1

 

Heuristic Analysis

Heuristic Analysis

Heuristics, or guidelines, are general principles that can guide design decisions.

The previous two techniques, cognitive walkthrough and action analysis, are task-oriented: they focus on the interface as applied to a specific task.

There are, however, some problems with this.

 

Problems with task-oriented Evaluations

First, there is never enough time to test every action, so some controls never get tested.

Second, each task is evaluated individually, so problems that cut across tasks will not be identified.

Heuristic analysis is task-free and is needed to catch the problems that examining specific tasks misses.

 

The Nine General Heuristics

Jakob Nielsen and Rolf Molich have identified nine general guidelines that we can follow in evaluating a system.

The basis of their system of evaluation is that several people need to evaluate the interface against the nine heuristics to identify its problems. No one person can find every problem.

Each evaluator works alone, and the results are then combined to give a clear picture of the problems. See table and example
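The combining step can be illustrated with a small sketch; the evaluator names and problem descriptions below are invented for the example. Each evaluator's findings form a set, and the union shows how the aggregate catches problems that no single evaluator found alone.

```python
# Combining independent heuristic evaluations.
# Evaluators and problems are made up for illustration only.
evaluations = {
    "evaluator_1": {"no undo on delete", "inconsistent button labels"},
    "evaluator_2": {"no undo on delete", "error message uses jargon"},
    "evaluator_3": {"inconsistent button labels", "no exit from dialog"},
}

# The union of all individual findings.
combined = set().union(*evaluations.values())

for name, found in evaluations.items():
    print(f"{name} found {len(found)} of {len(combined)} problems")
print(f"combined: {len(combined)} distinct problems")
```

Here each evaluator finds only two of the four distinct problems, yet the combined list covers all four, which is the point of running several evaluations independently before merging them.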