Assignment 3 Usability Evaluation

Hello, this part of the lecture continues the overview of a number of

usability evaluation methods.

The first method of this part is a checklist evaluation.

The idea of this method is fairly simple.

An evaluator, or a group of evaluators, armed with a checklist that

contains a predefined set of usability principles or heuristics,

goes through an interface and

identifies all the places where the UI violates these principles.

Here is an example of such a heuristic.

This one is solution-oriented.

Using this kind of heuristic, an evaluator will come up with

a set of recommendations.

In practice, evaluation with such heuristics works fine for

well-studied domains, for instance, mobile e-commerce.

The second one is an example of an interaction or

experience-oriented heuristic.

These heuristics allow evaluators to come up with findings of different kinds,

including but not limited to interaction problems.

Since a checklist is the main tool used in these methods,

the results of an evaluation depend heavily on the choice of the checklist.

Therefore you should approach that choice very seriously.

Here is a list of different sets of heuristics for a start.

The last one is not appropriate for evaluating mobile apps but

still worth mentioning.

The great thing about a checklist evaluation is that it's cost-effective.

Plus the use of solution-oriented lists of heuristics does not require expertise,

either in usability evaluation or in the subject area of the app.

A checklist evaluation can be combined with a design walkthrough.

In fact, there is a classical method called formal usability

inspection that combines the use of task performance questions similar

to those from a cognitive walkthrough with a specialized checklist.

A design review is a kind of informal design critique session

where a group of people give feedback on designs.

Reviews differ from design walkthroughs in two ways.

Firstly, reviews are more informal,

meaning that there is no predefined procedure to follow.

Secondly, walkthroughs usually provide deeper analysis than reviews,

since reviews are more rapid and cost-effective in nature.

As a result, design reviews can be used not only for usability evaluation but

also for the evaluation, preliminary at least, of product concepts, for example.

Within a design review you can conduct competitive audits in order

to investigate aspects of competitors' products that your product doesn't have.

Or to discuss the design from different perspectives, for

instance the feasibility of a particular feature.

We will discuss how to conduct design reviews in detail later this week.

The next method is formative usability testing,

which involves observing participants who are representative of real or

potential users while they perform tasks within an application.

Because the method is aimed at discovering interaction problems and

their causes, it implies interaction between the participant and

the moderator during task performance.

For example, the participant may be asked to constantly verbalize her thoughts and

feelings while working with the app, which helps to collect richer data.

There is even a variation of the method called paper prototype testing,

employing the flexible nature of paper sketches.

The sketches can be altered by the participant and moderator together right

in the course of a test to investigate possible solutions to the problems found.

A similar approach can also be applied when you use high-fidelity prototypes.

The approach is called RITE, which stands for

Rapid Iterative Testing and Evaluation.

It implies making changes to the user interface after a few, or

even one, test sessions to hasten the formation of the right design.

There are other variations of the method that deal with the location of

a participant and the moderator during a task session.

In general, the mobile industry leans toward less labor-intensive research methods,

so a certain kind of field study, guerrilla usability testing,

is very popular nowadays.

We will examine guerrilla usability testing in detail next week.

Now it's possible to conduct remote moderated usability testing

of mobile apps using such tools as GoToAssist, Reflector, and some others.

Remote moderated studies are harder to set up for

mobile than for the web, it's true, but the advantages are clear.

Remote moderated usability testing, like guerrilla usability testing,

is a cost-effective approach to formative usability evaluation.

The last method that I consider important to mention is split testing,

also referred to as A/B testing or multivariate testing.

The method is used for comparing several alternative versions of

the same part of an interface according to a set of metrics,

product KPIs, in order to choose among these versions.

Split testing is applied to apps already in a production environment,

and it affects not the whole user population but a part of it.

Each user from this subset is randomly assigned either to the original design,

called the control, or to one of several alternative designs, called variations.
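To make that idea concrete, here is a minimal sketch of one common way such assignment is implemented: hashing a stable user id together with an experiment name, so each user always lands in the same group while the split across users stays effectively random. The function, experiment, and variant names here are illustrative, not something mentioned in the lecture.

```python
import hashlib

def assign_variant(user_id: str, variants: list[str], experiment: str) -> str:
    """Deterministically bucket a user into one variant.

    Hashing user id + experiment name keeps the assignment stable
    across sessions while distributing users evenly across variants.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The first entry plays the role of the control design.
variants = ["control", "variation_a", "variation_b"]
print(assign_variant("user-42", variants, "checkout-button"))
```

Because the bucket is derived from a hash rather than stored, the same user sees the same design on every visit without any server-side state.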

The test continues until the difference in results between the control and

the variations becomes statistically significant.
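As an illustration of what "statistically significant" means here, the following sketch applies a standard two-proportion z-test to conversion counts from a control and one variation. The counts are made up for the example, and real split-testing tools handle this check (plus corrections for multiple variations) for you.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided tail probability of the standard normal, via the error function.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical data: 120/1000 conversions for control vs 150/1000 for the variation.
p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"p-value = {p:.4f}")
```

If the resulting p-value falls below the chosen threshold (commonly 0.05), the test can stop and the better-performing version can be rolled out.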

Using the method, you can test small design changes,

like the placement of a button or variations of icons,

or substantial changes, like different navigation designs

and changes in the layouts of several screens simultaneously.

Note that split testing is not a usability evaluation method.

It does not take into account the confounding factors I was talking

about at the beginning of this lecture.

Remember the example with the task of renting a car.

Split testing does not allow you to distinguish between those who

came to the app to check out prices and those whose goal was to rent a car.

I'm not saying that the method is useless; quite the opposite:

split testing is a very helpful tool in the toolbox of every designer.

But you should be aware of this limitation.

All right, I hope it has become clear what usability evaluation methods are,

and which of them you can use for what purposes.

Thank you for watching.

