Best Practices

Remote usability testing

Your library wants to usability test the patron interface to its catalog, but the student who wants to do the evaluation is in a dorm room miles away. What can you do? Remote usability testing may be the answer.

Remote usability testing procedure

Remote usability testing is a variation of the standard usability testing procedure, with one major difference: the test user can be anywhere in the world that has a PC and either an Internet connection capable of carrying audio or a phone on a separate line.

Remote observation software allows you to view the user's screen and mouse movements on your own computer, and the audio connection lets you hear the user's comments as they use the software. About the only thing missing is being able to watch the user in person. We use Microsoft NetMeeting® as our remote observation software, but you may find that other packages (e.g., Netopia's Timbuktu® or CuseeMe Network's CU-SeeMe® Web) work better for you. NetMeeting is free Microsoft software and often comes bundled with a PC's operating system.

Compared with testing in the usability lab, remote usability testing can gather data from a wider range of users and is more comfortable for most of them, who may get nervous in a lab with its cameras, microphones, and less-than-cozy feeling. It does, however, require more pretest preparation. For more information, see How we do it: remote usability testing.

One last thing: usability testing does not really require a lab. It requires eyes, ears, and a recording mechanism (pencil and paper work great!). And, of course, it requires something to be evaluated and a user to evaluate it.


Observing tester eye-movements in the control room

Heuristic evaluation

Heuristic evaluation is a type of expert review in which usability experts assess a product against established usability principles. It is an easy-to-learn method that library staff can apply quickly to get a rough measure of the usability of their library's software products. The technique is based on the work of Nielsen and has the following five steps:

  1. Several HCI experts compare a prototype to a set of heuristics ('rules of thumb') for developing easy-to-use user interfaces. We use a set of 14 heuristics. A mixture of domain and design experts works best.
  2. Any usability problems found are evaluated for their severity and extent. It is important that the evaluators do not discuss their findings with other evaluators during this phase.
  3. The reports of each evaluator are compiled and grouped based on severity.
  4. All the evaluators participate in a brainstorming session. During this session: (1) Reports are grouped into a common set of problems and severity/extent ratings, (2) This set of problems is further refined into several problem categories, and (3) Solutions to the problems are proposed.
  5. A summary report of the brainstorming session is distributed to interested parties, and a course of action is determined.
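The compile-and-group step (step 3) is essentially a small data-aggregation task. The sketch below illustrates one way it might be done, assuming each finding is recorded as an (evaluator, problem, severity) tuple with severity rated 1–4; the field names, severity scale, and sample findings are illustrative, not OCLC's actual format.

```python
from collections import defaultdict

# Hypothetical independent findings from three evaluators (step 2).
# Format and severity scale (1 = cosmetic, 4 = blocking) are assumptions.
findings = [
    ("A", "search results lack pagination", 3),
    ("B", "search results lack pagination", 4),
    ("A", "unlabeled toolbar icons", 2),
    ("C", "unlabeled toolbar icons", 2),
    ("B", "inconsistent button placement", 1),
]

def compile_reports(findings):
    """Group evaluator findings by problem and rank by mean severity (step 3)."""
    by_problem = defaultdict(list)
    for evaluator, problem, severity in findings:
        by_problem[problem].append(severity)
    compiled = [
        (problem, len(severities), sum(severities) / len(severities))
        for problem, severities in by_problem.items()
    ]
    # Most severe problems first; ties broken by how many evaluators found them.
    compiled.sort(key=lambda row: (-row[2], -row[1]))
    return compiled

for problem, count, mean_severity in compile_reports(findings):
    print(f"severity {mean_severity:.1f} ({count} evaluators): {problem}")
```

The sorted output gives the brainstorming session (step 4) a ranked starting list, with problems confirmed by multiple evaluators surfacing above one-off observations of the same severity.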

Although heuristic evaluation is generally less expensive, results vary on whether it or usability testing is the 'better' technique. One study indicated that heuristic evaluation found more severe problems than usability testing [1], while another found that it caught only about 20% of the total problems found in a usability evaluation [2].

Given these mixed findings, we use both heuristic evaluation and usability testing. Heuristic evaluation is recommended for a product in its early stages, as a quick and inexpensive way of finding major UI problems. Usability testing is recommended for later in the development process, when the extra value of finding the problems real users may encounter justifies its cost and time.

For more detailed information on how we do a heuristic evaluation at OCLC, please go to our How we do it: heuristic evaluation page.