There are six steps in a test iteration for OCLC usability testing.
1. Client completes Request for Usability Evaluation form
The potential client, usually an OCLC User Experience (UX) designer, completes the Usability Evaluation form. This form is very brief, asking for just enough information for the ULab to get an idea of what type of product is to be tested and the target users.
This form also asks the client to suggest dates and times to meet with ULab staff to fill in the details of their testing needs. It is at this meeting that enough detail is gathered to begin actual planning of the utests.
2. Client meets with ULab staff and completes the Usability Checklist
After the Request for Usability Evaluation is completed, the client and ULab staff meet to complete the ‘Usability Checklist’. This checklist begins by asking when the testing must be completed and what the three main goals of the utesting are. Focusing the client on these issues BEFORE moving on to issues such as user descriptions, action items and deliverables, who will contact and schedule the users, and when the first draft of the tasks and other materials will be completed leads to a more realistic plan.
3. Tasks and other test materials are created
After the meeting with ULab staff, the client begins drafting the tasks. The ULab encourages the client not to spend much time on task wording, concentrating instead on (1) Identifying what Usability Question the task is to answer, (2) Developing the Quantitative Measure that will determine how the user ‘answered’ this question, and (3) Roughing out a task that asks the question in realistic terms and allows the Measure to be taken.
For example, the Usability Question might be "Can the user easily find the Search button?". Its corresponding Quantitative Measure might be: User clicks the Search button as their first choice. The Draft Task could then be User does a search.
After the draft tasks, questions, and measures are developed, they are then edited by ULab staff to be objective, realistic, and if possible, connected into a single ‘story’. For example, the task User does a search may become You are doing research for a paper on Einstein for your History of Science class. Find out when Einstein published his Theory of Relativity.
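The Question/Measure/Task mapping above can be sketched as a simple data structure. This is purely illustrative; the `DraftTask` class and field names are assumptions for this sketch, not part of any ULab tool.

```python
# Hypothetical sketch: each draft task pairs a Usability Question with a
# Quantitative Measure and the realistic task wording given to the user.
from dataclasses import dataclass


@dataclass
class DraftTask:
    usability_question: str    # what the task is meant to answer
    quantitative_measure: str  # how the user's 'answer' is measured
    task_wording: str          # realistic wording read to the user

# The search-button example from the text, expressed in this structure.
search_task = DraftTask(
    usability_question="Can the user easily find the Search button?",
    quantitative_measure="User clicks the Search button as their first choice.",
    task_wording=(
        "You are doing research for a paper on Einstein for your "
        "History of Science class. Find out when Einstein published "
        "his Theory of Relativity."
    ),
)

print(search_task.usability_question)
```

Keeping the three elements together makes it easy to check, during editing, that every task still answers its question and that its measure can actually be taken.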
[Photo: Observing the eye-tracking session.]
4. Test rehearsal is conducted
After all materials are in place, a rehearsal of the test is conducted. The rehearsal is identical to an actual test except (1) Tester characteristic requirements are relaxed, and (2) User behavior is not recorded or analyzed. Any test equipment or procedural problems encountered during this rehearsal are then resolved before the actual usability test. This rehearsal is conducted once for a product unless there is substantial change in the product or testing procedures.
[Photo: Tester uses the eye-tracking computer.]
5. Usability tests are conducted
With users scheduled, materials ready, and a successful rehearsal completed, testing is ready to begin.
During a typical test session, a user is read a brief introduction to the product (no more than the information they would have in real life that would lead them to use the product), instructions specific to usability testing, and a set of 1-3 profile questions to ensure this user meets the criteria to be a test user.
The user then attempts to complete the tasks using the product or procedure being tested. Typically, the user is asked to ‘Think Aloud’ while working on the task, saying what they are thinking as they work on the task. This process of both observing the user and having them verbalize their thoughts provides insights into not only the “what” of their behavior but also the “why” that motivated it.
The entire test is video recorded and streamed over the Internet so that product team members can view the test live, regardless of their location worldwide.
After completing the tasks (or if the test is stopped for other reasons), the user typically completes a questionnaire and is interviewed by a product team member or ULab staff.
Once the test session is completed, the client and ULab often discuss potential problem areas and possible solutions, as well as any procedural problems that need to be corrected before the next utest.
6. Analyses are written and distributed
For each test, the ULab transcribes any questionnaire responses and interview notes, and may also write a brief analysis of the usability test. This Test Summary notes the task during which each usability problem was observed, a brief description of the problem, and one or more suggested solutions. This summary may also contain web links to the transcribed questionnaire and interview comments. The summary is distributed to the client.
Once the tests are completed, ULab staff write a Summary of Results covering each test iteration, or all iterations combined. This summary highlights both the good and the not-so-good found during the usability tests, along with summary statistics for the Quantitative Measures and a transcription of all user comments made during each test and interview.
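As a minimal sketch of the kind of summary statistic involved, a binary Quantitative Measure (such as "clicked Search as first choice") can be reduced to a success rate across test users. The variable names and sample data below are invented for illustration, not drawn from any actual ULab results.

```python
# Hypothetical sketch: success rate for one binary Quantitative Measure.
# 1 = user clicked the Search button as their first choice, 0 = did not.
from statistics import mean

first_choice_results = [1, 1, 0, 1, 0, 1, 1, 1]  # illustrative data only

success_rate = mean(first_choice_results)
successes = sum(first_choice_results)
total = len(first_choice_results)

print(f"Search-button measure: {success_rate:.0%} ({successes} of {total} users)")
```

Reporting both the percentage and the raw counts keeps small-sample results honest, since usability tests typically involve only a handful of users per iteration.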
A Suggested Solutions section is also provided, listing the major issues found and suggesting possible solutions for each, often with accompanying screen redesigns.
This Summary of Results is distributed to the client, the product team, and anyone else the client has indicated should receive the summary.