Library:Web Design Process/Test
Overview
Testing will take place over multiple iterations, in parallel with as much of the design and development work as possible. There are three main phases to testing, though each of them will most likely be iterated multiple times. In order of increasing complexity, they are:
- Message Analysis: 5-Second Test,
- Task Analysis: Navigation Test, and
- Task Analysis: Interaction Test.
Each type of test has the same basic elements, as laid out below:
- Goal
- Requirements
- Design
- Analysis
Message Analysis: 5-Second Test
Goal
- To confirm that the primary message is received, i.e., that users understand what this site is trying to accomplish.
- Do they “get it”?
Requirements
- A mock-up prototype of the screen. It does not need to be interactive, but should have copy (text) on it.
- A laptop to display it on is ideal, though a printout would be sufficient.
- A script. See "Questions" below.
- A helper. The test can be done solo, though it is easier to manage in pairs.
- Participants. Choose a high-traffic area, such as the IKBLC. The participants are therefore likely to be mostly undergraduates. Try to select from a range of people, but input from staff and faculty is not critical at this stage.
Design
- Create a script that you can use to talk to participants, such as the following:
Hey there, do you have 2 minutes? Can you help me with something? I work for the library and I need your opinion on something. We’re designing a new web site and we want to know if we’re doing it right. In a moment, I’m going to show you a web site for just 5 seconds, and I want to see if you can understand what this site is trying to do.
- It is not necessary to memorize the script exactly, but try not to vary your introduction and questions too much, as doing so may add unnecessary noise to the results.
- For questions, use something like this:
- "What did you see?"
- "What do you think the site is trying to do?"
- "If you were to [perform a typical action], how would you do it?"
- For example, in the Search Portal test, the question was "If you were to look for a book, how would you do it?".
- Come up with a pass/fail metric. Most likely, this will be whether the participant can satisfactorily answer Question 3 above.
- Set up a spreadsheet called "Test Results" or something similar for recording your results. Using a service such as Google Docs can make collaboration on this easier. Some suggested columns include:
- Participant ID. This is useful to keep track of participants during analysis.
- Participant Type, e.g., student, faculty, librarian, other staff, visitor, etc. This is helpful for demographic breakdowns. If it is not clear which group the participant is in, make sure to ask. Also, make sure to specify "librarian", as they will appreciate evidence that they have been consulted.
- Q1-Q3. Try to note down as much information as possible from their responses.
- Pass/Fail
- Other Keywords. This column can be filled in later from the responses to Q1-Q3 by running a simple keyword analysis, looking for the presence of certain terms or their synonyms, such as "search", "image", "tab" or "catalogue" (a scripted version is sketched after this list).
- Try to get at least 8 participants per session.
- Perform at least 5 sessions, or until the pass/fail ratio is stable and satisfactory.
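Once the results sheet is exported to CSV, the keyword analysis can be scripted. Below is a minimal sketch in Python, assuming a hypothetical export named test_results.csv with the Q1-Q3 columns suggested above; the keyword list is illustrative.

```python
import csv
from collections import Counter

# Illustrative terms; substitute whatever keywords and synonyms you settle on.
KEYWORDS = {"search", "image", "tab", "catalogue"}

counts = Counter()
total = 0

# "test_results.csv" is an assumed export of the "Test Results" sheet,
# with the Q1-Q3 columns suggested above.
with open("test_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += 1
        # Pool the free-text responses and count each keyword once per participant.
        responses = " ".join(row.get(q, "") for q in ("Q1", "Q2", "Q3")).lower()
        for kw in KEYWORDS:
            if kw in responses:
                counts[kw] += 1

for kw, n in counts.most_common():
    print(f"{kw}: {n}/{total} ({100 * n / total:.0f}%)")
```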
Analysis
- The analysis at this stage is quite simple. Calculate how often the various terms are used (as a percentage of total responses). Calculate the pass/fail percentages per session, as well as the overall pass/fail rate. Graphing the change in the pass/fail ratio across sessions can visually communicate the effect of the testing procedure.
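The pass/fail calculation can be scripted the same way. A minimal sketch, again assuming a hypothetical test_results.csv export, here with Session and Pass/Fail columns:

```python
import csv
from collections import defaultdict

passes = defaultdict(int)
totals = defaultdict(int)

# "Session" and "Pass/Fail" column names are assumptions; match them to
# however the "Test Results" sheet is actually laid out.
with open("test_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        session = row["Session"]
        totals[session] += 1
        if row["Pass/Fail"].strip().lower() == "pass":
            passes[session] += 1

for session in sorted(totals):
    print(f"Session {session}: {passes[session]}/{totals[session]} "
          f"({100 * passes[session] / totals[session]:.0f}% pass)")

print(f"Overall: {100 * sum(passes.values()) / sum(totals.values()):.0f}% pass")
```

The per-session rates can be charted directly in the spreadsheet to show the trend across sessions; grouping by the Participant Type column instead gives the demographic breakdown mentioned above.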
Task Analysis: Navigation Test
Goal
- Is the navigation easy and helpful?
Requirements
- A navigation prototype. It should be low-fidelity but usable for testing labels, copy and options.
- A testing environment consisting of:
- 2 laptops, a mouse, and an extension cord
- A table & chairs
- A sign, saying "FREE CHOCOLATE BARS! Help us test our design for x minutes" or something similar.
- Guerrilla testing software, such as Silverback.
- Chocolate bars, or similar incentives.
- An assistant. You will be busy with the person you are testing, so it is helpful to have a person to assist and field questions from passersby.
- A high-traffic, public location, such as the main foyer (Level 2) of the Irving K. Barber Learning Centre.
- Participants. Guerrilla-style testing means that you will not have a lot of control over the demographics of participants. This is not too important in this test, but becomes more of an issue in the next one.
Design
- Design a set of tasks that lead participants to exercise all of the options available in the UI. These tasks should be created with the assistance of the Librarian Consultation Group. If there are more complex options, make sure that there are multiple tasks that cover most of the primary uses. (A quick coverage check is sketched after this list.)
- The tasks in this test are just to see if users are heading in the right "direction". There is no need for them to continue with the task once they have made their original navigation decision.
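A short script can verify that the task set covers every UI option. The sketch below uses placeholder option and task names, not the actual Search Portal UI:

```python
# Placeholder options and tasks; substitute the real prototype's options and
# the tasks agreed on with the Librarian Consultation Group.
UI_OPTIONS = {"search box", "books tab", "articles tab", "advanced search"}

TASK_COVERAGE = {
    "Find a book by title": {"search box", "books tab"},
    "Find a journal article": {"articles tab"},
    "Limit a search by date": {"search box", "advanced search"},
}

covered = set().union(*TASK_COVERAGE.values())
missing = UI_OPTIONS - covered
print("Uncovered options:", ", ".join(sorted(missing)) or "none")
```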
Sample Tasks:
- Here are the tasks used for the Search Portal navigation test, for reference. They are not ideal, and could have been improved with more librarian consultation.
Test Structure:
- ~13 tasks, ~3 tasks/test (keep sessions 5-10 mins), ~20 tests/session, ~3 sessions (a rotation sketch follows this list)
- discussion, analysis, design, development between each session
- keep tasks the same between sessions for comparative analysis
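One way to keep coverage even with ~13 tasks, ~3 tasks per test, and ~20 tests per session is a simple round-robin rotation. A minimal sketch, with placeholder task names:

```python
from itertools import cycle

TASKS = [f"Task {i}" for i in range(1, 14)]  # ~13 tasks, kept fixed across sessions
TESTS_PER_SESSION = 20
TASKS_PER_TEST = 3

# Round-robin assignment so every task comes up roughly equally often per session.
stream = cycle(TASKS)
schedule = [[next(stream) for _ in range(TASKS_PER_TEST)]
            for _ in range(TESTS_PER_SESSION)]

for i, tasks in enumerate(schedule, 1):
    print(f"Test {i}: {', '.join(tasks)}")
```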
Analysis
Task Analysis: Interaction Test
Goal
- Can people find what they’re looking for?
Requirements
- same as Navigation Test
- more focus on specific user groups, if possible. Now is the time to invite Librarians to do testing. Communicate the goals of the test clearly, and let them know that a chance for more feedback will be forthcoming.
Design
Questions:
- specific and interaction-focused tasks
- can they achieve their goal?
Test Structure:
- modify Navigation tasks based on feedback from the previous test and further consultation with the Librarian Consultation Group
- same task/test/session structure
- at least 3 sessions, more if possible.
- consider focused sessions with key patron types