Testing (II): methods for testing

User testing

article 1: Virtually Usable: A Review of Virtual Reality Usability Evaluation Methods, Dana Martens, 2016
article 2: Basics of Usability Testing
article 3: Tools for prototyping & testing (e.g. Role-Playing, Processing and Advanced Interface Technology)

1. Cognitive Walkthrough

Cognitive Walkthrough is a formal method for evaluating a UI without users.

  • Focuses on first-time use

  • Task-oriented: requires tasks and walkthrough scenarios

  • Will users be able to follow this scenario? Can you tell a believable story?

  • Must be aware of user capabilities

article 4: Cognitive walkthrough procedure

Stages of Action Model, Norman (2001)
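
To make the walkthrough concrete, below is a minimal sketch (in Python) of a per-step record keeper. The four questions follow Wharton et al.'s classic formulation; the data structures, prompts, and example steps are illustrative assumptions, not anything prescribed by the articles above.

```python
# Minimal cognitive-walkthrough record keeper (illustrative sketch).
from dataclasses import dataclass, field

# The four standard walkthrough questions (Wharton et al.).
WALKTHROUGH_QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the desired effect?",
    "If the correct action is performed, will the user see that progress is being made?",
]

@dataclass
class StepResult:
    action: str                                   # one step of the walkthrough scenario
    answers: dict = field(default_factory=dict)   # question -> (yes/no, failure story)

def walk_through(scenario_steps):
    """Prompt the evaluator with the four questions for every step."""
    results = []
    for action in scenario_steps:
        step = StepResult(action=action)
        print(f"\nStep: {action}")
        for q in WALKTHROUGH_QUESTIONS:
            answer = input(f"  {q} [y/n] ").strip().lower() == "y"
            note = "" if answer else input("    Why not (failure story)? ")
            step.answers[q] = (answer, note)
        results.append(step)
    return results

if __name__ == "__main__":
    # Hypothetical VR scenario steps, invented for the example.
    walk_through(["Put on the headset", "Open the main menu", "Select 'Start tour'"])
```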

2. Heuristic Evaluation for VR

Heuristic or guidelines-based expert evaluation is a method in which several usability experts separately evaluate a UI design by applying a set of heuristics or design guidelines that are either general enough to apply to any UI or are tailored for 3D UIs in particular. No representative users are involved. Below, a few different heuristic setups are presented; have a look at them and choose the one that suits your needs.

article 5: Heuristic Evaluation process
article 6: Comprehensible Heuristics for VR systems, Murtza, Youmans, Monroe, 2018
article 7: Online VR Heuristic Evaluation Tool
article 8: Heuristics specified for Virtual Reality, 2004, accessed Jul 18 2018
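
Since the experts evaluate separately, their individual problem lists have to be merged afterwards. Below is a hedged sketch of that aggregation step, assuming Nielsen's 0–4 severity scale; the findings and heuristic names are invented for the example.

```python
# Merge problem lists from several evaluators and rank by mean severity.
from collections import defaultdict
from statistics import mean

# (evaluator, heuristic violated, problem description, severity 0-4) -- made-up data
findings = [
    ("eval_1", "visibility of system status", "no feedback after teleport", 3),
    ("eval_2", "visibility of system status", "no feedback after teleport", 4),
    ("eval_2", "user control and freedom", "cannot undo object placement", 2),
]

by_problem = defaultdict(list)
for evaluator, heuristic, problem, severity in findings:
    by_problem[(heuristic, problem)].append(severity)

# Rank problems by mean severity so the most serious issues surface first.
for (heuristic, problem), severities in sorted(
        by_problem.items(), key=lambda kv: -mean(kv[1])):
    print(f"{mean(severities):.1f}  [{heuristic}] {problem} "
          f"({len(severities)} evaluator(s))")
```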

3. Formative Evaluation

Formative evaluation is an observational, empirical evaluation method, applied while the design is still evolving, that assesses user interaction by iteratively placing representative users in task-based scenarios in order to identify usability problems, as well as to assess the design's ability to support user exploration, learning, and task performance. It can be applied either formally or informally.

article 9: Case study: Evaluation of the therapist manual and interfaces of the Rutgers Ankle Rehabilitation System (RARS)
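
One informal way to capture formative observations is a timestamped critical-incident log per participant and task. The sketch below is a minimal assumption of such a log; the field names and example notes are illustrative and not taken from the RARS case study.

```python
# Minimal observation log for a formative session (illustrative sketch).
import csv
import time

class SessionLog:
    def __init__(self, participant, task):
        self.participant, self.task = participant, task
        self.start = time.monotonic()
        self.incidents = []   # (seconds into task, observation)

    def note(self, observation):
        """Record a critical incident with its time offset into the task."""
        self.incidents.append((round(time.monotonic() - self.start, 1), observation))

    def save(self, path):
        """Append this session's incidents to a CSV file for later analysis."""
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            for t, obs in self.incidents:
                writer.writerow([self.participant, self.task, t, obs])

# Hypothetical usage during a think-aloud session:
log = SessionLog("P03", "set ankle resistance")
log.note("hesitated at the force-setting screen")
log.note("asked facilitator what 'baseline' means")
log.save("formative_incidents.csv")
```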

4. Summative Evaluation

Summative evaluations do what their name suggests: they "sum up", or statistically compare, two or more different configurations of a user interface design, components of the design, or specific interaction techniques, by having representative users try out each version while evaluators collect quantitative and qualitative information. They can be applied informally, usually collecting just qualitative data, or more formally, also collecting quantitative data (time on task, error rate). They specifically differ from formative evaluations in that they must compare more than one design.

Research involving users is crucial for VEs, perhaps more so than for other types of interfaces, because the technology is relatively new and varied, meaning established VR expertise is hard to come by. For this reason, summative evaluations are particularly important, as they help to compare specific I/O combinations and/or interaction techniques. Summative evaluations typically occur after user interface designs are complete, and compare specific differences between configurations. As such, they are most appropriate for late-stage prototypes in which general usability has been established, but specific interaction or interface questions persist. Comparing 3D UIs requires a consistent set of user task scenarios.

Characteristics: requires representative users; generic; qualitative and/or quantitative.
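
As a hedged illustration of the formal, quantitative side, the sketch below compares time-on-task between two hypothetical interface configurations with an independent-samples t-test (scipy); all measurements are invented.

```python
# Formal summative comparison of time-on-task between two UI variants.
from statistics import mean
from scipy.stats import ttest_ind

# Seconds per task for each participant; values invented for illustration.
config_a = [41.2, 38.5, 45.0, 39.9, 43.1, 40.4]   # UI variant A
config_b = [35.7, 33.2, 37.8, 36.1, 34.9, 38.0]   # UI variant B

t, p = ttest_ind(config_a, config_b)
print(f"mean A = {mean(config_a):.1f}s, mean B = {mean(config_b):.1f}s")
print(f"t = {t:.2f}, p = {p:.3f}")   # p < .05 suggests the variants differ
```

With real study data you would also report an effect size and check the test's assumptions (e.g., roughly normal, similar-variance samples) before drawing conclusions.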

5. Task Analysis

article 10: Task Analysis

User research should focus on collecting the following five types of data, which you will use later during the task analysis phase (a minimal record sketch follows the list):

  • Trigger: What prompts users to start their task?

  • Desired Outcome: How will users know when the task is complete?

  • Base Knowledge: What will the users be expected to know when starting the task?

  • Required Knowledge: What do the users actually need to know in order to complete the task?

  • Artifacts: What tools or information do the users utilize during the course of the task?
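
As mentioned above, a minimal way to file these five data types per task is a simple record. The structure below is just one convenient encoding (with an invented example), not something the article prescribes.

```python
# One task's worth of user-research data, mirroring the five types above.
from dataclasses import dataclass

@dataclass
class TaskResearchRecord:
    trigger: str             # what prompts the user to start the task
    desired_outcome: str     # how the user knows the task is complete
    base_knowledge: str      # what the user is expected to know at the start
    required_knowledge: str  # what the user actually needs to know to finish
    artifacts: str           # tools/information used during the task

# Hypothetical example record:
record = TaskResearchRecord(
    trigger="monthly report is due",
    desired_outcome="report emailed to manager",
    base_knowledge="where last month's data lives",
    required_knowledge="how to export charts from the dashboard",
    artifacts="dashboard, spreadsheet template, email client",
)
print(record)
```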

According to the UXPA’s Usability Body of Knowledge Site, the process of task analysis can be broken down into the following steps:

  • Identify the task to be analyzed: Pick a persona and scenario from your user research, and repeat the task analysis process for each one. What is that user's goal and motivation for achieving it?

  • Break this goal (high-level task) down into subtasks: You should have around 4–8 subtasks after this process. If you have more, then it means that your identified goal is too high-level and possibly too abstract. As Don Norman (1998) said, users are notoriously bad at clearly articulating goals: e.g., "I want to be a good mom" – where do you even begin? Each subtask should be specified in terms of objectives. Put together, these objectives should cover the whole area of interest, i.e., help a user achieve a goal in full.

  • Draw a layered task diagram of each subtask and ensure it is complete: You can use any notation you like for the diagram, since there is no real standard here. Larry Marine shares some helpful advice on the notation he uses; one simple way to encode such a layered diagram is sketched after this list.

  • Write the story: A diagram is not enough. Many of the nuances, motivations and reasons behind each action are simply lost in the diagram, because all it does is depict the actions, not the reasons behind them. Make sure you accompany your diagram with a full narrative that focuses on the whys.

  • Validate your analysis: Once you’re happy with your work, review the analysis with someone who was not involved in the decomposition, but who knows the tasks well enough to check for consistency. This person can be another team member working on the same project, but you could also enlist the help of actual users and stakeholders for this purpose.
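
As promised above, one simple way to encode a layered task diagram is a small tree of objectives: a goal decomposed into 4–8 subtasks, each with its own steps. Notation and example are invented here, in the spirit of the "no real standard" advice.

```python
# A goal decomposed into subtasks, printed as an indented tree (illustrative).
from dataclasses import dataclass, field

@dataclass
class Task:
    objective: str
    subtasks: list = field(default_factory=list)

    def show(self, depth=0):
        """Print this task and its subtasks, indented one level per layer."""
        print("  " * depth + f"- {self.objective}")
        for sub in self.subtasks:
            sub.show(depth + 1)

# Hypothetical decomposition of a goal into subtasks:
goal = Task("book a holiday", [
    Task("choose a destination", [Task("compare prices"), Task("check dates")]),
    Task("book transport"),
    Task("book accommodation"),
    Task("arrange travel insurance"),
])
goal.show()
```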
