
UX Walkthrough: Anatomy of a Usability Test in Video Games (Part 2)

As promised, we will cover how to set up external usability tests, with preparation done at both the developer's (client) and the research agency's end.

Co-authored with Steven Gaston
This post was originally featured on Gamasutra

In Part 2 of this post (in case you missed it, Part 1 is here), we will cover how to set up external usability tests, the preparation done at the developer's (client) and the research agency's end, and how internal and external UX designers work in sync, combining their observations and analysis to produce unbiased results.

In Part 1, we saw how most studios typically conduct usability tests in ways that allow biases to creep in, which can drastically alter the results.

We are happy to know that some studios have a more sophisticated UX pipeline and dedicated UX designers in place (some of your comments hinted at that) and conduct sessions in a controlled setting. The process below outlines the practices that should be part of the DNA of any usability session.


The Process

STEP 1:  Pre-Test Prepping

We conducted an external usability test in a neutral setting to get unbiased results. Two parties are involved:

The Developers (Client): The studio's in-house UX designers, game designers, producers, UI artists and other stakeholders.

The Agency: A moderator like Steve and an assistant who might help him set up the test and take notes during the play sessions.

Setup

Client Side (Developer) Setup: From the developer’s end, the team identifies (apart from general usability and interaction) the features to test, e.g. core loop actions, social actions, and crucial aspects like on-boarding. Other tasks include budgeting for the research facility and the number of participants, building the case for the usability test with directors, defining the scope, and negotiating with the agency.

Agency Side (Steve) Setup: First we identify the objectives of the research. For example, is there a particular focus on the on-boarding experience? Are there social elements that need to be tested?

We then take these objectives and combine them with the Ipsos UU philosophy of conducting research with real people in real life. We develop a research design that is as close to reality as possible.

Given the practical constraints of conducting the research, we still run the sessions in a research facility so that the cameras can be set up for the designers/developers to watch from a separate room.

Once the research design is developed, we put it into place. Typically, the design looks like this:

  • A pre-test screening process allows us to recruit the right players from different walks of life, ranging from fairly new to expert level in the genre being tested. Chosen players have to be genuine, so during pre-screening we validate what they say about themselves (a rough, illustrative sketch of this screening logic appears after this list). Participants also receive a small compensation for the time they devote to testing once the session is complete.
  • We usually conduct one-on-one sessions but may conduct paired sessions when we are looking to test social elements.
  • Participants are brought into a research facility where a couple of cameras have been set up: one is a wide angle to see the participant’s face and reactions; one is from behind the participant, zoomed in on the device so we can see their finger movements (whether they play with their index fingers or thumbs is of interest); we also set up AirPlay to have a back-up view in case we need it (e.g. if people’s hands block the screen too much).
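As a side note, if you track your recruitment pipeline in a spreadsheet, the screening step described above can be expressed as a small script. The sketch below is purely illustrative and not part of our actual tooling: the candidate fields, the experience thresholds and the per-band quota are all assumptions; in practice the facility handles recruitment through questionnaires and validation calls.

```python
# Hypothetical screener filter: a sketch of the recruitment logic, not the
# agency's real process. Field names and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    plays_genre: bool          # do they genuinely play the genre being tested?
    hours_per_week: float      # self-reported play time, sanity-checked in a call
    owns_target_device: bool   # e.g. a tablet, since we test on the real device

def experience_band(hours: float) -> str:
    """Bucket candidates so the final panel spans fairly new to expert players."""
    if hours < 2:
        return "new"
    if hours < 10:
        return "regular"
    return "expert"

def screen(candidates: list[Candidate], per_band: int = 2) -> list[Candidate]:
    """Keep only genuine genre players and balance the panel across bands."""
    picked = {"new": [], "regular": [], "expert": []}
    for c in candidates:
        if not (c.plays_genre and c.owns_target_device):
            continue  # not a genuine fit for this particular test
        band = experience_band(c.hours_per_week)
        if len(picked[band]) < per_band:
            picked[band].append(c)
    return [c for band in picked.values() for c in band]
```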

Camera Setup For Play Test

Developers Observing In The Next Room

  • Steve moderates the sessions while the developers/designers sit in a viewing room watching the TV screens.
  • Each session begins with about 10-15 minutes of getting to know the participant. We ask them to tell us about themselves (so we can understand their everyday life and how the games they play fit into their schedule) and their gameplay habits (so we can understand what games they play, how and why).
  • We may even get them to show us one of their games if it’s the same genre as we are testing. Once we’ve done that, we get them to play the test game for 20-30 minutes (which is about how long they’re likely to play the game in their first sitting). As they play the game, Steve asks them to let him know if they come across anything they particularly like, dislike or find confusing, which we then talk about later.
  • During the sessions, the developers are not only viewing the players from a different room but also taking down notes based on their observations. These can range from identifying players getting confused during the tutorial to the players specifically mentioning how much they love the animation and other noteworthy reactions.
  • The reason we have the developers write their notes on post-it notes (as well as in their own notebooks, should they so desire) is so that at the end of each session they can place them all on sheets of paper that we label by theme at the start (e.g. UI, animations, tutorial, motivations); a small sketch of tallying these notes by theme follows this list.

  • The post-it notes are also used to later develop user narratives & persona bits.
  • On the day following the sessions, we all get together and conduct activation debriefs. This is where we go through all the notes and identify key action points for the team moving forward. Steve adds in his own views as an external source who may see things slightly differently from the developer’s team.
  • Based on the de-brief, the team comes away with actionable steps to take forward in the development of the game.
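For teams that like to digitise the post-it wall before the debrief, here is a minimal sketch of tallying observations by theme so the most frequently observed issues surface first. The theme names and the note format are assumptions for illustration; this is not part of the agency's actual tooling.

```python
# Hypothetical example: counting digitised post-it notes by theme so the
# activation debrief can start from the most frequently observed issues.
from collections import Counter

# Each note is (theme, observation). The themes mirror the labelled sheets
# mentioned above; the observations here are made-up examples.
notes = [
    ("tutorial", "Player skipped the crafting prompt twice"),
    ("UI", "Did not notice the energy meter until level 3"),
    ("tutorial", "Confused by the gesture hint on step 4"),
    ("animations", "Smiled at the victory animation"),
    ("tutorial", "Tapped the locked button repeatedly"),
]

theme_counts = Counter(theme for theme, _ in notes)

print("Observations per theme (most frequent first):")
for theme, count in theme_counts.most_common():
    print(f"  {theme}: {count}")
```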


Conclusion & Insights

Insight 1: Why we segregate observers, players and moderator. This is done to ensure unbiased viewpoints and observations are recorded, with no way for one party to affect or influence the play-test and observations of another.

We deliberately do not allow the developers to come into contact with the play testers, to avoid biases creeping in. Play testers can feel obliged to say nice things out of courtesy, hold back to avoid hurting the developers’ feelings, or get intimidated by being observed by a lot of people, and as a result they can’t play as naturally as they normally would.

An example of how even the presence of a moderator can skew players’ test behaviour:

In one of the sessions, we texted Steve to come to our room to fix a video feed, leaving the play testers completely on their own. We saw a dramatic change in their behaviour and attitude towards the game being tested and their favourite benchmark game: they immediately put down the test game and picked up their favourite game!

Insight: Removing the moderator from the equation, even for a short period of time, allowed the players’ natural behaviour and true opinions to surface. From the next cycle onwards, we made sure to do this deliberately as part of the test.

Insight 2: The importance of video recordings of the play session

Why record video? Isn’t observing people first-hand and taking their feedback good enough?

People are often oblivious to their own behaviour and sometimes say things that contradict their actions!

For a particular control scheme in one section of the game, after the play test the moderator asked players about their experience and their understanding of that feature. Players said it was very easy to understand, and that performing the action in question was straightforward and smooth.

Cut to the video recording we analysed later of players tackling that feature, and from the close-up camera on the tablet device and the players’ hand movements we could see:

The player was confused and struggling with the controls, tapping furiously, with a visibly confused expression. Even though they eventually made sense of it, it took considerable effort; yet when they recalled it later, they were confident they had done well and sailed through smoothly.

There could be a number of reasons for this:

  • Players feel the controls are a no-brainer and that they are intelligent enough to figure them out, so they don’t want to lose face by admitting they struggled.
  • The player may genuinely have struggled and then figured it out; they might not remember the period of struggle during their play-test, only the solution, and hence might not comment on it accurately.

This is where recording the gameplay comes in handy. We don’t just take the players’ words at face value: we can go back in time (via the recordings) and cross-check players’ claims against their actions.
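If you also log basic interaction telemetry alongside the video, this cross-check can be partly automated. The sketch below is a hypothetical illustration, assuming made-up per-feature logs of taps and completion time; in our sessions the cross-check was done by reviewing the recordings themselves.

```python
# Hypothetical cross-check: flag features where self-reported ease disagrees
# with what the logs (or timestamps annotated from the video) suggest.
from dataclasses import dataclass

@dataclass
class FeatureAttempt:
    feature: str
    taps: int                  # taps logged, or counted from the close-up camera
    seconds_to_complete: float
    self_reported_easy: bool   # what the player told the moderator afterwards

def looks_like_struggle(a: FeatureAttempt) -> bool:
    """Crude heuristic: many taps or a long completion time suggests friction."""
    return a.taps > 15 or a.seconds_to_complete > 60

attempts = [
    FeatureAttempt("swipe-to-aim", taps=28, seconds_to_complete=95, self_reported_easy=True),
    FeatureAttempt("shop purchase", taps=4, seconds_to_complete=12, self_reported_easy=True),
]

for a in attempts:
    if a.self_reported_easy and looks_like_struggle(a):
        print(f"Check the recording: '{a.feature}' was reported easy but looks like a struggle.")
```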


Summing it all up

So there you have it. In this two-part series we have taken you through some of the issues with the way a lot of organisations approach UX testing, and presented an approach we have developed that we feel leads to more insightful testing. Our approach is beneficial in that it:

  • Eliminates a number of inherent biases with UX testing;
  • Focusses on direct observation for deeper insights;
  • Is more collaborative for the development team;
  • Leads to actionable next steps for everyone.

We have had a lot of success taking this approach and we hope that what we have shared can help you in your next UX testing session!

If you liked this post, you can check out my other Game UX Deconstructs