Usability testing mobile devices

David Travis, November 03, 2011

Usability testing is the gold standard for evaluating user interfaces. Although many people are familiar with usability testing desktop systems or web sites, fewer people have experience testing mobile devices. As we’ve discussed before, mobile is different from desktop and this applies to usability testing too. When you’re testing a mobile device, you need to make some important changes to your testing protocol.

The fundamental steps in running any usability test are the same. Here they are:

1. Get buy-in
2. Recruit your participants
3. Develop your tasks
4. Finalise your prototype
5. Set up the testing rig
6. Moderate the test
7. Observe the test
8. Analyse the data
9. Improve the design

So how do you carry out these steps when you’re testing a mobile device?

Step 1: Get buy-in

When you test any system for usability, mobile or desktop, you need to answer some basic questions to ensure that you end up testing the right product with the right participants. For every test, you need to answer the classic Five W’s (and one H) of journalism.

• Why are you running the test?
• Where will it take place?
• When will it take place?
• Who will be the test participants?
• What system (and what functionality) will you be testing?
• How will you collect and analyse the data?

The answers to these questions are typically captured in a Usability Test Plan. This is a document that gets everyone — managers, developers and other stakeholders — to discuss and agree on the critical decisions that need to be made. It means that when you present your findings, no one questions why you tested the wrong functions, or why you asked the wrong users to do the wrong tasks. Even if you're the only person involved in the design of your app, you'll still find the 5Ws useful for clarifying your ideas and structuring your test.
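If it helps to make the plan concrete, the answers to the Five W's (and one H) can be captured as structured data. The following is a minimal sketch in Python; the class name, fields and example answers are invented for illustration, not part of any standard test-plan format.

```python
from dataclasses import dataclass

@dataclass
class UsabilityTestPlan:
    """Captures the Five W's (and one H) for a usability test."""
    why: str    # purpose of the test
    where: str  # location
    when: str   # date and time
    who: str    # participant profile
    what: str   # system and functionality under test
    how: str    # data collection and analysis method

# Hypothetical example answers for a mobile app test
plan = UsabilityTestPlan(
    why="Check whether first-time users can complete a purchase",
    where="Meeting room 2, with a screen-mirroring rig",
    when="Week beginning 14 November",
    who="6 Android owners who use shopping apps weekly",
    what="Checkout flow of the Android app prototype",
    how="Think-aloud protocol; observers log issues on sticky notes",
)
print(plan.who)
```

Writing the plan down in one place, whatever the format, is what lets stakeholders challenge the decisions before the test rather than after it.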

Step 2: Recruit your participants

It's obvious that you want participants who are representative of your end users, but it's just as important that they are regular users of the platform you're testing. If you're testing an Android app but recruit predominantly iPhone owners, don't be surprised when your app performs poorly in testing.
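A screening questionnaire is the usual way to enforce this. As a rough sketch, a screener for the Android example above might boil down to a simple filter; the candidate data and qualifying criteria here are invented for illustration.

```python
# Hypothetical screener: keep only candidates who regularly use the
# platform under test (here, Android).
candidates = [
    {"name": "Asha",  "platform": "Android", "uses_apps_daily": True},
    {"name": "Ben",   "platform": "iPhone",  "uses_apps_daily": True},
    {"name": "Carol", "platform": "Android", "uses_apps_daily": False},
]

def matches_screener(candidate, platform="Android"):
    """A candidate qualifies only if they own the target platform
    and use apps on it regularly."""
    return candidate["platform"] == platform and candidate["uses_apps_daily"]

recruited = [c["name"] for c in candidates if matches_screener(c)]
print(recruited)  # only Asha qualifies
```

In practice the screener will have more questions than this, but the principle is the same: exclude anyone who would spend the session learning the platform rather than using your app.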

About 15 years ago I ran a usability test of a fingerprint identification system for the UK Home Office. My first participant really struggled with the system. It wasn’t because of usability issues with the interface but because the user hadn’t used a computer before and was struggling to use the mouse. At one point, he had the mouse upside down and so ‘up’ movements were being translated to ‘down’ movements of the cursor.

Nowadays, this situation is increasingly rare — the majority of computer users have some experience with Windows. Even if they don't, once a Windows application has been opened, users can rely on a common set of user interface conventions, such as scroll bars, menus and icons. But user interface conventions for mobile are still in their infancy. For example, Android apps tend to include a specific on-screen button to refresh the display, whereas iPhone apps have a hidden control: you pull down the screen to refresh. You don't want your participants spending their time learning the UI conventions of a new platform, so make sure you recruit users with experience of your target device.

Step 3: Develop your tasks

All usability tests are based on the same idea: you ask people to carry out realistic tasks with a system and you observe to see where they struggle. In a test of a desktop system, it's quite usual to have someone using the system for an hour or more. This is reasonably representative of real-world use because people usually use desktop apps for extensive periods of time to get their work done.

Mobile is different. People may have your mobile app open to occupy two minutes in a queue at the supermarket. Or they may have a very specific question they want answered ("Where's the nearest Chinese restaurant?"). Or they may be using the app in a specific context that is important to emulate.

For example, I was recently looking at several photography books in a bookshop in London. I wanted to check the prices of some of these books on Amazon so I fired up Amazon’s mobile app which allowed me to scan the barcode and check the price. But I was carrying out this activity with a computer bag over one shoulder, a heavy book balanced in one hand, my mobile in my right hand, trying to scan the barcode in the dim light of a bookshop. This is very different from running a usability test in a brightly lit lab where I can put the book on a desk. Context matters with mobile.
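When you write your task list, it's worth recording the context alongside each task so the test session can emulate it. A minimal sketch, using the two scenarios above as examples (the field names and time budgets are invented for illustration):

```python
# Hypothetical mobile task definitions: unlike desktop tasks, each one
# is short and tied to a context of use.
tasks = [
    {
        "goal": "Find the nearest Chinese restaurant",
        "time_budget_minutes": 2,
        "context": "Standing in a supermarket queue, one-handed",
    },
    {
        "goal": "Scan a book's barcode and check its price on Amazon",
        "time_budget_minutes": 3,
        "context": "Dim lighting, heavy book balanced in the other hand",
    },
]

for task in tasks:
    print(f"{task['goal']} ({task['time_budget_minutes']} min): "
          f"{task['context']}")
```

Even if you never automate anything with it, structuring tasks this way forces you to ask "where and how would someone really do this?" for every task you set.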

Step 4: Finalise your prototype

When testing desktop systems, it’s quite usual for the test administrator to prepare a ‘typical system’ and then ask the participant to work with it. This ‘typical system’ may have a smaller or larger screen than the participant’s own computer and the mouse and keyboard may be slightly different. But it’s not usually much of a stretch to ask the participant to use this system in lieu of the one that they use day-to-day.

Mobile is different. Users customise their mobile device more extensively than they customise their computer and your participant’s configuration may not reflect the standard, out-of-the-box implementation. For example, some apps may not be where you expect them to be on the participant’s phone. Some services (like location services) may be turned off. Asking a participant to use your ‘default’ system could make them feel like they have just rented a car in a country where people drive on the other side of the road: everything’s familiar but it seems to be in the wrong place.

Fortunately, there are several ways to get your prototype onto the participant’s phone so you can test it. The app doesn’t need to be fully coded: for example, you can create an interactive prototype in your favourite desktop presentation application and then export it to the mobile device as a clickable PDF. You’ll find an increasing number of toolkits that contain all the widgets you need to simulate a real app. There are also some apps around (like Interface) that will let you prototype right on the device itself.

Step 5: Set up the testing rig

One of the main problems faced by usability testers of mobile devices is mirroring the participant’s screen.

Sadly, there's no robust software solution available just yet, which means we're back to the early days of usability testing, when we used cameras to record the screen. There are various options open to you: one of the simplest is to build a rig out of Perspex (or Meccano — see the "Do it yourself mobile usability testing" SlideShare presentation from Belen Barros and Bernard Tyers) and attach a web camera to it. These rigs are cheap and simple to make, but prepare yourself for screen recordings that are hard to read, especially as the ambient illumination changes.

Step 6: Moderate the test

Test moderation is a lot more challenging with a mobile device. As a moderator, it's hard — sometimes impossible — to view the participant's mobile device. Peering over your participant's shoulder is — let's be frank — a little bit weird. It also makes participants use the device differently, as they will try to hold it in a way that lets you see the screen too. For these reasons, you'll find it easier to use a remote monitor that mirrors the participant's screen.

Step 7: Observe the test

I find that one of the quickest and most effective approaches to test observation is to ask someone else to do it for you… Seriously.

Get the design team in the observation room and provide each person with a stack of sticky notes. Whenever they spot a usability issue or observe an interesting finding, they should write it down on a sticky note. Sticky notes have the benefit of being small, which means people can't write much — usually just enough to capture the essence of the observation.

Step 8: Analyse the data

Mobile usability testing needs a lightweight approach to analysing and reporting the results. One rapid way of doing the analysis is to assemble the sticky notes from the previous step and ask members of the design team to group and organise them on a wall (removing any duplicates). Once everyone is happy with the organisation, give each group of sticky notes a name that captures the usability issue.
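The wall-sorting exercise amounts to deduplicating observations and grouping them under named issues. As a minimal sketch of that logic (the notes and group names below are invented examples, not data from a real test):

```python
# Sticky notes as written by observers; two observers spotted the
# same issue, so one note is a duplicate.
sticky_notes = [
    "Couldn't find the search icon",
    "Tapped the logo expecting it to go home",
    "Couldn't find the search icon",
    "Didn't notice the pull-to-refresh gesture",
]

# Remove duplicates while keeping the order the notes went up in
unique_notes = list(dict.fromkeys(sticky_notes))

# Group the remaining notes under names that capture each issue
groups = {
    "Search is hard to discover": [unique_notes[0]],
    "Mobile conventions unclear": [unique_notes[1], unique_notes[2]],
}

for issue, notes in groups.items():
    print(f"{issue}: {len(notes)} note(s)")
```

The grouping itself is a human judgement call made in front of the wall; the point of the sketch is only that the output of this step is a short, named list of issues, not a long report.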

The important point to remember is that your aim here is to describe the problems, you’re not creating solutions. That comes next.

Step 9: Improve the design

Usability testing only makes sense if you change the design to fix the problems that you’ve found. Steve Krug has a wonderfully pragmatic approach to this: for each problem, you ask, “What’s the smallest, simplest change we can make that’s likely to keep people from having the problem we observed?”

You then make the change, check you’ve not broken anything else, and see if you’ve solved the problem. I like this approach because it discourages people from undertaking a major re-design of the interface, which can take a long time to complete and often introduces a new set of usability issues to fix.

In summary

There’s no better way to get feedback on the usability of your app than by running a usability test. Although the process is the same as when testing a desktop app, there are quite a few differences in the details. Adjust your test to take account of these differences and you’ll be better placed to identify the real problems that real users will have with your app when used in an authentic context.