6 Reasons Why User Testing Fails


Ever found yourself in this awkward situation?

A friend asks for your feedback on a prototype he spent four months building, and you’re wondering whether to tell him how flawed it is, or just smile and encourage him to keep going.

I’ve been there on several occasions, and that’s why I decided to write this post.

In my experience, here are the six main reasons a user test is doomed to failure.


1: Testing only one prototype

Creating rapid throwaway prototypes makes you less emotionally attached to them, and more willing to modify or discard them if you receive negative feedback. You also get better feedback when users can compare different solutions.

Inspired by test-driven development, I often decide what I want to test before I start prototyping. This prevents me from getting lost in the detail or wasting my time adding unnecessary features. I use simple tools, like Apple Keynote, to minimize distractions and shorten the time required to build and test prototypes.

When you test multiple prototypes, make sure you randomize their order to reduce learning effects.
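One lightweight way to do this is to shuffle the presentation order per participant with a reproducible seed, so you can always reconstruct who saw what. A minimal Python sketch, with hypothetical prototype names:

```python
import random

def presentation_order(prototypes, participant_id, seed="study-1"):
    """Return a reproducible shuffled order of prototypes for one participant."""
    # Seed with a string combining the study seed and the participant id,
    # so re-running the script yields the same assignment.
    rng = random.Random(f"{seed}:{participant_id}")
    order = list(prototypes)
    rng.shuffle(order)
    return order

# Hypothetical prototypes built in Keynote
prototypes = ["map_first", "list_first", "grid_first"]
for pid in range(1, 4):
    print(pid, presentation_order(prototypes, pid))
```

With more than a handful of participants, you could also counterbalance orders explicitly (e.g., a Latin square), but a seeded shuffle is usually enough for small studies.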

 

2: Selecting users randomly

Sorry to tell you this, but your mom, your girlfriend, and your buddy are just being nice to you when they say "this looks good!"

Relevant results require relevant users, and to get relevant users, you need a strict filter.

Before posting an ad on Craigslist, clearly define your selection criteria, create a comprehensive screening survey, and provide a link to it in the post. Then run the ad for a couple of weeks, and pick the best matches for your criteria.

 

3: Not knowing what to measure

Decide what you will change between different prototypes (the independent variable), and what you will be measuring (the dependent variable).

For example, if you want to determine whether to use a list or a map to show search results, the independent variable is the presentation format (map vs. list) and the dependent variable is the time needed to find and click on a specific search result.

Avoid changing more than one independent variable in a single test; otherwise you might not know which variable led to a specific result. In the previous example, if you show a map on the left in one prototype and a list on the right in another, it might be hard to tell whether location or format made it easier to find the target result.
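As a rough sketch of how the dependent variable might be captured — assuming a simple timer you wire into your own prototype code, not any particular tool — you can timestamp when a task starts and when the target result is clicked:

```python
import time

class TaskTimer:
    """Record time-to-target for each (prototype, task) pair — the dependent variable."""

    def __init__(self):
        self.results = []        # list of (prototype, task, seconds)
        self._started_at = None

    def start(self):
        # monotonic() is immune to system clock changes mid-session
        self._started_at = time.monotonic()

    def stop(self, prototype, task):
        elapsed = time.monotonic() - self._started_at
        self.results.append((prototype, task, elapsed))
        return elapsed

timer = TaskTimer()
timer.start()
# ... participant searches and clicks the target result ...
timer.stop("map", "find_coffee_shop")
```

Keeping the prototype name in every record makes it trivial to compare the two conditions later.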

 

4: Jumping into the test too quickly


Remember what you’re testing? Prototypes, not users. And you need to tell them that!

Before you run a test, give users time to familiarize themselves with the product and the setup. Get them talking, get them comfortable, and reassure them about the privacy of the information you are gathering.

The first test or two will usually help you get a feel for how to run the following ones: Are there common questions you need to address early on? Is your code logging events correctly? Are you measuring the right variables? Are you hiring the right users? Are you asking the right questions?

It’s important to have at least one pilot test to perform all sanity checks, and to exclude its results from your final analysis.
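For the event-logging check in particular, the pilot run is a good time to verify that every task that starts also ends. A minimal sketch, assuming your logger records events as (kind, task) pairs:

```python
def log_is_consistent(events):
    """Return True if every task_start has a matching task_end."""
    open_tasks = set()
    for kind, task in events:
        if kind == "task_start":
            open_tasks.add(task)
        elif kind == "task_end":
            if task not in open_tasks:
                return False   # an end with no matching start
            open_tasks.discard(task)
    return not open_tasks      # no task left unfinished

good = [("task_start", "t1"), ("task_end", "t1")]
bad = [("task_start", "t1"), ("task_start", "t2"), ("task_end", "t1")]
```

Running a check like this after the pilot catches broken instrumentation before it silently corrupts the real sessions.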

 

5: Focusing only on what users are doing

When users are quiet, they are thinking; they are trying to navigate a mental map they’ve created of your product. They are looking for the next step in that map and trying to find a matching step in your product. They are silently working through dozens of questions. If you simply respect their silence, you might be missing great opportunities.

Thinking aloud is one of the best ways to get into users’ heads and dive beyond the obvious. Instead of trying to guess what users are thinking, good usability professionals find the right moments to ask users to think aloud, without interrupting their flow.

I usually record videos of user studies to use as a reference alongside event logs. Beyond that, videos are great for compiling a summary of the study to share with team members. Compiling a 15-minute highlight reel of the key moments in user tests and showing it to stakeholders often leads to great discussions.
 

6: Not following up

The test is not done when the tasks are over. That’s when users relax and can discuss their experiences with you. That’s your opportunity to get the truth behind the measurements.
 
Prepare a list of interview questions, and create a healthy mix of multiple-choice, yes/no, scale, and open-ended questions. Ask about what they found intuitive, what they disliked, what was confusing, and what they would change.

And don’t limit yourself to questions you’ve already prepared; sometimes it’s better to skip some of them to further investigate an insightful answer or remark from a user.
 
In addition to compiling quantitative and qualitative summaries, try to find correlations between these two worlds. For instance, users who said that the app’s menu structure reminded them of Facebook’s navigation model were able to locate items faster.
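One lightweight way to look for such a correlation — a hypothetical sketch with made-up numbers — is to split the quantitative measurements by a qualitative code from the interviews and compare group averages:

```python
from statistics import mean

# Hypothetical task times (seconds), grouped by whether the participant
# mentioned a familiar navigation model during the follow-up interview.
times_by_group = {
    "mentioned_familiar_nav": [8.2, 7.5, 9.1, 8.8],
    "did_not_mention": [12.4, 11.0, 13.6, 12.1],
}

averages = {group: round(mean(values), 1)
            for group, values in times_by_group.items()}
print(averages)
```

With a small study this only suggests a pattern rather than proving one, but it tells you which interview answers are worth probing in the next round.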

Good user testing isn’t just about collecting data; it helps you see through data into mental models and goals.
