10 ways to avoid bias in your user testing research

The goal of user testing is to gather genuine, honest, unfiltered feedback about the usability of your product. The challenge is that to get that feedback, the researcher must artificially re-create, as closely as possible, a genuine user session.

This is where personal biases begin to affect the research. Everything from the test setup to the analysis and implementation of the findings is filtered through our own perceptions and expectations. That doesn’t mean user testing isn’t worth doing, or that it can’t uncover valuable and accurate insights. However, as UX researchers, we must actively work to eliminate bias from our research in any way that we can.

Here are 10 ways to start.


1. Ask open-ended questions

When asking testers about their opinions, preferences, and intentions, keep your questions open-ended rather than presenting this-or-that choices. For example, instead of asking “Would you be more likely to contact the seller through the phone number or the contact form?” try asking “How would you contact the seller?”

By listing your own set of possible answers, you effectively limit users’ feedback to only the choices that you’ve imagined. That means the answers you get might not really represent users’ true opinions – just their preference out of the answers you’re expecting.

Asking open-ended questions, on the other hand, allows your users to respond in ways that you may not have thought of. This way, the answers you collect are not skewed by your assumptions.


2. Format tasks as goals, not instructions

The underlying logic of this rule is very similar to #1. Here’s an example: rather than saying “Find a link to the financial aid information in the navigation dropdown,” phrase the task as “Learn more about the financial aid programs.”

In this case, by telling users how to complete a task, you miss out on an opportunity to learn what they would do naturally. They may be inclined to find that information in a completely different way – perhaps via the search bar, the side menu, or the footer.

If you frame your tasks as a goal to be accomplished, and leave the user to choose how they will go about it, you’ll learn a lot more about how people process and interact with your site.


Read more: User testing task templates



3. Don’t push users towards a specific outcome

There’s always a temptation to push or coax users into giving feedback that confirms your own hypothesis. If you suspect a certain element or flow has usability issues, don’t shoehorn it into your task script or press users to say something about it.

Instead, try to structure your task script in a way that will lead users to engage with that element organically, and then observe their experience with it. If there is a problem, they will likely bring it up.

If you want your suspicions explicitly confirmed or debunked, a good way to avoid influencing users’ feedback is to split the job into two tasks. Make the first one a goal that requires them to engage with the feature. In the second, ask follow-up questions about their opinions, so users have a chance to mention any complaints they haven’t already voiced.


4. Employ a neutral voice when writing your tasks

When you write your task script, use word choices that are clear, simple, and neutral. Don’t try to be friendly or funny; if users feel a human connection with the researcher, they have a harder time being critical. You need brutal honesty from your user testers, so avoid adding personality to your script.

On the other hand, don’t be too formal or rigid either. Stuffy, unnatural “researcher-ese” can be hard to comprehend for the typical user, and can lead to misinterpretation. Stay away from industry or company terminology, and always pick a short word over a long one.


5. Base the flows you test on real user data

Bias can be introduced into your research by the very flows you choose to test. When you write the task script for a user test, make sure the progression of user actions and goals is consistent with the way people really use your site or app – not just how you think they use it, or how you want them to use it.

Web data from Google Analytics and session replay tools like FullStory, Hotjar, or Trymata Product Analytics can help you gather insight into realistic user flows; use whatever information is at your disposal to create representative task lists.
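If you can export raw session paths from one of these tools, even a few lines of code can surface the flows users actually take. Here’s a minimal sketch in Python, assuming a hypothetical CSV export with one row per session and a "path" column listing the pages visited; the file name and column format are illustrative, not any particular tool’s real schema.

```python
from collections import Counter
import csv

# Hypothetical export: one row per session, with a "path" column like
# "/home > /search > /product/123 > /cart". The column name and format
# are assumptions, not a real analytics schema.
def top_flows(csv_path, n=10):
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Normalize each session's pages into a tuple so identical
            # flows can be counted together
            steps = tuple(step.strip() for step in row["path"].split(">"))
            counts[steps] += 1
    return counts.most_common(n)

for flow, sessions in top_flows("sessions.csv"):
    print(f"{sessions:>5} sessions: {' > '.join(flow)}")
```

The most common paths give you a data-backed starting point for your task list, rather than one based on how you assume people navigate.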


Read: The problem of “How to test” vs “What to test”



6. Test competitors’ platforms for reference

It’s always a good idea to test your competitors. Seeing what people like and dislike on other platforms is extremely useful as you evaluate how to improve your own UX. It also gives you a better sense of what features and practices are most important to users, and helps establish what they expect from companies in your industry.

All of this information enables you to analyze your results and make judgments about the importance of different issues with a frame of reference besides your own opinions.

Even if you don’t test your competitors, ask users what other sites they typically use or are familiar with. This will help you understand and account for any preconceptions they might have while listening to their feedback.


7. Ask users what is important to them

When you ask users to make a judgment about something, probe further and ask why they made the choice that they did.

Suppose, for example, you ask the user to pick out a sweater they like from your product list. You make sure they look at the product photos and details, check for available colors and sizes, and compare it to other items before making their decision. Now, you may have some ideas about which factors were the most important for their decision; but if you ask the user themselves, their answer may be different from what you perceived.

By asking directly, you can avoid a biased take and hear what was actually going on in the mind of the user.


Read: Writing usability tasks for e-commerce usability testing



8. Collect quantitative metrics

One of the best ways to correct your biases and look objectively at your user testing results is to collect quantitative data. Whatever your preconceived notions and expectations, numerical, user-generated metrics like task usability ratings, completion rates, and task durations make it possible to directly compare parts of the user experience.

If you collect the same quantitative metrics over several test sprints, you can also use them to evaluate and measure the impact of your design changes on the user experience.
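As a toy example, here’s what that sprint-over-sprint comparison might look like in Python. The task name, record format, and numbers below are all made up for illustration.

```python
from statistics import mean

# Hypothetical results: (sprint, task, completed, duration in seconds)
results = [
    ("sprint-1", "find-financial-aid", True, 94),
    ("sprint-1", "find-financial-aid", False, 210),
    ("sprint-1", "find-financial-aid", True, 121),
    ("sprint-2", "find-financial-aid", True, 62),
    ("sprint-2", "find-financial-aid", True, 75),
    ("sprint-2", "find-financial-aid", True, 88),
]

for sprint in ("sprint-1", "sprint-2"):
    runs = [r for r in results if r[0] == sprint]
    completion_rate = sum(r[2] for r in runs) / len(runs)  # True counts as 1
    print(f"{sprint}: {completion_rate:.0%} completion, "
          f"{mean(r[3] for r in runs):.0f}s average duration")
```

If completion rates rise and durations fall after a redesign, that’s evidence your changes helped, independent of anyone’s opinion.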


9. Define a way to objectively weight your findings

In addition to user testing metrics like the ones above, you can use other methods to objectively weight and prioritize different issues, especially for qualitative feedback.

One method we like to use is tally clouds: as you watch each test video, write down the issues that crop up in a loose cloud. Every time the same issue recurs, mark it with another tally. Cluster similar issues together as you go: for example, search, filters, and sorting. After reviewing all your videos, count which issues and clusters have the most tallies. (You can use post-its for this method as well).
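If you keep your notes digitally, the tallying and clustering is easy to script. Here’s a minimal sketch, with made-up issue labels and a hand-drawn issue-to-cluster mapping:

```python
from collections import Counter

# Issues noted per test video; labels and clusters are hypothetical.
video_notes = [
    ["search results irrelevant", "filter panel hidden"],
    ["filter panel hidden", "sort order confusing"],
    ["search results irrelevant", "checkout button low contrast"],
]

clusters = {
    "search results irrelevant": "search & filtering",
    "filter panel hidden": "search & filtering",
    "sort order confusing": "search & filtering",
    "checkout button low contrast": "checkout",
}

# Tally each issue across all videos, then roll tallies up by cluster
issue_tallies = Counter(issue for notes in video_notes for issue in notes)
cluster_tallies = Counter()
for issue, count in issue_tallies.items():
    cluster_tallies[clusters[issue]] += count

print(issue_tallies.most_common())    # most frequent individual issues
print(cluster_tallies.most_common())  # most frequent clusters
```

Sorting by tally count gives you an objective ordering for prioritizing fixes, rather than leading with the issue that bothered you personally.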


Read: Using word clouds to understand users’ first impressions



10. Compare notes with colleagues

Lastly, if possible, have one or more colleagues watch the videos and make their own notes and conclusions. Everyone brings their own perspective, and will get something a little different out of the same videos.

When you combine several people’s perspectives, individual biases cancel out and you can triangulate what all the feedback means with a bit more accuracy.
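One simple way to make that triangulation concrete is to count how many reviewers independently flagged each issue; a problem everyone noticed is much less likely to be one person’s bias. A quick sketch, with hypothetical reviewer names and notes:

```python
from collections import Counter

# Issues each reviewer flagged independently; all names are made up.
reviewer_notes = {
    "alice": {"filter panel hidden", "sort order confusing"},
    "ben":   {"filter panel hidden", "checkout button low contrast"},
    "cara":  {"filter panel hidden", "sort order confusing"},
}

# Count how many reviewers flagged each issue
agreement = Counter(issue for notes in reviewer_notes.values() for issue in notes)
for issue, n_reviewers in agreement.most_common():
    print(f"{n_reviewers}/{len(reviewer_notes)} reviewers flagged: {issue}")
```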


It may not be possible to remove every trace of bias from your research, but awareness goes a long way. Be conscious of the ways in which you yourself are biased, and do what you can to correct for them. Work collaboratively with teammates. Lastly, keep learning from every test you run. Do better than you did last time, and you’ll keep getting closer to the truth.




By Tim Rotolo

Tim Rotolo is a co-founder at Trymata, and the company's Chief Growth Officer. He is a born researcher whose diverse interests include design, architecture, history, psychology, biology, and more. Tim holds a Bachelor's Degree in International Relations from Claremont McKenna College in southern California. You can reach him on LinkedIn at linkedin.com/in/trotolo/ or on Twitter at @timoroto