When it comes to usability testing sample size, there are a few common mistakes that teams make, and they lead to incomplete or misleading test results. To help you avoid them, we've gathered the most common usability testing sample size mistakes, along with their main causes and the steps you can take to prevent them.
Top usability testing sample size mistakes
A skewed sample
Random sampling is at the heart of every usability test. Its polar opposite, biased sampling, will significantly skew your test results.
Simply put, biased sampling occurs when you choose a population sample in a way that excludes some of its representatives. As a result, you lose data from a key segment of your target audience, which means you will likely miss pain points that would have affected that demographic.
What causes this to happen?
You get a skewed sample when you limit access to user testing in one way or another. If you run the usability test during a limited window, for example during a promotion or only on weekdays, the data will be skewed in favor of the people you reach during that timeframe. Anyone who isn't available or reached during that window will not be represented in the results.
Similarly, if you are conducting in-person testing and pick a location that can't easily be reached by multiple means of transport, you will exclude participants who don't have access to a car.
Read More: Accessibility in UX: The case for radical empathy
How to make sure you have a random sample
The best way to ensure you have a random, robust sample is to make usability testing accessible to everyone who fits your user persona. That way, your participants will accurately represent your target audience. In practice, that means giving people ample time to take part and making test locations easy to reach.
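To make the difference between a random and a biased sample concrete, here is a minimal Python sketch. The recruitment pool and its "available" field are hypothetical, used only to show how filtering by availability silently excludes part of your audience while a true random sample keeps every segment represented.

```python
import random

# Hypothetical recruitment pool: each person notes when they can join a session.
pool = [
    {"id": i, "available": random.choice(["weekday", "weekend"])}
    for i in range(200)
]

# Biased sample: only people who are free on weekdays can take part,
# so weekend-only users are silently excluded.
biased = [p for p in pool if p["available"] == "weekday"][:20]

# Random sample: every member of the pool has an equal chance of being chosen.
unbiased = random.sample(pool, 20)

print("weekend users in biased sample:", sum(p["available"] == "weekend" for p in biased))
print("weekend users in random sample:", sum(p["available"] == "weekend" for p in unbiased))
```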
Insufficient test sample size
A common usability testing mistake is stopping a test as soon as you feel you've reached your target level of confidence in the design. In this situation, it's very likely that your sample size is too small and the findings you obtain are invalid.
Why is this the case?
In order for your test to reach statistical significance, it’s important that you have a large enough sample size. While we suggest a minimum of 5 participants as a baseline, it’s a good idea to aim higher than that for more robust data.
Most usability testing platforms process findings using frequentist statistics. The more users you have participating, the less room there is for massive deviations in the results, and outliers have less impact on the overall numbers. In short, the larger the sample size, the more accurate and statistically significant the data and reports are.
As a result, you should calculate the minimum sample size for each of your tests before you begin testing, and only start evaluating the findings once the test has reached that minimum.
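As a rough illustration of why 5 participants is a common baseline, here is a short Python sketch based on the widely cited problem-discovery model P = 1 - (1 - p)^n (attributed to Nielsen and Landauer), where p is the chance that a single participant encounters a given issue. The 31% value below is the commonly quoted average from that research, not a figure from this article, so treat the numbers as an assumption rather than a rule.

```python
def discovery_rate(n: int, p: float = 0.31) -> float:
    """Share of usability problems you can expect to uncover with n participants."""
    return 1 - (1 - p) ** n

def min_sample_size(target: float, p: float = 0.31) -> int:
    """Smallest n whose expected discovery rate meets the target."""
    n = 1
    while discovery_rate(n, p) < target:
        n += 1
    return n

print(round(discovery_rate(5), 2))   # ~0.84 -> roughly why 5 users is a common baseline
print(min_sample_size(0.95))         # more participants are needed for ~95% coverage
```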
How to avoid small sample sizes
As we mentioned earlier, we suggest a minimum sample size for each usability test, but you shouldn't stop there. It's important to determine the sample size that will give your team robust data with minimal deviations and outliers. Once you have set that number, be careful not to assess the data until all participants have completed testing.
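If you also collect quantitative metrics such as task completion rate, one common way to size a sample is the standard margin-of-error formula for a proportion, n = z² · p(1 - p) / E². This is a general statistics sketch rather than a method prescribed in this article, and the 95% confidence level and ±10-point margin below are assumptions you would tune to your own needs.

```python
import math

def sample_size_for_proportion(margin_of_error: float,
                               confidence_z: float = 1.96,   # ~95% confidence
                               expected_rate: float = 0.5) -> int:
    """Participants needed so a completion-rate estimate stays within the margin of error."""
    n = (confidence_z ** 2) * expected_rate * (1 - expected_rate) / margin_of_error ** 2
    return math.ceil(n)

# e.g. to report task completion within +/-10 points at ~95% confidence:
print(sample_size_for_proportion(0.10))   # 97 participants
```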
Length pollution
Timing is another key element of your usability testing process. Once you have gathered a statistically robust sample, you need to strike a delicate balance in how much time users get to complete the test. If the test is too long, you run the risk of testers zoning out towards the end. If it's too short, testers may not be able to share their full thoughts and feedback on the design.
How can length pollution be reduced?
We suggest allowing at least 30 minutes per test, which gives users plenty of time to complete it. It's also a good idea to run multiple iterations of the same test over a set period of time to be sure you are gathering enough feedback and data.
Even if your sample size is acceptable, a test that is too short will not accurately represent the time it takes to navigate your UX.
Data pollution caused by outside factors
There is always a chance that external factors will influence your data when you run a usability test. We don't always have the luxury of conducting experiments in a sterile lab setting.
These external factors can include personal biases and comparisons to similar brands and products, among other things. While not much can be done to stop them from influencing your results entirely, we do have a few tips that can help.
How to reduce data pollution from external factors
It is difficult to manage data pollution caused by external forces. You'll need to:
- Run usability testing within a set window of time. Don't make the timeframe too small, but keep in mind that the longer a test runs, the more external influences will affect your results.
- Keep an eye on your competition to see if they're running any special promotions that could affect your testing results.
- Keep an eye on the wider market to ensure that extraneous variables don't affect your statistics. External market conditions have been seen to impact test data, for example when major credit card breaches are reported.
Conclusion
Keep these sample size mistakes in mind when carrying out your usability testing, and watch out for the causes behind them. Doing so will help you gather statistically significant data you can use to improve your UX design!