A/B testing is a technique used in marketing, advertising, product/UX design, and web development to determine which version of a product, advertisement, or webpage performs better or is preferred by users.
While A/B testing can take many forms based on what’s being tested, it typically works by randomly dividing an audience into 2 groups, and exposing each group to a different version of the same product or content.
By performing A/B testing, organizations are able to directly compare outcomes (such as customer engagement metrics, conversion rates, usability or satisfaction scores) from 2 or more variations on a design. These comparisons can then be used to make data-driven decisions about which version to use with real customers for the most success.
What is A/B testing?
A/B testing is also known as “split testing” or “bucket testing.” Whatever you call it, this research method involves taking a sample of your audience and dividing it randomly into 2 groups, each of which is shown a different variation of the same design, product, or information.
For instance, if you’re working on a new product feature and have created 2 different designs for how it might look and work, you would have one group try out Version A, and the other group try out Version B. Both groups would be asked to perform the same tasks and evaluate the same elements on their respective versions.
As the 2 groups interact with the variation they’ve been shown, data is collected on how they behave, and how well each version performs. Depending on the goals of the study, this data can take different forms.
- It may be verbal feedback from the participants on what they liked and disliked.
- It could be performance metrics like time on task, click-through rates, or conversions.
- It could even be a combination of qualitative and quantitative types – for example, psychometric surveys that generate numerical user satisfaction scores based on participants’ subjective opinions.
The data collected from the A/B testing groups can then be analyzed to determine which variation was more successful overall.
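For quantitative metrics like conversion rate, "more successful overall" usually means checking that the difference between the two groups is bigger than random chance would explain. As a rough sketch (the visitor and conversion counts below are made up for illustration), a standard two-proportion z-test can be computed with nothing but the standard library:

```python
import math

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is the difference in conversion rates
    between Version A and Version B larger than chance would explain?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical study: 2,400 visitors saw each version
p_a, p_b, z = two_proportion_z(120, 2400, 156, 2400)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")
# |z| > 1.96 corresponds to significance at the 5% level
```

A dedicated stats library or A/B testing tool will do this for you, but the underlying idea is the same: compare the observed difference against the noise you'd expect from random assignment.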
When to use A/B testing
A/B testing can be used for a variety of purposes, big and small – even for something as simple as testing different email subject lines. More commonly, A/B testing is used to evaluate different offers on a landing page, different product pricing models, or different user flows through a digital product.
To give you an idea of the many situations A/B testing may be used in, here are some examples from a possible design cycle:
A/B testing two prototype versions before choosing which one to build
Imagine that you’re building a new feature, or redesigning an old one, and your team is torn between a few different design ideas. Rather than everyone fighting over their favorites, you can do prototype A/B testing to have users look at each different design option, and see which one comes out on top.
A/B testing a live website against a prototype of a redesigned version
Now imagine you’ve got a single final prototype version that’s been developed, and you want to make sure that it’s actually an improvement on the existing version on your live site. Run an A/B test to have some users give feedback on the version that’s live, and some on the redesigned prototype. If your redesign gets better reviews than the live version, you’re on the right track.
A/B testing a live website against your beta/staging website
Your designs have been handed off to engineering, and the new version is up on the beta website. Before you launch, you could A/B test the beta site against your live site to validate that it’s ready to go. If you catch something that people don’t like as much as the current live version, you can easily go in and tweak it before putting your V2 in front of real customers.
A/B testing two versions of a live website
For features, pages, and flows that are already live on your website, you can set up A/B testing studies to constantly experiment and optimize with different variations. This kind of A/B testing requires special tools or custom engineering work to enable “versioning” of your live site. Then, you can split groups of visitors between versions, and observe and compare the actual performance metrics.
What are the benefits of A/B testing?
Like all kinds of user research, A/B testing enables you to make decisions based on relevant data, instead of guesses and hunches.
Particularly when it comes to questions of design and communication, decisions between different variations are too often influenced by subjective factors. A boss who thinks customers will like the word “synergy,” a designer who prefers blue over orange – without a better basis for a decision, these can be deciding factors.
A/B testing allows you to push back against subjective opinions (including your own!), and make better-informed decisions based on real people in your target audience.
Improve user experience & impressions
By doing A/B testing, you can let real user behaviors inform your design decisions. By comparing data about users’ performance on different versions of your platform, you can make UX modifications and upgrades with confidence that they will provide a better user experience.
Even if you’re just A/B testing a simple landing page, advertisement, or marketing blurb, you can improve the way that users feel about those assets. Quick comparative impression tests can reveal insights about your audience’s gut reactions, and what they take away from the images, wording, and offers they see.
Increase customer engagement
Everything about A/B testing comes down to comparing corresponding datapoints. If your goal is to increase engagement – whether that means number of clicks, session duration, usage of features, interaction with media assets – all you need to do is collect the right data during your study.
Then, you can compare the performance of your 2 versions on that datapoint. Evaluate which version generated more clicks, or led to longer sessions (or whatever metric you’ve chosen as your goal). When you’ve found your answer, you can start using that version to foster higher engagement permanently.
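The comparison itself can be as simple as aggregating the chosen metric per version and seeing which comes out ahead. As a minimal sketch (the session durations below are invented for illustration):

```python
# Hypothetical session durations (in seconds) collected during the study;
# these numbers are illustrative, not real data.
sessions = {
    "A": [102, 87, 140, 95, 123, 110],
    "B": [131, 150, 118, 162, 144, 137],
}

def mean(values):
    return sum(values) / len(values)

# Aggregate the chosen engagement metric per version
averages = {version: mean(durations) for version, durations in sessions.items()}
winner = max(averages, key=averages.get)

for version, avg in averages.items():
    print(f"Version {version}: average session {avg:.0f}s")
print(f"Longer average sessions: Version {winner}")
```

With real traffic you'd feed in far more sessions and pair this with a significance check, but the shape of the analysis is the same: one metric, two groups, one comparison.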
Read more: Picking UX KPIs
Maximize conversions & revenue
Ultimately, the decisions resulting from A/B testing exercises will streamline your user flows and funnels so that more people make it to the end and convert. If you choose a section heading that appeals to more users, or a form layout that’s quicker to fill out, or a button color that gets more clicks, all of those optimizations will contribute to higher conversion rates.
Best practices for A/B testing
If you’re thinking about doing A/B testing, here are 6 important things to keep in mind to ensure success.
1. Set clear goals (and measure them)
Before conducting an A/B test, it is important to set clear goals for what you want to achieve. This might be increasing conversion rates, reducing bounce rates, or improving user engagement, for example. By setting clear goals, you can ensure that your A/B testing efforts are focused and targeted, and that they are measuring the right metrics to determine success.
Make sure that for the goal you choose, you also have a reliable way to measure it. For example, if your goal is to increase consumer interest in a new product, you might choose to measure the number of clicks to the product page, searches for the product, quote requests, or some other indicator that fits your use case.
Whichever metric you decide on, stick to it throughout the entire A/B testing process, so you can make reliable comparisons.
2. Keep it simple
When designing an A/B test, it is important to keep it simple. This means testing only one variable at a time, such as the color of a button or the placement of a form. By testing only one variable, you can be sure that any changes in performance can be attributed to that specific variable, rather than multiple variables at once.
For example, if you want to test whether blue or green call-to-action (CTA) buttons are more effective, you should keep everything but the button color exactly the same. If your blue button says “Sign up,” and your green button says “Get started,” then you won’t know if any difference in conversion is because of the color change or because of the button wording.
3. Test for enough time (or with enough users)
To ensure accurate results, A/B tests should be run for a significant amount of time, or with a large enough sample size of users. This will vary depending on the business, the metric being measured, and the A/B testing method you choose.
Generally, A/B tests using real traffic on a live website should run for at least a few weeks to get enough data. For A/B testing prototypes with usability testing, on the other hand, you should test with at least 20 users.
Read more: Sample size for user testing studies
Running tests for too short a period of time, or with too few users, can result in misleading or inconclusive results. For example, if you run a test for only a few hours and see a significant improvement in performance, you might assume that the change is effective, when in reality the results could be due to random chance.
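For live-traffic tests, you can estimate up front how many visitors each version needs before the results mean anything. The sketch below uses a standard textbook approximation for the 5% significance level with roughly 80% power; the baseline and expected conversion rates are illustrative assumptions:

```python
import math

def sample_size_per_variant(p_baseline, p_expected,
                            z_alpha=1.96, z_power=0.84):
    """Rough per-variant sample size needed to detect a lift from
    p_baseline to p_expected at the 5% significance level with ~80%
    power. This is a standard approximation, not an exact calculation."""
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    n = ((z_alpha + z_power) ** 2) * variance / (p_baseline - p_expected) ** 2
    return math.ceil(n)

# e.g. hoping to detect a lift from a 5% to a 6% conversion rate
n = sample_size_per_variant(0.05, 0.06)
print(f"~{n} visitors needed per variant")
```

Notice how quickly the number grows as the expected lift shrinks – which is exactly why a test that runs for only a few hours rarely collects enough data to trust.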
4. Randomize test groups
To ensure accurate and unbiased results, test groups should be randomized. This means that participants should be randomly assigned to either the control group or the test group, rather than being selected based on certain characteristics.
For example, if you are testing the effectiveness of two different landing page designs, you should randomly assign equal numbers of participants to each design, rather than selecting participants for each one based on factors such as age, gender, or location. If you’re dividing a list of pre-selected participants, this could be done with a randomly generated numbering system. If you’re running your study as a usability test, you would simply set identical recruitment filters and screeners for each batch.
5. Test on a representative sample
When conducting A/B tests, it is important to test on a representative sample of the target audience. This means selecting participants who are similar to the target audience in terms of demographics, interests, and behavior.
For example, if you are A/B testing the effectiveness of an advertisement for men’s beachwear, the participants for your test should ideally all be men who like going to the beach. If half of your study participants are women, or people who never go to the beach, their responses to the advertisements will only decrease the clarity of your findings, since they are not part of your target audience.
6. Monitor results and iterate
Once an A/B test has been conducted, it is important to monitor the results and iterate accordingly. This will likely involve implementing the design variant that performed better during the test – or making further tweaks to improve performance.
For example, if you tested two different versions of a signup form, and found that the one with labels to the left of the fields performed much better than the one with inline labels, you should start using left-hand labels permanently for that form.
Once you’ve implemented that change, though, don’t consider yourself done. There are many more variables you could continue to experiment on and optimize. Now that the label positioning is settled, maybe you can start with a new A/B test to see how the wording on the submission button impacts performance.
Read more: Iterative usability testing for a better UX
A/B testing is a useful and valuable tool to improve the performance of key web pages, flows, and marketing activities. Whatever goals you have for your online presence, A/B testing is a sound strategy for investigating and identifying exactly how you can get closer to reaching them. It’s an easy, reliable, and repeatable method.
Want to get started with A/B testing for your mockups, prototypes, advertisements, landing pages, and more?