- Published: Dec 14, 2022
- Last Updated: Jun 8, 2023
- 6 min. read
WebFX Team, Digital Marketing Agency
- The WebFX team is made up of more than 450 subject matter experts in digital marketing, SEO, web design and web development, social media, and more. Together, they’ve helped WebFX’s clients earn more than $3 billion in revenue from the web — and that’s just in the past five years. @webfx
The last thing anybody wants is a false positive — no matter the test! With conversion rate optimization (CRO), a false positive could lead to a dead end.
When you conduct an A/B test, you want to know if your test page (B) really outperformed your control page (A) before you spend time tweaking forms, pages, or other aspects of your site.
An A/B testing analysis technique will help determine whether your test was statistically significant and if you have a winning set of data on your hands.
How do you know if your A/B test was statistically significant?
Statistical significance means the result you observe is unlikely to be due to random chance. In practice, it means you can repeat the test and consistently receive the same, or at least similar, results.
For example, let’s say you changed the placement of a call to action (CTA) button. The button used to be below your landing page’s content, but you hypothesized you’d receive more leads if you placed the button front and center. So, you tweaked the page and put the button above your main headline.
Next, you tested the former landing page — the control page — on 1000 visitors. From there, you tested the tweaked page on another 1000 visitors. After you measured your results with an A/B test calculator, it showed a 25% increase in clicks on your CTA button after you moved it up on the page.
If you repeat the test more than once and continue to receive around a 25% increase in clicks, you can reasonably assume your results are statistically significant. You can also assume the changed placement of the CTA button was the cause for the change.
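If you'd rather not rely on a web calculator, the standard way to check a result like this is a two-proportion z-test. The sketch below is a minimal, hand-rolled version using only Python's standard library; the click counts (200 of 1,000 visitors on the control page vs. 250 of 1,000 on the test page, a 25% relative lift) are hypothetical numbers chosen to match the example, not data from the article.

```python
import math

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: control converts 200/1000, test converts 250/1000
z, p = two_proportion_z_test(200, 1000, 250, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05, so this lift is significant
```

With 1,000 visitors per page, a lift this size clears the conventional p < 0.05 bar comfortably, which is why repeating the test keeps producing the same verdict.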
How do you analyze your A/B test?
While A/B calculators aren’t the end-all-be-all of A/B testing analysis, they help you make sense of the data you collect.
Let’s say you own a small business. Your average number of site visitors ranges anywhere from 100–500 users per week. You ran a similar A/B test on your landing page’s CTA button — but on a sample size of 100 people instead of 1000.
Your results find that 50 out of 100 people clicked your CTA button on the original page, while 58 out of 100 clicked it on the test page. According to the A/B calculator, that's a 16% relative lift. The increase may seem meaningful, but with only 100 visitors per page, it's not yet enough to call the result significant on its own.
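If you want to reproduce what the calculator is doing, the relative lift is just the change divided by the baseline. The sketch below also runs a rough significance check on these counts (a hand-rolled two-proportion z-test, not any particular calculator's method) to show why a 16% lift on 100 visitors per page is inconclusive.

```python
import math

def relative_lift(control_rate, test_rate):
    """Relative change of the test rate over the control rate."""
    return (test_rate - control_rate) / control_rate

# 50/100 clicks on the control page vs. 58/100 on the test page
lift = relative_lift(0.50, 0.58)
print(f"Relative lift: {lift:.0%}")  # 16%

# Rough significance check: pooled two-proportion z statistic
pooled = (50 + 58) / 200
se = math.sqrt(pooled * (1 - pooled) * (1 / 100 + 1 / 100))
z = (0.58 - 0.50) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"p = {p_value:.2f}")  # well above 0.05: not significant on its own
```

The same 16% lift that was decisive at 1,000 visitors per page is inconclusive at 100, which is exactly the problem the steps below address.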
What should you do before you run an A/B test?
Before we get started with our example scenario, here’s a rundown of what you should do before collecting any data.
1. Know your objectives
What are your key performance indicators (KPIs)? In other words, which conversions drive the most significant results for your business? These indicators could be anything from sales volume to client retention. Focusing your changes on your KPIs will drive the best results.
2. Know where you are — and where you want to go
Once you know what you want to measure, find out where you currently stand. You might already have a general idea if you’ve measured ad clicks, number of opened emails, or how many leads fill out your contact form per month.
How to know if your test is statistically significant (without using an A/B calculator)
Now onto the fun part — the A/B testing analysis. After you’ve followed the above steps, in addition to running your trials and collecting your data, you can determine if your results are significant enough to enact change.
The good news is you can still use an A/B calculator as a starting point. Out of the 100 website visitors for the small business we talked about earlier, there was a 16% increase in clicks due to moving the placement of the CTA button — supposedly. Now, how do we know if that’s true?
1. Rerun the test
Whenever you run an A/B test and the change isn't clearly significant (as with the modest lift in our 100-visitor example), you'll want to rerun the test.
In general, it’s best to rerun A/B tests so you’ll have sufficient data to conclude the outcome was the direct result of the change you made.
2. Analyze the data again
Let’s say that after this rerun, the outcome was that 60 people out of 100 clicked the test page’s CTA button, and 52 clicked on the original page. That’s a 15.38% increase, according to the calculator.
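You can verify the calculator's arithmetic by hand: the lift is the absolute change divided by the control's count. A one-line sketch:

```python
def relative_lift(control_clicks, test_clicks):
    """Relative increase of the test page's clicks over the control's."""
    return (test_clicks - control_clicks) / control_clicks

# Rerun: 52 clicks on the control page, 60 on the test page
print(f"{relative_lift(52, 60):.2%}")  # 15.38%
```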
3. Run your test one more time (and analyze results)
Run the test a third time. In our example, the outcome again lands in the 15–16% range, matching the first two runs. Even so, don't jump to implement the change just yet.
When you’re working with a smaller sample size, it’s difficult to know whether the consistent change is a coincidence or not. Typically, the larger the sample size, the more statistical power you have.
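The sample-size point can be made concrete with the standard normal-approximation formula for comparing two proportions. The sketch below estimates how many visitors per variant you'd need to reliably detect a jump from a 50% to a 58% click rate at 95% confidence and 80% power; the rates come from this example, but the confidence and power levels are conventional assumptions, not figures from the article.

```python
import math

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per variant to detect a shift
    from rate p1 to rate p2.

    z_alpha = 1.96 for a 95% confidence level (two-sided);
    z_beta  = 0.8416 for 80% power.
    """
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = sample_size_per_group(0.50, 0.58)
print(n)  # roughly 600 visitors per variant, far more than 100
```

This is why repeated runs matter so much for a small site: a single 100-visitor sample simply doesn't have the power to settle the question.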
4. Run another A/B test — testing your control page against itself
Technically, an assessment like this would be considered an A/A test. Test your control page on one set of 100 visitors and test the same page on a different set of 100 visitors. You can do this before or after testing a change.
The natural variance in clicks on your CTA with no changes implemented is your baseline noise. In this example, the A/A runs stayed within a 2% difference of each other. Keep in mind that this band depends heavily on sample size: smaller samples swing more, so measure your own baseline rather than assuming a fixed range. Any change that clearly exceeds the band you observe can more reasonably be treated as significant.
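Rather than assuming a fixed band, you can estimate your page's natural variance empirically by simulating many A/A splits at your observed baseline click rate. This Monte Carlo sketch is an illustration under stated assumptions: the 50% baseline rate and 100-visitor groups come from the example, while the simulation itself is our own addition, not part of any calculator.

```python
import random

def aa_variance_band(base_rate=0.50, n=100, trials=10_000, seed=42):
    """95th percentile of the absolute difference in click rate
    between two identical pages, each shown to n visitors."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        a = sum(rng.random() < base_rate for _ in range(n)) / n
        b = sum(rng.random() < base_rate for _ in range(n)) / n
        diffs.append(abs(a - b))
    diffs.sort()
    return diffs[int(0.95 * trials)]

band = aa_variance_band()
print(f"95% of A/A differences fall within ±{band:.1%}")
```

Running this shows that 100-visitor groups can swing by quite a bit on their own, which is another reason to repeat tests and, where possible, grow the sample before trusting a single result.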
Now that you have sufficient data, it’s time to see if it checks three boxes. Determine if the data is:
- Sufficient: Yes, we ran the test three times.
- Consistent: Yes, we received a similar outcome three times.
- Differentiated: Yes, we tested our control page against itself and noticed natural changes fell between -2 and 2% — meaning our consistent 15% increase in clicks for our test page was significant.
Based on these results, we can determine the data is statistically significant. We can also reasonably assume the change in the placement of the CTA button was the cause for the increase in clicks.
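The three-box check can be expressed as a small helper. Everything here is hypothetical scaffolding for illustration: `run_lifts` would be the relative lifts you recorded from each run (the 15% figure for the third run is a stand-in consistent with the example), and `aa_band` the largest natural swing you saw in the A/A test, compared directly against the lifts as the checklist above does.

```python
def checks_three_boxes(run_lifts, aa_band, min_runs=3, consistency_tol=0.02):
    """Hypothetical helper mirroring the three checks above."""
    sufficient = len(run_lifts) >= min_runs          # enough runs?
    consistent = (max(run_lifts) - min(run_lifts)) <= consistency_tol
    differentiated = min(run_lifts) > aa_band        # clears A/A noise?
    return sufficient and consistent and differentiated

# Lifts from the three example runs, vs. the observed ±2% A/A band
print(checks_three_boxes([0.16, 0.1538, 0.15], aa_band=0.02))  # True
```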
Partner with WebFX for A/B testing analysis
As you can see, A/B testing analysis can be a daunting task, especially if you aren’t sure what changes will drive the best results. Our team of 500+ subject matter experts at WebFX uses data-driven SEO strategies to create impactful landing pages with our A/B testing services. We’ve driven more than $6 billion in revenue for our clients, and we’re ready to take your landing page to the next level.