A/B split testing allows you to compare two versions of a landing page and determine which one is more effective for your campaign.
Example: Looking at your landing page data, you observe that visitors stay on the page for an average of only 7 seconds.
You conclude that a more compelling headline will get your target market to read the content and spend more time on the page, ultimately leading to more conversions.
With split testing, you’re able to accept or reject your hypothesis.
These are the steps to doing a split test from beginning to end.
- Your reason for split testing should be data-driven.
Why are your visitors only spending an average of 7 seconds on your page before leaving?
- Then create a hypothesis
Ask yourself, “Would improving the headline get a better conversion rate?”
- You’ll need to determine the sample size required to reach statistical significance. This is the number of visitors each of your pages will need before you can be confident about your results.
The accepted level of significance for conversion optimization is 95%. A 95% level of significance means you can be 95% sure that the change in your conversion rate is the result of improving your landing page. Essentially, at the conclusion of your test there’ll be only a 5% chance of seeing a difference this large purely by chance.
There are tools that can help you calculate sample size – Optimizely’s sample size calculator is built for exactly this. Here’s what you’ll have to input for it to give you an accurate sample size:
Statistical significance: Optimizely’s calculator allows you to raise or lower this, but it’s not recommended you use anything under 95%.
Baseline conversion rate: The conversion rate of your original (control) page.
Minimum detectable effect: the smallest change in conversion rate you want the test to be able to detect. A 20% minimum detectable effect means the test is sized to reliably detect a relative change of 20% or more in your baseline conversion rate; smaller improvements may go unnoticed at your chosen sample size.
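If you'd rather compute the sample size yourself, here's a minimal Python sketch using the classical fixed-horizon formula for a two-proportion test. Note this is a standard textbook approximation, not necessarily the exact formula Optimizely's calculator uses, and the 5% baseline rate below is just an illustration:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed on EACH page.

    baseline -- control conversion rate (0.05 == 5%)
    mde      -- minimum detectable effect, relative (0.20 == 20%)
    alpha    -- significance threshold (0.05 == 95% significance)
    power    -- chance of detecting a true effect of size mde
    """
    p1 = baseline
    p2 = baseline * (1 + mde)  # the conversion rate you hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# Example: 5% baseline conversion rate, 20% minimum detectable effect
print(sample_size_per_variant(0.05, 0.20))
```

With these inputs the formula calls for roughly 8,000–8,200 visitors per page. Notice how a small baseline rate or a small detectable effect drives the required sample size up quickly.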
Once you’ve got your sample size…
- Make your adjustments
Change your headline, but make sure your original landing page remains the same. Otherwise, your baseline for testing won’t be accurate.
- Eliminate adverse variables
Make sure traffic sources and referring ads are the same for both pages and that other variables that could result in a misleading outcome are eliminated as much as possible.
- Make sure everything is working
Examine everything before your test goes live. Make sure your landing page looks the same in every browser, your CTA button is working and all the links in your ads are correct.
Run through every aspect of your campaign to ensure nothing threatens the accuracy of your results.
- Drive traffic to your pages
Be careful where your traffic is coming from and make sure it’s coming from the same place.
Don’t send promotional traffic from your email list to a page that you’re running a test on. People who subscribe to your list like you way more than your average visitor, and you don’t want to optimize your page for this loyal traffic and get it wrong.
Once you’ve picked your traffic sources, keep running your test until both pages (control and variation) have hit the sample size you got in your pre-testing calculations. If you hit that number in less than a week, keep running the test for at least a full week. This is because your visitors will be more responsive to your marketing messages on some days than others, and this will have a significant impact on your conversions.
When you hit your sample size it’s time to look at the results.
- Analyze and optimize
Look at how your variation did. If your headline change produced a good lift for your landing page, great! But, it doesn’t mean it’s the best it could be. Keep testing. No campaign is perfect.
And if it didn’t produce any lift, or the variation performed worse, it doesn’t mean you failed. You’ve simply learned what doesn’t improve conversions on your page. Move on and keep testing.
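To decide whether a lift is real or just noise, you can run a two-proportion z-test on the raw counts from both pages. Here's a minimal Python sketch (the visitor and conversion numbers are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    conv_* -- number of conversions on each page
    n_*    -- number of visitors to each page
    Returns the p-value; below 0.05 means the difference is
    significant at the 95% level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: control converted 400 of 8,200 visitors,
# the new headline converted 480 of 8,200.
p = z_test(400, 8200, 480, 8200)
print(f"p-value: {p:.4f}")  # well below 0.05, so the lift is significant
```

If the returned p-value is below 0.05, the difference is significant at the 95% level discussed earlier; otherwise, you don't yet have enough evidence that the headline change mattered.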