This Optimisation 101 series demystifies the art and science of conversion optimisation.
What is A/B testing?
In a nutshell, A/B testing is a way of showing visitors to your website two versions of a web page to see which one performs better. The version with the higher conversion rate at the end of your test is declared the winner!
How does it work?
In a standard A/B test, two versions are tested: version 'A', the existing page, more commonly known as the 'control', and version 'B', the 'challenger'.
To compare two web pages fairly, you need to show the two versions (A and B) to similar types of visitors at the same time. Traffic is split 50/50 and you measure how visitors interact with each version against your conversion goal. Your test ends when it reaches statistical significance, which tells you with confidence whether your challenger has outperformed the control or not.
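The mechanics above can be sketched in a few lines of Python. This is a minimal illustration, not a production testing tool: the function names, the hash-based bucketing, and the 5% significance threshold are my own illustrative choices. It shows the two core pieces — deterministically splitting traffic 50/50, and checking significance with a standard two-proportion z-test.

```python
import hashlib
import math

def assign_variant(user_id: str) -> str:
    """Deterministically split traffic 50/50 by hashing the user ID,
    so a returning visitor always sees the same version."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

def z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: returns the two-sided p-value for the
    difference in conversion rate between control (A) and challenger (B)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Example: 5,000 visitors per arm; B converts at 12% vs A's 10%
p = z_test(500, 5000, 600, 5000)
print(f"p-value: {p:.4f}")  # well below 0.05 -> significant at the 5% level
```

A p-value below your chosen threshold (conventionally 0.05) is what "statistical significance" means in practice: the observed difference is unlikely to be down to chance alone.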
The beauty of A/B testing is that you can test virtually anything on a website or in your marketing materials. For example:
- Calls to action (e.g. Sign up now, Free trial, Register today etc)
- Landing page copy, headlines and images
- Value proposition messaging
- Page layouts
- Button colours
- Email subject lines and layout
If you want to test more than one thing on a web page and compare all the combinations against each other at the same time, you'd need to run a multivariate test instead. These are more complex to set up - you can read more about how they work here.
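Part of what makes multivariate tests more complex is that the combinations multiply quickly, and every combination needs its own slice of traffic. A quick sketch (the headlines, buttons and images here are made-up examples):

```python
from itertools import product

# A multivariate test crosses every option of each element,
# so the number of page versions multiplies quickly.
headlines = ["Save time today", "Work smarter"]
buttons = ["Sign up now", "Start free trial"]
images = ["hero_a.jpg", "hero_b.jpg", "hero_c.jpg"]

combinations = list(product(headlines, buttons, images))
print(len(combinations))  # 2 x 2 x 3 = 12 page versions to test
```

Twelve versions means each one sees roughly a twelfth of your traffic, which is why multivariate tests need far more visitors than a simple A/B test to reach significance.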
Why run A/B tests?
Marketers and digital product owners run A/B tests because they're often a relatively quick and measurable way to improve website conversion rates, which in turn makes marketing spend more efficient.
Not only that, it’s a great way to learn about customer behaviour, carefully test new user experiences and continuously improve website functionality over time.
A/B testing gotchas
With A/B testing you need to give every test enough time to run, otherwise you'll struggle to find a winner that is statistically significant. Running a test to significance mitigates the risk of rolling out a version of a web page that doesn't hold its performance lift once it's live for all your website visitors.
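"Enough time" can be estimated up front rather than guessed. A common approach is the standard sample-size formula for comparing two proportions; the sketch below hardcodes the conventional 5% significance level and 80% power, which are widespread defaults rather than fixed rules:

```python
import math

def sample_size_per_variant(baseline_rate, min_relative_lift):
    """Rough number of visitors needed in EACH variant to detect a given
    relative lift in conversion rate, via the standard two-proportion
    sample-size formula (two-sided 5% significance, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_relative_lift)
    z_alpha = 1.96  # critical value for two-sided alpha = 0.05
    z_beta = 0.84   # critical value for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 5% baseline conversion rate:
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 visitors per variant
```

Divide that figure by your daily traffic per variant and you have a realistic minimum test duration. Note how quickly the required sample grows as the lift you want to detect shrinks - small expected improvements need long tests.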
Conversely, if you run a test for too long you may find you get skewed results because of too many other variables impacting performance on your web page (e.g. an increase in your brand awareness improving customer confidence in your brand).
With so many options to test, it's easy to make the mistake of taking an unstructured approach to A/B testing. It's worth spending time thinking about the changes that will give you the biggest impact in terms of ROI, creating a structured testing framework, and not being afraid to test more extreme changes to get conversion improvements.