Implementing an A/B testing tool is the painless step in embracing Conversion Rate Optimisation (CRO). Perhaps you have a shiny new A/B testing tool implemented on your site and you don’t know where to start. Or maybe you’ve heard everyone else is doing it (not to mention the awesome benefits), and you want to kick off the conversation with your team or boss.
Whatever tool you have chosen or are exploring, it is important to understand that it is just an enabler. Without people and process, the tool is useless. So here is a quick guide to CRO.
What is A/B Testing?
A/B testing means serving multiple versions of a web page or experience at the same time. This allows you to:
- Test hypotheses and new ideas
- Quantify your decisions
- Measure website performance
- Provide insights about your visitors to feed into future features and designs
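Under the hood, a testing tool handles the traffic split for you, but the core idea can be sketched in a few lines. This is a hypothetical bucketing function of my own, not any particular tool’s implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("control", "variation")):
    """Deterministically bucket a user into a variant for an experiment.

    Hashing (experiment, user) means a visitor sees the same version on
    every visit, without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The assignment is stable: the same user always gets the same variant.
assert assign_variant("user-42", "homepage-cta") == assign_variant("user-42", "homepage-cta")
```

Because the hash is effectively uniform, traffic splits roughly evenly between the variants across many users.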
Why a Testing Culture is Important
Opinions are like noses…everyone has one. Whether it’s which features you should be developing or the look and layout of your platforms, A/B testing challenges these opinions and lets your audience decide through their behaviour. By implementing the right processes you can continually optimise to make your product(s) better.
(For more detail please see my post How A/B Testing and Product Development Work Together)
What to Test
Knowing where to start can be difficult and sometimes overwhelming. If you work on a website or mobile app with thousands of pages or screens, where do you begin? Firstly, you should have clearly identified (micro) KPIs for every page on your site, which feed into your higher-level (macro) KPIs, so you know which metrics to focus on page by page.
I always start by looking at the 2–3 highest-volume landing pages and/or the top of a funnel. I measure the performance of the page(s) against the relevant goals. How is the page performing? Could it do better? What elements of the page could be impacting its success? From here you can begin to build out some test ideas.
Here are a few scenarios where you might want to run a test:
- A Hunch — when there is an element on a platform that you believe isn’t working and you want to change it. Having a hunch is fine and is a good basis for a test, but it’s important to back up your hunch with data, to make sure you’re A/B testing the right page(s)
- Data — when you spot a problem in the data. For example, you notice a high drop-off from step 1 of the sign-up funnel. The data tells us WHAT is happening, but it doesn’t tell us WHY. This is where you create a hypothesis and then test it
- A New Feature — when a new feature is developed, there should always be clear KPIs, so you know exactly what metric(s) you are trying to impact. You can use this KPI to run tests and let your users decide whether it is the right decision or not
- Iteration — sometimes when you run a test you will see results you are unable to explain. This might create a new hypothesis which you might want to test further, to find answers
A/B Testing Process
When running multiple tests, things can get messy. Here are seven steps to consider to keep your A/B testing programme as effective as it can be:
1. Hypothesis
Every test MUST have a hypothesis, so you establish why you’re running the test in the first place, and it can then be proved or disproved. A hypothesis should have a problem, a solution and an expected result
2. Plan
- Create variations — create as many variations per test as makes sense, based on the assumptions you have made in your hypothesis and the traffic volumes you have available
- Tracking — how are you going to track your goals? What custom events do you need to implement in your testing tool and/or analytics tool?
- Audience segments — which segments do you want to test on? For example, is there a specific country or mobile device you want to target?
- Approval — get internal approval for the changes
3. Build & QA
- Using your preferred A/B testing tool, build the test inside the UI
- QA every variation in all the major browsers
4. Push Live
- Push the test live
5. Analyse
- Review the results of the test in your chosen A/B testing or analytics tool
- Don’t forget to segment your data. You may see zero uplift when looking at the high-level data, but when you segment by, for example, traffic source, device or country, you might see an uplift. Don’t treat every user the same
6. Recommendations
Make recommendations off the back of the data:
- Is there a clear winner? If so, should you roll out the new experience, or run an iterative test to build on the success and try to drive an even greater uplift?
- Is the test generating more questions than answers? This is OK. These questions may point to more assumptions that you want to test (even if you don’t see a clear winner, this could still apply)
7. Record
Record test results somewhere organised and accessible. Test results are another source of data insights, just like someone logging into an analytics tool to find insights
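The point about segmenting your analysis is worth making concrete. Here is a small sketch with entirely made-up numbers, showing how an aggregate view can even reverse the per-segment picture when traffic is unevenly split across devices:

```python
# Hypothetical results for one test, split by device: (visitors, conversions).
# All numbers are made up purely to illustrate why segmenting matters.
results = {
    "desktop": {"control": (5000, 250), "variation": (2000, 110)},
    "mobile":  {"control": (2000, 40),  "variation": (5000, 110)},
}

def rate(visitors, conversions):
    return conversions / visitors

# Per-segment view: the variation wins on both desktop and mobile.
for device, variants in results.items():
    print(device,
          f"control {rate(*variants['control']):.1%}",
          f"variation {rate(*variants['variation']):.1%}")

# Aggregate view: summed across devices, the variation appears to LOSE,
# because its traffic skews towards the lower-converting mobile segment.
total_control = [sum(v) for v in zip(*(r["control"] for r in results.values()))]
total_variation = [sum(v) for v in zip(*(r["variation"] for r in results.values()))]
print(f"overall: control {rate(*total_control):.1%} "
      f"vs variation {rate(*total_variation):.1%}")
```

This is the classic Simpson’s paradox, and it is exactly why looking only at high-level data can lead you to the wrong recommendation.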
Below is an infographic, designed by Seb Wals, that helps to illustrate the process above for running A/B tests:
- It needs planning and sound methodology
- It needs a hypothesis, so you have something to prove or disprove
- It should always be data driven
- It needs clear KPIs, so you can clearly measure success
- It should not have too many KPIs. Keep it simple
- When thinking about success metrics, also think about whether your test could cannibalise other KPIs on the platform — these may be worth measuring as well
- You can’t fail when testing — there are always insights
- A/B test results must be statistically significant before they can be acted upon
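To make that last point concrete, here is a minimal sketch of a standard two-proportion z-test in plain Python. The function name and the visitor/conversion numbers are my own illustration, not part of any particular testing tool:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing the conversion rates of variants A and B.

    Returns (z, p_value). A small p-value means the observed difference
    is unlikely to be pure chance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return z, 2 * (1 - phi)

# Made-up numbers: control converted 200/10,000; variation 250/10,000.
z, p = two_proportion_z_test(200, 10_000, 250, 10_000)
significant = p < 0.05  # 0.05 is a common, though arbitrary, threshold
```

Your testing tool will run this kind of calculation (or a more sophisticated one) for you; the key habit is simply not to call a winner before the numbers support it.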