A Quick Guide To A/B Testing

Implementing an A/B testing tool is the painless part of embracing Conversion Rate Optimisation (CRO). Perhaps you have a shiny new A/B testing tool implemented on your site and you don't know where to start. Or maybe it's not new, but no one in your organisation is using it. It might be that you've heard everyone else is doing it (not to mention the awesome benefits), and you want to kick off the conversation with your team or boss.

Whatever tool you have chosen or are exploring, it is important to understand that it is just an enabler. Without people and process, the tool is useless. So here is a quick guide to CRO.

What is A/B & MVT testing?

A/B and multivariate (MVT) testing means serving multiple versions of a web page or experience at the same time and measuring how each performs. This allows you to:

  • Test hypotheses and new ideas
  • Quantify your decisions
  • Measure website performance
  • Provide insights about your visitors to feed into future features and designs
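To make "serving multiple versions at the same time" concrete, here is a minimal sketch of how a testing tool might split traffic. It assumes a stable user ID is available; the function and experiment names are hypothetical, not any particular vendor's API, and real tools handle this for you.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "variant_b")):
    """Deterministically bucket a user so they see the same version every visit."""
    # Hash the experiment name together with the user ID so the same user
    # can land in different buckets across different experiments
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-123", "homepage-hero"))  # same answer on every visit
```

The key property is that assignment is deterministic, so a returning visitor always sees the same version and the two groups stay cleanly separated.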

Why a Testing Culture is Important

Opinions are like noses… everyone has one. Whether it's which features you should be developing or the look and layout of your platforms, testing challenges these opinions and lets your audience decide through their behaviour. By implementing the right processes you can continually optimise to make your product(s) better.

What to Test

Knowing where to start can be difficult and sometimes overwhelming. If you work on a website or mobile app with thousands of pages or screens, where do you begin? First, you should have clearly identified (micro) KPIs for every page on your site, each feeding into your higher-level (macro) KPIs, so you know which metrics to focus on page by page.

I always start by looking at the 2-3 highest-volume landing pages and/or the top of a funnel, and measure the performance of the page(s) against the relevant goals. How is the page performing? Could it do better? What elements of the page could be impacting its success? From here you can begin to build out some test ideas.
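As an illustration, here is a toy first pass in Python: rank pages by traffic, then compare each page's conversion rate. The paths and figures are entirely made up.

```python
# Hypothetical landing-page data: sessions and goal completions per page
pages = {
    "/home":    {"sessions": 50_000, "conversions": 1_500},
    "/pricing": {"sessions": 22_000, "conversions": 1_100},
    "/blog":    {"sessions": 18_000, "conversions": 180},
}

# Rank by traffic, then eyeball each page's conversion rate against its goal
for path, s in sorted(pages.items(), key=lambda kv: -kv[1]["sessions"]):
    rate = s["conversions"] / s["sessions"]
    print(f"{path}: {s['sessions']:,} sessions, {rate:.1%} conversion rate")
```

A high-traffic page with a low conversion rate (like /blog here) is usually a better first candidate for testing than a low-traffic page, simply because you'll reach a conclusive result faster.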

Here are a few scenarios where you might want to run a test:

  • A Hunch - when there is an element on a platform that you believe isn't working and you want to change it. A hunch is a fine basis for a test, but it's important to back it up with data to make sure you're testing the right page(s)
  • Data - when you spot a problem in the data. For example, you notice a high drop-off from step 1 of the sign-up funnel (see the sketch after this list). The data tells you WHAT is happening, but it doesn't tell you WHY. This is where you create a hypothesis and then test it
  • A New Feature - when a new feature is developed, there should always be clear KPIs, so you know exactly which metric(s) you are trying to impact. You can use these KPIs to run tests and let your users decide whether the feature was the right decision
  • Iteration - sometimes a test will produce results you can't explain. This might generate a new hypothesis that you want to test further to find answers
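For the "Data" scenario above, here is a toy sketch of spotting a drop-off by walking down funnel step counts. The steps and numbers are hypothetical.

```python
# Hypothetical funnel: (step name, users reaching that step)
funnel = [
    ("Landing page", 10_000),
    ("Sign-up step 1", 4_200),
    ("Sign-up step 2", 2_100),
    ("Account created", 1_900),
]

# Compare each step with the next to find where users are leaving
for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop = 1 - next_users / users
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

Here the 50% fall between steps 1 and 2 is the WHAT; forming a hypothesis about WHY (too many form fields? unclear copy?) and testing it is the next step.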

Testing Process

Below is a graphic that illustrates the process for running a test, designed by my clever friend, Seb Wals.

Tips for Effective Testing

  • Testing needs planning and sound methodology
  • Testing needs a hypothesis, so you have something to prove or disprove
  • Testing should be data driven
  • Testing needs clear KPIs, so you can clearly measure success
  • Testing should not have too many KPIs. Keep it simple
  • When thinking about success metrics, also think about whether your test could cannibalise other KPIs on the platform - these may be worth measuring as well
  • You can’t fail when testing - there are always insights
  • Results must be statistically significant before you act on them (a minimal sketch of a significance check follows)
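On that last point, here is a minimal sketch of a significance check for a simple A/B test: a two-sided, two-proportion z-test using only the Python standard library. The conversion counts are made up, and real testing tools report this for you.

```python
from math import sqrt, erfc

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return erfc(abs(z) / sqrt(2))                   # two-sided p-value

# Hypothetical result: 4.0% vs 5.0% conversion over 5,000 visitors each
p = ab_test_p_value(conv_a=200, n_a=5_000, conv_b=250, n_b=5_000)
print(f"p-value: {p:.4f}")   # act on the result only if p is below your threshold (commonly 0.05)
```

Decide your significance threshold and sample size before the test starts; stopping as soon as the p-value dips below 0.05 inflates the chance of a false positive.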
