A/B Testing: Making Sure It Isn’t a Waste of Time

A/B testing has become a popular buzzword in the tech industry, and for good reason. It’s a great way not only to test your assumptions but ultimately to build a product your users will enjoy. Not sure which pricing model will increase your revenue, or which interface design creates the best user experience? Don’t just guess – test it and make a data-informed decision. But to get reliable results, A/B tests can’t be done spontaneously: they involve a lot of planning, analysis and patience.


Do Your Homework

You should begin by asking yourself a few questions. Are there webpages in your sales funnel that get loads of traffic but, for some reason, convert poorly? Do your key webpages have a huge bounce rate? At which step are you losing your users? There’s no need to guess. Get your answers by using tools like Google Analytics, heat maps, user recordings, surveys and user research to gather proper information. The data you collect will also help you build a hypothesis for the A/B test: for instance, calculate your baseline conversion rate and decide on the minimum detectable effect you’ll use later.
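To make those two numbers concrete, here’s a minimal Python sketch. The figures are hypothetical placeholders – plug in your own analytics data:

```python
# Baseline conversion rate: conversions divided by total visitors.
# These numbers are hypothetical; use your own analytics exports.
visitors = 48_000        # unique visitors to the checkout page
conversions = 1_440      # completed purchases in the same period

baseline_rate = conversions / visitors            # 0.03 -> 3%

# Minimum detectable effect (MDE): the smallest relative lift you
# care about detecting. The smaller the MDE, the larger the sample
# you will need in order to detect it.
mde_relative = 0.10                               # look for a 10% relative lift
target_rate = baseline_rate * (1 + mde_relative)  # 3.3%

print(f"baseline: {baseline_rate:.1%}, target: {target_rate:.1%}")
```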

Once you have a clearer picture of the problem you’re facing, do a bit more research on it. Chances are there is already published research or there are case studies about it. Maybe the problem has been discussed and analyzed before, and some of your answers are one Google search away? Of course, keep in mind that what works for others might not necessarily work for you. So think critically about the information you’ve found and how it can help you run your own A/B test.


Define “What” and “How”

Just like with any scientific test, it’s essential to have a clear hypothesis to test, and to consider what you want to improve with this A/B test. For instance, suppose you found that users aren’t noticing a button because of its colour. Now think about your goal – in other words, why you want them to notice it. For many, of course, the main goal is to increase sales (the main macro conversion). But that doesn’t have to be the only goal – you may want to drive traffic, increase revenue or something else entirely. With that in mind, you can finally set a clear hypothesis, which might sound like:

“User recordings showed that the check-out button isn’t noticeable and users struggle to find it, so changing its colour could help draw attention and improve conversions.”

A clear hypothesis like this not only establishes the problem but also helps you figure out what and how you’re going to test. In this case, you would be testing a single element – the colour of a button – which means all the other elements stay identical. However, you don’t have to limit yourself to one element or a single test variation: Google famously tested 41 shades of blue to find the one that brought the best conversion rate.

Tests with a single element are simpler and more accurate: you understand what works better and why, and you can reuse that insight in future decisions, although testing one element at a time is more time-consuming. Testing several elements at once will show you “the winner”, but it won’t give you a clear understanding of why it wins – unless you run a multivariate test (MVT). That’s a more complex form of A/B test where you take two or three elements and create every possible combination of their variants to understand which combination works best and why. So it’s important to understand what kind of information you want to gather from the A/B test before picking the way to run it.
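To see why multivariate tests grow quickly, here’s a minimal Python sketch of the combinatorics – the elements and variants below are hypothetical:

```python
from itertools import product

# Hypothetical elements under test, each with its own variants.
button_colours = ["green", "orange"]
headlines = ["Buy now", "Get started", "Try it free"]

# A multivariate test splits traffic across ALL combinations:
# 2 colours x 3 headlines = 6 variations to fill with visitors.
variations = list(product(button_colours, headlines))

for i, (colour, headline) in enumerate(variations, start=1):
    print(f"variation {i}: colour={colour!r}, headline={headline!r}")
```

Every extra element multiplies the number of variations, and each variation needs enough traffic on its own – which is why MVTs demand far more visitors than a simple two-variant test.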


Leave No Room For Mistakes

One of the most crucial steps is checking that everything works properly before you actually start your A/B test. Make sure all the links driving traffic work, and test and debug your product so that bugs don’t skew the A/B test. Everything has to work smoothly on all devices and browsers, in all situations. Check that the data is tracked properly, all the pixels are set up and you’re getting all the information you need. Also, don’t forget that you need to drive traffic to both the original and the test variation to get reliable results. So run a proper quality assurance pass before starting your A/B test.
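On the traffic-splitting point, a common approach is deterministic bucketing: hash a stable user ID so each visitor always lands in the same variation. A minimal Python sketch, with a hypothetical experiment name and a 50/50 split:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "checkout-button-colour") -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing (experiment + user_id) gives a stable 50/50 split: the
    same user always sees the same variation, and different
    experiments bucket users independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100           # a number in 0..99
    return "control" if bucket < 50 else "treatment"

print(assign_variant("user-42"))             # stable across calls
```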


Concluding Your Test

When do you end the A/B test, and how do you know whether the results are truly reliable? This is where it gets statistical. To make sure your changes really perform better than the original, you need to reach statistical significance – a measure of how confident you can be that the difference you’re seeing isn’t just random chance. Usually, an 80% significance level is used as a minimum, meaning you can be 80% sure the result is real; to be more certain, aim higher, like 95% or even 99%. Based on the significance level you want to reach, together with your baseline conversion rate and minimum detectable effect, you calculate the adequate sample size – the number of visitors you need to reach before you can trust your results. Usually, it’s recommended to run your A/B test for two full weeks, but if the results are still unclear, it can run for three. However, make sure it’s not longer than six weeks – otherwise, too many other variables will start affecting the data.
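Here’s a rough sketch of that sample-size calculation, using the standard two-proportion approximation and only Python’s standard library. Treat it as an illustration rather than a replacement for a proper power calculator; the input numbers are hypothetical:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde_relative: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant for a two-proportion test.

    baseline     -- current conversion rate, e.g. 0.03 for 3%
    mde_relative -- smallest relative lift worth detecting, e.g. 0.10
    alpha        -- significance threshold (0.05 ~ 95% significance)
    power        -- chance of detecting a real effect (80% is common)
    """
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1

# Hypothetical inputs: 3% baseline, looking for a 10% relative lift.
print(sample_size_per_variant(0.03, 0.10))  # ~53,000 visitors per variant
```

With a 3% baseline and a 10% relative lift, this comes out to roughly 53,000 visitors per variant – a useful reality check on how long the test needs to run.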

With all the data gathered, it’s time to analyze and interpret it. Compare the conversion rates the test shows against your minimum detectable effect to judge whether the lift can really be trusted. Think about outside factors that might have affected the results: it could have been the holidays, an undetected bug discovered later, or even that the change you tested was too drastic and drew attention for the wrong reasons. Also, don’t forget that nine times out of ten you may get negative results – but that doesn’t mean the test was a failure. Knowing what doesn’t work is also genuinely helpful for further product development. When analyzing and interpreting the data, you need to find the balance between trusting the numbers alone and finding logical explanations for the results you got.
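For the “trusting the numbers” side, the classic check is a two-proportion z-test on the raw conversion counts. A minimal Python sketch with hypothetical results:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: control 1,440/48,000 (3.0%), variant 1,630/48,000 (~3.4%)
p_value = two_proportion_z_test(1_440, 48_000, 1_630, 48_000)
print(f"p-value: {p_value:.4f}")  # below 0.05 -> significant at the 95% level
```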


A/B testing is one of the easiest ways to test your assumptions, but that doesn’t mean it’s simple. It requires time, dedication and enough resources to run a reliable test. But with enough preparation and practice, it becomes less complicated over time – and as a result, you know you’re really building successful products.