How To Create an AB Test Plan


#1 – What’s your test hypothesis? 

Your hypothesis is the foundation upon which the rest of your test plan is built. Without a solid hypothesis there is little point in proceeding with a test. Tests are used to ‘prove’ or ‘disprove’ a theory – make sure you know what that ‘theory’ (or ‘hypothesis’) is. Here’s an example of a good hypothesis:

“Including an additional call to action at the top of the basket page will improve the user experience by helping customers proceed to checkout, and will therefore result in a significant lift in sales.”
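
If it helps to keep hypotheses consistent across your testing programme, you can capture each one in a simple structured form. Here’s a minimal sketch in TypeScript – the field names are purely illustrative, not taken from any particular tool:

```ts
// Structure a hypothesis so the change, the expected effect and the
// measurable outcome are all explicit. Field names are illustrative.
interface TestHypothesis {
  change: string;         // what you will alter
  expectedEffect: string; // why you believe it will help
  primaryMetric: string;  // how you will prove or disprove it
}

const basketCtaHypothesis: TestHypothesis = {
  change: "Add a second call to action at the top of the basket page",
  expectedEffect: "Helps customers proceed to checkout more easily",
  primaryMetric: "Basket-to-checkout progression rate",
};
```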

#2 – Consider what platforms to test on

Do you really need to run your test on desktop, mobile and tablet? Is the test page responsive? If so, you’ll need to consider the impact of each variation of your test at each breakpoint. Alternatively, you might choose to run the test only on certain breakpoints.
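
If your testing tool lets you write custom activation code, breakpoint targeting can be as simple as a media query check. A minimal sketch using the standard window.matchMedia browser API – the pixel thresholds and the activateVariant() stub are assumptions, not values from any specific tool:

```ts
// Placeholder: hand off to your testing tool's activation call here.
function activateVariant(): void {}

// Check which breakpoint the visitor is on before entering them.
const isDesktop = window.matchMedia("(min-width: 1024px)").matches;
const isTablet = window.matchMedia(
  "(min-width: 768px) and (max-width: 1023px)"
).matches;

if (isDesktop || isTablet) {
  // Only enter visitors into the test on the breakpoints in scope.
  activateVariant();
}
```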

#3 – Define your segmentation

Do you want to track new vs returning visitors in order to measure any differences in behaviour? Or perhaps you’re more interested in seeing how desktop users compare to tablet users. Segments need to be considered in the planning phase if you want to run ‘up front’ segmentation rather than having to do complex post-test analysis using data exports.
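
As a rough illustration of ‘up front’ segmentation, you could tag each visitor before the test activates. A minimal sketch assuming localStorage is available – the storage key and the tagSegment() helper are hypothetical stand-ins for whatever segmentation API your tool exposes:

```ts
// Hypothetical helper: forward the segment to your testing tool here.
function tagSegment(name: string, value: string): void {
  console.log(`segment ${name} = ${value}`);
}

// Decide new vs returning before the visitor enters the test.
const KEY = "hasVisitedBefore";
const isReturning = localStorage.getItem(KEY) !== null;
localStorage.setItem(KEY, "1");

tagSegment("visitorType", isReturning ? "returning" : "new");
```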

#4 – Exclude low traffic browsers

Make life easier for your developers and QA teams by excluding browsers which receive low traffic and are likely outdated (and therefore more difficult to build tests for). These will typically include the older Internet Explorer (IE) browsers such as IE6, 7 and 8, but may also include less well-known browsers such as Opera. An alternative, and arguably superior, approach to an ‘exclude’ rule is simply to define the browsers you want to ‘include’ in the test. This removes the possibility that someone might view your test on a browser that wasn’t considered.
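				
Here’s a minimal sketch of an ‘include’ rule, assuming simple user-agent matching. Bear in mind that UA strings are unreliable (Chrome’s UA also contains ‘Safari/’, for instance), so treat this as illustrative – most testing tools offer built-in browser targeting that is preferable:

```ts
// Allow-list of browsers explicitly considered for this test.
const includedBrowsers = [/Chrome\//, /Firefox\//, /Safari\//];

// Only run the test if the visitor's UA matches a listed browser.
const shouldRunTest = includedBrowsers.some((pattern) =>
  pattern.test(navigator.userAgent)
);

if (shouldRunTest) {
  // Proceed: anything not on the list simply never enters the test.
}
```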

#5 – Be radical!

If you choose variants that are fairly similar to each other, you’ll likely need to run the test for an extended period to reach statistical significance. To give yourself the best chance of observing key differences between your experiments, look for variants that contrast with each other as much as possible. For example, you’re much more likely to see an impact when changing a button label from ‘Buy Now’ to ‘Add to Basket’ than when changing it to ‘Buy This Now’. Similarly, when testing colours you’ll see more dramatic results from distinctly different colours than from a dark shade of green against a slightly less dark shade of green.
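
To see why contrast matters, consider the standard two-proportion sample-size formula: the number of visitors needed per variant grows roughly with the inverse square of the difference you’re trying to detect. A minimal sketch assuming 95% confidence and 80% power, with illustrative baseline conversion rates:

```ts
// Approximate visitors needed per variant to detect a shift from
// baseline rate p1 to variant rate p2 (two-proportion z-test).
function sampleSizePerVariant(p1: number, p2: number): number {
  const zAlpha = 1.96; // two-sided, 95% confidence
  const zBeta = 0.84;  // 80% power
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p1 - p2) ** 2);
}

// A bold change (5% -> 6%) vs a timid one (5% -> 5.2%):
console.log(sampleSizePerVariant(0.05, 0.06));  // roughly 8,100 each
console.log(sampleSizePerVariant(0.05, 0.052)); // roughly 190,000 each
```

The timid variant needs over twenty times the traffic of the bold one, which is exactly why near-identical shades of green can leave a test running for months.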

#6 – Track what’s required

With the flexibility that an optimisation tool offers, it’s often tempting to track as much as possible. This is especially true when a website is not tagged up correctly for analytics. The key here is to track what’s required – nothing more, nothing less. Typically this will include the funnel steps from the test page through to confirmation, plus a handful of clicks. Anything more will usually result in project delays – additional build and QA time, and often a longer run to reach statistical significance for each conversion point. More often than not, when the list of tracking requirements is long, most of the metrics will be ignored once the test is live. One tip when choosing metrics: refer back to your hypothesis and ask how tracking each particular metric will help you prove or disprove it.
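
As an illustration, a lean tracking plan for the basket-page hypothesis from step #1 might look like the sketch below. The shape of this config is an assumption made for the example – every tool has its own goal-definition format:

```ts
// A deliberately short tracking plan tied back to the hypothesis.
const trackingPlan = {
  hypothesisCheck:
    "Does the extra CTA lift basket-to-checkout progression?",
  funnelPages: ["/basket", "/checkout", "/payment", "/confirmation"],
  clickGoals: ["top-basket-cta", "bottom-basket-cta"],
  // Every extra goal adds build/QA time and can lengthen the run
  // needed to reach significance on each conversion point.
};
```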

#7 – List key QA scenarios

This is especially important when testing on dynamic pages such as search results pages. It’s much easier for your quality assurance (QA) team to follow a list of predefined scenarios than to take a haphazard approach. A methodical approach is far more likely to identify bugs and potential oversights, and it also makes it easier to document what has been checked.

#8 – Map out page flows

It may sound obvious, but it’s not uncommon for developers and QA to look at a detailed test plan and realise the most basic thing has been missed: the location of the test page and, more importantly, how to reach it. Include page flow diagrams in your test plans to make it clear how to reach the test page and how to trigger each of the conversion points (both page conversions and click conversions).

#9 – Clearly state your primary metric(s)

Whether it’s an AB test or an MVT (multivariate test), any well-thought-out optimisation plan should have a clear primary metric and secondary metrics. It’s common to default to ‘Confirmation’ as the primary metric for all tests, but unless you’re testing deep in the funnel or giving away a promo code it’s unlikely you’ll have a significant impact on confirmation. Instead, aim to increase lift to the next step in the journey. If testing on a product page, this will be either clicks on ‘Add to basket’ or visitors reaching the basket page. If testing a homepage carousel, it might simply be getting more people to interact with the carousel. You can of course have a secondary or additional key metric, which will more often than not be ‘confirmation’.
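
One way to keep that hierarchy honest is to write it down explicitly in the plan. A minimal sketch with illustrative names, showing ‘confirmation’ demoted to a secondary metric for a product-page test:

```ts
// Make the metric hierarchy explicit: the primary metric measures
// lift to the *next* step, with confirmation tracked as secondary.
const productPageTestMetrics = {
  primary: "add-to-basket clicks",
  secondary: ["basket page views", "order confirmation"],
};
```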

#10 – Grab screenshots!

Once the test has been built and passed QA, make sure you take screenshots of each variant, including the control. At this stage in the process you may well know the variants by heart, but it’s much easier to communicate to key stakeholders what is being tested when you can clearly show them the different variants rather than trying to describe them – after all, a picture is worth a thousand words!

Published on April 19th, 2017

Author: Phil Williams

Phil is the founder of CRO Converts. He has had the opportunity of creating successful testing and personalisation strategies for many of the UK and Europe’s leading brands.
