How To Design AB Tests for Low Traffic Sites – Pro Tips & Tricks
The design of AB tests for low traffic sites is, and should be, different from that of high traffic sites. There are some very specific considerations you’ll want to be aware of. High traffic sites have many luxuries which smaller, less-visited sites simply don’t have. I’ve run AB tests for low traffic sites for a number of years and have picked up a thing or two which should help you maximise your chances of success.
You don’t need to avoid running AB tests for low traffic sites – you just need to plan effectively and efficiently.
What is a ‘Low Traffic Site’?
I guess first off I should define what a low traffic site is. If you’re getting fewer than 1,000 unique visitors per month I would class yours as a low traffic site. Of course, when it comes to AB testing it isn’t just about the traffic you’re getting – the sample size of your key metric is the main consideration (more on that later).
For retail sites another easy way to determine if you are a low traffic site would be to look at how many sales you’re making in a week. If this number is less than 30 we can safely say you’re in this group.
Go Easy on the Experiments
This first point is pretty simple. Don’t design a conversion rate optimisation (CRO) test with more than two or three experiments (or four as an absolute maximum). Every test you run splits your traffic between the different experiments. The more experiments, the more you dilute your traffic and reduce the impact you will have. Another way to look at it would be that if you have lots of experiments you’ll need to run your test for a very long time.
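To make the dilution concrete, here’s a quick back-of-the-envelope sketch in Python. The visitor numbers and required sample size are illustrative assumptions, not figures from any real test:

```python
# Back-of-the-envelope: each extra experiment splits the same traffic further,
# so the time needed to collect a given sample per experiment grows linearly.
monthly_visitors = 1000       # assumed: roughly the "low traffic" threshold above
needed_per_variant = 1500     # assumed sample size needed per experiment

for variants in (2, 3, 4, 8):
    per_variant_monthly = monthly_visitors / variants
    months = needed_per_variant / per_variant_monthly
    print(f"{variants} experiments: {per_variant_monthly:.0f} visitors each per month, "
          f"~{months:.0f} months to test")
```

With 8 experiments the same test takes four times as long as a straight AB test – which mirrors the six-months-and-counting story below.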
One low traffic client I worked with insisted on running a large test (8 experiments). The test went ahead despite my best efforts to avoid it, and after running the test for 6 months (yes, you read that right) a decision was taken to end the test. After all that time we still hadn’t reached statistical significance – nor even come close.
Don’t Even Think About Multivariate Tests (MVTs)
Multivariate tests are by their very nature larger tests. Even the smallest MVT will result in four experiments, in which case you may as well just run an ABn test. The test example I mentioned earlier, which had 8 experiments, was actually an MVT. The client was looking forward to getting insights around what specifically was driving change on the page. Unfortunately they never got this far as their traffic levels simply didn’t support running this type of test.
Think BIG and Radical
Boring button tests and copy colour changes just aren’t going to cut it. If you’re testing something that isn’t immediately obvious, such as a dark blue button vs. a light blue button, then you’ll need a large sample size (i.e. a lot of conversions) to detect any statistically significant difference between the two.
Try to think bigger and bolder. If it’s got to be a button test then go for the most different button you can think of. If your control is a small dark blue button with no hover effects then try a large red button with hover effects. This at least gives you a good chance of your visitors being impacted by the differences.
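The sample size mathematics behind this is worth seeing once. Below is the standard two-proportion sample size approximation (95% confidence, 80% power) – nothing specific to this article, and the 3% base conversion rate is an assumed figure for illustration:

```python
from statistics import NormalDist

def sample_size_per_variant(p_control, relative_lift, alpha=0.05, power=0.80):
    """Standard normal-approximation sample size for a two-proportion test."""
    p_variant = p_control * (1 + relative_lift)
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return z**2 * variance / (p_variant - p_control)**2

# A subtle change (5% relative lift) vs a bold one (30% lift), 3% base rate
print(round(sample_size_per_variant(0.03, 0.05)))  # hundreds of thousands per variant
print(round(sample_size_per_variant(0.03, 0.30)))  # a few thousand per variant
```

The subtle lift needs a sample in the hundreds of thousands per variant, while the bold one needs only a few thousand – which is the whole argument for testing big, radical changes.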
‘Test Candidates’ vs. ‘UX Improvements’
If traffic wasn’t an issue you could pretty much test everything. Unfortunately this just isn’t the case when running AB tests for low traffic sites. Instead you’ll need to be ruthless about prioritising what qualifies as a good test candidate. Sometimes you’ll need to make a call on if you can really afford to test something vs. just implementing it. This is where you’ll want to draw on any user experience knowledge you have in your team (or company). If something is widely accepted as good practice (such as reducing clicks to reach key content or improving page load performance) then you’ll be better off just implementing these changes and focusing your testing efforts on something less obvious.
Group Changes into Themes
Just because you’re running an AB test doesn’t mean you can’t change more than one element on the page. In fact, AB tests for low traffic sites should usually include multiple changes on a page. The more things you change, the more chance you have of seeing significantly different behaviour. This isn’t to say you should just randomly pick different things to change and throw them all into a test. If you do this, even if you arrive at a statistically significant result, you’ll have absolutely no idea which changes were responsible for the difference in conversion rate. Was it the button you changed? Or the copy? Or the carousel? You get the point.
The trick here is to identify ‘themes’ or ‘subjects’. The theme you want to test might be colour related. For example changing multiple elements on the page to transform the page from a dark page to a light page. Or the theme might be layout related whereby the descriptive elements on a product page are moved to the left hand side and the visuals are moved across to the right. I noticed ao.com using this unconventional layout on one of their product pages just yesterday.
A Finance Case Study
A few years ago I was working with a large well known bank. They had low traffic on the particular microsite we were focusing on. Our goal was to design an AB test on one of their landing pages. We needed to use this principle of grouping changes together as we still wanted to end up with clear learnings at the end of the test.
So how did we do it? We examined the control page and noticed that the products on the landing page were being described in a very factual manner. i.e. by the names of the specific bank accounts available. We decided to test changing the whole page so that rather than listing product names we were instead grouping them into useful themes for the customer. The new version of the page used titles like: ‘Private Banking’, ‘Investing Your Wealth’ and ‘Entrepreneurs’ rather than the names of the accounts. This was well received by their customers and the insight gained could be applied across different areas of their site.
Site Wide Tests
If you’re faced with running AB tests for low traffic sites then you might consider running a site wide test, or at least a multi-page test. This means you’ll be getting more traffic into your test than if you were to simply run the test on one page. Perhaps you want to test a new progress bar in your funnel. Rather than just testing this on one page, test it across your full funnel. Not only will this be a better user experience, it also means you’ve got more chance of seeing a significant change in customer behaviour.
Other examples would be testing a new header or footer, global navigation (menus) or font type. All of these AB tests could run across multiple pages and should therefore require less time to reach statistical significance.
Macro Conversions vs. Micro Conversions
In CRO, a macro conversion is usually the ultimate goal of your website. For most sites this is the sales conversion. In an ideal world every test would have order confirmation – i.e. placing an order – as the primary metric. Unfortunately when running AB tests for low traffic sites you’ll usually need to focus on micro conversions.
Examples of micro conversions are clicks on a homepage banner, product detail page views, clicks on add to basket, clicks on sign up etc. You’ll be getting a lot more volume on these micro conversions than you will on the macro conversions and for this reason they’re perfectly suited to low traffic tests. You could even track engagement on a particular page by grouping together clicks on multiple elements as one conversion point. This again increases your sample size.
You can of course still track your ultimate goal but I’d recommend you choose your primary goal to be something much higher up the user journey so that you give yourself a good chance of reaching statistical significance without having to run your tests for months on end.
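As a sketch of the difference this makes, suppose (purely illustratively – every number below is an assumption) that 2% of visitors place an order but 15% click add to basket:

```python
# Illustrative only: assumed rates and traffic for a low traffic retail site.
weekly_visitors = 250
target_per_variant = 100   # conversions we'd like to collect per experiment
variants = 2

for name, rate in [("orders (macro)", 0.02), ("add-to-basket clicks (micro)", 0.15)]:
    visitors_needed = variants * target_per_variant / rate
    weeks = visitors_needed / weekly_visitors
    print(f"{name}: ~{weeks:.0f} weeks to gather {target_per_variant} per variant")
```

At these assumed rates the macro metric needs around 40 weeks to gather the same number of data points the micro metric gathers in about 5 – the same traffic, a far shorter test.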
AB Tests for Low Traffic Sites – Where to Test?
AB tests for low traffic sites should mainly focus on the earlier stages of the user journey. Once you’ve optimised the top end of the journey and are getting more traffic through to your funnel you can then start to focus on the latter stages.
Common places to test on low traffic sites are the homepage, the login page, product listings pages and product details pages. The basket page may also be an option. This is where web analytics is your friend. Check where your highest traffic pages are and focus your efforts on those. Once you’ve optimised those then review your analytics and move onto the next highest traffic pages.
Be Prepared to Wait a While
You’ll no doubt be excited about getting your test live. You’ll probably even want to monitor your AB test on a daily basis. This is absolutely fine, and in fact good practice, but just prepare yourself for a long wait. AB tests for low traffic sites need longer run times. Don’t draw any conclusions in the first few weeks, and give it at least 2 weeks before you even make any comparisons between the experiments. This is hard to do when it looks like one experiment is tanking, but give it a few days and you’ll probably see the results flip completely.
When sample size is low it only takes a few conversions to sway the result. Even if your AB testing tool says you’ve reached statistical significance in the first week or two just ignore this for now.
If you want to get an idea how long you will need to wait to reach statistical significance check out one of the many online test duration calculators.
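Those calculators typically combine a two-proportion sample size formula with your weekly traffic. A minimal version might look like this – assuming 80% power and a two-sided test, with the 3% conversion rate, 20% target lift and 250 weekly visitors all made-up inputs:

```python
from statistics import NormalDist

def estimated_weeks(base_rate, relative_lift, weekly_visitors,
                    variants=2, alpha=0.05, power=0.80):
    """Rough test duration estimate from the standard two-proportion
    sample size approximation, in the spirit of online duration calculators."""
    p2 = base_rate * (1 + relative_lift)
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = base_rate * (1 - base_rate) + p2 * (1 - p2)
    n_per_variant = z**2 * variance / (p2 - base_rate)**2
    return n_per_variant * variants / weekly_visitors

# e.g. 3% base conversion, hoping to detect a 20% relative lift, 250 visitors/week
print(f"~{estimated_weeks(0.03, 0.20, 250):.0f} weeks")
```

At these assumed inputs the estimate runs to roughly two years – exactly why this article pushes you towards micro conversions and bold changes.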
What’s an Acceptable Confidence Level to Use?
You’ll no doubt have been told that you should be aiming for a 95% confidence level (equivalent to a 5% significance level) to show that your AB test result is statistically significant.
While this is true you may actually want to consider settling for a lower figure when running AB tests for low traffic sites. Time is precious and if you can declare a winner after 6 weeks based on a 92% confidence level rather than waiting for 12 weeks to reach 95% confidence then you may opt for this. At the end of the day two test results in a 12 week period based on a 92% confidence level could well be worth a lot more to you than one test result at 95% confidence over that same period.
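It’s worth knowing roughly how much a lower confidence level buys you in planning terms. Under the same normal-approximation sample size formula, dropping from 95% to 92% trims the required sample by around a seventh (the conversion rates below are arbitrary – only the ratio matters):

```python
from statistics import NormalDist

def n_per_variant(p1, p2, confidence, power=0.80):
    """Two-proportion sample size at a given confidence level."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2) + NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return z**2 * variance / (p2 - p1)**2

ratio = n_per_variant(0.03, 0.039, 0.92) / n_per_variant(0.03, 0.039, 0.95)
print(f"92% confidence needs ~{ratio:.0%} of the sample required at 95%")
```

The saving looks modest on paper, but on any given test the observed significance crosses a 92% threshold sooner than a 95% one, which is where the weeks are really won back.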
Running Multiple Tests at the Same Time
If you’re low on traffic then you might be tempted to run multiple tests at the same time. High traffic sites will often have multiple tests running at the same time but they usually apply ‘exclusion rules’ so that no visitor falls into more than one test at a time. This ensures the results are ‘clean’ as no visitor is influenced by more than one test. Obviously this won’t be an option for a low traffic site and instead the question is really: ‘Is it OK to risk exposing customers to multiple tests in the same user journey?‘. It’s certainly not best practice but there are scenarios where user journeys are so far removed from each other that it may be acceptable to do this. For example:
- On the latter stages of a long sales funnel – where the second test is on the homepage
- On a landing page – where the second test is on a completely separate part of the site
- On a logged-in area – where the second test is on the non-logged-in area
- On a UK page – where the second test is on a different country subdomain
Testing Options for Extremely Low Traffic Sites
If you want to run AB tests for low traffic sites but find they receive an extremely low amount of traffic, you may need to explore some other options.
The following options may also be useful even if you are successfully running AB tests. They can supplement your test results and give you further insight into customer behaviour.
Usability Testing
Usability testing doesn’t require traffic. There are many options available and you’ll gain some valuable insights. Just bear in mind that the results probably won’t be statistically significant due to the low sample size.
Customer Surveys
Consider asking your customers what they think of different aspects of your site and the products you’re selling. It’s easy to set up a survey. You could either ask people for feedback on your website or conduct the survey offline.
Heatmaps
Heatmaps can be useful to see where people are clicking. Do they scroll down below the fold or do most people use the top navigation? Feed the insights into future AB test ideas and get them onto your AB testing roadmap.
Web Analytics
Whether you’re using Google Analytics or some other tool, don’t ignore your data. Use your analytics tool to see how people are behaving on your site and what needs improving. Do some pages have unusually high drop-off rates? If so, perhaps you need to check the calls to action on those pages are functioning correctly and are clearly visible.
Paid Traffic (PPC)
You may want to consider purchasing some pay-per-click (PPC) traffic (i.e. Google ads, Facebook ads etc.). This may only be for a short period of time to be able to run an AB test. If you go down this route just bear in mind that the type of visitors reaching your site through paid search may be very different to your normal visitors. It’s not a reason to avoid this option, just a consideration when analysing the test results.
Before / After Testing
If all else fails then you may want to go down this route. Before / after testing involves closely monitoring your web analytics data for a period (let’s say 4 weeks). You then go ahead and implement a change to your live site and monitor your analytics for the same period of time (4 weeks). At the end of this 8 week period you can compare the conversion rates for the two time periods to see if there was a difference. I’d almost always recommend avoiding this type of testing as it is exposed to so many uncontrolled variables. This is especially true on a low traffic site, as you’ll still need to run the “test” for a significant period of time. Seasonality, sales, marketing promotions and many other factors will likely influence your customers differently across the two time periods, skewing the results.
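If you do resort to before / after testing, at least run the comparison through a significance test rather than eyeballing the two conversion rates. A minimal two-proportion z-test (the sales and visitor figures below are invented for illustration) shows how easily an apparent uplift can be plain noise:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided pooled z-test for a difference in conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented figures: 20 sales from 1,000 visitors before, 28 from 1,050 after
print(f"p-value: {two_proportion_p_value(20, 1000, 28, 1050):.2f}")
```

Here a 33% relative improvement in conversion rate still yields a p-value around 0.3 – nowhere near significant – and that’s before accounting for seasonality and the other confounding factors mentioned above.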
In summary, if you find yourself needing to design AB tests for low traffic sites then as long as you follow certain guidelines you’ll be just fine. Familiarise yourself with the points mentioned in this article and you’ll be well prepared to design efficient and effective tests which will yield actionable insights.
Author: Phil Williams
Phil is the founder of CRO Converts. He has had the opportunity of creating successful testing and personalisation strategies for many of the UK and Europe’s leading brands.