Lots of people on the internet are running A/B tests – can I just copy their winning tests? Let other people do the failing; I’ll just test (or implement) the winning stuff. Good idea, right?
Using other people’s A/B test results has been popularized by sites like WhichTestWon (now known as BounceX), and by blog posts boasting about the uplifts of those tests (many run too soon – be skeptical of any case study without absolute numbers published). It’s an attractive dream, I get it – I can copy other people’s wins without doing the hard work. But.
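Absolute numbers matter because the same relative uplift can be rock-solid or pure noise depending on sample size. Here's a minimal sketch (hypothetical conversion counts, using a standard two-proportion z-test – not anything from the case studies themselves) showing why "53% lift" alone tells you nothing:

```python
# The same ~53% relative lift can be meaningless or highly significant
# depending on the absolute numbers behind it. All figures below are
# made up for illustration.
from math import erf, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Same ~53% relative lift (1.5% -> 2.3% conversion), very different evidence:
small = two_proportion_p_value(15, 1000, 23, 1000)          # tiny test: noise
large = two_proportion_p_value(1500, 100000, 2300, 100000)  # big test: real

print(f"small sample p-value: {small:.3f}")   # well above 0.05
print(f"large sample p-value: {large:.3g}")   # far below 0.05
```

With 1,000 visitors per variant, that headline-worthy lift is statistically indistinguishable from no effect at all – which is exactly why a case study that hides its absolute numbers deserves skepticism.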
The thing is – other people’s A/B test results are useless. They’re as useful as other people’s hot dog eating competition results. Perhaps interesting to know, but they offer no value.
Why? Because websites are highly contextual.
It’s almost never apples to apples
You sell Ferraris and I sell used clothing. There’s no overlap in target audience, mindset, cost etc.
Or maybe we both sell the same type of stuff, like food. But I am Whole Foods and you are Walmart. Again, there’s very little overlap here in terms of context.
Or what if we both sell the exact same thing at the same price, like Samsung TVs? In most cases it’s still not exactly apples to apples as the brand, traffic sources, relationship with the audience and many other things will be different.
Other people’s A/B tests are other people’s solutions to other people’s problems.
Your website has specific problems, not generic problems. Your job is to uncover the specific problems your site has, and build your tests accordingly. You shouldn’t copy other people’s sites (also a terrible idea) – don’t copy their tests either.
But I’ve tested X across multiple sites with great success!
Yes, there are things that work more often than not – but they work mostly because you’re testing them against stupid alternatives. Automatic image carousels, lack of value propositions and super long forms are just stupid ideas. (Yes, there are rare exceptions when they can work, as with everything in life.)
So testing something okay against a stupid idea is likely to work. But that’s not the same thing as using other people’s A/B test results.
What is interesting about other people’s A/B tests
I’m not saying you can’t learn from A/B tests other people have run. You can, a lot. It’s just not from the results.
What IS interesting about other people’s tests is the process they used, the analysis they did, the insights they pulled out of data.
- Tell me how you identified the problem you’re addressing
- What kind of supporting data did you have / collect?
- How did you pull the insights out of the data you had?
- Show me how you came up with all the variations to test against Control, and what the thinking was behind each one
- What went on behind the scenes to get all of them implemented?
Running A/B tests is just the tip of the iceberg; all the hard work that goes into them is what actually matters.
I agree with Andrew Anderson on this:
If you really wanted to see a site like WhichTestWon matter, then show the variants that didn’t win. Show multiple options for each outcome and show what the best option was? Give us a measure of the cost and give us the internal roadblocks that you had to overcome. Let us know if that outcome was greater or worse than others for that group and what they are doing with the results to get a better more efficient result next time. If you are interested in anything more than self-promotion, post the things that don’t work. Tell us how often something wins, not the one time it did win.
You can’t assume that what worked for other people will work for you. It doesn’t work that way. You have a highly contextual website with specific problems, so go and test highly contextual and specific solutions instead.
So next time you hear someone say “I got a 53% lift”, ignore that and ask them questions about stuff that really matters. Stop focusing on the ‘what’, and start exploring the ‘why’ and ‘how’ instead.