These things are all important, of course. But the solutions are fairly straightforward, and when you reach a certain level of experience and skill, they tend to be a given.
No matter how data-driven you try to be, decisions are still always people-driven.
What’s a Logical Fallacy?
Whether in the boardroom or in your own mind, logical fallacies can inhibit you from making clear and accurate decisions.
Our friends at Amplitude wrote a great post a while ago outlining fallacies that ruin your analytics, and it got me thinking about all the ways fallacies can diminish growth in general. This article outlines some of the most potent and dangerous logical fallacies we see time and time again. Learn them and mitigate their effects.
Note: this list only touches on a few of the common ones. You can find many more here.
1. Hasty Generalization
A hasty generalization is an informal fallacy where you base decisions on insufficient evidence. Basically, you jump to a conclusion without considering all variables – usually because of a small sample size and impatience.
Wikipedia gives the following example: “if a person travels through a town for the first time and sees 10 people, all of them children, they may erroneously conclude that there are no adult residents in the town.”
This is where you implement a test and, in some crazy circumstance, the early results are incredibly lopsided. It seems obvious that Variation B is getting its butt kicked. But if you keep calm and test on, you’ll notice that the trends even out – and Variation B may even end up winning.
You may have heard this called the law of small numbers, where one generalizes from a small number of data points.
This is obviously dangerous when it comes to A/B testing, but it can also sway other decisions in optimization. For example, in pursuit of qualitative research, you may collect 1000 answers from an on-site survey.
It takes a long time to read through all the answers, though, and 12 out of the first 18 said something about shipping concerns. With this limited sample, you’re likely to prioritize shipping as a problem area, even though the rest of the responses may well render this a minor issue – worthy of consideration, but not prioritization.
Bottom line: realize small datasets can be deceiving. Extreme trends at the beginning tend to balance themselves out over time, so be patient and run your tests correctly. As Peep once said, “You can’t test faster just because you/your boss/VCs want to move faster – that’s not how math works.”
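To see the law of small numbers in action, here’s a minimal simulation. The 5% baseline conversion rate and visitor counts are hypothetical, but the pattern is general: two identical variations can look wildly lopsided with a small sample, and the gap vanishes as the sample grows.

```python
import random

random.seed(7)

# Two variations with the SAME true conversion rate (5%).
# Any observed difference between them is pure noise.
TRUE_RATE = 0.05

def simulate(visitors):
    """Return observed conversion rates for A and B after `visitors` each."""
    conv_a = sum(random.random() < TRUE_RATE for _ in range(visitors))
    conv_b = sum(random.random() < TRUE_RATE for _ in range(visitors))
    return conv_a / visitors, conv_b / visitors

for n in (50, 500, 50_000):
    rate_a, rate_b = simulate(n)
    print(f"{n:>6} visitors per variation: A={rate_a:.3f}  B={rate_b:.3f}")
```

At 50 visitors the observed rates can differ by a factor of two or more (“B is winning!”); at 50,000 they converge toward the true 5%.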
Related fallacy: slothful induction. That’s when you never draw the rightful conclusion from a dataset, no matter how strong the data/trend. If you run a test well, and you have a winner, take the win and move on.
2. Appeal to Authority
“Fools admire everything in an author of reputation.”
― Voltaire, Candide
According to Carl Sagan, “One of the great commandments of science is, ‘Mistrust arguments from authority.’…Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else.”
So basically, when you’re prioritizing hypotheses and your best argument is that “Neil Patel said testing fonts matters most,” you might want to look into a new career path.
I’m joking, of course, but there are a few very real problems with putting too much faith in gurus:
- Your website is contextual. The experiences of one expert may conflict with another, and you may derive zero value from any of their tactics.
- It’s easy for people to “play a doctor” online. By that I mean give someone a copy of You Should Test That, an internet connection, and WordPress, and through some magic alchemy, the world might welcome a new conversion rate optimization blogger. There’s lots of good info out there, but just be careful where you get your advice.
- Being swayed by authority implicitly limits the scope of your testing program by inhibiting ‘discovery.’ For a good read on how authority can limit efficiency, read this article on quantum team management.
Another common iteration where you’ll see this logical fallacy is when someone brings up a large company as a golden example.
In the conversion optimization world, you’ll often hear Amazon, Airbnb, or Facebook invoked as companies to copy “because obviously they’ve tested it.” However, this appeal to authority appears in every realm. Take this example from a WordStream post:
As the author put it, “Just because eBay is a big company with a big marketing budget doesn’t mean that whoever’s in charge of their PPC knows what they’re doing. Big companies – whole empires even! – fail all the time. The history of the world is a catalog of failures. Authority doesn’t equal competence.”
Bottom line: don’t rule out experts – they have experience and therefore their heuristics tend to be more reliable. But when prioritizing tests or making decisions, an appeal to authority by itself is not a valid argument. It limits the scope of your thinking and sways the conversation to the status quo.
Which leads in nicely to the next fallacy…
3. Appeal to Tradition
The appeal to tradition, also known as “argumentum ad antiquitam,” is a fallacy where an idea is assumed to be correct because it is correlated with some past or present tradition. It essentially says, “this is right because we’ve always done it this way.”
Changingminds.org puts it in a rather scary way, saying the appeal to tradition is “where people do it without thinking and defend it simply because it now is a part of the woodwork. Familiarity breeds both ignorance of the true value of something and a reluctance to give up the ‘tried and true’.”
No doubt you’ve experienced this argument in your life, and if you haven’t experienced it at work, you’re lucky. Many company cultures are mired in allegiance to tradition and are hesitant to try new things – which is the antithesis of a culture of experimentation.
As Logically Fallacious put it, “If it weren’t for the creativity of our ancestors, we would have no traditions. Be creative and start your own traditions that somehow make the world a better place.”
I like it. Let’s create a tradition of experimentation.
Bottom line: while it may be true that your company has a tradition of success, it’s not due to the tradition itself. Therefore, “because it’s always been done this way,” isn’t a valid argument (in itself) for continuing to do it that way. Build a culture of experimentation and value discovery instead of tradition.
Related fallacy: appeal to novelty. This is the opposite, where things are deemed to be improved simply on the basis that they are new. In optimization, this often comes in the form of spaghetti testing, or testing random things just to introduce novelty. Again, novelty isn’t bad, but it can’t form the basis of judgment on its own.
4. Post hoc ergo propter hoc
If you’ve ever read ‘Candide’ by Voltaire, you’ll remember the eminent professor Pangloss – the philosopher who believed that everything was as good as it could be because “all is for the best” in the “best of all possible worlds.”
A quote from the book describes this post hoc fallacy perfectly:
“It is demonstrable that things cannot be otherwise than as they are; for as all things have been created for some end, they must necessarily be created for the best end. Observe, for instance, the nose is formed for spectacles, therefore we wear spectacles.”
As does this scene from The West Wing:
Just because one action precedes another does not mean there is a causal link.
Correlative metrics are a great starting point for finding optimization opportunities, but they need to be investigated. For instance, if you find that visitors who watched a 30-second video convert better than those who didn’t, it may be that the video helped assist the conversion – but it may be something else entirely.
So you run controlled experiments – A/B/n tests – to make causal inferences.
That’s why, while cohort and longitudinal analyses can certainly give us some insight into the trends of our company, we don’t rely on them as indicators of causation. If you implement a new value proposition on May 1st, and on June 1st you notice that conversions have dropped 15%, you can’t assume the value proposition caused that.
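The video example above can be sketched as a toy simulation. Everything here is hypothetical – the latent “intent” variable, the 100,000 visitors, the multipliers – but it shows how a hidden confounder makes video watchers convert better even when the video has zero causal effect.

```python
import random

random.seed(42)

# Hypothetical model: a latent "purchase intent" drives BOTH watching the
# video AND converting; the video itself has zero causal effect.
def visitor():
    intent = random.random()                    # latent purchase intent, 0..1
    watched = random.random() < intent          # high-intent visitors watch more
    converted = random.random() < intent * 0.2  # conversion driven by intent only
    return watched, converted

visitors = [visitor() for _ in range(100_000)]
watchers = [v for v in visitors if v[0]]
non_watchers = [v for v in visitors if not v[0]]

def conv_rate(group):
    return sum(converted for _, converted in group) / len(group)

print(f"watched the video: {conv_rate(watchers):.3f}")
print(f"did not watch:     {conv_rate(non_watchers):.3f}")
# Watchers convert roughly twice as well, yet the video caused none of it.
```

A naive segment comparison would tell you to push the video on everyone; a controlled experiment (randomly showing or hiding the video) would reveal no lift at all.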
Economist Steven Levitt tells a story about this. He was brought on as a consultant to analyze the effectiveness of the client’s advertising campaigns. They had been running newspaper ads in every geolocation right before major holidays – say, Father’s Day – and experiencing great success doing so.
Their conclusion, of course, was that the advertising caused the revenue spikes. As if by magic, after the ads went live, sales would spike. Post hoc ergo propter hoc.
Thing is, they never used a control.
Levitt proposed setting up a controlled experiment, complete with a randomized sample, where they advertised in some locations and didn’t in others.
The findings? The advertising simply made no difference at all.
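A randomized experiment like the one Levitt proposed can be sketched as follows. The numbers (200 regions, a ~20% seasonal sales lift) are invented for illustration; the point is that when ads truly do nothing, the treatment and control groups show the same holiday spike, and the control group exposes the post hoc fallacy.

```python
import random

random.seed(0)

# Hypothetical: 200 store regions; holiday sales lift ~20% for everyone,
# regardless of advertising (ads have NO true effect in this sketch).
regions = list(range(200))
treated = set(random.sample(regions, 100))  # randomize: half the regions get ads

def holiday_lift(region):
    # Seasonality, not advertising, drives the spike for every region.
    return random.gauss(0.20, 0.05)

ad_lift = [holiday_lift(r) for r in regions if r in treated]
no_ad_lift = [holiday_lift(r) for r in regions if r not in treated]

avg = lambda xs: sum(xs) / len(xs)
print(f"avg lift with ads:    {avg(ad_lift):.3f}")
print(f"avg lift without ads: {avg(no_ad_lift):.3f}")
```

Without the no-ad control group, the ~20% spike after every campaign looks like proof the ads work; with it, the difference is indistinguishable from zero.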
Bottom line: correlation doesn’t imply causation. Just because visitors who do X convert Y times better doesn’t mean X causes this. Just because you implement a change and see a result after time doesn’t mean that change caused the result. The only cure for this is to run controlled experiments – and even then, be wary, especially if you’re a highly seasonal business.
5. False Dilemma
You tend to see this, at least in rhetoric, as a political device. To prod you to their side, shifty leaders, writers, and orators will create a false dichotomy where the logic goes, “if you’re not on this side, you’re on that side.” “That side” tends to be pretty unsavory, so you’re forced into a position you don’t fully agree with.
Iterative testing is the name of the game. Just because you’ve drawn up an A/B test with a single variation, and it failed, doesn’t mean the hypothesis is misplaced. In reality, there are infinite ways to execute the strategy.
For example, you conduct a solid amount of conversion research and come up with a prioritized list of hypotheses. These aren’t silly things like “test button color.” You’ve done your homework and established areas of opportunity, now it’s time to conduct some experiments.
First on your list, due to extensive survey responses, user tests, and heuristic analysis, is beefing up your security signals on your checkout page. Users just don’t seem to feel safe putting in their credit card information.
So you test security logo vs. no security logo. The result? No difference.
The hypothesis may have been wrong, but the real question you should ask yourself is this: how many different ways are there to improve the perception of security on a page? How many different ways could we design a treatment addressing security concerns?
The answer: infinite.
Bottom line: don’t limit the scope of your testing program by creating false dichotomies. While gamification is fun, sites like WhichTestWon that pit A vs. B against each other – without showing the full scope of ideas on the table or previous iterations – tend to emphasize dichotomies in testing.
To defeat congruence bias (and eliminate false dichotomies), Andrew Anderson advises the following: “Always make sure that you avoid only testing what you think will win and also ensure that you are designing tests around what is feasible, not just the most popular opinion about how to solve the current problem.”
6. The Narrative Fallacy
The narrative fallacy is essentially when one, after the fact, attempts to ascribe causality to disparate data points in order to weave a cohesive narrative. It is the same as the Fallacy of a Single Cause in its simplistic attempt to answer the sometimes impossible question of “why.”
The Narrative Fallacy, named and popularized by Nassim Taleb, is everywhere. Once you start reading about it, you’ll start to become annoyed at how often you notice it in daily life.
For instance, you can’t read a Malcolm Gladwell book anymore without seeing the simplistic cause/effect narratives, such as this one on the connection between Asians and their superior math abilities:
“Rice farming lays out a cultural pattern that works beautifully when it comes to math…Rice farming is the most labor-intensive form of agriculture known to man. It is also the most cognitively demanding form of agriculture…There is a direct correlation between effort and reward. You get exactly out of your rice paddy what you put into it.”
The logic: rice farming → superior math skills (narrative fallacy).
Andrew Anderson wrote an epic article for the CXL blog on how the narrative fallacy, and other post-hoc evaluations, can wreak havoc on a testing program. He explained an exercise he used to do while consulting testing organizations.
Bottom line: stop trying to assign a why to your data. If you tested a new value proposition and it won, it could be for many reasons, including the one you tell yourself. But if you assume that it’s because “red is associated with urgency and our visitors need to have urgency to purchase,” then that narrative will affect (and limit) the future of your testing program.
The most impactful limitations of a testing program have nothing to do with data – they center around human behavior and the reality of opaque decision making. At the heart of this problem is the reliance on numerous logical fallacies and cognitive biases that most of us are hardly aware of.
If you do optimization – and especially if you’re in a management or team lead role – I think studying these and learning to mitigate them can improve the effectiveness of your testing program.
What has your experience been with logical fallacies? Have you seen any of the above in action? How did you deal with them?