~1000 words, ~5 min reading time
Those who have heard me talk about statistics have probably heard me go on about how great the overall Bayesian approach is compared to the more commonly used frequentist approach. I’m not going to give a full defense here. Rather, I’m going to focus on an adjacent topic: my impression that people are what I’m going to call “natural” Bayesians when faced with arguments where there is uncertainty. This idea can explain a few observations about how people interpret statements about evidence – specifically, interpretations that are traditionally considered fallacious, but which follow naturally from Bayesian reasoning. So, let’s get to the examples!
Example 1: There’s no evidence that…
One thing scientists sometimes say is “There’s no evidence that A.” Typically, what the scientist *means* is that there haven’t been good enough studies yet – or that the studies we do have are inconclusive so far. So, A may or may not be true. “There’s no evidence that A” is just a stand-in for saying “We don’t actually know about A.”
However, that doesn’t seem to be how a lot of people interpret the phrase. Instead, people take “There’s no evidence that A” as evidence AGAINST A. Why do this?
Because people are natural Bayesians. Let me lay out Bayes’s theorem, as it would be applied in this case.
P(A | there is no evidence for A) = P(there is no evidence for A | A) P(A) / [P(there is no evidence for A | A) P(A) + P(there is no evidence for A | not-A) P(not-A)]
[In English: the probability of A given that there is no evidence for A equals the probability of there being no evidence for A given that A is true, times the prior probability that A is true, divided by that same quantity plus the probability of there being no evidence for A given that A is not true, times the prior probability that A is not true.]
With a little bit of algebra, you’ll end up with this result:
P(A | there is no evidence for A) < P(A) iff P(there is no evidence for A | not-A) > P(there is no evidence for A | A)
Translating into plain English: As long as I think the probability of not finding evidence for A is higher if A is not true than if A is true, then I will interpret the absence of evidence in favor of A as evidence AGAINST A. Basically, if A is true, I expect it to leave evidence. If I can’t find evidence, then that makes me realize it is more likely that A is just not true.
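If it helps to see this as code, here’s a minimal sketch of the update in Python (the function name and the example numbers are mine, chosen purely for illustration):

```python
def posterior(prior, p_obs_given_a, p_obs_given_not_a):
    """Bayes's theorem: P(A | observation), given a prior and two likelihoods."""
    numerator = p_obs_given_a * prior
    return numerator / (numerator + p_obs_given_not_a * (1 - prior))

# The observation here is "we found no evidence for A".
# Illustrative numbers: finding no evidence is more likely if A is false (0.8)
# than if A is true (0.3), so the posterior falls below the prior.
print(posterior(prior=0.5, p_obs_given_a=0.3, p_obs_given_not_a=0.8))  # ~0.27, down from 0.5
```

Flip the two likelihoods and the posterior rises instead – the direction of the update is exactly the iff condition above.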
Example 2: Bad arguments for are arguments against
Classical logic tells us that just because someone makes a bad argument for A, that doesn’t mean A isn’t true. However, by Bayesian probabilistic reasoning, it almost certainly increases the *probability* that A isn’t true. Here’s why, modifying the previous formula:
P(A | someone used a bad argument for A) < P(A) iff P(someone used a bad argument for A | not-A) > P(someone used a bad argument for A | A)
Put another way: as long as we think a bad argument for A is more likely to be offered when A is not true than when it is, bad arguments in favor of A actually provide evidence (though inconclusive evidence!) that A is not true. Or, more bluntly: if people can’t make a good argument for A, it’s pretty likely that A isn’t true.
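And the effect compounds. Here’s a sketch (again with made-up likelihoods, and assuming each argument counts as an independent piece of evidence) of what happens to P(A) as you hear one bad argument after another:

```python
def update(prior, p_bad_arg_given_a, p_bad_arg_given_not_a):
    """One Bayesian update on P(A) after hearing a bad argument for A."""
    numerator = p_bad_arg_given_a * prior
    return numerator / (numerator + p_bad_arg_given_not_a * (1 - prior))

p_a = 0.5
for i in range(3):
    # Assume a bad argument is somewhat more likely to be offered when A is false.
    p_a = update(p_a, p_bad_arg_given_a=0.4, p_bad_arg_given_not_a=0.6)
    print(f"after bad argument {i + 1}: P(A) = {p_a:.2f}")
# Prints roughly 0.40, 0.31, 0.23 -- each bad argument is weak evidence against A.
```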
But… people are bad at math
Now, let’s get specific with some math. Let’s say that I’m totally unsure about A being true. I think there’s a 50/50 chance that A is true. However, if A is true, I think there’s only a 20% chance that we wouldn’t find evidence for A. Meanwhile, if A is not true, I think there’s a 95% chance that we won’t find evidence for A. (I mean, sometimes we find evidence that seems to support something even if that thing isn’t true.)
Turns out there’s no evidence for A. So, what is the probability that A is true once I learn that?
My guess is that very few people are going to get this right, even though the math isn’t that hard. A lot of people will say “There’s a 95% chance that we won’t find evidence for A if A isn’t true. Since we didn’t find evidence for A, there’s a 95% chance A isn’t true.”
The math shows this is wrong – that reasoning swaps P(there is no evidence for A | not-A) for P(not-A | there is no evidence for A). Run the numbers through Bayes’s theorem and the probability that A is not true comes out to 0.95 × 0.5 / (0.2 × 0.5 + 0.95 × 0.5) ≈ 0.83 – about 82-83%, not 95% – which still leaves roughly a 17% chance that A is true.
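If you want to check that arithmetic yourself, here’s the same calculation as a few lines of Python (just Bayes’s theorem with the numbers above plugged in):

```python
p_a = 0.5                    # prior: 50/50 that A is true
p_no_ev_given_a = 0.20       # chance of finding no evidence if A is true
p_no_ev_given_not_a = 0.95   # chance of finding no evidence if A is not true

p_a_given_no_ev = (p_no_ev_given_a * p_a) / (
    p_no_ev_given_a * p_a + p_no_ev_given_not_a * (1 - p_a)
)
print(f"P(A | no evidence)     = {p_a_given_no_ev:.3f}")      # 0.174
print(f"P(not-A | no evidence) = {1 - p_a_given_no_ev:.3f}")  # 0.826 -- not 0.95
```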
In brief: people apply Bayesian reasoning when forming their beliefs, but they do so imprecisely.
Practical Implications
The reality that people are natural Bayesians (but bad at math) suggests we need to be careful how we communicate. For example, if we don’t know one way or the other about “A”, then “I don’t know” is a *better* statement than “There is no evidence for A” (even if the latter is technically true – perhaps because we’ve not done the study yet). Or, tell people your priors. If you think A is likely based on your intuition, but you’ve not yet gathered data, say so. “I suspect A is true, but we’re still gathering data that might change that.” Save “There is no evidence for A” for cases where you want to suggest that “A” isn’t true.
Similarly, if you think A is true but only have a bad argument supporting it (maybe you’ve just not thought about it much), then, rather than make the bad argument, just say what your impression is and that you’re still thinking about it. You are, in fact, allowed to not know things.
Why do I write this? Because I’ve come to realize that we live in a world that asks us to take stands on things that we can’t possibly know with any kind of certainty. In the absence of certainty, Bayesian reasoning (however bad we may be at the details of it!) comes in. It’s a good idea for us to be aware of this fact, and to communicate in a way consistent with it.