The p-value debate, started by the American Statistical Association (I wrote about it here), has gained a lot of attention in the scientific community. Many people have commented on it. And the more I read, the more confused I became about what the correct way to draw inferences from data should be. It took me some time to realize that the debate goes much deeper than what is touched upon in the ASA’s document. At the heart of it lies a fundamental philosophical divide between Frequentist and Bayesian statistics.
A recent exchange on this blog (in German though) helped me see more clearly, I think, where the point of “agree to disagree” lies. It starts with the following assertion, which both Bayesians and Frequentists accept, and which the ASA rightly calls out:
A1: Outcomes of hypothesis tests à la Neyman-Pearson (NP) are often misinterpreted.
Sometimes you even see a stronger version:
A2: Outcomes of NP hypothesis tests (i.e., p-values) are often misinterpreted as posterior probabilities, i.e., as the probability that the tested hypothesis is true.
Now that is, of course, a serious mistake because NP hypothesis tests are based on Frequentist statistics in which, unlike in the Bayesian paradigm, there are no probabilities associated with hypotheses (the null is either true or false).
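To see why this matters in practice, here is a minimal simulation sketch (with made-up illustrative numbers, not anything from the ASA document): suppose 80% of the null hypotheses we test are actually true, and when a null is false the true mean is 0.3. Among the results that come out “significant” at p < 0.05, the fraction where the null was in fact true is far larger than 5% — which is exactly why a p-value must not be read as a posterior probability of the null.

```python
import math
import random

# Hypothetical setup: 80% of tested nulls are true; when false, the true
# mean is 0.3. Sample size n = 50, known unit variance, two-sided z-test.
random.seed(42)
n, n_experiments = 50, 20_000
p_null_true, effect = 0.8, 0.3

sig_and_null = 0  # significant results where H0 was actually true
sig_total = 0     # all significant results

for _ in range(n_experiments):
    null_is_true = random.random() < p_null_true
    mu = 0.0 if null_is_true else effect
    # The sample mean of n iid N(mu, 1) draws is distributed N(mu, 1/n),
    # so we can draw it directly instead of simulating n observations.
    sample_mean = random.gauss(mu, 1 / math.sqrt(n))
    z = sample_mean * math.sqrt(n)
    # Two-sided p-value under the null: P(|Z| > |z|) = erfc(|z| / sqrt(2)).
    p_value = math.erfc(abs(z) / math.sqrt(2))
    if p_value < 0.05:
        sig_total += 1
        sig_and_null += null_is_true

frac_null_given_sig = sig_and_null / sig_total
print(f"Fraction of significant results with H0 true: {frac_null_given_sig:.2f}")
```

With these (assumed) numbers, roughly a quarter of the significant results come from true nulls, not 5% — the p-value threshold says nothing about P(H0 | data) without a prior, which is precisely the quantity Frequentist tests refuse to define.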
But how can we do better? The ASA leaves us more or less hanging at this point, presumably because its members are also divided into different camps. I, for my part, always thought that the solution to A1 and A2 is quite simple:
S1: If scientists apply a tool from frequentist statistics incorrectly, then we should educate them on the correct use within said paradigm.
Bayesians, however, seem to hold a different belief. They argue, based on A2, that:
S2: People should switch to a Bayesian mode of inference, such that their preferred interpretation of statistical results matches the methods they use.
Unfortunately, Bayesians rarely articulate this explicitly, nor do they acknowledge that both solutions are justified. This caused my initial frustration with the debate. I thought to myself: I grant you A1. But hey, there is an easy fix. Why should I throw all the neat statistical techniques I learned in college overboard? Fortunately, now I know I don’t have to.