But with so much research now built around the concept, it’s unclear how — or with what other measures — the scientific community could replace it.
The American Statistician offers a full 43 articles exploring what scientific life might look like without this measure in the mix.
And he offered the cutoff of P equals 0.05, saying “it is convenient to take this point as a limit in judging whether a deviation [a difference between groups] is to be considered significant or not.” That “convenient” suggestion has reverberated far beyond what Fisher probably intended.
In 2015, more than 96 percent of papers in the PubMed database of biomedical and life science papers boasted results with P less than or equal to 0.05.
A smaller P value means that a difference that large would be less likely to turn up by chance if no real difference exists.
In scientific parlance, the value is “statistically significant” if P is less than or equal to 0.05.
The goal is not to prove that the drug fights depression.
Instead, the idea is to gather enough data (eventually) to reject the hypothesis that it doesn’t.
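To make this concrete, here is a minimal sketch of one common way to estimate a P value: a permutation test. The drug and placebo scores below are invented for illustration; the logic is simply to ask how often random relabeling of the groups produces a difference at least as large as the one observed, which is the probability the null hypothesis assigns to such a result.

```python
import random

# Hypothetical data: outcome scores for a drug group and a placebo group.
drug = [14, 11, 12, 15, 13, 16, 12, 14]
placebo = [10, 12, 9, 11, 13, 10, 12, 11]

observed = sum(drug) / len(drug) - sum(placebo) / len(placebo)

# Under the null hypothesis (the drug does nothing), the group labels
# are arbitrary. Shuffle the labels many times and count how often a
# difference at least as large as the observed one appears by chance.
random.seed(0)
pooled = drug + placebo
n, trials, extreme = len(drug), 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.3f}, estimated P = {p_value:.4f}")
```

If the estimated P comes in at or below 0.05, convention would call the result "statistically significant" — the data would be deemed unlikely under the assumption that the drug has no effect. Note that this rejects the no-effect hypothesis; it does not prove the drug works.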
It sounds esoteric, but statistical significance has been used to draw a bright line between experimental success and failure.
Achieving an experimental result with statistical significance often determines if a scientist’s paper gets published or if further research gets funded.