Why Most Published Research Findings Are False


I just came across an interesting article ("Why Most Published Research Findings Are False", by John Ioannidis) about how a lot of research that seems to find statistically significant results ends up being wrong. Here's the article's abstract, which describes what it covers. Note that this only applies to cases where a conclusion is based on a statistical analysis of data. It doesn't apply to research in areas like math, where everything has a proof.

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
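The abstract's framework can be sketched numerically. The sketch below is my own illustration (not code from the paper): it computes the positive predictive value (PPV), the probability that a claimed significant finding is actually true, from the pre-study odds R of a true relationship, the significance threshold α, and the study power 1 − β. The function name and defaults are my own choices.

```python
def ppv(R, alpha=0.05, power=0.8):
    """Probability that a statistically significant finding is true,
    ignoring bias: PPV = (1-beta)R / ((1-beta)R + alpha)."""
    true_positives = power * R   # rate of true relationships correctly detected
    false_positives = alpha      # rate of null relationships wrongly flagged
    return true_positives / (true_positives + false_positives)

# With long pre-study odds (say 1 true relationship per 10 probed),
# even a well-powered study at p < 0.05 is wrong more than a third of the time:
print(round(ppv(R=0.1), 2))  # 0.62
```

The point the abstract makes falls out directly: when R is small (many speculative relationships probed per true one), PPV drops well below 1 even with good power and a conventional α.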

In addition to explaining why most claimed research findings are false, the article lists five corollaries to its main claim. These aren't quite as rigorously supported as the main claim, but they're interesting because they seem to explain some of the less-reliable-than-we'd-like data that we often see in the field of information security.

  1. The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
  2. The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
  3. The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
  4. The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
  5. The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
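Corollary 1 can be illustrated with a toy Monte Carlo simulation in the spirit of the paper's simulations (this is my own sketch, not the paper's code). The setup is hypothetical: 10% of the effects being tested are real, and each study runs a one-sided z-test at α = 0.05 with a given sample size; we then ask what fraction of the "significant" results are false.

```python
import math
import random

def false_discovery_rate(n_per_study, effect=0.3, n_studies=20000,
                         seed=0):
    """Fraction of significant results that are false, for studies of a
    given size. Assumes 10% of probed effects are real with mean `effect`."""
    rng = random.Random(seed)
    z_crit = 1.645  # one-sided 5% critical value
    sig_true = sig_false = 0
    for _ in range(n_studies):
        real = rng.random() < 0.10        # 10% of probed effects are real
        mu = effect if real else 0.0
        # sample mean of n draws from N(mu, 1), then its z statistic
        xbar = rng.gauss(mu, 1 / math.sqrt(n_per_study))
        z = xbar * math.sqrt(n_per_study)
        if z > z_crit:
            sig_true += real
            sig_false += not real
    return sig_false / (sig_true + sig_false)

# Smaller studies have lower power, so a larger share of their
# "significant findings" are false:
print(round(false_discovery_rate(n_per_study=10), 2))
print(round(false_discovery_rate(n_per_study=200), 2))
```

Under these assumed numbers, the small studies produce a majority of false findings while the large ones do much better, which is exactly the pattern the first corollary describes.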

So even if it's frustrating to deal with the lack of accurate data that the field of information security seems to be stuck with, it's somewhat reassuring to see that we're not alone in having this problem.
