Source: Nature

Category: Digital Media, Science

Please use the comments to demonstrate your own ignorance, unfamiliarity with empirical data and lack of respect for scientific knowledge. Be sure to create straw men and argue against things I have neither said nor implied. If you could repeat previously discredited memes or steer the conversation into irrelevant, off topic discussions, it would be appreciated. Lastly, kindly forgo all civility in your discourse . . . you are, after all, anonymous.

Unfortunately, I see research that falls into the lower left bars all too often. Statistics and significance measurements have traditionally been poorly applied in the biomedical sciences, in my opinion. Even IF an experiment has incorporated the proper method & study design, the qualitative result should not take a backseat to the quantitative validation of that result. And this has moved beyond mere oversight: corporate R&D now does it willfully.

For example, as the consuming public, we might be told a clinical trial shows that drug X significantly improves some outcome…risk of heart attack, ability to fall asleep, etc. That word "significantly" is lifted from its statistical context, and we are left to take it in the more conventional sense. If said drug reduced the risk of heart attack over 1 yr by 12%, where the risk was 5% to begin with, we're talking about a 4.4% risk under drug treatment. Is that a benefit the average person would judge to be significant, especially taking the cost & side effects of said drug into account?
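The arithmetic behind that point is worth spelling out, since headlines almost always report the relative reduction. A minimal sketch (the numbers are the hypothetical ones above, not from any real trial):

```python
# Illustrative only: translate a relative risk reduction into
# the absolute risk a patient actually experiences.
baseline_risk = 0.05        # 5% one-year risk without the drug
relative_reduction = 0.12   # "reduces risk by 12%" (relative)

treated_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - treated_risk

print(f"risk on drug: {treated_risk:.1%}")              # 4.4%
print(f"absolute reduction: {absolute_reduction:.1%}")  # 0.6%
```

So the "12% reduction" amounts to 0.6 percentage points of absolute risk, which is the figure a patient would actually want to weigh against cost and side effects.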

"It is almost impossible to drag authors away from their p-values, and the more zeroes after the decimal point, the harder people cling to them." – John Campbell, referenced in the paper

LOL

I just had a conversation like this with a junior engineer last week. My comment to him was that the more numbers after the decimal place in a calculation given to me to review, the more detailed I make my review, because it shows the person doesn't understand the accuracy and precision of the input data at a high-school science-lab level. I have found that this is a warning sign of other, equally fundamental errors; sometimes dangerous errors that could cost lives if the design were ever built.
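The fix is the one taught in that high-school lab: report no more significant figures than the least precise input supports. A small sketch of the idea (the function name and measurement values are illustrative, not from any real calculation):

```python
# A minimal sketch: round a result to the significant figures
# justified by the least precise input.
import math

def sig_round(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

# Suppose a load is known to 3 significant figures and an
# area to only 2; the quotient is good to roughly 2 figures,
# no matter how many digits the calculator displays.
load = 12.3   # kN, 3 sig figs
area = 0.41   # m^2, 2 sig figs
stress = load / area        # raw float carries spurious digits
print(sig_round(stress, 2)) # 30.0
```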

30 years ago I was working on some forensic analysis where we made a fundamental discovery about a material behavior. We got very, very high correlation statistics (essentially perfect) for this behavior in the relatively large data set that we had. As a result, we initially didn't believe the discovery, because in my field we never, ever get statistical precision anywhere close to this level. We spent two days trying to find where we had made a fundamental error in our analysis, e.g. reporting an identity instead of a function with independent variables. However, it turned out to be a real effect, which was gratifying, and we were able to publish. Our papers on this are still being cited today.

This is a Bayesian approach. The Frequentists completely ignore the priors (the first row in your chart). I consider myself to be a Bayesian.
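The point about priors can be put in numbers. A minimal sketch (the power, significance level, and priors here are assumed round figures, not taken from the article or its chart): the probability that a "significant" result reflects a real effect depends heavily on how plausible the hypothesis was before the experiment.

```python
# Hedged illustration of Bayes' rule applied to significance
# testing: P(real effect | p < alpha) as a function of the prior.
def ppv(prior, power=0.8, alpha=0.05):
    """Posterior probability the effect is real, given a
    significant result, via Bayes' rule."""
    true_pos = power * prior          # real effect, detected
    false_pos = alpha * (1 - prior)   # no effect, false alarm
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:4.2f} -> P(real | significant) = {ppv(prior):.2f}")
```

With a 10% prior, a result significant at the 0.05 level is real only about 64% of the time; with a 1% prior, the false alarms dominate. Ignoring that first row changes the answer completely.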

Here is a good description if you are interested.

http://www.stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf

Thanks!

Bayes comes up late in the article and in some of the comments.

We live in a Bayesian world, whether we want to admit it or not.