From Betsey Stevenson & Justin Wolfers, a short primer on separating lies from statistics:
 

1. Focus on how robust a finding is, meaning that different ways of looking at the evidence point to the same conclusion. Do the same patterns repeat in many data sets, in different countries, industries or eras?

2. A statistically significant result means it’s unlikely the finding simply reflects chance. Don’t confuse this with the finding actually mattering (a quick sketch of the distinction follows this list).

3. Be wary of scholars using high-powered statistical techniques as a bludgeon to silence critics who are not specialists.

4. Don’t fall into the trap of thinking about an empirical finding as “right” or “wrong.”

5. Don’t mistake correlation for causation.

6. Always ask “so what?” The “so what” question is about moving beyond the internal validity of a finding to asking about its external usefulness.
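
To make rule 2 concrete, here is a minimal sketch in Python (my own toy example, with invented numbers) of how a huge sample can make a practically trivial effect look statistically significant:

```python
# Sketch of rule 2: statistical significance is not practical significance.
# All numbers here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000                                        # very large samples
control = rng.normal(loc=100.0, scale=15.0, size=n)
treated = rng.normal(loc=100.1, scale=15.0, size=n)  # true effect: 0.1 on a scale of ~100

t_stat, p_value = stats.ttest_ind(treated, control)
effect = treated.mean() - control.mean()

print(f"p-value: {p_value:.2g}")       # tiny, so the difference is unlikely to be chance
print(f"estimated effect: {effect:.2f} (well under 1% of one standard deviation)")
# Statistically "significant", yet far too small to matter for most real decisions.
```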

Great stuff. I recall something from Carl Sagan on this — I’ll see if I can dig it up.
 

 

Source:
Six Ways to Separate Lies From Statistics
By Betsey Stevenson & Justin Wolfers
Bloomberg View, May 1, 2013. http://www.bloombergview.com/articles/2013-05-01/six-ways-to-separate-lies-from-statistics

Category: Bad Math, Data Analysis, UnScience


11 Responses to “Six Ways to Separate Lies From Statistics”

  1. bigsteve says:

    I studied statistics along with other maths in college, and I use them in my profession. From time to time I also have to explain something technical to nontechnical groups. If an expert cannot explain something complex to people of average intelligence in a way they can understand, he or she either does not know what they are talking about or is trying to flimflam you. I am going to copy and save these six rules. They are excellent.

  2. bear_in_mind says:

    I believe you’ll find many relevant references in Sagan’s excellent, “The Demon-Haunted World: Science as a Candle in the Dark” published in 1995, the year before his death.

  3. bear_in_mind says:

    Forgot to add this — you also might enjoy this segment from Science Friday with Ira Flatow interviewing Walter Isaacson on how critically important it is to be educated in BOTH science and humanities. Good stuff:

    Science Friday
    May 23, 2014
    http://sciencefriday.com/segment/05/23/2014/why-science-and-the-humanities-are-better-together.html

  4. William_H says:

    My economics advisor exhorted, “Economists use statistics like drunks use light poles: for support rather than illumination.”

  5. [...] per our earlier discussion, here is Carl Sagan. He argues having a finely honed bullshit detector isn’t merely a tool of [...]

  6. Crocodile Chuck says:

    ‘Great stuff’; seconded.

    I’d add a 7th: cui bono? Who is benefiting from this? Who funded the study/research? What’s their ‘party line’?

    Here’s an embarrassing, local, glaringly obvious example: http://www.independentaustralia.net/business/business-display/gina-rineharts-battle-against-climate-change-science,4242

  7. rd says:

    I would add another rule:

    Is the analysis occurring within the overall limits of the empirical data set?

    Physics has a classic example: Newton’s laws of motion work great until they are extrapolated to speeds close to the speed of light, where they break down and Einstein got to make a name for himself. Also, as objects shrink toward the molecular scale, Newton’s laws get overrun by quantum mechanics. In both cases, Newton’s laws work great within the range of data that people normally encounter, but there are limits.

    We see extrapolations all the time where a function is simply projected into the future with little thought about whether the basic assumptions inherent in the historical data set will still be valid.
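
    A toy numerical illustration of this extrapolation trap, with an invented saturating process and a made-up observation window:

    ```python
    # Toy example of extrapolating beyond the range of the data (all numbers invented).
    import numpy as np

    # The "true" process saturates, but we only observe its early, nearly linear stretch.
    def true_process(x):
        return 100.0 * (1.0 - np.exp(-x / 50.0))

    x_observed = np.linspace(0, 10, 50)        # the narrow historical window
    y_observed = true_process(x_observed)

    slope, intercept = np.polyfit(x_observed, y_observed, 1)   # naive straight-line fit

    for x in (5, 10, 50, 200):                 # inside vs. far outside the fitted range
        print(f"x={x:>3}: linear fit {slope * x + intercept:7.1f}   actual {true_process(x):6.1f}")
    # Inside the window the line tracks the data closely; at x=200 it predicts well
    # over 300 while the real process has leveled off near 100.
    ```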

  8. Mattw says:

    Concerning tail risk in finance, beware the use of the Normal distribution. It should be outlawed in finance and violators shot. That pretty much means all tail-risk calculations are garbage. In reality, finance follows more of a power-law distribution, with tails 10 to 20 times larger than the Normal distribution’s. That 1-in-200-years financial event is really something like 1 in 20 years.

    • rd says:

      I was always baffled by the typical MPT usage of just three years’ worth of stock market data to predict volatility. Depending on the time period when the data was gathered, it always struck me as being like doing fractal analysis of clouds on a nice blue-sky day to predict the likelihood of a thunderstorm.
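
    A rough numerical sketch of the fat-tail point above (my own comparison, not from the comment), using a variance-matched Student-t as a stand-in for a power-law tail:

    ```python
    # Rough sketch of how much heavier fat tails are than Normal tails (my own
    # toy comparison; the choice of distribution and thresholds is illustrative).
    # A Student-t with 3 degrees of freedom, rescaled to unit variance, stands in
    # for a power-law-tailed return distribution.
    import numpy as np
    from scipy import stats

    DF = 3
    rescale = np.sqrt(DF / (DF - 2))          # t(3) has variance 3, so match variances

    for k in (3, 4, 5):
        p_normal = stats.norm.sf(k)                # P(move > k sigma) under the Normal
        p_fat = stats.t.sf(k * rescale, df=DF)     # same-sized move under the fat-tailed t(3)
        print(f"{k}-sigma move: Normal {p_normal:.1e}, fat-tailed {p_fat:.1e}, "
              f"ratio ~{p_fat / p_normal:.0f}x")
    # The ratio grows quickly with the threshold (a few times at 3 sigma, thousands
    # of times at 5 sigma), which is why events priced as once-in-centuries under
    # the Normal keep turning up every decade or two.
    ```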

  9. spencer says:

    I’ve long given a great deal of weight to a saying supposedly from Samuelson.

    Any concept in economics that you cannot explain to your mother-in-law will probably eventually be proven wrong.
