Great list:

[Infographic: Guide to Spotting Bad Science]

Source: Compound Chem

Category: Digital Media, Science, UnScience

Please use the comments to demonstrate your own ignorance, unfamiliarity with empirical data and lack of respect for scientific knowledge. Be sure to create straw men and argue against things I have neither said nor implied. If you could repeat previously discredited memes or steer the conversation into irrelevant, off topic discussions, it would be appreciated. Lastly, kindly forgo all civility in your discourse . . . you are, after all, anonymous.

6 Responses to “Guide to Spotting Bad Science”

  1. rd says:

    I would add two:

    13. Excessive curve-fitting; and
    14. Model extrapolation

    I have seen too many cases where people have forced mathematical representations on data that clearly showed more variability than the curve-fitting approach would indicate. Generally speaking, you should be able to get data to fit relatively simple mathematical representations if you really understand what is going on, or be able to assemble a complex model out of a number of simple, well-understood elements. Stock market analysis suffers greatly from this, where somebody out there is curve-fitting every tick up and down.
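
    To make the curve-fitting point concrete, here is a minimal sketch (Python with numpy; the data and polynomial degrees are hypothetical): a high-degree polynomial can chase the noise in-sample and still do worse than a simple line on points it has never seen.

        import numpy as np
        from numpy.polynomial import Polynomial

        rng = np.random.default_rng(0)

        # Hypothetical data: a plain linear trend plus noise.
        x = np.linspace(0.0, 10.0, 40)
        y = 2.0 * x + 1.0 + rng.normal(0.0, 2.0, x.size)

        # Hold out every fourth point to test the fits on unseen data.
        held_out = np.arange(x.size) % 4 == 0
        x_fit, y_fit = x[~held_out], y[~held_out]

        for degree in (1, 12):
            p = Polynomial.fit(x_fit, y_fit, degree)
            in_rmse = np.sqrt(np.mean((p(x_fit) - y_fit) ** 2))
            out_rmse = np.sqrt(np.mean((p(x[held_out]) - y[held_out]) ** 2))
            print(f"degree {degree:2d}: in-sample RMSE {in_rmse:.2f}, "
                  f"held-out RMSE {out_rmse:.2f}")

    The higher-degree fit will report a smaller in-sample error, which is exactly what makes excessive curve-fitting seductive.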

    Model extrapolation is another problem, as it usually means that the modeler has left the zone where data exists and is assuming that there isn’t another variable that will start to come into play at some point. Newtonian mechanics and Einstein’s relativity are a good example: Newtonian physics works great up to a point and then breaks down completely because it has no concept of the speed of light as a constant speed limit. I think the climate change debate suffers greatly from this, as it is unclear whether the many extrapolated results from current models will have any real basis; there may be numerous feedback loops, positive and negative, that we are unaware of or whose importance we have misunderstood, that could come into play.
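
    A companion sketch for the extrapolation point (again hypothetical, Python): a model fitted where data exists can look fine in-sample and still be badly wrong once a variable it never observed starts to dominate.

        import numpy as np

        def true_response(x):
            # Hypothetical system: roughly linear at first, then a
            # saturation effect the modeler never observed takes over.
            return 3.0 * x / (1.0 + 0.05 * x ** 2)

        rng = np.random.default_rng(1)

        # Data only exists for x in [0, 2]; the model never sees anything beyond.
        x_obs = np.linspace(0.0, 2.0, 25)
        y_obs = true_response(x_obs) + rng.normal(0.0, 0.1, x_obs.size)

        # A straight line fits the observed range reasonably well...
        slope, intercept = np.polyfit(x_obs, y_obs, 1)

        # ...and falls apart once we extrapolate past it.
        for x in (1.0, 2.0, 5.0, 10.0):
            print(f"x={x:5.1f}  fitted line: {slope * x + intercept:6.2f}"
                  f"   actual: {true_response(x):6.2f}")

    Which extrapolated value you trust depends entirely on whether that unseen term exists, which is exactly the unknown-feedback problem described above.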

  2. VennData says:

    Here’s an example:

    “…Even if interest rates rise further, bond prices aren’t likely to tank because most investments are in short-term debt…”

    “…Let’s consider an extreme case: Suppose the interest rate on 30-year mortgages, which is currently around 4.15%, rose to 5.5% in a short period of time. This would be an extraordinary, albeit not impossible, increase. This would imply a drop in the price of a newly issued 30-year mortgage of roughly 19% — a much smaller percentage decline than we saw with the collapse of either the stock or housing bubbles…”

    http://finance.fortune.cnn.com/2014/04/25/the-one-reason-why-bond-prices-wont-collapse/?section=magazines_fortune

    Compare that with the long-run average 30-year rate of around 7.5%:

    http://www.federalreserve.gov/releases/h15/data.htm

    When the 30-year rate rises back to that level, it will mean a 30% drop in price.
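
    The disagreement here comes down to present-value arithmetic, so here is a minimal sketch of two readings of it (Python, illustrative only; real mortgage pricing also involves prepayment and other effects neither calculation captures). It discounts a 30-year stream of level monthly mortgage payments, and a plain 30-year semiannual coupon bond, both written at 4.15%, at the higher yields being discussed.

        def mortgage_price(coupon_rate, yield_rate, years=30):
            # Price per $1 of principal of a level-payment mortgage written
            # at coupon_rate, with its cash flows discounted at yield_rate.
            def annuity(rate, n):
                m = rate / 12.0
                return (1.0 - (1.0 + m) ** -n) / m
            n = years * 12
            payment = 1.0 / annuity(coupon_rate, n)   # level monthly payment
            return payment * annuity(yield_rate, n)

        def bond_price(coupon_rate, yield_rate, years=30):
            # Price per $1 face of a plain semiannual coupon bond issued at par.
            n = years * 2
            c, y = coupon_rate / 2.0, yield_rate / 2.0
            discount = (1.0 + y) ** -n
            return c * (1.0 - discount) / y + discount

        for new_yield in (0.055, 0.075):
            m_drop = 1.0 - mortgage_price(0.0415, new_yield)
            b_drop = 1.0 - bond_price(0.0415, new_yield)
            print(f"yield {new_yield:.1%}: mortgage-style drop {m_drop:.0%}, "
                  f"bond-style drop {b_drop:.0%}")

    The size of the implied drop depends heavily on the cash-flow assumption, which is itself the kind of modeling detail the list above warns about.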

  3. Low Budget Dave says:

    Have you noticed that political news falls into at least 8 of the 12?

    Consider: “Candidate A is ahead in polls by 8 points, which means candidate B’s arguments are going to cost him the election.”

    The critical thinking skills from science apply to politics:
    – Was the poll unbiased?
    – What was the sample size? (a rough margin-of-error sketch follows this list)
    – Was the poll among likely voters?
    – Are the arguments wrong, or merely unpopular?
    – Did the arguments hurt B, or did B hurt the arguments?
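
    For the sample-size question, here is a rough sketch of the arithmetic (Python; the poll sizes are hypothetical): the margin of error on a two-candidate lead shrinks only with the square root of the sample size, so the same 8-point gap can be solid or meaningless depending on how many people were actually asked.

        from math import sqrt

        def lead_margin_of_error(n, p=0.5, z=1.96):
            # Approximate 95% margin of error on the lead, i.e. the difference
            # between two candidates polled from the same sample of n people.
            return 2.0 * z * sqrt(p * (1.0 - p) / n)

        lead = 0.08  # the hypothetical 8-point lead
        for n in (400, 1000, 2500):
            moe = lead_margin_of_error(n)
            verdict = "outside" if lead > moe else "inside"
            print(f"n={n:4d}: margin of error on the lead ~{moe:.1%}, "
                  f"so an 8-point lead is {verdict} the noise")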

    Have you ever heard any political coverage (or even business coverage) that did not make at least half the common mistakes?

    Worse, I tend to overlook bad science in articles where I agree with the headline. I only pick apart headlines that irk me.

    As a result of this human trait, we have become a nation impervious to science.

  4. cowboyinthejungle says:

    2. & 11. are the only significant pitfalls I’ve witnessed at the level of the scientists themselves. And these are most often not nefarious, as the anti-science folk would make them out to be.

    When you don’t know what you don’t know, misinterpretation of the data is easy to do, particularly given that we all tend to fill in the gaps in our knowledge through our own set of biases. The same goes for replicability. While good scientists make efforts to control extraneous variables and note all critical methodology, sometimes the details that matter for reproducibility are not obvious to the experimenter or to those repeating the process.

    Bottom line: calling those two items bad science isn’t right, IMO. They are more inherent flaws in the system, which one hopes get worked out over time and with scale.