15 ways to tell if that science news story is hogwash

Just because a study has been published in a scientific journal doesn't mean that it's perfect — there are plenty of flawed studies out there. But how can we spot them?

The excellent chart below offers "A Rough Guide to Spotting Bad Science." It was put together by the blogger behind the chemistry site Compound Interest. It isn't meant to be an exhaustive list — and not all of these flaws are necessarily fatal. But it's a great guide to what to look for when reading science news and scientific studies:

[Infographic: "A Rough Guide to Spotting Bad Science," by Compound Interest]

Here's a more detailed breakdown:

1) Sensationalized headlines: Behind sensationalized headlines are often sensationalized stories. Be wary.

2) Misinterpreted results: Sometimes the study is fine, but the press has completely messed it up. Try to stick to news sources with a track record of covering research carefully.

3) Conflicts of interest: Who funded the research in question? If you see a study claiming that drinking grape juice helps your memory, and it was funded by the grape industry, give it some extra thought. (That happens all the time: lots of studies on random foods being good for you are funded by the relevant food councils.)

Be careful: some journals require researchers to disclose conflicts of interest and funding sources, but many do not. And not all conflicts of interest involve funding. For example, be a bit suspicious of someone testing a medical device who also consults, even without pay, for a company that sells medical devices.

4) Correlation and causation: Just because two things are correlated doesn't mean that one caused the other. If you really want to find out whether something causes something else, you have to set up a controlled experiment. (Compound Interest's infographic brings up the fabulous example of the correlation between fewer pirates over time and increasing global temperature. It's almost certain that fewer pirates did not cause global temperatures to rise [or vice versa], but the two are still correlated.)
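
If you want to see how easy this is to produce, here's a minimal sketch in Python (numpy assumed available; every number is invented for illustration). Two series that merely share a time trend, echoing the pirates example, end up strongly correlated even though neither is computed from the other:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2020)

# Two independent trends: pirate numbers fall while temperatures rise,
# each with its own random noise. Neither series is computed from the other.
pirates = 5000 - 40 * (years - 1900) + rng.normal(0, 500, years.size)
temperature = 13.5 + 0.01 * (years - 1900) + rng.normal(0, 0.1, years.size)

# The Pearson correlation comes out strongly negative anyway.
r = np.corrcoef(pirates, temperature)[0, 1]
print(f"correlation between pirates and temperature: {r:.2f}")
```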

5) Speculative language: You can say anything with the word "could" and it could be true. Jelly beans could be the reason that the average global temperature is increasing. Unicorns might cause cancer. And pygmy marmosets may be living in the middle of black holes.

6) Small sample sizes: Did the researchers study a large enough group to know that the results aren't just a fluke? That is, did they treat cancer in two people or in 200? Or 2,000? Was that brain scanning psych study on just seven people?
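
Here's a rough simulation of why this matters (Python with numpy assumed; the coin-flip "recovery rate" is entirely made up): a treatment that does nothing still looks like a miracle cure in a noticeable share of tiny studies.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# A "treatment" that does nothing: each patient recovers with 50% probability,
# like a coin flip. Count how often a study of size n still shows an
# impressive-looking recovery rate of 80% or more purely by chance.
for n in (5, 50, 500):
    recoveries = rng.binomial(n, 0.5, size=trials)
    fluke_rate = np.mean(recoveries / n >= 0.8)
    print(f"n={n:>3}: {fluke_rate:.1%} of studies look like an 80%+ cure")
```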

7) Unrepresentative samples: If a researcher wants to make claims about how all people think, but she only studies the college students who show up to her university lab, then she can only really draw conclusions about how those college students think. One cultural group can't tell you about all of humanity. This is just one example, but it's a pervasive issue.

8) No control group used: In a controlled experiment, one group gets the treatment and a comparison (control) group doesn't; without that baseline, there's no way to know whether the treatment did anything at all. Why would anyone even waste their time doing a study without one?

9) No blind testing used: The placebo (and nocebo) effects are strong. (Check out this awesome, three-minute video on the crazy effects of placebos.)

In medical and psychology studies, participants should not be aware of whether they're in the experimental group or the comparison group (often called a "control"). Otherwise, their expectations can muddle the outcomes. And, if at all possible, the researchers who interact with the participants should also be unaware of who is in the control group. Studies should be performed under these double-blind conditions unless there is some really good reason that it cannot be done that way.

10) "Cherry-picked" results: Ignoring the results that don't support your hypothesis will not make them go away. It's possible that the worst cherry-picking happens before a study is published. There's all kinds of hidden data that the scientific community and the public will never see.

11) Unreplicable results: If one lab discovers something once, it's sort of interesting. But that lab could have hit a random fluke or, rarely but possibly, be fabricating its results. If an independent lab can replicate the finding, it becomes far more believable.

12) Journals and citations: The fact that a study appeared in a fancy scientific journal, or has been cited many times by others, doesn't mean it's perfect research, or even good research.

A few more tips for evaluating science news

13) Check for peer review: Just because you saw it in a news story doesn't mean that it's been looked over by an independent group of scientists. Maybe the results were presented at a conference that doesn't review presentations. Maybe it went straight from the operating table to the press, like recent uterus transplants.

14) Results not statistically significant: Generally, researchers want to see a statistical analysis with a p-value below 0.05, meaning that if there were no real effect, results at least as extreme as these would show up less than 5 percent of the time by chance alone. Some fields are even stricter than that. This gives a reasonable degree of confidence that you're looking at a real result, not just a stroke of good luck.
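
To see what that 5 percent threshold actually means, here's a small sanity-check simulation (again assuming Python with numpy and scipy; the group sizes are arbitrary): when there is genuinely no effect, a t-test at p < 0.05 should fire about 5 percent of the time, and it does.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
runs = 10_000

# Compare two groups drawn from the *same* distribution, i.e. no real effect.
# A well-calibrated test should cross the p < 0.05 line about 5% of the time.
false_positives = sum(
    ttest_ind(rng.normal(0, 1, 50), rng.normal(0, 1, 50)).pvalue < 0.05
    for _ in range(runs)
)
print(f"false positive rate with no real effect: {false_positives / runs:.1%}")
```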

15) Confounding variables: Might some third factor be causing the effect you see? Did the statistical analysis take it into account?
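
For the classic textbook case, here's a hedged sketch (Python with numpy assumed; the ice-cream-and-drownings relationships are invented): a hidden third variable manufactures a correlation, and a crude statistical control makes it vanish.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Summer heat drives both ice cream sales and drownings; the two never
# influence each other directly. (All relationships here are invented.)
heat = rng.normal(0, 1, n)
ice_cream = 2 * heat + rng.normal(0, 1, n)
drownings = 3 * heat + rng.normal(0, 1, n)

raw_r = np.corrcoef(ice_cream, drownings)[0, 1]

# Crude adjustment: regress heat out of each variable, then correlate
# the residuals. The spurious association disappears.
def residuals(x, z):
    slope, intercept = np.polyfit(z, x, 1)
    return x - (slope * z + intercept)

partial_r = np.corrcoef(residuals(ice_cream, heat),
                        residuals(drownings, heat))[0, 1]
print(f"raw correlation: {raw_r:.2f}; after controlling for heat: {partial_r:.2f}")
```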