Monday, April 07, 2014

(Mis)Understanding Studies

Nate Silver and his merry band of data journalists recently re-launched fivethirtyeight.com, a fantastic site that tries to communicate original analyses of data relating to science, politics, health, lifestyle, the Oscars, sports, and pretty much everything else. It's unsurprising that articles on the site receive a fair amount of criticism. Reading the comments, I was heartened to see people debate the proper way to explain the purpose of a t-test (we're a long way from the typical YouTube comments section), but a bit saddened that the tone often read more like carping than constructive criticism. Instead of declaring that someone is "dead wrong", why not suggest how their work might be improved?

One article on the site spoke to a topic already on my mind as I begin teaching classes on news literacy and information literacy: how news articles about research misrepresent findings, and what to do about it. The 538 piece is wonderfully specific and constructive on that front. It provides a checklist that readers can quickly apply to the abstract of a scientific article, and it advises readers to weigh that information, along with their initial gut reaction to the claims, when deciding whether to believe the claims, act on them, or share the information. The checklist is aimed at health stories in the popular press, but I think it could be applied just as well to articles about media effects.
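Just to make that kind of quick vetting concrete, here is a minimal sketch in Python of what the process might look like. To be clear, the criteria below are my own illustrative stand-ins, not the actual items from the 538 checklist, and the threshold for skepticism is arbitrary.

# A toy sketch of a quick vetting pass over a study write-up.
# The questions are illustrative stand-ins of my own, NOT the actual
# 538 checklist items, and the pass/fail cutoff is arbitrary.

CHECKLIST = [
    "Was the study done in humans (not just mice or cell cultures)?",
    "Were participants randomly assigned to conditions?",
    "Is the sample reasonably large (hundreds rather than dozens)?",
    "Does the abstract report the size of the effect, not just 'significance'?",
    "Has the finding been replicated by other researchers?",
]

def vet_study():
    """Walk through the checklist and tally how many items the study satisfies."""
    score = 0
    for question in CHECKLIST:
        answer = input(question + " (y/n) ").strip().lower()
        if answer.startswith("y"):
            score += 1
    print(f"{score} of {len(CHECKLIST)} criteria met.")
    if score <= 2:
        print("Be skeptical: don't act on or share the claim without digging deeper.")
    else:
        print("More plausible, but still check the abstract and discussion section.")

if __name__ == "__main__":
    vet_study()

The point isn't the code, of course; it's that this kind of vetting can be made routine enough to run through in your head in under a minute.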

Now, the list might not be exhaustive, and there may be perfectly valid findings that don't meet any of the criteria on it, but I think it's a good start. And really, that's what I love about 538. I recognize it has flaws, but it is a much-needed step away from groundless speculation based on anecdotes geared toward confirming the biases of a niche audience (i.e., a lot of news articles and commentary). And they appear to be open to criticism. Through that, I hope, they will refine their pieces into something that genuinely improves the information literacy of the public.

The piece also got me thinking about how systematically the popular press misleads the public about scientific findings. The coverage tends to follow a particular script: the researchers account for the most likely contributors to an outcome and test their hypotheses in a more-or-less rigorous fashion. The popular press article, pressed for space and chasing a large, general audience, never mentions that those possible contributing factors were accounted for. So when people read the news article about the study, they think, "well, there's clearly another explanation for the finding!" But in most (not all, but most) cases, the researchers have already accounted for whatever variable the reader imagines is driving the outcome.

In other cases, the popular press simply overstates either how certain we should be about a finding or the magnitude of one thing's effect on another. Again, a look at a few parts of the original research article (the abstract and the discussion section, say) is usually enough to tell whether the popular press account was misleading, and we wouldn't even need to know any stats to do it.

The popular press benefits from articles and headlines that catch our eyes and confirm our biases. That's just the nature of the beast. Rather than throwing out the abundant information around us, it's worth developing a system for quickly vetting it and taking what we can from it.
