
Covering health research? Choose your studies (and words) wisely

(Unsplash)

Many of the most popular news stories about health research include overstated findings or substantial inaccuracies, according to a study led by Noah Haber, a postdoctoral researcher at the Carolina Population Center at the University of North Carolina at Chapel Hill.

Haber and his colleagues asked a panel of 21 reviewers, all of whom held master’s degrees and most of whom were enrolled in or had completed doctoral programs, to examine the 64 news articles about health research most shared on Facebook and Twitter in 2015, along with the 50 studies that spawned those stories.

The reviewers focused on the methodology and language used in the research papers. They were interested in whether the studies could plausibly claim a “causal” connection – that the topic of interest caused or contributed to the health outcomes studied. The reviewers also looked at how the authors described their results: Did they say or imply that one variable was a direct result of another when, in actuality, the authors had found only a correlation, an association that does not by itself establish cause and effect?

Haber and his colleagues found widespread problems with the language used to describe the research findings, both in the studies and the resultant media coverage. Twenty percent of academic papers strongly implied causal results; 34 percent used language that was stronger than what the reviewers deemed appropriate.

Next, the reviewers evaluated the associated most-shared news articles to see whether they accurately reported the main features of the study. They also compared the language used in the articles to the language used in the papers, focusing on potential overstatements of causality.

In the news coverage, 48 percent of articles overstated the findings and 58 percent contained one or more substantial inaccuracies, such as misreporting the study’s research question.

“It’s really basically impossible to write about these particular studies in a way that does not mislead,” Haber said in a phone interview with Journalist’s Resource.

“I think that the thing that is most in [journalists’] power to do is the choice of what to write about and what not to write about,” he said. “Stop writing about [academic] articles that are meaningless.”

“The real problem is when you’re using association as a euphemism for causality, and not technically using the word. That’s where the issue lies,” Haber added. These studies might avoid causal language, but, through their design, dog whistle a causal connection where there might not be one.

And even when academic researchers and reporters are careful with the language they use, technically accurate terminology can suggest inaccurate conclusions to the lay reader.

“Taking the place of your viewer, you have to understand how your viewer is going to interpret that information … And if what they absorb is not what the article is actually saying, then that’s sort of a misinformative way of reporting on an article, even if it’s the right technical language,” Haber said.

Haber believes there are a few clues journalists can pick up on to determine whether a study merits coverage:

  • It’s hard to explain the relationship between lifestyle-related, direct exposures and long-term outcomes – for example, drinking wine with dinner and the risk of developing heart disease later in life. “Anything that’s like chocolate, red wine, everybody’s favorite, and then, like, cancer mortality and cardiovascular disease … you really can’t disentangle these sorts of effects … and that’s, unfortunately, a lot of the stuff that people are really interested in.”
  • Studies that control for many confounding variables (other factors that could explain associations between topics of interest) might presuppose a causal relationship. In other words, controlling for many variables can hint that the researchers are interested in one specific effect to the exclusion of alternate explanations. “There is nothing wrong with controlling for confounding variables in and of itself,” Haber explained in an email. “However, when studies control for a LOT of things, it often indicates that the statistical strategy itself is weak and doomed to failure.” (A brief simulation just after this list sketches how an unmeasured confounder can manufacture a misleading association.)
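
To see why such findings are easy to over-read, here is a minimal simulation sketch (illustrative only, with made-up numbers; it is not drawn from Haber’s study). In this toy world a single unmeasured trait, overall health-consciousness, drives both how much wine people drink and their heart-disease risk. Wine itself has no effect at all, yet the two variables still come out correlated:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical unmeasured confounder: each person's overall health-consciousness.
    health_conscious = rng.normal(size=n)

    # In this toy world, more health-conscious people happen to drink a bit more wine...
    wine = 0.5 * health_conscious + rng.normal(size=n)

    # ...and also have lower heart-disease risk. Wine has zero causal effect here.
    heart_risk = -0.5 * health_conscious + rng.normal(size=n)

    # Yet wine and heart-disease risk are correlated (about -0.2), so a naive
    # analysis would report that wine "appears protective."
    print(np.corrcoef(wine, heart_risk)[0, 1])

Controlling for health-consciousness would remove the spurious link in this toy example, but in real data the relevant confounders are often unmeasured or poorly measured, which is why piling on control variables cannot by itself justify a causal reading.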

If you must write about the latest study linking a favorite food or beverage to a surprising outcome, Haber suggested approaching it as a critical review. “There aren’t that many pieces that really do a critical review of studies as they come out,” he said. “They sort of assume that this new study finds the truth, whereas an alternative and pretty interesting article can be: this new study comes out, here is why it might be misleading, or here are some critiques of this study in this body of literature. And I think that actually is not only often more accurate and better information for lay audiences, it’s probably also click-y. A lot of this stuff can be quite interesting to share around.”

And, with these tips in mind, Haber said not to be too wary of research that attempts to find an association between two or more things. “Just because a work is associational does not necessarily mean that it is meaningless,” he said.

“You have to figure out what the assumed purpose [of a study] is. If the assumed purpose is [determining] causality, then a lot of the stuff is inappropriate to write about. But there are a lot of examples of interesting associations that are really useful by themselves, and are interesting and doing a lot of interesting stuff. One example is disparities research. It’s usually relatively simple associations, whether an association between wealth and things, which can be interesting by itself, because it’s useful for targeting.”

For journalists writing about a study, Haber suggested talking to one of the study’s authors: “If you think there’s a possibility that causality could apply here, ask the author directly, ‘Is this causality?’ Most of the time, they’ll tell you no. And then ask them, if no, [for] a person reading this article, what should this change about their lives? I’m actually really curious, if most academic authors were brought this question directly, I’m curious what a lot of people would say.”

He also thinks it’s a good idea to get their feedback on the accuracy of what you have written: “Going to the authors in the studies is a necessary condition, that is a thing you should definitely do pretty much all the time, and allowing them to critique the interpretation of what is written, and particularly focusing on what’s shareable and tweetable … is really important.”

If you balk at the idea of handing over your article (or parts of it) for review by an academic, Haber noted, peer reviewers of the study and methodological specialists are also good resources for checking your work for accuracy.

However, Haber cautioned that not all third parties bring the same level of expertise. Certain sources are interviewed again and again in coverage of broad research topics, he noted, yet they may not have the specific expertise or credibility to provide insight on a particular study.

He also cautioned against taking a single study as the be-all, end-all: “We tend to have this idea in society that one study by itself — there are these great eureka moments — but really this stuff is built by consensus over many, many years. It’s much harder to gather the information about what the scientific consensus is on things versus one study … consensus, it’s a thing that’s a little bit more abstract.”


Haber pointed to Health News Review, FiveThirtyEight and The Incidental Economist as online news sites that avoid many common pitfalls of reporting on health research. Health News Review even has criteria by which it evaluates news write-ups of academic research, which reporters might find useful.

Citation
Haber, Noah; et al. "Causal Language and Strength of Inference in Academic and Media Articles Shared in Social Media (CLAIMS): A Systematic Review," PLoS One, 2018. doi: 10.1371/journal.pone.0196346.