
How journalists can spot bias in randomized clinical trials


Randomized, controlled clinical trials are studies in which participants are randomly assigned either to receive a new intervention, such as a medical device, or to a control group that receives a standard treatment or a placebo, so that the intervention's effects can be measured. These trials are often considered the gold standard of medical studies because they can provide evidence of causation.

Even so, these trials can be designed such that the deck is stacked in favor of the new treatment, so journalists can’t assume a study is reliable just because it’s randomized, according to Dr. Joanna Chikwe, professor and chair of cardiac surgery at the Smidt Heart Institute at Cedars-Sinai Medical Center in Los Angeles.

There’s a financial incentive for health care industry players to bring their expensive-to-create-and-test devices to market, Chikwe told journalists gathered recently for a fellowship on cardiovascular health, “Covering the Heart Beat,” organized by the National Press Foundation.

The cost of hospital cardiac care has jumped from $200 billion in 2015 to around $250 billion today, and is projected to increase to $400 billion by 2035, Chikwe said, citing a March 2019 paper in Circulation.

“Innovation is one of the biggest cost drivers,” she said. “It will take you, on average, between $7 [million] and $19 million just to go from concept to [a medical device] that’s ready to put into a person. And that’s the cheap bit … one randomized trial is required for FDA [U.S. Food & Drug Administration] approval, and that will cost you somewhere between $31 [million] and $95 million.”

During her talk, which focused on medical device development, Chikwe explained how the cost of industry innovation can influence the results of clinical trials. Chikwe also highlighted a few strategies that trials might employ in hopes of securing favorable results — and questions journalists can ask to help them detect when these strategies are being used.

Tips for detecting bias

In the pre-study phase, a trial's design can build in bias toward one treatment. With that in mind, journalists might want to ask which endpoints, or outcomes, the researchers have chosen to study.

Chikwe noted that a study can be designed to deliberately avoid endpoints that would show a new device is inferior. For example, suppose a manufacturer has created a new device that's inserted through a minimally invasive procedure, and it's being compared against the invasive surgery that's the current standard of care. If the researchers select endpoints such as hospitalization after treatment or blood transfusions, the comparison is stacked in the device's favor, because both outcomes are more likely after surgery regardless of how well the device actually works. What the researchers should test is whether the new device prolongs patients' lives or improves their condition.

Another way to bias the results of a study is to choose a "non-inferiority" design. In a non-inferiority study, as long as outcomes associated with the new intervention aren't inferior to those of the comparison treatment, the results can be read as favorable for the new treatment. If the margin used to decide whether a treatment is inferior is wide, the new intervention could actually produce worse results and still technically count as "non-inferior."
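
To see how the margin drives the conclusion, here is a minimal sketch, using entirely hypothetical trial numbers rather than data from any real study, of the standard non-inferiority check: compare the upper bound of the confidence interval for the difference in event rates against the pre-specified margin.

```python
# Minimal sketch of a non-inferiority check with hypothetical numbers.
# The new device has a worse event rate (12% vs. 9%), but a wide margin
# (8 percentage points) still lets it be declared "non-inferior."
import math

n_new, events_new = 500, 60   # new device: 12% event rate (e.g., death or stroke)
n_std, events_std = 500, 45   # standard surgery: 9% event rate
margin = 0.08                 # pre-specified non-inferiority margin

p_new, p_std = events_new / n_new, events_std / n_std
diff = p_new - p_std          # +0.03: the device is worse by 3 points

# 95% confidence interval for the risk difference (normal approximation).
se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
upper = diff + 1.96 * se      # ~0.068

# Non-inferiority is declared if the entire interval sits below the margin.
print(f"risk difference: {diff:+.3f}, 95% CI upper bound: {upper:.3f}")
print("declared non-inferior" if upper < margin else "non-inferiority not shown")
```

With a tighter margin of, say, 3 percentage points, the same data would fail the test; the margin, chosen before the trial starts, largely determines how much worse a new treatment can be and still pass.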

For some studies, the bias might be in the pool of patients selected, Chikwe said. For example, does the study test an intervention in relatively healthy patients, rather than in patients with much more severe disease?

Studies might also be designed with a short follow-up period, Chikwe said. That way, researchers can get their results out sooner, and short-term results may favor the new treatment. Early results almost never favor heart surgery over a non-surgical intervention, for example, because heart surgery tends to offer survival benefits only over the longer term.
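
As a rough illustration, here is a toy calculation with made-up survival figures, not data from any actual trial: if surgery carries more upfront risk but a lower ongoing risk, the device looks better at one year of follow-up while surgery looks better at five.

```python
# Toy illustration with hypothetical risks: surgery has higher upfront
# (peri-procedural) risk but lower ongoing yearly risk than the device.
def survival(upfront_risk, yearly_risk, years):
    """Probability of surviving the procedure plus `years` of follow-up."""
    return (1 - upfront_risk) * (1 - yearly_risk) ** years

for years in (1, 5):
    surgery = survival(upfront_risk=0.06, yearly_risk=0.015, years=years)
    device = survival(upfront_risk=0.01, yearly_risk=0.05, years=years)
    better = "device" if device > surgery else "surgery"
    print(f"{years}-year survival: surgery {surgery:.1%}, "
          f"device {device:.1%} -> {better} looks better")
```

A trial that stops at the one-year mark would report only the first line.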

In the execution and analysis phase of a study, a number of factors can bias results:

  • Physicians might break the randomization of the study, Chikwe said. A patient might be randomly assigned to receive the new treatment, but physicians sometimes deem a new treatment too risky or otherwise unsuitable for a particular patient and switch them to the comparison treatment. In that case, the results are based on an "as-treated analysis": researchers compare patients according to the treatment they actually received, not the treatment they were randomly assigned, which undoes some of the protection randomization provides (see the sketch after this list). As-treated analyses can be used to seek FDA approval for new medical treatments.
  • Researchers might change the study design while it is underway — perhaps because preliminary results are unfavorable, or because of feasibility concerns.
  • In the analysis phase, individuals running a study might analyze only a subset of the recorded outcomes to demonstrate a benefit or avoid showing harms.
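
Here is a toy simulation, with invented numbers, of why as-treated comparisons can mislead. In this sketch the two treatments are equally effective, but physicians move the sickest fifth of device-assigned patients to surgery. The intention-to-treat comparison, grouping patients by random assignment, correctly shows no difference; the as-treated comparison makes the device look safer.

```python
# Toy simulation with invented numbers: both treatments are equally effective,
# but crossovers make the device look better in an as-treated analysis.
import random

random.seed(1)
N = 100_000

patients = []
for _ in range(N):
    severity = random.random()                        # 0 = healthiest, 1 = sickest
    assigned = random.choice(["device", "surgery"])   # randomization
    # Crossover: the sickest device-assigned patients are switched to surgery.
    treated = "surgery" if assigned == "device" and severity > 0.8 else assigned
    # Outcome depends only on severity, not on which treatment was given.
    died = random.random() < 0.05 + 0.20 * severity
    patients.append((severity, assigned, treated, died))

def death_rate(index, group):
    rows = [p for p in patients if p[index] == group]
    return sum(p[3] for p in rows) / len(rows)

print(f"intention-to-treat: device {death_rate(1, 'device'):.1%}, "
      f"surgery {death_rate(1, 'surgery'):.1%}")   # ~15% vs. ~15%
print(f"as-treated:         device {death_rate(2, 'device'):.1%}, "
      f"surgery {death_rate(2, 'surgery'):.1%}")   # ~13% vs. ~16%
```

Because the sickest patients were filtered out of the device arm after randomization, the as-treated numbers suggest a benefit that doesn't exist.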

After the study is completed, Chikwe said, there are yet more opportunities for bias to creep in:

  • When readying a study for publication, authors may choose to report some outcomes but not others.
  • They could underreport negative results, or spin neutral or negative results favorably.
  • Chikwe suggested that conflicts of interest at high-impact medical journals might lead them to publish methodologically flawed trials. A 2010 study of conflicts of interest at medical journals found that pharmaceutical companies’ purchase of reprints accounts for a large share of journal revenue. If the companies’ trials aren’t published in medical journals, they might have less reason to purchase reprints, which would hurt journal revenue, Chikwe said.

What’s a journalist to make of all this — especially one who doesn’t have the time or knowledge to spot all the pitfalls of a study? Chikwe’s main tip is to enlist the opinion of a third-party expert in clinical trials or study design to evaluate claims made in favor of a new medical device.

“I spend a huge amount of time thinking about this kind of research and reading it,” she said. “And I can read a paper and pick up maybe a third of the issues with it, because some of them are so subtle, but so important. You really need to work with somebody whose whole interest is that … Have a relationship with somebody who is a trialist, somebody who’s got genuine interest in and knowledge of methodology.”

 

While you’re here, check out our tip sheet that dispels five widespread myths about women’s heart health and our summary of new research on the cardiovascular risks of marijuana use.