This article was first published by Harvard Business Review. Minor edits were made in accordance with Journalist’s Resource’s editorial style.
As false news has become a global phenomenon, scholars have responded, ramping up their efforts to understand how and why bad information spreads online — and how to stop it. In the past 18 months, they have flooded academic journals with new research and raised the level of urgency. In a March 2018 article in the prestigious journal Science, titled “The Science of Fake News,” 16 high-profile academics issued a call to action, urging internet and social media platforms to work with scholars to evaluate the problem and find solutions.
Much of what researchers have learned in this short time helps to answer three important questions — about how much misinformation people consume, why they believe it, and the best ways to fight it.
How far does misinformation reach?
Researchers are still trying to get a clear picture of how many people false news influences and how far it travels online. For now, they have been able to make estimates based on survey data, country-level web traffic, and other sources.
For example, a 2017 study in the Journal of Economic Perspectives examined the consumption of false news in the U.S. during the months leading up to the 2016 presidential election. In a survey of 1,208 U.S. adults, 15 percent said they remembered seeing false news stories, and 8 percent acknowledged seeing one of these stories and believing it. The study’s authors — Hunt Allcott, an associate economics professor at New York University, and Matthew Gentzkow, an economics professor at Stanford University — estimated that U.S. adults, on average, “read and remembered on the order of one or perhaps several fake news articles during the election period.”
Earlier this year, the Reuters Institute for the Study of Journalism at the University of Oxford released a report showing that false news sites appear to have a limited reach in Europe. For instance, in France, where Russians are accused of trying to interfere with the most recent presidential election, most of the false news sites studied reached 1 percent or less of the country’s online population each month in 2017. However, when researchers looked at how people interacted with false news on Facebook — via shares and comments, for example — “a handful of false news outlets in [the] sample generated more or as many interactions as established news brands.”
A 2018 paper from Andrew Guess of Princeton University, Brendan Nyhan of Dartmouth College, and Jason Reifler of the University of Exeter is among the first to offer insights about Americans who actively sought out false news before and after the 2016 presidential election. According to their study, an estimated 27.4 percent of U.S. adults — about one in four — visited an article on a false news site supporting either Donald Trump or Hillary Clinton. On average, people saw 5.45 articles from false news sites between October 7 and November 14, 2016.
Guess, Nyhan, and Reifler also found that false news consumption was concentrated among a small subset of Americans. Nearly six in 10 visits to false news sources were from the 10 percent of people with the most-conservative information diets.
Another key, potentially surprising, takeaway from that study: “In general, fake news consumption seems to be a complement to, rather than a substitute for, hard news — visits to fake news websites are highest among people who consume the most hard news and do not measurably decrease among the most politically knowledgeable individuals.”
That finding raises this question: Why would hard-news junkies also seek out false news? For some, it may be a matter of curiosity, their interest piqued by an alarming headline or a sensational photo. But some people believe the information they find on false news sites, even when it’s not backed by established facts or scientific evidence.
Why do people believe false information?
Scholars have known for decades that people tend to search for and believe information that confirms what they already think is true. The new elements are social media and the global networks of friends who use them. People let their guard down on online platforms such as Facebook and Twitter, where friends, family members, and coworkers share photos, gossip, and a wide variety of other information. That’s one reason people may fall for false news, as S. Shyam Sundar, a Pennsylvania State University communication professor, explains in The Conversation. Another reason: People are less skeptical of information they encounter on platforms they have personalized — through friend requests and “liked” pages, for instance — to reflect their interests and identity.
Sundar characterizes his research findings in this way: “We discovered that participants who had customized their news portal were less likely to scrutinize the fake news and more likely to believe it.”
A growing body of research also indicates that repeated exposure to false statements can lead people to believe those falsehoods. An experimental study led by Vanderbilt University assistant professor of psychology Lisa Fazio showed that repetition can make people believe false statements even when those statements contradict their own knowledge of a topic. For example, participants who had correctly answered that the short pleated skirt worn by Scots is called a kilt became more likely to believe the false statement “A sari is the name of the short pleated skirt worn by Scots” after reading that sentence multiple times.
Similarly, a study forthcoming in the Journal of Experimental Psychology: General shows that readers who have been exposed to a made-up headline are more likely to believe it when they see it again. That holds even when the headline is presented with a warning that the facts it conveys are in dispute.
If people believe false news to be true, they might freely share it. That’s why researchers are investigating methods to prevent its dissemination.
How can the spread of false information be stopped?
In some parts of the U.S., officials are promoting news literacy programs as a way to help Americans better assess the quality of online content. It’s too soon to tell, though, whether that could change bad habits over the long term.
Research offers a mixed view of the effectiveness of fact-checking and other attempts to correct bad information.
Nyhan, of Dartmouth, and Reifler, of the University of Exeter, found that correcting misinformation can backfire. In their well-cited 2010 study, “When Corrections Fail: The Persistence of Political Misperceptions,” they found that people sometimes hold more firmly to false beliefs when confronted with factual information. For example, when political conservatives were presented with correct information about the absence of weapons of mass destruction in Iraq, they became even more likely to believe Iraq had those weapons.
Some new research, however, seems to partially contradict those findings. A study forthcoming in Political Behavior, from Ohio State University’s Thomas Wood and George Washington University’s Ethan Porter, suggests the “backfire effect” is uncommon. They write: “Overwhelmingly, when presented with factual information that corrects politicians — even when the politician is an ally — the average subject accedes to the correction and distances himself from the inaccurate claim.”
In 2015 Nyhan and Reifler teamed up again on a study that looked at attempts to correct misperceptions about the flu vaccine. In a nationally representative survey experiment, they found that explaining that the vaccine cannot give people the flu helped clear up misconceptions about the vaccine and its safety. But imparting this new information had consequences, too: Among the study participants most worried about vaccine side effects, the probability that they said they were likely to get the vaccine fell from 46 percent to 28 percent.
Those results are consistent with findings from prior research on efforts to correct myths about the measles-mumps-rubella, or MMR, vaccine. “Corrective information reduced beliefs that the MMR vaccine causes autism but still decrease[d] intent to vaccinate among parents with the least favorable vaccine attitudes,” write Nyhan and Reifler, who authored that study as well.
But there’s also fresh evidence that the same technology that helps to spread false information can be used as a tool to contain it. A recent article in Political Communication suggests social network relationships may stem the flow of bad information — at least on Twitter. The study, led by Drew B. Margolin of Cornell University, showed that Twitter users who made false statements were more likely to accept corrections from friends and individuals who followed them.
What don’t we know?
While the surge of attention to false news has generated a lot of new scholarship, plenty about the phenomenon remains a mystery.
For one, much of the new research centers on U.S. politics and, specifically, elections. But social networks drive conversations about many other topics, such as business, education, health, and personal relationships. To battle bad information online, it would be helpful to know whether people respond to these topics differently than they respond to information about political candidates and elections. It also would be useful to know whether myths about certain subjects — a business product or an education trend, for instance — are trickier to correct than others.
As the 16 academics pointed out in “The Science of Fake News,” there’s a need for a more interdisciplinary approach to the problem.
The group, which includes social scientists and legal scholars, also stresses the need to learn more about platform-based detection and interventions. In their call to action, they urge leaders at Google, Facebook, and other online platforms to help researchers understand how those platforms filter information. “There are challenges to scientific collaboration from the perspectives of industry and academia,” they write. “Yet, there is an ethical and social responsibility, transcending market forces, for the platforms to contribute what data they uniquely can to a science of fake news.”
About the author: Denise-Marie Ordway is the managing editor of Journalist’s Resource, a project of Harvard’s Shorenstein Center on Media, Politics and Public Policy aimed at bridging the gap between journalism and academia. Its primary goal is to help journalists improve their work by relying more often on scientific evidence and high-quality, peer-reviewed research.