Sponsored by Philip Morris International

This content was produced by Boston Globe Media's Studio/B and paid for by the advertiser. The news and editorial departments of The Boston Globe had no role in its production or display.

Who can be held responsible for stopping misinformation?

Who should be held responsible for stopping misinformation — the sharer, the platform, or the reader? Learn how each can play a role.

In 1998, The Lancet published a study by British physician Andrew Wakefield that claimed to establish a clear link between the measles-mumps-rubella (MMR) vaccine and childhood autism. The medical community criticized Wakefield’s findings almost immediately, and in the 23 years since, pediatricians and specialists have demonstrated the numerous flaws in his study that led to its mistaken results.

The damage, however, was done. Some parents, fearful for their children’s safety, began to reject early vaccinations, and childhood diseases such as measles began making a comeback.

Wakefield’s claim that vaccines cause autism would, arguably, become one of the first pieces of modern misinformation.

Since then, researchers and policymakers alike have begun to recognize misinformation as a form of large-scale public crisis.

Correcting misinformation will not come easily. But that doesn’t mean it’s impossible. Researchers have found approaches that they say can help get a handle on the problem by addressing it from three perspectives: the platforms that publish false stories, the readers who embrace those narratives, and the users who share them.


Platforms: Labels can’t cure everything, but they help

People tend to scan and move on, sharing the things that seem fun or interesting. They see social media as entertainment and use it that way. This also suggests a solution.

Research has found that content labels can be somewhat effective at reducing the volume of misinformation on platforms such as Facebook, Twitter, and YouTube. Even labels that simply raise the question of accuracy, truth, or reliability disrupt the flow state of social media users, prompting them to consider whether a post is actually true rather than merely entertaining.

Prompts won’t convince all readers to change their minds. Many users share misinformation regardless of whether they believe it, and the Pew Research Center has found that tens of millions of Americans believe untruths as a result of using social media. Those users are unlikely to reevaluate their beliefs because of accuracy tags on posts. However, accuracy prompts can shift many users into a more reflective state of mind and help reduce the volume of misinformation online.


Readers: Beware of stories that confirm your own biases and beliefs

When the Reuters Institute looked at what thrives online, it found that almost 90% of the misinformation that people actually spread and share “involves various forms of reconfiguration, where existing and often true information is spun, twisted, recontextualised, or reworked.” Far less of the misinformation online is made up out of thin air, built from wholly invented claims or “deepfaked” images.


This kind of misleading, reconfigured misinformation isn’t just more common, says David Mikkelson, co-founder of the fact-checking website Snopes. It is far more difficult to recognize and correct.

To avoid being fooled, readers should seek out sources that are critical of their beliefs. This, as psychologists Carol Tavris and Elliot Aronson found in their book “Mistakes Were Made (But Not By Me),” can help counter the natural instinct people have to reinforce their own beliefs in the face of virtually any contrary evidence.

Self-criticism is arguably the best thing readers can do on their own end. At the same time, when trying to counter misinformation believed by family or friends, readers should remember that people hold on to misinformation for emotional, personal reasons. This is why they feel attacked and dig in further when presented with evidence contrary to their beliefs.

Instead of challenging a friend’s beliefs, readers should ask questions. Honest, open inquiry about what someone believes and why can move that person into a more reflective mode. Research by Tavris and Aronson, among others in the field, has found that when people have cause to consider their beliefs (instead of defending them), they are more likely to examine the basis of those beliefs. Asking someone why they believe a story, in other words, gives them a chance to think about that question themselves.

In layman’s terms, the best way readers can address misinformation in their own minds is to ask “why do I believe this?” The best way they can address misinformation in family and friends is to ask “why do you believe this?”

In both cases, readers should listen honestly to the answer.


Sources: Change the economics of sharing misinformation


If talking consumers out of their misinformation doesn’t work, Marshall Van Alstyne, a professor at Boston University’s Questrom School of Business who studies information economics, suggests another option: shutting down the source.

“The responsibility should be on the person who publishes the problem,” he says. “If you have a factory that’s polluting you want the factory to stop polluting, not have the citizens all wear gas masks.”

The problem is that it is always cheaper, easier, and faster to produce fiction than fact, Van Alstyne says. Fact-checkers, in other words, can’t keep up with the pace of fiction.

Economists call this the problem of “externalities,” behavior free for the actor but costly for everyone else. They tend to solve this problem by shifting the burden of costs, changing systems so that the party that causes harm also has to pay for it.

For example, pouring waste into the water and air costs a polluter nothing while it imposes medical bills and cleanup costs on a whole society. Environmental regulations shift those costs by making it expensive to dump toxic waste so that polluters pay for their actions instead of everyone else.

Misinformation has the same dynamic. It costs nothing to produce and imposes costs on society in the form of fact-checking, poor judgment, and more. As a result, Van Alstyne suggests having social media networks impose a similar burden-shifting scheme.

Instead of making readers slow down, Van Alstyne recommends burdening the people who share misinformation. If the social media network determines that a piece of content is false or inaccurate, users who share it should have their future messages delayed. The more they share misinformation, the more their messages get delayed until it can take them a week or a month to share a single post. Infractions might also cut a user’s social network down, costing them connections (followers and friends) as they share false information.

“What does this do?” Van Alstyne says. “It changes the incentive of the ideologue. If your goal is to persuade, you’re going to cut your own influence because you’re going to cut your social network in half and delay your messages.”
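Van Alstyne frames this in policy terms rather than as an implementation, but the core mechanic is simple to picture: the penalty grows each time a user shares content the platform has flagged as false. The Python sketch below is a minimal illustration under assumed numbers; the class name, base delay, growth factor, and follower penalty are hypothetical choices, not parameters from his proposal.

```python
from dataclasses import dataclass

# Illustrative only: these values are assumptions, not part of Van Alstyne's proposal.
BASE_DELAY_MINUTES = 10   # delay applied after the first flagged share
GROWTH_FACTOR = 4         # each further infraction multiplies the delay
FOLLOWER_PENALTY = 0.05   # fraction of connections lost per infraction

@dataclass
class UserRecord:
    flagged_shares: int = 0   # posts this user shared that were later flagged as false
    followers: int = 1000

    def record_flagged_share(self) -> None:
        """Called when a fact-check flags something this user shared."""
        self.flagged_shares += 1
        # Repeat offenders also lose a slice of their audience.
        self.followers = int(self.followers * (1 - FOLLOWER_PENALTY))

    def posting_delay_minutes(self) -> float:
        """Delay imposed on the user's next post; grows with each infraction."""
        if self.flagged_shares == 0:
            return 0.0
        return BASE_DELAY_MINUTES * GROWTH_FACTOR ** (self.flagged_shares - 1)

user = UserRecord()
for _ in range(5):
    user.record_flagged_share()

print(user.posting_delay_minutes())  # 2560 minutes -- nearly two days
print(user.followers)                # 772 of the original 1,000 connections
```

With these made-up numbers the delay roughly quadruples per infraction, so five flagged shares push a user’s next post out by almost two days and seven push it out by about a month, the kind of escalation Van Alstyne describes.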

Whether or not this is the right solution, the principle remains: the incentives around misinformation need to change. It needs to become harder to tell lies, not just harder to read them.

