Sponsored by Philip Morris International

This content was produced by Boston Globe Media's Studio/B and paid for by the advertiser. The news and editorial departments of The Boston Globe had no role in its production or display.

Down the rabbit hole: How social media fuels misinformation

Your social media feed will keep showing you more of what you like — but that can be a big problem.

Social media algorithms are neutral in one sense: they are not designed to promote accurate information or to suppress it. They are built only to maximize readership and, in turn, ad revenue. So how did social media become such a perfect megaphone for misinformation, disinformation, and outright lies?

Networks are built for engagement and profit

The ad-driven model of social media networks profits from user engagement. The longer users spend on a platform, the more content they see, including ads, and the more money that platform makes. So networks such as Facebook, Twitter, and YouTube are designed to maximize users’ time on the platform by creating feedback loops of personal engagement, reinforcing contact and content preferences.

Behind the algorithms that feed your feeds

Two main algorithms govern the flow of information on social media platforms. The first is the connections algorithm, known on Facebook as “people you may know.” This defines who connects to whom, and therefore who will see each other’s posts. The second is the feed algorithm. This selects what users see and in what order. Together these two algorithms define what a given user sees on the network. The connection algorithm builds the network by establishing connections, and the feed algorithm moves data between those connections based on what each user is most likely to engage with and return to.

When YouTube recommends a channel or creator you might like, that’s the connection algorithm at work. When Twitter populates your feed with posts from the other Twitter users you’ve chosen to follow, that’s the feed algorithm in action.

As users engage with and post certain content, the network automatically recommends more of the same to them. It also begins recommending connections who post similar content or have similar connections.
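
Neither network publishes its actual ranking code, but the mechanics described above can be sketched in a few lines. The toy example below is purely illustrative and assumes a made-up engagement history, follow graph, and scoring rule: it ranks candidate posts by predicted engagement (a stand-in for the feed algorithm) and suggests accounts followed by your existing connections (a stand-in for the connection algorithm).

```python
# Toy illustration (not any platform's real code) of the two algorithms
# described above: a feed algorithm that ranks posts by predicted
# engagement, and a connection algorithm that suggests accounts
# based on mutual follows.

from collections import Counter

# Hypothetical interaction history: how often this user engaged with
# each topic in the past.
user_topic_engagement = Counter({"politics": 12, "sports": 3, "cooking": 1})

# Candidate posts the network could show next.
candidate_posts = [
    {"id": 1, "topic": "politics", "likes": 900},
    {"id": 2, "topic": "cooking", "likes": 40},
    {"id": 3, "topic": "politics", "likes": 2500},
    {"id": 4, "topic": "sports", "likes": 300},
]

def feed_rank(posts, engagement):
    """Rank posts by a crude predicted-engagement score:
    (how much the user engaged with this topic before) * (overall popularity)."""
    def score(post):
        return engagement[post["topic"]] * post["likes"]
    return sorted(posts, key=score, reverse=True)

# Hypothetical follow graph for the connection algorithm.
follows = {
    "you":   {"alice", "bob"},
    "alice": {"bob", "carol"},
    "bob":   {"carol", "dave"},
}

def suggest_connections(user, graph):
    """Suggest accounts followed by the people the user already follows."""
    suggestions = Counter()
    for friend in graph.get(user, set()):
        for candidate in graph.get(friend, set()):
            if candidate != user and candidate not in graph[user]:
                suggestions[candidate] += 1
    return [name for name, _ in suggestions.most_common()]

if __name__ == "__main__":
    print([p["id"] for p in feed_rank(candidate_posts, user_topic_engagement)])
    # -> [3, 1, 4, 2]: the feed keeps surfacing the topic the user already engages with.
    print(suggest_connections("you", follows))
    # -> ['carol', 'dave']: accounts your connections follow get recommended to you.
```

Even in this simplified form, the self-reinforcing pattern is visible: the more a user engages with one topic, the higher that topic scores, and the more of it the feed serves up.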

Algorithms drive users down a rabbit hole

Nick Loui, founder, Peak Metrics

While this creates a good system for maximizing views, it also creates ideal conditions for misinformation to breed. According to Nick Loui, founder of Peak Metrics, a data analysis company that uses AI to help organizations spot and stop misinformation, this system can quickly take users down a rabbit hole.

Based on reader actions, networks will reinforce the same material over and over again, driving users toward similar content the longer they browse, regardless of whether that content contains misinformation, bad data, or fake news. This can create a feedback loop of false information that grows exponentially.

“You start in one place and all of a sudden it’s like a concentric circle and you just keep going and going and going, and it’s very good at that,” says Loui.

As the Brookings Institution notes in a study of user behavior on YouTube, “[e]ven when they are not personalized, recommendation algorithms can still learn to promote radical and extremist content.” The researchers found that YouTube’s algorithms prioritized channels and videos with high engagement, regardless of their content. That meant, for example, that YouTube would often recommend offensive material or conspiracy theories to someone who had watched political videos, which not only led people to view such content but, as Brookings noted, “contributed to the normalization of radical content.”

Many social media platforms work much like this: the network emphasizes similarity, showing users content similar to what they’ve seen before, and novelty, showing users content they are unlikely to get bored with.

Can the problem also be the solution?      

Adam Wilson, CEO, Trifacta

While data technology creates this problem, it can also provide solutions, says Adam Wilson, CEO of the data analytics firm Trifacta. The firm specializes in helping companies and governments identify bad data in massive environments, such as supply chain or clinical trial data. In part, he says, the solution is about looking for patterns.

So, while the algorithms behind social media networks help push users toward false or radical content, they can also be taught to identify that misinformation. Often, he says, this means the network should start by identifying consistent sources of misinformation so that it can separate reliable content from unreliable content.

This is different from the current model that many social media networks have adopted, which often involves having human moderators specifically try to identify harmful content or users on a case-by-case basis.

“It becomes like trying to find a needle in a haystack,” says Wilson, “but what if those haystacks are coming at you in real time? A human being can’t keep up with that. What you need is to get to a point where the anomalous data starts to surface itself so that someone can keep up with those exceptions.” This means building a system that identifies bad data on its own, often by comparing it with similar information and sources it has seen in the past.
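
Wilson’s description maps onto a simple anomaly-detection pattern. The sketch below is a minimal illustration, not Trifacta’s product or any platform’s real pipeline, and the source names and counts are invented: it computes each source’s rate of flagged content and surfaces only the sources whose rate sits far above the average, so a human moderator reviews exceptions rather than every post.

```python
# A minimal sketch (hypothetical data, not any real moderation pipeline)
# of surfacing anomalous sources instead of reviewing every post by hand.

from statistics import mean, stdev

# Hypothetical per-source counts: how many of each source's recent posts
# were independently fact-checked as false.
source_stats = {
    "local_news_a":   {"posts": 200, "flagged": 2},
    "blog_b":         {"posts": 150, "flagged": 3},
    "meme_page_c":    {"posts": 180, "flagged": 61},
    "wire_service_d": {"posts": 300, "flagged": 1},
}

def anomalous_sources(stats, z_threshold=2.0):
    """Return sources whose flagged-content rate is far above the average.

    A source is 'anomalous' if its rate exceeds the mean rate by more than
    z_threshold standard deviations -- those are the exceptions a human
    moderator would actually review."""
    rates = {name: s["flagged"] / s["posts"] for name, s in stats.items()}
    mu, sigma = mean(rates.values()), stdev(rates.values())
    return [name for name, rate in rates.items()
            if sigma > 0 and (rate - mu) / sigma > z_threshold]

if __name__ == "__main__":
    print(anomalous_sources(source_stats, z_threshold=1.0))
    # -> ['meme_page_c']: the one source whose misinformation rate stands out.
```

The point of the pattern is triage: the system does not decide what is true, it narrows the haystack so that the people or fact-checkers downstream only have to look at the outliers.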

Cuihua Shen, professor of communication, University of California Davis

Training algorithms to recognize reliable sources of data can be particularly helpful. Cuihua Shen, a professor of communication at the University of California Davis, studies the relationship between readers and fake news. Her research particularly emphasizes image manipulation and deepfakes, the AI-generated videos that can be used to disseminate false information.

What she has found is that users first rely on external cues to let them know whether something is true or not. When someone is trying to figure out if an image or video is fake, Shen says, “first they point to non-image cues. So if someone is trying to decide if it’s fake, they might say it’s fake because it’s published on Facebook… Or ‘This is fake because I’ve never heard of this account.’ Or ‘This is real because the mockup shows that this is from the New York Times website.’”

This is what Shen calls heuristics: the process through which someone makes a decision, often using a “mental shortcut” to reach a quick judgment. Researchers agree that helping social media networks account for these heuristics can disrupt the feedback loops that prioritize the misinformation users have already chosen for themselves and, ultimately, help stop its spread.
