Social Media ‘Trust,’ ‘Distrust’ Buttons May Help Prevent Misinformation Spread, New Study Shows



The addition of “trust” and “distrust” buttons on social media platforms, alongside the standard “like” button, could play a vital role in reducing the spread of misinformation, according to a study conducted by researchers from University College London (UCL).

The study found that incentivizing accuracy by incorporating trust and distrust options led to a substantial decrease in the reach of false posts.

(Photo: Edar from Pixabay)

Incentivizing Trustworthiness


Professor Tali Sharot, one of the co-lead authors, highlighted the escalating problem of misinformation, or “fake news,” which has polarized the political landscape and influenced people’s beliefs on crucial matters like vaccine safety, climate change, and diversity acceptance.

Traditional approaches like flagging inaccurate content have shown limited effectiveness in combating this pervasive issue.

“Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true,” Sharot said in a press release. “Here, we have designed a simple way to incentivise trustworthiness, which we found led to a large reduction in the amount of misinformation being shared.”

In a related study published in Cognition, Sharot and her colleagues demonstrated the power of repetition in shaping sharing behavior on social media: participants were more likely to share statements they had encountered multiple times, assuming that repeated information carried greater credibility, even when it was false.

To test a potential solution, the researchers conducted six experiments on a simulated social media platform with 951 participants.

The platform allowed users to share news articles, half of which were inaccurate. In addition to the traditional “like” and “dislike” reactions and reposting options, some versions of the experiment introduced “trust” and “distrust” reactions.


Gravitating Towards Trust

The study found that users gravitated toward the trust and distrust buttons, using them more often than the like and dislike buttons. Participants also began posting more accurate information in order to earn "trust" reactions.

Computational modeling revealed that after introducing trust and distrust reactions, users became more discerning about the reliability of news stories before deciding to repost them.

Interestingly, participants who engaged with the platform versions featuring trust and distrust buttons ended up with more accurate beliefs, according to the study. 

Laura Globig, a PhD student and co-lead author, emphasized the potential of incorporating buttons indicating the trustworthiness of information into existing social media platforms. She also acknowledged the complexity of real-world implementation, considering various influences.

But she noted that, given the significant risks of online misinformation, integrating trust and distrust options could be a valuable addition to ongoing efforts to combat the spread of false information.

The study provides insights into a potential solution to tackle the challenge of misinformation, offering hope for a more reliable and trustworthy social media environment. 

By promoting accuracy and incentivizing trustworthiness, the “trust” and “distrust” buttons seemed to foster a more informed and discerning online community. The study was published in the journal eLife.

