
Trustnet Browser Extension Empowers Users to Assess Content and Combat Misinformation Online

Wesam Abo Marq
28th May 2024
Similar extensions can limit the spread of misinformation (Getty)

The prevalence of online misinformation is widely acknowledged as a significant problem, yet consensus on addressing it remains elusive. Various suggested remedies center on the role of social media platforms in moderating user-generated content to curb misinformation. A new solution comes in the form of the Trustnet browser extension, which empowers individuals to evaluate the accuracy of content across websites, offering users a means to combat online misinformation.

What Is the Trustnet Browser Extension? 

According to Farnaz Jahanbakhsh SM ’21, PhD ’23, currently a postdoc at Stanford University, relying on platforms to moderate content entrusts a critical societal decision to profit-driven entities and curbs users' autonomy in deciding whom to trust. Moreover, platform moderation does nothing about misinformation users encounter from other online sources.

"This approach puts a critical social decision in the hands of for-profit companies. It limits the ability of users to decide who they trust. And having platforms in charge does nothing to combat misinformation users come across from other online sources," Jahanbakhsh said.

The Extension Empowers Users to Assess Online Content

Jahanbakhsh, alongside Massachusetts Institute of Technology Professor David Karger, has put forward an innovative approach to tackle online misinformation. Their approach, the Trustnet browser extension, enables users to flag misinformation and appoint trusted evaluators, shifting the authority of content assessment from centralized entities to individual users. Notably, this versatile extension functions across various online platforms, including social media, news aggregators, and streaming services. 

Their research, detailed in a paper presented at the ACM Conference on Human Factors in Computing Systems, underscores the effectiveness of this decentralized approach, demonstrating that even untrained individuals can utilize the tool to evaluate misinformation effectively. Karger emphasizes the urgency of relying on verified information in the face of rampant misinformation proliferation, highlighting Trustnet as a promising vision for the future of online information verification.

A screenshot of the paper presented at the ACM Conference on Human Factors in Computing Systems.

The Trustnet Browser Extension Methodology

In their ongoing efforts to combat online misinformation, the researchers built on their earlier work, the Trustnet social media platform, which allowed users to evaluate content accuracy and designate trusted assessors. To bring that capability to the rest of the web, they developed a platform-agnostic solution: the Trustnet browser extension.

A screenshot of the MIT News’ report.

The extension enables users to assess content with a simple click, providing options to label it as accurate, inaccurate, or question its accuracy, along with an opportunity to explain their rationale. Users can identify trusted assessors whose evaluations they wish to see, and the extension automatically displays these assessments when visiting relevant websites. In addition, users can choose to follow others beyond their trusted assessors and respond to accuracy inquiries through the side panel.

Addressing the prevalence of content within social media feeds and aggregator pages, where users often refrain from clicking links, the researchers devised a solution wherein the extension checks all links on the current page for assessments by trusted sources. Indicators are placed next to linked content with assessments, while links to content deemed inaccurate are faded.
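The link-checking behavior described above can be thought of as a small decision step: for each link on the page, look up whether a trusted assessor has evaluated it, then mark it accordingly. Here is a minimal, hypothetical TypeScript sketch of that logic; the type names, data shapes, and function names are illustrative assumptions, not the actual Trustnet code.

```typescript
// Hypothetical sketch of link decoration in a Trustnet-style extension.
// All names and data shapes here are assumptions for illustration only.

type Verdict = "accurate" | "inaccurate" | "question";

interface Assessment {
  url: string;
  verdict: Verdict;
  assessor: string;    // a trusted assessor chosen by the user
  rationale?: string;  // optional explanation of the judgment
}

interface LinkDecoration {
  url: string;
  showIndicator: boolean; // place an indicator next to assessed links
  fade: boolean;          // visually de-emphasize links judged inaccurate
}

// Decide how each link on the current page should be rendered,
// given assessments from the user's trusted sources.
function decorateLinks(
  pageLinks: string[],
  assessments: Assessment[]
): LinkDecoration[] {
  const byUrl = new Map(assessments.map((a) => [a.url, a] as const));
  return pageLinks.map((url) => {
    const a = byUrl.get(url);
    return {
      url,
      showIndicator: a !== undefined,
      fade: a?.verdict === "inaccurate",
    };
  });
}

// Example: two assessed links and one unassessed link.
const decorations = decorateLinks(
  [
    "https://example.com/story-a",
    "https://example.com/story-b",
    "https://example.com/story-c",
  ],
  [
    { url: "https://example.com/story-a", verdict: "accurate", assessor: "alice" },
    { url: "https://example.com/story-b", verdict: "inaccurate", assessor: "bob" },
  ]
);
```

In a real extension, a content script would gather `pageLinks` from the DOM and apply the resulting decorations as icons and CSS styling; the pure function above keeps that decision logic separate from rendering.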

Differing Assessments by Trained and Untrained Users

In a two-week study, the researchers observed 32 individuals using the Trustnet extension; participants were asked to assess two pieces of content each day. Surprisingly, users often chose to assess content quite different from what professional fact-checkers typically evaluate, favoring topics like home improvement or celebrity gossip over news articles. They also expressed a desire for assessments from non-professionals with relevant experience, such as doctors for medical content or immigrants for foreign affairs coverage.

Jahanbakhsh notes this misalignment between the content users want assessed and the content professional fact-checkers cover, arguing that a decentralized approach can scale to accommodate such diverse assessment needs. However, the researchers caution that echo chambers could form if users trust only like-minded assessors.

To address this, structured trust relationships, such as suggesting users follow trusted assessors like the FDA, could mitigate bias. Jahanbakhsh aims to explore structured trust relationships further, expanding the framework's application beyond misinformation to filter content unsympathetic to protected groups.

She stresses the importance of empowering users with fact-checking tools while allowing them to choose the content they want to see.
