
Do Tech Giants Obstruct Fact-Checking Efforts to Combat Misinformation?

Misbar's Editorial Team
Technology
30th June 2024
Social media platforms are removing features that are helpful to fact-checkers (Getty)

AI-generated videos, often random compilations of internet clips containing "tips" and "rare facts" with little to no scientific basis, are going viral on platforms like TikTok and Instagram. These videos can surface in any user's feed, even if that user has never viewed such content before. Meanwhile, fake reviews and AI-generated comments are widespread in online stores, influencing bestseller lists on Amazon. Misleading medical channels on YouTube, meanwhile, promote unproven treatments for cancer and other diseases.

This problem extends beyond specific online content. The European parliamentary elections earlier this month were marred by unprecedented disinformation and influence campaigns, similar to those seen in the U.S. presidential elections. Even this year's Hajj season was not spared, with misinformation spreading widely. Google's AI Overview tool, designed to provide AI-generated answers through its search engine, has even suggested absurd advice, such as "putting glue on pizza to keep the cheese in place."

Does this mean that misinformation is more widespread online than ever before? To address this question systematically, researchers and fact-checkers usually rely on data for monitoring and analysis to yield precise results. However, in this case, the crucial data is locked behind the doors of the companies that operate social media platforms and other media outlets where misinformation and conspiracy theories thrive. Today, evaluating the reach and spread of false and misleading information is a complex, indirect process with less-than-ideal results.

But are companies like Google, Meta, and TikTok indifferent to this issue? The answer is complex. However, recent measures and initiatives taken by these companies may help address the problem.

The Disappearance of Misinformation Measurement Methods

One of the most crucial tasks for journalists covering the spread of misinformation online is finding ways to measure its reach. There is a significant difference in impact between a YouTube video with 1,000 views and one with 16 million views. Recently, however, key metrics used to measure the reach of false and misleading information that goes "viral" have started to disappear from the public domain.

For instance, earlier this year TikTok disabled the view count for popular hashtags, showing instead the number of posts using those hashtags. Meta announced that in August it will shut down CrowdTangle, a tool widely used by researchers and journalists to examine closely how information spreads on social media platforms.


Meanwhile, Elon Musk made likes private on X after users noticed controversial likes from his personal account. While this move enhances privacy for ordinary X users, it diminishes accountability and complicates efforts to monitor and analyze the bots and troll farms that amplify influence campaigns on the platform.

With access to the platform APIs that many monitoring tools depend on now restricted, researchers' ability to track and analyze online activity has become increasingly constrained.

The disappearance of these metrics and indicators runs counter to the recommendations of many experts on false and misleading information. Transparency and disclosure are crucial elements in reform and anti-disinformation efforts led by regulatory bodies and legal entities, as exemplified by the European Union's Digital Services Act.

While some of the responsibility falls on media literacy efforts and on raising awareness about how to consume content and news, platforms and the companies that own them must shoulder their share as well. Promoting AI-based chatbots as advanced search tools, for example, as OpenAI did when it launched ChatGPT, raises user expectations beyond what the service can actually fulfill.

How Serious Are Platforms’ Efforts to Combat Misinformation?

The current algorithms for content distribution not only amplify misinformation but also enable platform-owning companies to profit from it. This can occur through ad sales and promotional campaigns organized by publishers of fake news on Meta's platforms, or through purchases on the misinformation-filled "TikTok Shop." If these companies were to take concrete steps to alter how misinformation spreads on their platforms, they might work against their own commercial interests.

Social media platforms are designed to display content that users are interested in interacting with and sharing. Similarly, AI-powered chatbots are designed to give the illusion of knowledge and research. However, neither of these models is ideal for assessing credibility, often necessitating a narrowing of the platform’s intended scope. Slowing down or restricting such platforms means reduced interaction, leading to slower growth and, consequently, lower revenue.

Many platforms initiated efforts to combat misinformation following the 2016 U.S. elections and again at the start of the COVID-19 pandemic. However, since then, we have seen a kind of retreat in these efforts. For instance, Meta laid off employees from content moderation teams during the 2023 mass layoffs in Silicon Valley and relaxed the "COVID-era" rules.

Without sufficient data for effective measurement, evaluating how genuine the major platforms' efforts to combat misinformation are remains challenging. Based on the information that is available, these platforms appear to be deprioritizing the fight against false and misleading information, suggesting a degree of fatigue with the issue. This does not mean, however, that no one is taking action.


The process of “pre-publication review” or “vetting,” where platforms collaborate with fact-checking organizations to verify the accuracy of rumors and falsehoods before they gain more traction, has proven effective but requires broader adoption.

Furthermore, fact-checking through collective user contributions, like "Community Notes" on X, has been effective, alongside watermarking of AI-generated content and labeling of government-affiliated accounts. It is also commendable that platforms regularly update their "community guidelines" in response to emerging issues.

There is no doubt that combating misinformation is challenging, given the diversity of tools and methods that facilitate its dissemination. But the reluctance of social media platform owners and operators to prioritize the issue does nothing to stop misinformation from reaching the public. While these companies weigh how far to go in curbing their platforms' capacity to spread and promote falsehoods, those targeted by such misinformation continue to suffer harm daily.
