Note: The views and opinions expressed in blog/editorial posts are those of the author. They do not purport to reflect the views or opinions of Misbar.
The internet and social media are often considered the main vectors of misinformation. A Pew Research Center study from January found that 86 percent of U.S. adults, more than eight in ten, get their news from a smartphone. Initially, tech companies did not want to take responsibility or be held accountable for misinformation. However, under mounting pressure from activists, experts, and governments, more initiatives are being taken to counter it.
Increased Calls for Tech Companies to be Held Accountable for Misinformation
Following Donald Trump's 2016 election victory, it became apparent that a Russian misinformation campaign on Facebook had aimed to influence how Americans voted. Initially, Facebook denied any wrongdoing. The social media giant's chief executive, Mark Zuckerberg, responded that only a very small amount of content on Facebook was fake news or hoaxes, and that the notion that fake news on the platform could have influenced the election was a "crazy idea." Zuckerberg added that "Facebook shouldn't be the arbiter of truth of everything that people say online. Private companies probably shouldn't be, especially these platform companies, shouldn't be in the position of doing that," implying that he did not want Facebook to have to determine whether certain content was misinformation and whether to remove it. The damage done by online misinformation is not limited to the U.S.; it extends to medical disinformation, Brexit, the Brazilian elections, the Rohingya genocide, and the Russia-Ukraine conflict. More recently, governments, experts, and public figures such as Barack Obama have stepped up pressure on tech companies to be held responsible and accountable. In the European Union (EU), the 2022 Code of Practice on Disinformation was launched.
The Anti-Disinformation Code in the EU
The Code of Practice on Disinformation is a pioneering tool by the EU. Industry signatories first agreed on self-regulatory standards to fight disinformation in 2018. A revision process commenced in June 2021, and on June 16, 2022, the revised Code was presented. It will become part of a broader regulatory framework, in combination with the legislation on Transparency and Targeting of Political Advertising and the Digital Services Act. The tech companies expected to sign the Code include Facebook, Twitter, Google, Microsoft, and TikTok, which have already made allowances on data sharing with specific countries to address misinformation. Under the Code, key players, including online platforms, emerging and specialised platforms, players in the advertising industry, fact-checkers, researchers, and civil society organisations, collaborated to produce a "Strengthened Code of Practice on Disinformation."
The Strengthened Code of Practice
Signatories of the Code can decide which commitments they sign up to, and it is their responsibility to ensure the effective implementation of those commitments. The Strengthened Code of Practice contains 44 commitments and 128 specific measures in various areas, including: demonetising providers of misleading information, ensuring easy identification and transparency of political advertisements, empowering users, improving cooperation with fact-checkers, and providing researchers with more access to platform data.
Recognising how critical it is to make the Code sustainable for the long term, the signatories agreed to create a Transparency Centre and a permanent Task Force to continue collaborating on these issues. The Transparency Centre will give the public an overview of the implementation of the Code's measures and provide regular updates on relevant data and policies. The Task Force comprises the European Digital Media Observatory, the European Regulators' Group for Audiovisual Media Services, the European External Action Service, and representatives of the signatories, and will be chaired by the EU Commission. Lastly, the Code includes a strengthened monitoring framework that uses qualitative reporting elements and service-level indicators to measure the effectiveness of its implementation.
Signatories will have six months to implement and monitor the measures they agreed to. The permanent Task Force will regularly evaluate progress in implementing the Code, based on the granular qualitative and quantitative data that signatories are expected to provide. It will also oversee and adapt the commitments in light of technological, sociological, market, and legal developments, meeting as necessary and at least every six months. While the Code started as entirely voluntary, it aims to eventually become a Code of Conduct and mitigation measure under the Digital Services Act.
This Code of Practice on Disinformation can set the foundation for other countries to follow in holding tech companies responsible for curbing misinformation without restricting or impeding free speech.
Misbar’s Sources