Experts warn that social media firms are ill-prepared to combat election misinformation in 2024, and they are calling on major tech companies to ensure their platforms are ready to safeguard democracy during next year's elections.
Global Coalition for Tech Justice Urges Tech Companies to Protect Democracy
The Global Coalition for Tech Justice, which brings together civil society leaders and individuals affected by tech-related harms, is calling on major tech companies, including Google, TikTok, Meta, and X, to take proactive measures to prepare their platforms to protect democracy and user safety in next year's elections.
In 2024, over 2 billion people are expected to participate in more than 50 elections worldwide, including those in the United States, India, and the European Union.
In July, the coalition reached out to Google, Meta, X, and TikTok, urging them to develop comprehensive election action plans at both a global and country-specific level. These plans should allocate sufficient resources to safeguard freedoms, rights, and user safety during the 2024 elections. None of the companies responded or shared their plans.
The Coalition Asks Big Tech Companies to Establish Action Plans
The 2024 Action Plans, both global and country-specific for each election, should be fully and fairly funded and include several key elements. They should adhere to international human rights and electoral standards, ensuring that existing and new policies align with these principles; engagement with election bodies should be free from political influence; and independent election monitors should be supported.
The coalition also calls for the prompt publication of comprehensive human rights impact assessments, or the commissioning of such assessments where none have yet been conducted. These assessments should follow international best practices, be carried out by independent external parties, and have their results transparently integrated into decision-making and shared with stakeholders to better prepare for election periods.
Furthermore, the coalition asks companies to allocate resources based on harm risk rather than market size; to publish investment and staffing data for trust and safety efforts by language and country, justifying resource allocation decisions; to ensure expertise in local contexts, languages, and dialects for content moderation; and to prioritize impact assessments that protect individuals even in challenging human rights markets, so that access to safe online platforms remains equitable worldwide. Companies should also address past imbalances in funding, adopt standardized reporting formats, and enable transparency so that researchers, regulators, civil society, and other stakeholders can monitor their efforts.
Companies should offer a comprehensive range of tools and measures, including both new ones developed in response to threat assessments and proven tools from markets where they have heavily invested, such as the United States. There should be full transparency about where these tools are implemented and clear justifications for any regional variations.
These measures should be operational throughout the entire electoral process, from the lead-up to the election, through polling day, and even during the weeks or months following the vote. This approach aims to prevent past mistakes where measures were scaled back, potentially facilitating post-election violence and undemocratic actions by malicious actors.
Effective measures should also be rooted in local expertise and context, which requires adequate resources for local staff and sustained multi-stakeholder engagement. To compensate for algorithmic limitations in languages other than English, platforms should invest in larger, culturally competent content moderation teams. Ideally, platforms should coordinate their efforts and facilitate civil society dialogues across all relevant platforms to avoid divergent standards.
Moreover, companies should bolster and fund partnerships with fact-checkers, independent media, civil society organizations, and others committed to safeguarding electoral integrity. These collaborations should uphold partners' independence, report meaningful engagement in a standardized format, and involve cooperation with other companies to maximize investments in trust and safety measures and reporting.
Maintaining independence from government and political influence is another demand. This involves full transparency, including disclosure, within legal limits, of company-government interactions, speech suppression requests, and surveillance demands. Policy exemptions for politicians that could harm electoral processes or citizens' rights should be abolished: politicians should not be exempt from user protection policies, and fact-checking should apply to political advertisements.
The coalition also asks companies to establish robust oversight and transparency measures, including data access and training for researchers, civil society, independent media, and election monitors so they can scrutinize platform activity. This means preserving access to CrowdTangle at Meta, providing equivalent tools at Alphabet, and offering an open and affordable API at X (formerly Twitter). Companies should maintain full and accurate ad libraries, including targeting parameters, and publish financial information to enable campaign finance scrutiny. They should also commission independent audits of enforcement, covering ad library error rates and algorithmic impact on harm, and transparently share their content moderation policies and enforcement procedures in a standardized format, including notice, review, and appeal mechanisms.
Finally, companies should facilitate accountability by permitting the documentation and archiving of all instances of actual or potential harm on their platforms, including documentation needed to assess the accuracy and effectiveness of harm mitigation measures, enabling both real-time and retrospective accountability efforts.
Big Tech Could Do More to Enhance Multilingual Content Moderation
Katarzyna Szymielewicz, President of the Panoptykon Foundation, a Polish NGO that scrutinizes surveillance technology, emphasizes that big tech companies need to improve content moderation across languages. She highlights how difficult it is to moderate content effectively across different cultural contexts, especially in languages other than English.
Szymielewicz calls for significantly greater investment in human moderators to tackle issues such as sponsored content, political violence, misogyny, and abuse, which may not be detectable from language alone. She also stresses that platforms should address disinformation and hate consistently, not just during elections, by algorithmically limiting the reach and visibility of such content and accounts.
South Africa has witnessed social media-driven xenophobic violence targeting migrants, refugees, and asylum seekers. In India, a surge in anti-Muslim violence has led the coalition to accuse Meta of disregarding its own hate speech rules related to the ruling Hindu nationalist Bharatiya Janata Party (BJP).
Ratik Asokan, a member of India Civil Watch International, which is part of the coalition, criticized Meta for failing to moderate its platforms effectively in India, citing a shortage of content moderators and alleged ties to the BJP. Asokan contends that the BJP benefits from hate speech as part of its political strategy and that Meta, despite its stated commitment to human rights, is acting as an ally of the Hindu far right.