In the digital era, democracy's future is a growing worry among experts, largely due to the emergence of generative artificial intelligence (AI). The 2016 and 2020 elections witnessed a deluge of misinformation and unfounded assertions on social media platforms, intensifying societal rifts and eroding faith in the democratic process.
As we approach the 2024 presidential elections, the progress in AI represents a new concern, as it has the potential to magnify and disseminate disinformation even further.
Misinformation During the 2016 and 2020 Elections
Leading up to the 2016 presidential election, social media platforms became conduits for misinformation as they were exploited by far-right activists, foreign influence campaigns, and fake news sites to disseminate false information and exacerbate divisions.
Social media users actively spread numerous messages that denigrated candidates and distorted the factual information presented to voters. One prominent example was the fake, heavily biased pro-Trump site "70news," which redirected users to a WordPress blog page asserting that Trump had triumphed in both the popular vote and the Electoral College. In fact, Trump had won the Electoral College but lost the popular vote to Hillary Clinton.
During the 2020 election, conspiracy theories and unsubstantiated allegations of voter fraud were rampant, reaching millions of people through amplification on social media and consequently fueling an anti-democratic movement that sought to overturn the election.
Fake news significantly influenced the 2020 U.S. presidential election, and Trump himself promoted such misinformation. Trump frequently used his personal Twitter account to express his opinions and vent frustrations, and during this election he continued to use the platform to make baseless claims. His tweets often contained disinformation and misinformation, which were employed to mobilize his supporters and prompt protests against Biden's victory in the election.
AI Could Erode Democracy in the 2024 U.S. Elections
As the 2024 presidential election approaches, experts caution that advancements in AI could revive and enhance disinformation strategies used in the past.
According to these experts, AI-generated disinformation not only poses a threat to deceiving audiences but also exacerbates the challenges faced by an already embattled information ecosystem by inundating it with inaccuracies and deceitful content.
Ben Winters, a senior counsel at the Electronic Privacy Information Center, emphasizes that this trend is likely to diminish trust, making it harder for journalists and other information disseminators to deliver accurate information effectively. Ultimately, the use of AI to propagate disinformation is expected to do nothing but harm the information ecosystem.
AI Is Utilized to Create Political Content
Recent advancements in artificial intelligence (AI) have given rise to powerful tools capable of producing photorealistic images, mimicking human-like voice audio, and generating convincingly natural text. Companies like OpenAI have made these technologies accessible to the mass market. Alongside their potential to revolutionize various industries and exacerbate existing inequalities, these tools are increasingly being used to create political content.
In recent months, examples of AI-generated political content have come to the forefront. For example, an AI-generated image depicting an explosion at the Pentagon briefly impacted the stock market.
Furthermore, AI-generated images portraying Donald Trump resisting arrest by police officers spread widely on social media.
The Republican National Committee even released an entirely AI-generated ad depicting imaginary disasters that would supposedly occur if Biden were re-elected. Such developments have raised concerns, with the American Association of Political Consultants warning about the potential threat posed by video deepfakes to democracy.
With the advent of generative AI, creating such content has become accessible to anyone with basic digital skills, and the technology lacks adequate guardrails and effective regulation to control its use. As a result, experts warn that propaganda is being democratized and accelerated, a particular concern in the many countries holding elections in the coming years.
Foreign Countries Influencing U.S. Elections Is Easier Than Ever
The potential harms of AI in elections encompass a range of concerns that echo past decades of election interference. AI-powered tools make it easier to create deceptive content, such as social media bots impersonating real voters, manipulated videos or images, and deceptive robocalls that are harder to detect.
Foreign countries now have new opportunities to influence or undermine U.S. elections using AI, as fluent language models break down language barriers and produce more believable text. The technology can also intensify voter suppression campaigns, targeting marginalized communities with personalized misinformation delivered through audio that mimics trusted personalities.
AI could also create false constituencies by generating letter-writing campaigns or fake engagement, making it challenging to discern genuine voter responses to issues. Research experiments demonstrate that responses to AI-generated letters are nearly indistinguishable from those written by humans, raising concerns about the manipulation of public opinion and democratic processes.
AI-Generated Content to Ridicule Political Candidates
AI-generated content is being used in political campaigns, including deepfake videos and images, to mock opponents and spread disinformation. Similar tactics were used during previous elections by Trump's campaign, relying heavily on memes and deceptively edited videos targeting opponents. Last month, the DeSantis campaign shared AI-generated images of Trump embracing and kissing Anthony Fauci for political purposes.
The impact of AI-generated disinformation on elections is still uncertain and challenging to measure. While it's unclear what role artificial intelligence will play in the upcoming election, concerns arise about its potential to pollute the information ecosystem and erode public trust in online information consumption. Monitoring and countering the effects of misleading AI-generated content present new challenges for researchers and election observers.
Worrying indications arise as generative AI and social media platforms' content moderation measures come into focus. YouTube, Instagram, and Twitter have scaled back their content moderation policies, sparking concerns about the dissemination of disinformation. Researchers are skeptical about the effectiveness of media literacy and conventional fact-checking techniques in tackling the vast amount of misleading content generated by AI. The proliferation of AI-generated content poses a fresh challenge in the ongoing fight against misinformation.
AI-Generated Images Create a Crisis for Fact-Checkers
The speed at which AI-generated images and videos are created surpasses fact-checkers' ability to verify and debunk them, raising concerns about the spread of misinformation. The public's awareness of AI's capabilities further erodes trust, leading people to believe that anything could be artificially generated.
While some generative AI services, such as ChatGPT, have implemented measures to prevent misinformation generation and can block such usage, their effectiveness remains uncertain, and several open-source models lack similar safeguards.
Experts stress the lack of adequate control over the dissemination of AI-generated content, as various methods like robocallers, robo-emailers, and mass email platforms are easily accessible and unrestricted.