In an article by Gretel Kahn, a member of the editorial team at Reuters Institute, University of Oxford, a thought-provoking question was raised: Will AI-generated images pose a crisis for fact-checkers?
Gretel Kahn interviewed journalists, experts, and fact-checkers to evaluate the potential risks of AI-generated visuals. As the adage "seeing is believing" becomes less reliable in the face of this technology, what are the ramifications for misinformation? How will journalists and fact-checkers, who work to debunk hoaxes, be affected? Will media outlets be overwhelmed by propaganda and fake news?
AI-Generated Images And The New Misinformation Threats
In recent weeks, a series of unlikely images gained widespread attention on the internet, such as former U.S. President Donald Trump being arrested, Pope Francis sporting a fashionable white puffer coat, and Elon Musk appearing to be in a close relationship with General Motors CEO Mary Barra.
However, what makes these images truly noteworthy is that they are all fabrications generated by artificial intelligence software.
AI image generators like DALL-E and Midjourney have gained popularity due to their user-friendly nature, allowing anyone to create new images from text prompts. DALL-E claims over 3 million users; Midjourney does not disclose its user base, but it recently halted free trials after an overwhelming influx of new users.
Although generative AI is currently most commonly used for satire and entertainment, the technology is rapidly becoming more sophisticated. This has prompted prominent researchers, technologists, and public figures to sign an open letter calling for a moratorium of at least six months on the training and research of AI systems more powerful than GPT-4, an AI model created by OpenAI. They pose the question: "Should we allow machines to inundate our information channels with propaganda and falsehoods?"
AI-Generated Images of Trump’s Arrest Go Viral
Journalist Eliot Higgins used the AI image generator Midjourney to create fictional images depicting former U.S. President Donald Trump's arrest, which quickly went viral. Higgins was subsequently locked out of the AI image generator's server.
Higgins stated that the rapid virality of the thread he posted using AI-generated images highlights how easily images that cater to people's interests and biases can spread.
Fact-checkers are concerned about the rise of AI-generated images, as a significant portion of fact-checking involves verifying images or videos. Visual disinformation, fueled by compelling and emotive images, can be challenging to debunk and can strongly influence audience perceptions.
On this point, Marilín Gonzalo, a technology columnist at Newtral, an independent Spanish fact-checking organization, emphasized a key observation: even after presenting someone with multiple arguments for a certain viewpoint over a lengthy conversation, it can still be difficult to change their perception once they are shown an image that aligns with their beliefs.
Unveiling the Mystery of Generative Images
Valentina de Marval, a Chilean journalist and professor of journalism at Universidad Diego Portales with previous experience in fact-checking for agencies like AFP, Chicas Poderosas, and LaBot Chequea, shares concerns about the rise of AI-generated images. Despite the existence of clues that can reveal the fake nature of these images, such as hands, teeth, or ears that are not drawn realistically, De Marval worries that the rapid advancement of these AI models may soon make these indicators obsolete.
She notes that artificial intelligence could potentially learn to draw these features more accurately, including imperfections in the skin, within just a few months or even days.
Experts like Felix Simon, a communication researcher and PhD student at the Oxford Internet Institute, caution against taking an alarmist view on the proliferation of AI-generated imagery, suggesting that it does not necessarily lead to increased belief in those images and a truth crisis. Simon points out that the relationship between images and truth has always been unstable, and the emergence of generative AI is just a continuation of that trend. He believes that people and institutions will develop defense mechanisms to verify the authenticity of images, and news organizations may take greater efforts to fact-check images before disseminating them.
According to Simon, concerns about image-based information warfare and fake news can be traced back to the introduction of photography in newsrooms. In more recent times, worries about the impact of deepfakes have persisted for years, and similar concerns arose when Photoshop became accessible to the public.
Fact-Checkers Face Challenges as Fake Images Spread
Higgins suggests that AI-generated images are likely to remain confined to social media platforms and are unlikely to gain significant traction in mainstream media. He also believes that fake images will be debunked as they go viral.
Fact-checkers are worried about the speed at which software like DALL-E and Midjourney can generate fake images and videos, far outpacing Photoshop or deepfake software. These generated media can spread rapidly through social media platforms, creating what Gonzalo called a "digital fire" of viral distribution.
Fact-checkers are concerned about the challenge of verifying information in a timely manner to avoid information vacuums, as these fake images and videos can quickly circulate in WhatsApp groups and other messaging platforms.
De Marval thinks fact-checkers will have to adapt their methodology and rhythms to be able to catch up to the potential influx of synthetic images. “Verification methods have to be adapted and streamlined in all newsrooms so they can process videos and images before showing them,” she suggests.
De Marval links disinformation to declining institutional trust, including the loss of prestige in journalism and institutions. She emphasizes that without enough journalists and with discredited media and state institutions, disinformation will continue to circulate.
Artificial Intelligence and Misinformation
While artificial intelligence (AI) and generative technologies may contribute to the production of mis- and disinformation on a larger scale, claims that they will lead to the end of truth are problematic. According to Simon, people may not necessarily be more easily fooled by misleading information but may become more skeptical of information in general, including trustworthy information.
This has concerning implications for a media environment where trust in news is already declining. Recent reports have shown that trust in news is on the decline, with lower trust in news on social media, search engines, and messaging apps compared to traditional news media. Many people also perceive false and misleading information and irresponsible data use on these platforms as significant problems.
According to Higgins, recent events have raised awareness about the capabilities of generative systems, but there is a risk that increased skepticism may lead to people refusing to believe any image they see, swinging too far in the opposite direction.
Tech Firms: What Actions to Take?
The responsibility of AI startups for distinguishing their generated content from real images and videos has been brought into question. Some have called for increased transparency measures, such as watermarks, to make AI-generated content easier to identify.
Additionally, news organizations and tech companies are developing tools like cryptographic verification marks and content credentials to indicate the authenticity and source of media content. Adobe's image-generating tool, Firefly, will also include content credentials to disclose whether an image was created by AI or not, citing the fight against misinformation as a driving factor for this initiative.
Ethical concerns arise about the data used to train AI models, as viral examples often depict real people. Midjourney has limited the image generation of certain public figures, such as China's president Xi Jinping, not due to privacy concerns but to minimize controversy. This raises questions about the ethical implications of using real people's data without their consent and highlights the need for responsible data usage in AI model training, as Higgins suggests.
Many AI generators, including DALL-E, are trained using large amounts of text-image pairs from the internet. Gonzalo emphasizes that even public figures like Donald Trump have personal data rights, and just because data is available on the internet does not mean that individuals should have to give up their right to data protection.
Tackling the Crisis of AI-Generated Images: Expert Advice
Media literacy and personal fact-checking techniques are key to diminishing the impact of AI-based misinformation, according to experts. Journalists and fact-checkers are taking an education-driven approach, working with schools and universities to train students and teachers in media literacy skills, including fact-checking and verification techniques. Contextual analysis and critical questioning of politically incendiary images are emphasized.
Educating people is crucial in the fight against misinformation, as even thorough fact-checking efforts by newsrooms may be ineffective without an educated audience, says De Marval, who teaches a fact-checking course for university students.