
Image of an Explosion Near the U.S. Pentagon Is AI-Generated

Wesam Abo Marq
News
23rd May 2023
The image is AI-generated (Twitter)

The Claim

An image shows an explosion near the U.S. Pentagon in Washington, DC.

Emerging Story

An image purporting to show an explosion close to the U.S. Pentagon in Washington, DC, was widely shared on social media by regular users and well-known media pages.


Misbar’s Analysis

Misbar investigated the circulating image and found it to be fake. The image making the rounds is AI-generated.

AI-Generated Image of an Explosion Near the Pentagon 

The Pentagon Force Protection Agency and Arlington Fire Department released a joint statement clarifying that no explosion took place near the Pentagon, as claimed.

The tweet reads, “@PFPAOfficial and the ACFD are aware of a social media report circulating online about an explosion near the Pentagon. There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”

Multiple reports indicate that the image bears characteristics strongly suggesting it was generated by artificial intelligence (AI).

How Did the Image Go Viral?

On May 22, the image was first shared by a fake account called "Bloomberg Feed," which carried a verified blue check mark. The account was subsequently suspended.

The image then gained significant traction after the Russian state media outlet RT shared it on Twitter. RT later deleted the tweet.

The Fake Image Caused a Brief Dip in the Stock Market

Following the circulation of the AI-generated image, U.S. stock markets briefly declined.

The Dow Jones Industrial Average dropped around 80 points, while the S&P 500 index fell by 0.26%. The dip was short-lived, lasting only a few minutes.

Concerns Rise Over AI’s Ability to Produce Misinformation

Several social media accounts, including some verified ones, swiftly shared the fabricated image, intensifying the state of confusion.

Numerous generative AI tools, such as Midjourney, DALL-E 2, and Stable Diffusion, can produce realistic images.

These tools are trained using extensive collections of authentic images; however, they may generate unrealistic elements or objects that blend strangely with their surroundings.
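To illustrate how low the barrier is, here is a minimal sketch, assuming the Hugging Face diffusers library and the publicly available "runwayml/stable-diffusion-v1-5" checkpoint; the prompt is a deliberately generic example, not the one behind the Pentagon hoax:

```python
# Minimal sketch: generating a photorealistic image from a short text
# prompt with a publicly available model. Assumes the `diffusers` and
# `torch` packages are installed and a CUDA-capable GPU is present.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # public checkpoint (assumption)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A single short description is enough to synthesize a plausible scene.
image = pipe("a photorealistic city street after a storm, news photo").images[0]
image.save("generated_scene.png")
```

A few lines of code and a consumer GPU are sufficient, which is why fabricated "news" images can appear and spread faster than official denials.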

Here are some techniques suggested by Al Jazeera for identifying AI-generated images:

  1. Acknowledge that news does not occur in isolation: a real explosion near the Pentagon would generate corroborating eyewitness reports, photos, and official statements.
  2. Identify the uploader: Examine who is sharing the content.
  3. Use open-source intelligence tools: Leverage freely available online tools to verify the origin and authenticity of the image (see the metadata sketch after this list).
  4. Analyze the image and its surroundings: Look for incongruities or inconsistencies within the image or between the image and its purported context.
  5. Observe hands, eyes, and posture: AI-generated images often struggle to accurately reproduce human features and body language. Look for anomalies in these areas as potential signs of a fabricated image.
Photo Description: A screenshot of Al Jazeera’s article.
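One concrete way to apply step 3 is to inspect an image's EXIF metadata. The sketch below uses the Pillow library; the filename is hypothetical, and this is only one signal among many, since camera photos usually carry maker, model, and timestamp tags while AI-generated images typically have none, though platforms often strip metadata from legitimate photos too:

```python
# Minimal sketch (not one of Al Jazeera's tools): print an image's EXIF
# tags with Pillow. Absence of metadata is a hint, not proof, of fakery.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (common for AI-generated or re-shared images).")
        return
    for tag_id, value in exif.items():
        # Map numeric tag IDs to readable names such as 'Make' or 'DateTime'.
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

summarize_exif("suspect_image.jpg")  # hypothetical filename
```

Reverse image search and frame-by-frame comparison with verified photos of the location remain complementary checks, since metadata alone cannot settle authenticity.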

Misbar has previously published blog articles on the potential risks of artificial intelligence, highlighting the concerns it poses for fact-checkers and social media users.

Read More

This Video Does Not Feature the Deployment of NATO Troops in Ukraine

Insulting Chants Digitally Inserted into Video of Biden During Graduation

Misbar’s Classification

Fake
