The AI Slop Fight Between Iran and Israel



As Israel and Iran trade blows in a quickly escalating conflict that risks engulfing the rest of the region and drawing Iran into a more direct confrontation with the U.S., social media is being flooded with AI-generated media that claims to show the devastation but is fake.

The fake videos and images show how generative AI has already become a staple of modern conflict. On one end, AI-generated content of unknown origin is filling the void left by state-sanctioned media blackouts with misinformation; on the other, the leaders of these countries are sharing AI-generated slop to spread the oldest forms of xenophobia and propaganda.

If you want to follow a war as it’s happening, it’s easier than ever. Telegram channels post live streams of bombing raids as they happen, and much of the footage trickles up to X, TikTok, and other social media platforms. There’s more footage of conflict than there’s ever been, but a lot of it is fake.

A few days ago, Iranian news outlets reported that Iran’s military had shot down three F-35s. Israel denied it happened. As the claim spread, so did supposed images of the downed jet. In one, a massive version of the jet smolders on the ground next to a town. The cockpit dwarfs the nearby buildings, and tiny people mill around the downed jet like Lilliputians surrounding Gulliver.

It’s a fake, an obvious one, but thousands of people shared it online. Another image of the supposedly downed jet showed it crashed in a field somewhere in the middle of the night. Its wings were gone and its afterburner still glowed hot. This was also a fake.

Image via X.com.

AI slop is not the sole domain of anonymous amateur and professional propagandists. The leaders of both Iran and Israel are doing it too. The Supreme Leader of Iran is posting AI-generated missile launches on his X account, a match for similar grotesques on the account of Israel’s Minister of Defense.

New tools like Google’s Veo 3 make AI-generated videos more realistic than ever. Iranian news outlet Tehran Times shared a video to X that it said captured “the moment an Iranian missile hit a building in Bat Yam, southern Tel Aviv.” The video was fake. In another video, which appeared to come from a TV news spot, a massive missile moved down a long concrete hallway. It’s also clearly AI-generated, and it still shows Veo’s watermark in the bottom right corner.

After Iran launched a strike on Israel, Tehran Times shared footage of what it claimed was “Doomsday in Tel Aviv.” A drone shot rotated through scenes of destroyed buildings and piles of rubble. Like the other videos, it was an AI-generated fake that appeared on a Telegram account and a TikTok channel, both named “3amelyonn.”

In Arabic, 3amelyonn’s TikTok channel calls itself “Artificial Intelligence Resistance,” but it carries no such label on Telegram. The account has been posting on Telegram since 2023; its first TikTok video, an AI-generated tour through Lebanon showing its various cities as smoking ruins, appeared in April 2025. It’s full of the quivering lines and other hallucinations typical of early AI video.

But 3amelyonn’s videos a month later are more convincing. A video posted on June 5, labeled as Ben Gurion Airport, shows bombed-out buildings and destroyed airplanes. It’s been viewed more than 2 million times. The video of a destroyed Tel Aviv, the one that made it onto Tehran Times, has been viewed more than 11 million times and was posted on May 27, weeks before the current conflict.

Hany Farid, a UC Berkeley professor and founder of GetReal, a synthetic media detection company, has been collecting these fake videos and debunking them.

“In just the last 12 hours, we at GetReal have been seeing a slew of fake videos surrounding the recent conflict between Israel and Iran. We have been able to link each of these visually compelling videos to Veo 3,” he said in a post on LinkedIn. “It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion.”

The spread of AI-generated media about this conflict appears to be particularly bad because both Iran and Israel are asking their citizens not to share media of destruction, which may help the other side with its targeting for future attacks. On Saturday, for example, the Israel Defense Force asked people not to “publish and share the location or documentation of strikes. The enemy follows these documentations in order to improve its targeting abilities. Be responsible—do not share locations on the web!” Users on social media then fill this vacuum with AI-generated media.

“The casualty in this AI war [is] the truth,” Farid told 404 Media. “By muddying the waters with AI slop, any side can now claim that any other videos showing, for example, a successful strike or human rights violations are fake. Finding the truth at times of conflict has always been difficult, and now in the age of AI and social media, it is even more difficult.”

“We’re committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools,” a Google spokesperson told 404 Media. “Any content generated with Google AI has a SynthID watermark embedded and we add a visible watermark to Veo videos too.”

Farid and his team used SynthID to identify the fake videos “alongside other forensic techniques that we have developed over at GetReal,” he said. But checking a video for a SynthID watermark, which is visually imperceptible, requires someone to take the time to download the video and upload it to a separate website. Casual social media scrollers are not taking the time to verify a video they’re seeing by sending it to the SynthID website.

One distinguishing feature of 3amelyonn’s videos, and of other viral AI slop about the conflict, is that the destruction is confined to buildings. There are no humans and no blood in 3amelyonn’s aerial shots of destruction; depictions of casualties are more likely to get blocked both by AI image and video generators and by the social media platforms where these creations are shared. When humans do appear, they’re observers, like in the F-35 picture, or milling soldiers, like in the tunnel video. Seeing a soldier in active combat or a wounded person is rare.

There’s no shortage of real, horrifying footage from Gaza and other conflicts around the world. AI war spam, however, is almost always bloodless. A year ago, the AI-generated image “All Eyes on Rafah” garnered tens of millions of views. It was created by a Facebook group with the goal of “Making AI prosper.”


