How easy is it to be fooled by an AI-generated video? Apparently, very easy. Fox News released a segment showing what it claimed were protests by SNAP (Supplemental Nutrition Assistance Program) beneficiaries, using as evidence several clips that were later revealed to have been generated by artificial intelligence. After the mistake was exposed, Fox News added a small correction at the end of the article: “We previously reported on videos that appear to be AI-generated without noting it.”
The original story showed a woman in a grocery store saying, “They cut off my food benefits. I won’t pay for this […] I have kids at home who need to eat.” The quote was featured under the initial headline: “SNAP beneficiaries threaten to ransack stores over government shutdown.” Once it became clear the footage was AI-generated, the title was changed to: “AI videos of SNAP beneficiaries complaining about cuts go viral.”
Unfortunately, major media outlets increasingly risk using — knowingly or not — AI-generated material as supposed evidence for their narratives. It’s also worth remembering that AI-generated content — including material with racist, manipulative, or politically charged undertones — can be weaponized to fuel hatred and construct false narratives, deepening misinformation and polarization in society.
The Fox News incident underscores the urgent need for new verification standards across media organizations — including mandatory authenticity checks for video material, metadata analysis, watermark detection, and the use of AI-detection tools to identify generative content.
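One layer of such a pipeline, scanning a video file's container metadata for known generator signatures, can be sketched as below. This is a minimal illustration, not an authoritative implementation: the tag names and signature list are assumptions for the example, and real verification workflows combine metadata inspection with C2PA provenance checks, watermark detection, and dedicated classifier models.

```python
# Illustrative sketch: flag metadata entries that name a known video-generation
# tool. The signature list is an assumed example, not an exhaustive registry.
SUSPECT_SIGNATURES = ("sora", "veo", "runway", "stable video", "ai-generated")

def flag_suspect_metadata(tags: dict) -> list:
    """Return metadata entries whose values match a generator signature.

    `tags` is expected to resemble the `format.tags` dictionary produced by
    `ffprobe -show_format -print_format json <file>`.
    """
    hits = []
    for key, value in tags.items():
        lowered = str(value).lower()
        if any(sig in lowered for sig in SUSPECT_SIGNATURES):
            hits.append(f"{key}={value}")
    return hits

# Example: a clip whose comment tag names a video-generation tool.
sample_tags = {
    "encoder": "Lavf60.3.100",
    "comment": "Generated with Sora",
}
print(flag_suspect_metadata(sample_tags))  # → ['comment=Generated with Sora']
```

Note that this is a weak signal on its own: metadata is trivially stripped or forged, so an empty result proves nothing, which is why the pipeline above also calls for watermark detection and model-based analysis.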
Without such safeguards, media outlets risk not only losing public trust but also facing legal and reputational consequences. Fox’s misstep should serve as a wake-up call: for audiences, to question their news sources; for other newsrooms, to adopt safeguards so such incidents don’t recur.
Comments on Reddit also suggest that Fox News may not have been a victim of AI deception at all, but may have intentionally used the AI-generated footage to support its own narrative — one suggesting that SNAP spoils its beneficiaries and fails to deliver meaningful results.

