What the El Mencho Killing Deepfakes Revealed About Information in a Crisis

When news broke that one of Mexico's most powerful cartel leaders had been killed, the violence that followed spilled into two spaces at once: the streets of Guadalajara, and the internet.


By Audrey Hingle

It’s Sunday, February 22, and a message comes through in our class WhatsApp group. We’re all on the STEaPP programme at UCL: Science, Technology, Engineering and Public Policy.

A classmate in Guadalajara says there’s unrest following an operation that reportedly took out “El Mencho.” He’s worried and scared, and we’re, of course, all worried for him. He tells us the airport has been overrun. Cartel members are setting fire to cars. He forwards us videos showing what he believes is happening nearby.

They aren’t grainy or obviously manipulated. They’re sharp, detailed, and entirely plausible: burning vehicles, people running, armed men moving through public spaces. Exactly what you would expect this kind of situation to look like.

I message a friend whose family lives nearby: I hope they’re OK. “It’s crazy,” he replies, sending more clips.

At that point, there’s no real reason to doubt what we’re seeing. A real-world event is unfolding, and the visuals match the narrative. The information is coming from people we trust.

Except the airport, as far as I can tell, wasn’t overrun, despite what Laura Loomer might have claimed. And a number of the videos circulating most widely were either misattributed or entirely fake. We’ve had viral AI-generated images before. The “little girl escaping floodwater” is a good example. But those kinds of emotionally charged images tended to circulate after the fact, when we had clearer facts, a better understanding of what was going on, and… it was a photo, not a video. Most of us still aren’t accustomed to doubting video content the way we’ve learned that still images can be manipulated. Despite that, deepfakes are already distorting how we see global events in real time, including the US-Israel war with Iran.

Says Reuters:

Unrest did indeed break out in many parts of Mexico as loyalists to El Mencho, the leader of the Jalisco New Generation Cartel, set up roadblocks, torched buses and stores, and attacked gas stations in retaliation for his slaying.

But online, things looked even worse. Among the false reports: The Guadalajara airport taken over by assassins. A plane on the runway was on fire. Smoke was billowing from a church and multiple buildings in the city of Puerto Vallarta, popular with tourists.

These images, which were reviewed by Reuters, were false but shared tens of thousands of times… Experts said that, in the case of El Mencho's killing, the fake news was being spread at surprising speed not only by unsuspecting users but also in some cases by the cartel itself, in efforts to make its retaliatory wave of violence appear greater and more terrifying than it really was.

Part of what is different now is how quickly fake videos can appear, and how closely they track real, developing events. In this instance, they didn’t distort a settled narrative; they helped form the initial understanding of what was happening, especially for people outside the region. Violence wasn’t confined to the streets: it extended into the information space, where cartel-linked actors actively shaped the narrative in real time, weaponising panic to make the situation appear far more widespread and uncontrollable than it was.

By the time many newsrooms were catching up, the visual story was already in circulation. Misinformation is no longer just something that follows events; it can now emerge alongside them, filling in gaps before verified information is available. If it is convincing enough, it doesn’t need to replace the truth. It just needs to arrive first. And that makes it particularly insidious: it leaves people on the ground unsure how to act to stay safe.

There’s a tendency to frame this as a media literacy problem: users should be more critical, more cautious, better at spotting fakes. But that doesn’t hold up in practice. Remember that we’re all master’s students studying technology and public policy. We’re far more media-literate than the average person, and we all fell for these deepfakes, including my classmate living in the area and my friend with family nearby.

Plus, when something appears to be happening in real time, especially somewhere you live or have a personal connection to, you’re not approaching information as an analyst. You’re reacting as a person. You’re checking on friends, sharing updates, trying to understand what’s going on. The expectation that individuals will pause and verify each piece of content at that moment is unrealistic.

So if training people to spot AI-generated content isn’t the solution, what is? What information integrity infrastructure needs to exist to support trustworthy media ecosystems when crises like this one unfold?

What would information integrity infrastructure actually look like?

To understand what can be done, I spoke to Jacobo Castellanos, Coordinator of the Technology, Threats, and Opportunities team at WITNESS, an organization that has been preparing journalists, activists, and human rights defenders for the threat of synthetic media for years.

Jacobo agrees that in situations like this, the responsibility for spotting AI-generated content shouldn’t rest with individuals like me and my classmates. Instead, we should be asking what information integrity infrastructure is needed to support trustworthy media ecosystems in crises like this one.

This work has been underway for years, says Jacobo. “Specifically, the transparency and detection systems that help reclaim trust by verifiably establishing the source and history of the content we consume.” For these to work, we need “regulatory and policy frameworks, thoughtful platform and tool design and governance; interoperable technical standards; and broader governance mechanisms that ensure human rights safeguards and meaningful civil society participation.”

If these had been in place during this incident, things could have looked very different. The AI-generated or edited content might have carried embedded signals, like content credentials or watermarks, that the social media platforms hosting it could automatically detect and flag for users.

“Even in cases where malicious actors had stripped this information away, or used tools that did not add these signals (for example, locally run open-source models),” says Jacobo, “platforms could still rely on post-hoc detection systems to identify and flag many of these synthetic videos.” The goal of these mechanisms is not to eliminate all misleading content, but to ensure that when viewers encounter potentially deceptive media, they are given meaningful information to interpret it. “In a situation like this one,” says Jacobo, “that could mean recognizing that a real crisis was unfolding, while also understanding that some of the circulating footage was likely AI-generated and should not be taken at face value.”
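Jacobo’s layered model is easier to see as a rough flow. The sketch below, in Python, is purely illustrative: check_content_credentials and run_detection_model are hypothetical stand-ins for a platform’s provenance parser and post-hoc classifier, and the 0.8 threshold is invented. What matters is the ordering: embedded signals first, detection as a fallback, and an honest “unverified” state instead of a false assurance of authenticity.

```python
from dataclasses import dataclass
from typing import Optional


def check_content_credentials(data: bytes) -> Optional[dict]:
    """Hypothetical stand-in for a real provenance parser (e.g. a C2PA SDK).
    Returns declared credentials as a dict, or None if nothing is embedded."""
    return None  # stub for illustration


def run_detection_model(data: bytes) -> float:
    """Hypothetical stand-in for a platform's post-hoc synthetic-media
    classifier. Returns an estimated probability that the clip is synthetic."""
    return 0.0  # stub for illustration


@dataclass
class Verdict:
    label: str  # what the viewer would see
    basis: str  # why the label was applied


def assess(video: bytes) -> Verdict:
    # Layer 1: embedded transparency signals (content credentials,
    # generator watermarks) added at the point of creation.
    credentials = check_content_credentials(video)
    if credentials is not None:
        if credentials.get("ai_generated"):
            return Verdict("Labelled: AI-generated", "declared in content credentials")
        return Verdict("Provenance attached", "capture-to-publication history available")

    # Layer 2: signals stripped, or never added (e.g. a locally run
    # open-source model) -- fall back to post-hoc detection.
    score = run_detection_model(video)
    if score > 0.8:  # invented threshold
        return Verdict("Flagged: likely synthetic", f"detector score {score:.2f}")

    # Layer 3: no signal either way. Surface uncertainty honestly rather
    # than implying the clip has been verified as authentic.
    return Verdict("Unverified", "no provenance data, no strong detection signal")


print(assess(b"forwarded video bytes"))
```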

Media literacy is still useful in this scenario, but not for identifying AI-generated content from auditory or visual cues. Instead, it’s for understanding what information provenance is and what it isn’t, and what it means for something to come from a credible source.

And media literacy is for content creators, not just consumers. Journalists, activists, eyewitnesses, and documentarians need to understand how to use provenance tools as they produce content, embedding verifiable information about source, context, and chain of custody from the start. WITNESS calls this “fortifying the truth”: using transparency in documentation not just to debunk fakes, but to make authentic content harder to dismiss or decontextualize.

The C2PA (Coalition for Content Provenance and Authenticity) open standard makes it technically possible to embed a record of where media came from and what has been done to it, from the moment of capture through to publication. The idea, as the BBC puts it, is to "mark the good stuff," giving audiences a way to verify authentic content so it’s easy to tell what is definitely real in a world where it’s hard to tell what might be fake.
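To make that less abstract, here’s a rough sketch of what reading a content credential could look like. parse_active_manifest is a hypothetical stub (real parsing would go through one of the open-source C2PA SDKs), but the fields it inspects, the c2pa.actions assertion and the IPTC digitalSourceType value for AI-generated media, come from the published C2PA specification.

```python
from typing import Optional


def parse_active_manifest(path: str) -> Optional[dict]:
    """Hypothetical stand-in for a real C2PA parser (the open-source
    c2pa SDKs return the manifest store as JSON). Assume it yields the
    active manifest as a dict, or None when no credentials are attached."""
    return None  # stub for illustration


def summarize_provenance(path: str) -> str:
    manifest = parse_active_manifest(path)
    if manifest is None:
        return "No content credentials attached."

    # Who signed the manifest, per the C2PA signature info.
    lines = [f"Signed by: {manifest['signature_info']['issuer']}"]

    # The c2pa.actions assertion records what was done to the asset:
    # c2pa.created, c2pa.edited, and so on.
    for assertion in manifest.get("assertions", []):
        if assertion["label"] == "c2pa.actions":
            for action in assertion["data"]["actions"]:
                lines.append(f"Action: {action['action']}")
                # Generators can declare AI involvement via the IPTC
                # digital source type vocabulary.
                source = action.get("digitalSourceType", "")
                if source.endswith("trainedAlgorithmicMedia"):
                    lines.append("  -> declared as AI-generated")
    return "\n".join(lines)


print(summarize_provenance("eyewitness_clip.mp4"))
```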

WITNESS notes, though, that a provenance signal is not a guarantee of truth, and that poor design could lead users to read a checkmark as "this is true" when it means something more limited. Platforms and tool designers need to invest in UX research into how these signals are surfaced, particularly in fast-moving situations where people are not approaching content analytically, and across a full range of users globally. It’s worth noting that provenance infrastructure can also introduce new risks, including to the privacy and safety of the journalists and activists it is meant to protect. WITNESS has led a harms and misuse assessment of the C2PA specifications for this reason. They also stress that provenance only works if it travels across platforms, and that responsibility for embedding signals needs to sit with AI developers and tool makers at the point of creation, not left to platforms to sort out after the fact.

The Guadalajara incident was a test my classmates and I didn't know we were taking, and we failed it. Not because we were careless, but because the systems are not yet in place to help us ask the right questions. 

Related: WITNESS researcher Mahsa Alimardani documents how AI-generated imagery is making it close to impossible to establish basic facts about the killing of children in Iran. Not because every image is fake, but because the question of what is real has itself become a weapon. https://www.theatlantic.com/ideas/2026/03/ai-imagery-iran-war/686347/?gift=uIKGDCkFqgBMhOUTpelOoET7ItQfiZwDDC-dXtqJOlM


Automating Justice, Designing Power: Reflections from the Commission on the Status of Women

IX's Mallory Knodel spoke at the UN's 70th Commission on the Status of Women on whether AI can expand women's access to justice, arguing that AI governance starts in design, not regulation; that access alone is not empowerment; and that feminist principles are essential to preventing AI from repeating the exploitative patterns of surveillance capitalism. She also makes the case for interoperability and multistakeholder governance as the missing infrastructure layer for AI.

Want to appear here? Sponsor a newsletter.


Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber
🚨
Stop press! Do you enjoy our links? In a few weeks, the links roundup will be available to paid Internet Exchange subscribers only. For a limited time, we’re offering 50% off an annual subscription. Normally $50 per year: right now you can subscribe for $25. Become a paid subscriber today.

Open Social Web

  • The first large-scale study of Bluesky's community moderation, presented at the IRTF's Decentralization of the Internet Research Group (DINRG), finds that blocklists affect the visibility of over 90% of content but are controlled by a tiny fraction of users, and that blocking does not decrease the popularity or activity of blocked users and has only a limited effect on the social graph. https://openaccess.city.ac.uk/id/eprint/36822/

Internet Governance

Digital Rights

Technology for Society

Privacy and Security

Upcoming Events

Careers and Funding Opportunities

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

💡
Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn, and don't forget to forward and share!