What the El Mencho Killing Deepfakes Revealed About Information in a Crisis
When news broke that one of Mexico's most powerful cartel leaders had been killed, the violence that followed spilled into two spaces at once: the streets of Guadalajara and the internet.
It’s Sunday, February 22, and a message comes through in our class WhatsApp group. We’re all on the STEaPP programme at UCL: Science, Technology, Engineering and Public Policy.
A classmate in Guadalajara says there’s unrest following an operation that reportedly took out “El Mencho.” He’s worried and scared, and we’re, of course, all worried for him. He tells us the airport has been overrun. Cartel members are setting fire to cars. He forwards us videos showing what he believes is happening nearby.
They aren’t grainy or obviously manipulated. They’re sharp, detailed, and entirely plausible: burning vehicles, people running, armed men moving through public spaces. Exactly what you would expect this kind of situation to look like.
I message a friend whose family lives nearby: I hope they’re OK. “It’s crazy,” he replies, sending more clips.
At that point, there’s no real reason to doubt what we’re seeing. A real-world event is unfolding, and the visuals match the narrative. The information is coming from people we trust.
Except the airport, as far as I can tell, wasn’t overrun, despite what Laura Loomer might have claimed. And a number of the videos circulating most widely were either misattributed or entirely fake. We’ve had viral AI-generated images before; the “little girl escaping floodwater” is a good example. But those kinds of emotionally charged images tended to circulate after the fact, when we had clearer facts and a better understanding of what was going on. And it was a photo, not a video. Most of us still aren’t accustomed to doubting video content in the way we’ve learned that still images can be manipulated. Despite that, deepfakes are already distorting how we see global events in real time, including the US-Israel war with Iran.
Unrest did indeed break out in many parts of Mexico as loyalists to El Mencho, the leader of the Jalisco New Generation Cartel, set up roadblocks, torched buses and stores, and attacked gas stations in retaliation for his slaying.
But online, things looked even worse. Among the false reports: the Guadalajara airport taken over by assassins; a plane on fire on the runway; smoke billowing from a church and multiple buildings in Puerto Vallarta, a city popular with tourists.
These images, reviewed by Reuters, were false but shared tens of thousands of times. Experts said that, in the case of El Mencho's killing, the fake news was being spread at surprising speed, not only by unsuspecting users but in some cases by the cartel itself, in an effort to make its retaliatory wave of violence appear greater and more terrifying than it really was.
Part of what is different now is how quickly fake videos can appear, and how closely they track to real, developing events. In this instance, they didn’t distort a settled narrative. They helped form the initial understanding of what was happening, especially for people outside the region. Violence wasn’t confined to the streets: it extended into the information space, where cartel-linked actors actively shaped the narrative in real time, weaponising panic to make the situation appear far more widespread and uncontrollable than it was.
By the time many newsrooms were catching up, the visual story was already in circulation. Misinformation is no longer just something that follows events. It can now emerge alongside them, filling in gaps before verified information is available. If it is convincing enough, it doesn’t need to replace the truth. It just needs to arrive first. And it’s particularly insidious, making it difficult for people on the ground to know how to behave to stay safe.
There’s a tendency to frame this as a media literacy problem: users should be more critical, more cautious, better at spotting fakes. But that doesn’t hold up in practice. Remember that we’re all master’s students studying technology and public policy. We’re far more media-literate than the average person, and we all fell for these deepfakes, including my classmate living in the area and my friend with family nearby.
Plus, when something appears to be happening in real time, especially somewhere you live or have a personal connection to, you’re not approaching information as an analyst. You’re reacting as a person. You’re checking on friends, sharing updates, trying to understand what’s going on. The expectation that individuals will pause and verify each piece of content at that moment is unrealistic.
So if training people how to spot AI-generated content isn’t the solution, what is? What information integrity infrastructure needs to exist to support trustworthy media ecosystems when crises like this one unfold?
What would information integrity infrastructure actually look like?
To understand what can be done, I spoke to Jacobo Castellanos, Coordinator of the Technology, Threats, and Opportunities team at WITNESS, an organization that has been preparing journalists, activists, and human rights defenders for the threat of synthetic media for years.
Jacobo agrees that in situations like this, we need to shift the responsibility for spotting AI away from individuals like me and my classmates. Instead, we should be asking what information integrity infrastructure is needed to support trustworthy media ecosystems in crisis situations like this one.
Work on this has been in development for years, says Jacobo. “Specifically, the transparency and detection systems that help reclaim trust by verifiably establishing the source and history of the content we consume.” For these to work we need “regulatory and policy frameworks, thoughtful platform and tool design and governance; interoperable technical standards; and broader governance mechanisms that ensure human rights safeguards and meaningful civil society participation.”
If these had been in place during this incident, things could have looked very different. The AI-generated or edited content might have carried embedded signals, such as content credentials or watermarks, that the social media platforms hosting it could automatically detect and flag for users.
“Even in cases where malicious actors had stripped this information away, or used tools that did not add these signals (for example, locally run open-source models),” says Jacobo, “platforms could still rely on post-hoc detection systems to identify and flag many of these synthetic videos.” The goal of these mechanisms is not to eliminate all misleading content, but to ensure that when viewers encounter potentially deceptive media, they are given meaningful information to interpret it. “In a situation like this one,” says Jacobo, “that could mean recognizing that a real crisis was unfolding, while also understanding that some of the circulating footage was likely AI-generated and should not be taken at face value.”
Media literacy is still useful in this scenario, but not to identify AI-generated content based on auditory or visual cues. Instead, it’s to understand what provenance of information is, and what it isn’t. And what it means for something to come from a credible source.
And media literacy is for content creators, not just consumers. Journalists, activists, eyewitnesses, and documentarians need to understand how to use provenance tools as they produce content, embedding verifiable information about source, context, and chain of custody from the start. WITNESS calls this “fortifying the truth”: using transparency in documentation not just to debunk fakes, but to make authentic content harder to dismiss or decontextualize.
The C2PA open standards make it technically possible to embed a record of where media came from and what has been done to it, from the moment of capture through to publication. The idea, as the BBC puts it, is to "mark the good stuff," giving audiences a way to verify authentic content so it’s easy to tell what is definitely real in a world where it’s hard to tell what might be fake.
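To make the idea concrete: C2PA itself works through cryptographically signed manifests embedded in the media file, signed with real certificates. The toy Python sketch below illustrates only the underlying concept of a tamper-evident provenance chain; the record format, the `sign`/`add_assertion`/`verify` helpers, and the HMAC key are invented for illustration and are not the C2PA format.

```python
import hashlib
import hmac
import json

# Stand-in for a real signing certificate; C2PA uses X.509 signatures.
SECRET_KEY = b"demo-signing-key"

def sign(payload: bytes) -> str:
    # HMAC-SHA256 stands in for a certificate-based signature.
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def add_assertion(chain: list, action: str, media: bytes) -> list:
    """Append a provenance record binding an action to the media's current hash."""
    record = {
        "action": action,
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "prev_signature": chain[-1]["signature"] if chain else None,
    }
    record["signature"] = sign(json.dumps(record, sort_keys=True).encode())
    return chain + [record]

def verify(chain: list, media: bytes) -> bool:
    """Check every link's signature and that the final record matches the media."""
    prev_sig = None
    for record in chain:
        body = {k: v for k, v in record.items() if k != "signature"}
        if record["prev_signature"] != prev_sig:
            return False  # chain was reordered or truncated
        if record["signature"] != sign(json.dumps(body, sort_keys=True).encode()):
            return False  # record was tampered with
        prev_sig = record["signature"]
    return bool(chain) and chain[-1]["media_sha256"] == hashlib.sha256(media).hexdigest()

photo = b"raw sensor bytes from the camera"
chain = add_assertion([], "captured", photo)       # record capture
edited = photo + b" (cropped)"
chain = add_assertion(chain, "cropped", edited)    # record the edit

print(verify(chain, edited))           # True: history is intact
print(verify(chain, b"swapped fake"))  # False: media no longer matches the chain
```

Because each record signs the previous record's signature, stripping or reordering edit steps breaks verification, which is what makes the history of a piece of media checkable rather than merely asserted.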
WITNESS notes, though, that a provenance signal is not a guarantee of truth, and that poor design could lead users to read a checkmark as "this is true" when it means something more limited. Platforms and tool designers need to invest in UX research into how these signals are surfaced, particularly in fast-moving situations where people are not approaching content analytically, and across a full range of users globally. It’s worth noting that provenance infrastructure can also introduce new risks, including to the privacy and safety of the journalists and activists it is meant to protect. WITNESS has led a harms and misuse assessment of the C2PA specifications for this reason. They also stress that provenance only works if it travels across platforms, and that responsibility for embedding signals needs to sit with AI developers and tool makers at the point of creation, not left to platforms to sort out after the fact.
The Guadalajara incident was a test my classmates and I didn't know we were taking, and we failed it. Not because we were careless, but because the systems are not yet in place to help us ask the right questions.
Related: WITNESS researcher Mahsa Alimardani documents how AI-generated imagery is making it close to impossible to establish basic facts about the killing of children in Iran. Not because every image is fake, but because the question of what is real has itself become a weapon. https://www.theatlantic.com/ideas/2026/03/ai-imagery-iran-war/686347/?gift=uIKGDCkFqgBMhOUTpelOoET7ItQfiZwDDC-dXtqJOlM
Automating Justice, Designing Power: Reflections from the Commission on the Status of Women
IX's Mallory Knodel spoke at the UN's 70th Commission on the Status of Women on whether AI can expand women's access to justice, arguing that AI governance starts in design not regulation, that access alone is not empowerment, and that feminist principles are essential to preventing AI from repeating the exploitative patterns of surveillance capitalism. She also makes the case for interoperability and multistakeholder governance as the missing infrastructure layer for AI.
Want to appear here? Sponsor a newsletter.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
This Week's Links
Open Social Web
- The first large-scale study of Bluesky's community moderation, presented at the IETF Decentralization of the Internet Research Group (DINRG), finds that blocklists affect the visibility of over 90% of content but are controlled by a tiny fraction of users, and that blocking does not decrease the popularity or activity of the blocked users and has only a limited effect on the social graph. https://openaccess.city.ac.uk/id/eprint/36822/
Internet Governance
- The US revoked visas of three Chilean officials for merely considering a Chinese undersea cable project. A sign, argue Jorge Heine and Juan Ortiz Freuler, that Washington is willing to weaponise South America's total dependence on US internet infrastructure to enforce its Monroe Doctrine in the digital age. https://www.techpolicy.press/untethering-south-america-from-us-cables-after-rubio-pressures-chile-in-transpacific
- Important presentation by UCLA's Lixia Zhang at IETF 125 argues that AI agents represent a fundamental new challenge for internet infrastructure.
- The slide deck: https://datatracker.ietf.org/meeting/125/materials/slides-125-irtfopen-on-ai-agent-communication-00
- The recording (starts at around 1:25) https://youtu.be/h6ZLrNRD3bU?list=PLC86T-6ZTP5gi5EKPqJf3lneksxOTPetP&t=5089
- A podcast deep dive into the law, history, and geopolitics of submarine cables. Douglas Guilfoyle and Tamsin Phillipa Paige are joined by Dr Tara Davenport (NUS Centre for International Law) and Dita Liliansa (UNSW Sydney). https://soundcloud.com/calledtothebar/66-submarine-cables
- A new paper mapping investment in 110 European AI startups finds that the EU's strategy for technological sovereignty is empowering a select group of "patriotic billionaires" as crucial intermediaries between private capital and the state, with significant implications for who shapes AI policy in Europe. https://journals.sagepub.com/doi/10.1177/10245294261429545
- Connected by Data's Tim Davies argues that public voices make up less than 1% of content at major AI summits and puts out a call for collaborators to change that, starting with the UN Global Dialogue on AI Governance in Geneva in July. https://connectedbydata.org/blog/2026/03/06/what-if
- Content from PAIRS Online (Participatory AI Research & Practice Symposium) available on their site. https://www.pairs.site/PAIRS-Online-17th-February-2e8260e24e1a8045bb31f64d3c74c592
Digital Rights
- A UK parliamentary petition urges the government to reject proposals banning VPN use by children, warning that enforcement would rely on invasive ID checks and cause serious collateral damage to privacy and security for all VPN users. https://petition.parliament.uk/petitions/754408
- A new draft study provides the first large-scale look at age verification providers on the web, finding that US state laws are causing measurable internet balkanisation, that compliance is low, and that the dominant provider (Yoti) creates serious privacy risks, directly challenging assumptions the Supreme Court relied on when upholding age verification laws last year. https://mikespecter.com/assets/pdf/AgeVerification.pdf
- New research from Tajikistan documents how technology-facilitated gender-based violence extends and amplifies offline patriarchal control over women, in one of the first studies to map the scope of the problem in the country. https://firn.genderit.org/research/i-am-drowning-under-weight-hatred-scope-and-nature-technology-facilitated-gender-based
- WhatsApp is rolling out parent-managed accounts for pre-teens, giving parents control over contacts, groups, and privacy settings while keeping all conversations end-to-end encrypted. https://blog.whatsapp.com/introducing-parent-managed-accounts-on-whatsapp
Technology for Society
- Researchers Ghiwa Sayegh and Sabiha Allouche ask what decolonial resistance looks like in an era of AI-automated annihilation, theorizing "endless genocide" as a recurring condition of settler colonialism and asking what it means to sabotage its digital tools. https://firn.genderit.org/research/we-are-interruption-technology-militancy-and-endless-genocide
- Journalism professor Diego García Ramírez argues that Latin American media is trapped on a "hamster wheel" of platform dependency, and that Big Tech has captured not just content distribution but political lobbying, academic research, and the very concept of digital sovereignty itself. https://networkcultures.org/blog/2026/03/06/digital-tribulations-11-the-hamster-wheel-on-the-platformization-of-the-colombia-media-system
- A study of food delivery riders in London and São Paulo finds that platform companies reproduce rather than transcend racial inequality. https://www.sciencedirect.com/science/article/pii/S2666378326000103?via%3Dihub
- University of Chicago art historian Patrick R. Crowley argues that AI model collapse is not a new problem but the latest iteration of an ancient one, and that generative AI's feedback loops of probability and prediction are reshaping what counts as real, plausible, and true. https://www.artforum.com/features/patrick-r-crowley-mimesis-1234743983
- A new report from MediaJustice maps how tech oligarchs are capturing the US media system through ownership, funding dependency, and platform control, and argues the communities with the most to lose are Black, brown, and Indigenous audiences whose local outlets are being squeezed out while larger newsrooms sign AI licensing deals that make honest coverage of their funders harder to produce. https://mediajustice.org/wp-content/uploads/2026/03/MediaJustice-Media-Capture-Report.pdf
Privacy and Security
- Meta has quietly said it will end support for encrypted messaging on Instagram in a matter of weeks. Mozilla has a petition calling on it to keep encryption for Instagram Messages. https://www.mozillafoundation.org/en/petitions/keep-encryption-for-instagram
- But it’s not all bad. Signal creator Moxie Marlinspike says his AI privacy startup Confer, which Mallory wrote about last week, will integrate its end-to-end encryption technology into Meta AI, framing today's AI chat apps as the largest and most sensitive centralized data lakes ever built and arguing the moment requires acting at scale. https://confer.to/blog/2026/03/encrypted-meta
- The UK continues to go after encryption. The UK's NCA chief says technology is reshaping crime itself, and called on tech companies to design encrypted apps with lawful access built in. Or as cryptographers call it, to break encryption. https://www.computerweekly.com/news/366640462/Technology-accelerating-crime-boosts-case-for-national-police-service-says-NCA-chief
- The Real World Crypto Symposium 2026 was last week in Taipei. Watch the full recording of the event on YouTube. https://www.youtube.com/live/v_AFtbWr1bY
- California physician Oni Blackstock argues that protecting patients from ICE requires more than keeping agents out of hospitals. Health systems need to map where patient data goes, exclude vendors that also contract with ICE, and tell patients clearly what happens to their information after each visit. https://www.sacbee.com/opinion/op-ed/article314938351.html
Upcoming Events
- PAIRS: Participatory AI Research & Practice Symposium Community calls start March 23. https://www.pairs.site/PAIRS-Community-Calls-325260e24e1a801d89abc76fec494c11
- Book Talk: Design for Privacy by Robert Stribley. Are your designs protecting—or exposing—your users? In Design for Privacy (published by Rosenfeld Media), you’ll uncover how shifting technologies threaten personal data and what that means for your work. March 25. New York, NY. https://www.eventbrite.com/e/book-talk-design-for-privacy-by-robert-stribley-tickets-1984635380849
- PDN Pro-Social. Building a Prosocial Platform for Local Communities, Roundabout with New_ Public. March 26. Online. https://luma.com/5vzxn9vn
- All Tech Is Human is holding an all-day workshop in Manhattan on building an inter-party trust framework for AI. March 26, New York, NY. https://docs.google.com/forms/d/e/1FAIpQLSeidCs4-ifuG4iZxi5kh3QBAGuh_UQBksPIzhoTzabTDblaHg/viewform
- ATmosphereConf is THE global AT Protocol community conference. March 26-29, Vancouver, Canada and Online. https://atmosphereconf.org
- "Can Middleware Save Social Media?" Middleware are third-party tools that sit between users and platforms that proponents say give people control over what they see and share online. Skeptics warn they raise privacy concerns and that more fundamental changes are needed. This event asks: Can middleware really deliver the improvements that its boosters envision? March 27. https://knightcolumbia.org/events/can-middleware-save-social-media
- ITU Workshop on Trustable and Interoperable Digital Identities for Human and Agentic AI. March 30-31, Geneva and online. https://www.itu.int/en/ITU-T/Workshops-and-Seminars/2026/0330/Pages/default.aspx
- Palestine Digital Activism Forum (PDAF) 2026 hosted by 7amleh – The Arab Center for the Advancement of Social Media. An amazing speaker lineup! March 30-31, Online. https://events.ringcentral.com/events/pdaf-2026
- Meme-tivism: Rethinking AI's Environmental Impact. A hands-on workshop for AI and ML practitioners exploring meme-making as a tool for climate advocacy and rethinking the environmental footprint of AI systems. April 9, 5:30pm GMT, London, UK. https://luma.com/rkuwsrn6
- IETF AI Preferences Working Group interim meeting, working on a standard to let websites signal whether their content can be used for AI training. April 14-16, Toronto and online. https://ietf-wg-aipref.github.io/wg-materials/interim-26-04/arrangements.html
- Mobilize supporters & build campaign momentum. A one-day online accelerator for NGOs stuck on how to mobilise people, build momentum, and turn ideas into action. April 15, Online. https://www.resource-alliance.org/event/creative-organising-lab
- Take Back Tech 3 is a gathering for organizers, artists, tech workers, academics, lawyers, and more to rally together and strategize our next power-building moves. April 17-19, Atlanta, GA. https://www.takebacktech.com
- SplinterCon returns to host a 0-day event at RightsCon in Lusaka, Zambia. May 5th, Lusaka, Zambia. https://splintercon.net/events/rightscon-lusaka
- Data for Peace Conference, a three-day conference connecting researchers, peacebuilders, policymakers, data providers, humanitarian actors, and peace technologists to strengthen how data and technology support violence prevention, anticipatory action, and crisis response. June 15-17. https://www.uu.se/en/department/peace-and-conflict-research/collaboration/data-for-peace-conference
Careers and Funding Opportunities
- Anthropic: Enforcement Operations Lead. Washington, DC. https://job-boards.greenhouse.io/anthropic/jobs/5137185008
- Schmidt Sciences: Program Scientist, Agents. New York, NY. https://www.linkedin.com/jobs/view/4383878716
- IGF: 2026 Fellowship Programme. Geneva, Switzerland. https://www.intgovforum.org/en/content/igf-2026-fellowship-programme
- UN Global Pulse: Technical Program Officer – Data and AI products for Humanitarian and Development Use. Helsinki, Finland. https://www.unglobalpulse.org/job/technical-program-officer-data-and-ai-products-for-humanitarian-and-development-use
- The Good Ancestor Movement: Resource Mobilisation Lead. Remote UK. https://app.dover.com/apply/Good%20Ancestor%20Movement/88ef2d1a-05b9-4a26-82a4-6f1abd2c6836/?rs=76643084
- The Distributed AI Research (DAIR) Institute: Communications Lead. Remote. https://job-boards.greenhouse.io/distributedairesearchinstitute/jobs/4130897009
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.