Confidential Chats in the Age of AI
Confer is one early example of how to build a private AI chatbot. We should have many more, says Mallory Knodel.
As conversational AI moves into messaging apps and everyday workflows, the question is not whether these systems will handle sensitive information, but whether they are built to protect it. Governments, companies, and the public gathered in New Delhi for the second AI Impact Summit late last month, and not a panel went by without an expert addressing safety in AI. Indeed, data is a design asset and data privacy a design liability. Some emerging projects, including tools like Confer, aim to bring stronger confidentiality to AI chats.
To confront the paradox of privacy in a system built on data, my colleagues at NYU and Cornell and I released a paper, “How to think about End-to-End Encryption and AI: Training, Processing, Disclosure, and Consent.” E2EE messaging systems achieve the greatest confidentiality and privacy. What our paper recommends on the training and processing of data in E2EE environments, and the analysis that supports those recommendations, provides a solid foundation for resolving the paradoxes and challenges of privacy in an age of AI.
E2EE guarantees that only the sender and intended recipients can read message contents. AI systems, by contrast, require access to plaintext data to function during inference (processing user queries), and companies may be tempted to use novel conversation data for training. We wanted to let users know about these risks, but we also wanted to influence the way these features are being designed. Our recommendations are clear, and so far we think most platforms have taken heed.
Recommendations
- Training. Using E2EE content to train shared AI models is incompatible with E2EE. Models can memorize and reproduce training data, and even privacy-enhancing techniques like differential privacy do not meet the strict confidentiality guarantees users expect from encrypted messaging.
- Processing. AI features may be compatible with E2EE only if endpoint-local processing is prioritized. If processing must occur off-device, no third party can see or use E2EE content, and a user’s encrypted data must be used exclusively to fulfill that user’s request, not to improve global models.
- Disclosure. Providers should not make unqualified claims that a service is “end-to-end encrypted” if encrypted content is, by default, accessible to third parties for AI processing.
- Consent. AI features in E2EE systems should be off by default and activated only through meaningful, granular opt-in.
These are not hypothetical concerns. Consider WhatsApp’s integration of Meta AI: Invoking Meta AI through a search bar or in a standalone chat may fall outside E2EE contexts, but tagging “@MetaAI” inside a private group chat might effectively bring a third party into what users think is a private, encrypted conversation. The interface still looks secure, even though the underlying privacy guarantee has changed. The details of how this is integrated matter, as our paper points out.
Apple’s approach is different but raises related issues. Apple Intelligence relies on what it calls “Private Cloud Compute,” which uses trusted execution environments (TEEs) in Apple’s data centres. As cryptographer Matt Green has noted, TEEs are a meaningful improvement over sending data to the cloud in plain text, but they are not the same as end-to-end encryption. The cloud server – however secure – still acts as an endpoint that can access decrypted data. This may be reasonable for some AI features, but it does not offer the strict confidentiality that E2EE is meant to provide.
That distinction matters because encryption is not merely a feature, it is a security guarantee. Treating a cloud server as an “endpoint” weakens that guarantee, even if the technology behind it is sophisticated.
Which brings us to something genuinely new
Last month, Moxie Marlinspike introduced Confer, an early-stage project described as “end-to-end encryption for AI chats.” Confer encrypts conversations so that only the user can access them. The service cannot read, train on, or hand over user chats because it does not possess the keys.
Confer’s framing is different from ours. Our paper is about maintaining E2EE guarantees in systems where encryption already exists by default. Confer focuses on adding stronger confidentiality to AI chats that are typically not encrypted at all. It relies in part on TEEs and treats a server-side enclave as part of its end-to-end model – a definition we would not adopt.
If Confer were integrated directly into an E2EE messaging app, it could alter the baseline security guarantees of that app. But as a standalone AI chat system, the comparison point is not E2EE messaging; it is today’s standard AI chatbot, where conversations live in corporate data lakes.
Why Confer matters
Today’s AI assistants invite something new: not just queries, but confessions. The conversational interface encourages us to elaborate, to think out loud, to reveal uncertainty and context. We are shown what looks like a private dialogue when, in reality, it is a group chat with corporate operators, service providers, future advertisers, and potentially even governments.
Confer attempts to make the interface match the underlying technology. If it looks like a private conversation, it should be a private conversation.
Is it perfect? No. Is it equivalent to strict E2EE messaging? Also no. But it represents a meaningful step toward private AI systems architected from the start to minimize corporate access to user thought.
Privacy-respecting AI is not what we are seeing today. Instead, we are watching the normalization of off-device inference, broad data retention, and training on user inputs by default. If we want AI agents and assistants that work for people rather than platforms, privacy cannot be an afterthought layered onto an extraction model. It must shape the product from the beginning.
E2EE with AI, implemented according to our recommendations, remains the highest bar for confidential communications. But not every AI use case requires that bar. What we DO require are systems that are honest about their guarantees and ambitious about protecting user data.
Confer is one early example of how to build a private AI chatbot. We should have many more.
Human Rights at IETF 25
In two days, Shenzhen will host the IETF community for a week of technical debate on the protocols shaping the internet’s future. IX’s Mallory Knodel will be there, hosting the Human Rights Protocol Considerations (HRPC) research group, presenting new work on end-to-end encryption and AI to the Crypto Forum Research Group (CFRG), and sharing progress on E2EE interoperability with the Decentralised Internet Research Group (DINRG). See the agenda here.
Internet Society has kindly made a list of sessions and working groups at IETF 25 that may be of interest to people following internet governance, rights, and policy-relevant technical debates.
This Week's Links
Internet Governance
- W3C is pleased to announce the approval of the renewed charter for the Verifiable Credentials Working Group. https://www.w3.org/2026/03/vc-wg-charter.html
- A proposed IETF standard would let websites signal whether their content can be used for AI training or traditional search. https://datatracker.ietf.org/doc/draft-ietf-aipref-vocab/
- Private messaging platforms such as WhatsApp and Telegram are becoming a regulatory blind spot for information integrity, prompting a new international report that urges governments to regulate specific platform features, clarify what counts as public versus private communication, and protect encryption while tackling disinformation risks. IX’s Mallory Knodel helped inform this report with a presentation of How To Think About End-To-End Encryption and AI. https://informationdemocracy.org/2026/03/10/addressing-the-regulatory-blindspot-recommendations-to-promote-information-integrity-on-private-messaging-platforms
- India’s new synthetic media rules, which introduce labelling, detection, and rapid takedown requirements for AI-generated content, risk being built on flawed enforcement tools and weak safeguards says WITNESS. https://www.witness.org/indias-synthetic-media-rules-build-enforcement-on-the-wrong-foundation
- The UK government is seeking sweeping new powers to rapidly update online safety laws without full parliamentary scrutiny, arguing faster regulation is needed to tackle AI-driven harms while critics warn of executive overreach and risks to civil liberties. https://www.politico.eu/article/uk-eyes-sweeping-powers-to-regulate-tech-without-parliamentary-scrutiny
- Researchers at the Knight-Georgetown Institute say the UK competition regulator’s proposed rules for Google Search risk failing to curb the company’s entrenched market power because they do not tackle its control over default distribution. https://kgi.georgetown.edu/research-and-commentary/a-missed-opportunity-to-address-googles-market-power-in-search-in-the-uk
Digital Rights
- The Knight First Amendment Institute at Columbia University and Protect Democracy today filed a lawsuit in federal court on behalf of the Coalition for Independent Technology Research (CITR) challenging Trump policy that threatens deportation for work on social media platforms and online harms. https://knightcolumbia.org/content/technology-researchers-challenge-trump-policy-threatening-deportation-for-work-on-social-media-platforms-and-online-harms
- Australia’s privacy regulator says a tribunal ruling on Bunnings’ facial recognition use confirms retailers can deploy the technology in limited circumstances to address serious crime, but warns the legal bar remains high and compliance failures still matter. https://www.oaic.gov.au/news/media-centre/privacy-commissioner-statement-on-administrative-review-tribunals-bunnings-decision
- IEEE Standards Association shares key trends shaping online age verification in 2026, including biometric age estimation, privacy-preserving technologies, and standards-based approaches. https://standards.ieee.org/beyond-standards/trends-in-online-age-verification-for-2026
- After Kansas invalidated many transgender residents’ driver’s licences overnight, experts warn that new online age-verification laws requiring digital ID checks could force trans people to out themselves to use the internet, extending offline discrimination into digital spaces. https://www.theverge.com/policy/892075/age-verification-kansas-id-trans
- People whose IDs became invalid will also need a new, compliant ID to vote under Kansas' voter ID law. https://www.axios.com/local/kansas-city/2026/02/26/kansas-law-invalidates-transgender-ids
- Amid repeated nationwide internet shutdowns in Iran, Iranian Women’s Coalition for Internet Freedom has issued a manifesto calling for global recognition of internet access as a human right and urging the creation of a decentralised, open, and rights-respecting global network. https://docs.google.com/document/u/0/d/1qyFKC6tC5fCwNBJKUSlz0gOT5SuzanOaVBMflNci3Zw/mobilebasic
- Citizen Lab researchers say forensic evidence shows Kenyan police used phone-extraction tools made by surveillance firm Cellebrite on the device of activist and opposition politician Boniface Mwangi after his arrest during protests in 2025. https://citizenlab.ca/research/cellebrite-used-on-kenyan-activist-and-politician-boniface-mwangi
Technology for Society
- A new study suggests that algorithms can influence attitudes toward political outgroups. https://www.prosocialdesign.org/blog/do-social-media-feeds-fuel-political-division
- As military power re-emerges as a central driver of international economic relations, a new paper argues that it plays a key role in sustaining global inequality through an “imperialist feedback loop” in which technological and financial dominance enables military dominance, which in turn preserves the global economic hierarchy. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6072166
- Iranian drone strikes on Amazon Web Services data centres in the Gulf have exposed the growing risk that critical digital infrastructure could become a frontline target in modern warfare. https://restofworld.org/2026/iran-amazon-data-center-strikes
- Global Majority countries risk remaining stuck at the bottom of the AI value chain unless they use their leverage over critical minerals to shape AI governance, argues Chinasa T. Okolo in Science. https://www.science.org/doi/10.1126/science.aef6678
- In honor of International Women’s Day, EFF asked five women at their organization about women in digital rights, freedom of expression, technology, and tech activism who have inspired them. https://www.eff.org/deeplinks/2026/03/admiring-our-heroes-international-womens-day-five-women-tech-eff-admires-0
- AI brings intelligence but not wisdom, say Iain Levine and Dunstan Allison-Hope, who argue that Anthropic needs a human rights policy. https://www.linkedin.com/pulse/ai-brings-intelligence-wisdom-why-anthropic-needs-human-iain-levine-lx56e
- Our online lives have been gentrified and cleaned up in a way that looks really similar to the offline world says Jessa Lingel on NPR’s Code Switch. https://www.npr.org/2026/03/04/nx-s1-5720754/how-the-internet-got-gentrified
Privacy and Security
- A new IETF protocol designed to encrypt the Server Name Indication field aims to close one of the web’s final privacy loopholes, potentially limiting the ability of states and network providers to track or censor users’ browsing, writes IETF expert and visiting fellow Nick Sullivan for CDT. https://cdt.org/insights/encrypted-client-hello-closing-the-sni-metadata-gap
- Privacy International (PI) and Women on Web (WoW) surveyed sexual and reproductive justice activists about privacy and surveillance risks in digital spaces and produced a practical guide to help them protect their data and secure their devices. https://privacyinternational.org/long-read/5742/privacy-international-women-web-securing-reproductive-justice-guide-digital-privacy
- Apple’s reported $2 billion acquisition of Israeli surveillance startup Q.ai has sparked criticism from rights advocates, who argue the deal risks bringing military-linked “silent speech” technologies into consumer devices and raises serious human-rights and accountability concerns. https://skylineforhuman.org/en/news/details/897/apples-2-billion-payout-israeli-firm-gaza-genocide
- Biometric and digital ID systems are rapidly spreading across Africa, despite concerns they risk enabling surveillance, exclusion, and long-term technological dependence on foreign technology providers, particularly firms from Europe and China. https://techafricanews.com/2026/02/16/49-african-nations-adopt-digital-id-technology-amid-surveillance-risks
Upcoming Events
- Hacks/Hackers AI x Journalism Day. March 16, Austin, TX. https://www.hackshackers.com/hacks-hackers-ai-x-journalism-day-in-austin-march-16
- Algorithmic Bias, Gender Justice, and Descent-Based Discrimination: Ensuring AI Works for All Women and Girls. This session delves into AI’s influence on women’s and girls’ pathways to justice, emphasizing compounded vulnerabilities from overlapping discriminations. March 18. https://globalforumcdwd.org/event/algorithmic-bias-gender-justice-and-descent-based-discrimination-ensuring-ai-works-for-all-women-and-girls
- Folk Tech Connection Call - March 2026 - with Eriol Fox talking about accessibility and inclusive and usable design in Open Source, as well as space for shareouts on projects, and time to connect. March 18. https://luma.com/bd14zs3s?tk=kbaHVi
- Book Talk: Design for Privacy by Robert Stribley. Are your designs protecting—or exposing—your users? In Design for Privacy (published by Rosenfeld Media), you’ll uncover how shifting technologies threaten personal data and what that means for your work. March 25. New York, NY. https://www.eventbrite.com/e/book-talk-design-for-privacy-by-robert-stribley-tickets-1984635380849
- PDN Pro-Social. Building a Prosocial Platform for Local Communities, Roundabout with New_ Public. March 26. Online. https://luma.com/5vzxn9vn
- All Tech Is Human is holding an all-day workshop in Manhattan on building an inter-party trust framework for AI. March 26, New York, NY. https://docs.google.com/forms/d/e/1FAIpQLSeidCs4-ifuG4iZxi5kh3QBAGuh_UQBksPIzhoTzabTDblaHg/viewform
- ATmosphereConf is THE global AT Protocol community conference. March 26-29, Vancouver, Canada and Online. https://atmosphereconf.org
- "Can Middleware Save Social Media?" Middleware are third-party tools that sit between users and platforms that proponents say give people control over what they see and share online. Skeptics warn they raise privacy concerns and that more fundamental changes are needed. This event asks: Can middleware really deliver the improvements that its boosters envision? March 27. https://knightcolumbia.org/events/can-middleware-save-social-media
- Palestine Digital Activism Forum (PDAF) 2026 hosted by 7amleh – The Arab Center for the Advancement of Social Media. An amazing speaker lineup! March 30-31, Online. https://events.ringcentral.com/events/pdaf-2026
- Mobilize supporters & build campaign momentum. A one-day online accelerator for NGOs stuck on how to mobilise people, build momentum, and turn ideas into action. April 15, Online. https://www.resource-alliance.org/event/creative-organising-lab
- Take Back Tech 3 is a gathering for organizers, artists, tech workers, academics, lawyers, and more to rally together and strategize our next power-building moves. April 17-19, Atlanta, GA. https://www.takebacktech.com
Careers and Funding Opportunities
- 7amleh is inviting proposals from communications agencies and qualified freelancers in Europe to design and implement a multilingual digital campaign targeting youth audiences in Germany, France, Belgium, and Italy. https://7amleh.org/post/call-for-organizing-digital-campaign
- OTF is soliciting proposals from Information Security professionals and organizations to provide services to our Security Lab. Apply by March 25. https://www.opentech.fund/news/request-for-proposals-security-lab
- Human Rights Watch (“HRW”): Director, Information Security. Multiple Locations Considered. https://job-boards.greenhouse.io/humanrightswatch/jobs/8452926002?gh_src=865v7rzf2us
- Internet Society: Head of Global Advocacy and Internet Policy. Remote. https://internetsociety.bamboohr.com/careers/328
- NIST: Physical Scientist (International Standards Coordinator). Gaithersburg, MD. https://www.usajobs.gov/job/859190300
- ACLU: Director of Engineering, Data. New York, NY. https://job-boards.greenhouse.io/aclu/jobs/8417408002#application
- Institute for Law & AI (LawAI): Research Scholars. Cambridge, UK and Remote. https://aimpactful.com/the-institute-for-law-ai-lawai-opens-applications-for-senior-research-scholars
Opportunities to Get Involved
- Applications are now open for the second US cohort of Snap’s Council for Digital Well-Being (CDWB), a year-long program designed to elevate young people’s voices on online safety and digital well-being. https://values.snap.com/news/applications-us-council-for-digital-well-being
- Call for Nominations: Network Leadership Grants. In this call for nominations, the Green Screen Catalyst Fund is looking to support individuals, networks, and organizations who are catalysing action on environmental justice issues across the AI supply chain. Nominations due March 15. https://greenscreen.network/en/catalyst-fund
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.