When AI Meets End-to-End Encryption
But first:
Introducing our new Editor-in-Chief!
Audrey Hingle brings over a decade of editorial experience to The Internet Exchange. With a passion for ethical technology and expertise in content strategy, she’s eager to spotlight the stories that matter. Share your story ideas with her at editor@exchangepoint.tech.
End-of-year links you might have missed
- A U.S. federal judge has ruled that the NSO Group, the maker of Pegasus spyware, is liable for hacking 1,400 WhatsApp accounts belonging to journalists, human rights activists, diplomats, and others. This decision sets an important precedent for holding spyware companies accountable. (Suzanne Smalley | The Record)
- The European Commission has mandated USB-C as the common charging port for electronic devices—a move that will save money, reduce e-waste, and maybe, finally, end my hunt for the “right” charger. (European Commission)
- VPN apps, including Cloudflare's widely used 1.1.1.1, have been pulled from India's Apple App Store and Google Play Store following intervention from government authorities. (Manish Singh | TechCrunch)
- The Year in Math: 2024 was a groundbreaking year for mathematics, with major achievements and advancements in AI pointing to a transformative future for the field. (Jordana Cepelewicz | Quanta Magazine)
- In 2024, Latin America faced internet blockages, power outages, and disruptions linked to politics, climate change, and access to energy. (Jacobo Nájera | Global Voices)
- Podcast episode: Reforming Tech Amidst a Global Backlash Against Women's Rights. (Lucy Purdon | Tech Policy Press)
- Job Opportunity: Chief Technology Officer (CTO) to combat intergenerational poverty by helping low-income Americans overcome debt and rebuild credit at scale (Upsolve)
- ICYMI: GitHub's 2022 research on open source software in India, Kenya, Egypt, and Mexico, exploring local innovations, challenges, and the growing role of individual contributors and civic tech. (GitHub)
How To Think About End-To-End Encryption and AI: Training, Processing, Disclosure, and Consent
By Mallory Knodel, Andres Fabrega, Daniella Ferrari, Jacob Leiken, Betty Li Hou, Derek Yen, Sam de Alfaro, Kyunghyun Cho, and Sunoo Park
Preprint available from Cryptology ePrint Archive, December 2024. Excerpt follows:
End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.
This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI "assistants" within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features.
Table of contents
- Introduction
- Background I: Encryption, AI, and Trusted Hardware
- Our Definitions and Taxonomy
- Technical Implications of Integrating AI with E2EE
- Background II: Consent, Privacy, Users, and Areas of Law
- Legal Implications of Integrating AI with E2EE
- Recommendations
- Discussion
- Conclusion
- Appendix: E2EE Features Evaluation Framework
Summary of recommendations
- Training. Using end-to-end encrypted content to train shared AI models is not compatible with E2EE.
- Processing. Processing E2EE content for AI features (such as inference or training) may be compatible with end-to-end encryption only if the following recommendations are upheld: (a) Prioritize endpoint-local processing whenever possible. (b) If processing E2EE content for non-endpoint-local models, (i) No third party can see or use any E2EE content without breaking encryption, and (ii) A user’s E2EE content is exclusively used to fulfill that user’s requests.
- Disclosure. Messaging providers should not make unqualified representations that they provide E2EE if the default for any conversation is that E2EE content is used (e.g., for AI inference or training) by any third party.
- Opt-in consent. AI assistant features, if offered in E2EE systems, should generally be off by default and only activated via opt-in consent. Obtaining meaningful consent is complex, and requires careful consideration including but not limited to: scope and granularity of opt-in/out, ease and clarity of opt-in/out, group consent, and management of consent over time.
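The recommendations above amount to a decision policy a messaging client could enforce before any AI feature touches E2EE content. As a rough illustration, here is a minimal Python sketch of that policy; the class, field names, and logic are hypothetical and not drawn from the paper, which does not prescribe an implementation:

```python
# Hypothetical sketch of gating AI features in an E2EE client per the
# paper's recommendations: never train shared models on E2EE content,
# require explicit opt-in (off by default), and prefer endpoint-local
# processing. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AIFeatureRequest:
    purpose: str                 # "training" or "inference"
    endpoint_local: bool         # runs entirely on the user's device?
    user_opted_in: bool          # explicit, informed opt-in (default off)
    serves_requesting_user_only: bool = True

def is_permitted(req: AIFeatureRequest) -> bool:
    # Training: using E2EE content to train shared AI models is not
    # compatible with E2EE, regardless of consent.
    if req.purpose == "training":
        return False
    # AI features should be off by default; opt-in consent is required.
    if not req.user_opted_in:
        return False
    # Endpoint-local processing keeps content on the user's device.
    if req.endpoint_local:
        return True
    # Non-endpoint-local processing: only to fulfill this user's own
    # request (the "no third party can see the content" condition is a
    # cryptographic/hardware property not modeled in this sketch).
    return req.serves_requesting_user_only
```

For example, an opt-in, on-device summarization request would pass this check, while a request to use message content for model training would be rejected even if the user had opted in to assistant features.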
Mallory Knodel et al. How To Think About End-To-End Encryption and AI: Training, Processing, Disclosure, and Consent. Cryptology ePrint Archive, Paper 2024/2086. 2024. url: https://ia.cr/2024/2086