When AI Meets End-to-End Encryption


But first:

Introducing our new Editor-in-Chief!

Audrey Hingle brings over a decade of editorial experience to The Internet Exchange. With a passion for ethical technology and expertise in content strategy, she’s eager to spotlight the stories that matter. Share your story ideas with her at editor@exchangepoint.tech.


By Mallory Knodel, Andres Fabrega, Daniella Ferrari, Jacob Leiken, Betty Li Hou, Derek Yen, Sam de Alfaro, Kyunghyun Cho, and Sunoo Park

Preprint available from Cryptology ePrint Archive, December 2024. Excerpt follows:

End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.

This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI "assistants" within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features.

Table of contents

  1. Introduction
  2. Background I: Encryption, AI, and Trusted Hardware
  3. Our Definitions and Taxonomy
  4. Technical Implications of Integrating AI with E2EE
  5. Background II: Consent, Privacy, Users, and Areas of Law
  6. Legal Implications of Integrating AI with E2EE
  7. Recommendations
  8. Discussion
  9. Conclusion
  10. Appendix: E2EE Features Evaluation Framework

Summary of recommendations

  1. Training. Using end-to-end encrypted content to train shared AI models is not compatible with E2EE.
  2. Processing. Processing E2EE content for AI features (such as inference or training) may be compatible with end-to-end encryption only if the following recommendations are upheld:
     (a) Prioritize endpoint-local processing whenever possible.
     (b) If processing E2EE content with non-endpoint-local models:
        (i) No third party can see or use any E2EE content without breaking encryption, and
        (ii) A user’s E2EE content is exclusively used to fulfill that user’s requests.
  3. Disclosure. Messaging providers should not make unqualified representations that they provide E2EE if the default for any conversation is that E2EE content is used (e.g., for AI inference or training) by any third party.
  4. Opt-in consent. AI assistant features, if offered in E2EE systems, should generally be off by default and only activated via opt-in consent. Obtaining meaningful consent is complex, and requires careful consideration including but not limited to: scope and granularity of opt-in/out, ease and clarity of opt-in/out, group consent, and management of consent over time.
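Taken together, the four recommendations form a decision procedure that a provider could apply to any proposed AI feature. As a rough illustration only (this sketch is not from the paper; the class, field names, and function are hypothetical), the checks can be expressed in Python:

```python
from dataclasses import dataclass

@dataclass
class AIFeatureRequest:
    """Hypothetical model of a request to apply an AI feature to E2EE content."""
    endpoint_local: bool                # does inference run entirely on the user's device?
    provider_can_read_plaintext: bool   # would any third party see decrypted content?
    serves_only_requesting_user: bool   # is the content used solely for this user's request?
    user_opted_in: bool                 # explicit opt-in consent (features are off by default)
    used_for_shared_training: bool      # would content train a model shared across users?

def compatible_with_e2ee(req: AIFeatureRequest) -> bool:
    """Apply the summarized recommendations as a boolean check."""
    # Rec. 1: training shared models on E2EE content is never compatible.
    if req.used_for_shared_training:
        return False
    # Rec. 4: AI assistant features require opt-in consent.
    if not req.user_opted_in:
        return False
    # Rec. 2(a): endpoint-local processing is the preferred design.
    if req.endpoint_local:
        return True
    # Rec. 2(b): off-device processing only if no third party sees plaintext
    # and the content serves exclusively the requesting user.
    return (not req.provider_can_read_plaintext) and req.serves_only_requesting_user
```

For example, an on-device assistant activated by opt-in consent passes the check, while the same assistant enabled by default, or one that ships plaintext to a provider-side model, does not. Note that this sketch omits the consent-management subtleties the paper emphasizes (scope, granularity, group consent, and changes over time).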

Mallory Knodel et al. How To Think About End-To-End Encryption and AI: Training, Processing, Disclosure, and Consent. Cryptology ePrint Archive, Paper 2024/2086. 2024. url: https://ia.cr/2024/2086

Please forward and share!
