Raisina Dialogue; Kids' safety
This year's Raisina Dialogue has just ended in Delhi. I came to speak on a panel about "trends" and focussed my remarks on updates related to the Ad Hoc Committee on Cybercrime.
Other sessions of note can be watched online:
- Will Science Secure or Upend the World Order?
- Are Emerging Technologies A Panacea for the SDGs?
- Code, Court & Constitution: Challenging Tech’s Monopoly on Influence
Encryption is good for kids, actually:
- A Parents’ Guide to Encryption from the Global Encryption Coalition https://www.globalencryption.org/parents-guide-to-encryption
- Encryption for Babies is a book! https://www.globalencryption.org/2023/09/encryption-for-babies-pop-up-beta-read/
- My CDT colleague Aliya Bhatia writes in Ms Magazine, "Restricting Access to Information Online Won’t Keep Teens Safe. It Will Only Erode Democratic Rights" https://msmagazine.com/2024/01/23/kids-safe-online-access-social-media-censorship
- A CDT report from Michal Luria, "More Tools, More Control: Lessons from Young Users on Handling Unwanted Messages Online" https://cdt.org/insights/more-tools-more-control-lessons-from-young-users-on-handling-unwanted-messages-online
- Global Encryption Coalition video on Child Safety in the US https://www.youtube.com/watch?v=CrUTpl6S95I
- A recent report from Roxana Radu, "Not child’s play: protecting children’s data in humanitarian AI ecosystems" https://blogs.icrc.org/law-and-policy/2023/12/14/protecting-children-data-in-humanitarian-ai-ecosystems/
For the 2024 Raisina Files, a companion publication to the yearly Dialogue, I wrote about children's rights and encryption. The entire publication is available for download online, and this is a reprint of my article:
Children's Rights at the Centre of Digital Technology Standards by Design
In today’s networked society, there is growing anxiety among adults about the potential negative impact of smartphones, social media, and our always-online culture on our children.[i] However, a more nuanced understanding[ii] of the risks and benefits of technology should acknowledge both the potential harms and the resilience and wisdom demonstrated by the younger generation.
Quotidian parenting concerns aside, the spread of Child Sexual Abuse Material (CSAM), and the real harm that it documents, are tragic crimes,[iii] exacerbated and wildly complicated by the scale and reach of digital-age communications. It is useful to take an extreme issue such as CSAM and child safety and explain how human rights organisations approach policy and technology interventions against child abuse to ensure that there is adequate protection for children’s human rights and civil liberties—namely, privacy, security and safety, and free expression and association.
The most effective approaches to enabling children’s rights online are human rights-centric, rather than protectionist. A shared policy advocacy approach among all stakeholders working in service of strengthening children’s rights has four key aspects: protecting children’s rights in the digital age; incentivising technical architecture that helps ensure child safety; examining the role of agency in rights-protecting ways to moderate content; and deploying encryption for children’s safety.
Beyond policy, there are also proactive ways whereby technology standards can be designed with children’s rights at the centre. Technology can assist child safety in a variety of ways, but technology alone cannot do the work of protecting children’s rights. Technology that protects privacy and confidentiality in online communications ensures and protects the human rights of all—including children and youth in at-risk communities—and allows social workers and other related institutions to help survivors in a secure and private manner. Thus, technical interventions that disrupt the privacy and security features of end-to-end encryption not only threaten the human rights of all people using that technology, including children; they are also counterproductive to the aim of keeping kids safe.
A Human Rights Approach to Child Safety
It is imperative to adopt an unwavering commitment to upholding human rights as a first principle, because children’s rights, as with all rights, are most rigorously and broadly defined within the human rights framework. The fight against child abuse, as with all crimes, necessitates a delicate balance between the imperative to protect the most vulnerable members of society and the preservation of individual rights and freedoms. A human rights-centric framework emphasises the dignity and well-being of every child, acknowledging their right to privacy, safety, and freedom from exploitation. This has the benefit of folding the safeguarding of children’s broader digital activities into a larger effort to keep all children safe from online harms, while preventing the unintentional collateral damage to society as a whole that may result from overreaching or intrusive measures.
Protectionist measures must not leverage violations of anti-CSAM laws and regulations as a pretext for the expansion of surveillance and control mechanisms that could encroach upon the privacy and autonomy of individuals beyond the scope of combating child abuse. Striking the right balance is paramount, ensuring that the fight against criminal content like CSAM does not inadvertently erode the very rights and freedoms it aims to protect. An overly protectionist approach poses significant risks to marginalised groups and vulnerable communities such as women at risk of domestic abuse, LGBTQI individuals, and sex workers. Excessive surveillance and stringent control mechanisms not only undermine the privacy and autonomy of these groups but also perpetuate societal stigmas and discrimination. A nuanced, rights-based approach is essential to ensure that the pursuit of safety does not come at the cost of further marginalising those who are already at the fringes of society.
To address criminal offences like CSAM in a rights-respecting way, it is important not to criminalise any offence because a technology is used, or because the offence has a particular technical element, or because the offence involves a specific type of content such as hate speech or copyright infringement. Each of those approaches creates potential for human rights abuses because they are so easily expanded in scope and technologically impossible to restrict. Proposals that attempt to address content by introducing systemic weaknesses into the security systems we all rely on lay fertile ground for increased cybercrime and other human rights abuses conducted via a vast and interconnected global telecommunications network.
Furthermore, as stated to the UN by several civil society organisations, “Any investigative powers should be tied to investigations of specific crimes. Any cross-border investigative framework should not cater to the lowest common denominator in terms of human rights safeguards.”[iv] Thus, states must be required to adhere to the highest standard of protection in multi-jurisdictional investigations, and not default to the lowest.
When state proposals to protect children online noticeably violate human rights, they lose legitimacy, become targets for wide criticism from the human rights community, and are weakened in the eyes of the public. All indications point to child safety becoming the current decade’s defining example of a classic dynamic: state abuse of investigatory powers weakens the global rules-based order in the medium term and demonstrates the unsustainability of unaccountable power in the long term. By adopting a human rights-centric perspective, we can construct a framework that not only effectively keeps children safe but also fosters a digital environment where children can explore, learn, and communicate without the shadow of unwarranted intrusion or undue restrictions.
Content Moderation, in Moderation
‘Content moderation’ refers to the set of policies, systems, and tools that intermediaries of user-generated content use to decide what user-generated content or accounts to publish, remove, or otherwise manage. As with any technology, it should serve the needs of its users. Therefore, when considering moderating content in any system, end-to-end encrypted or otherwise, any method should: 1) refrain from violating privacy and confidentiality; and 2) empower users by improving user-agent features of those systems.
An end-to-end encrypted communications system is defined by the probability of an adversary's success in learning information about the communication between “ends” or users. Users today demand systems that are both secure and private;[v] systems that are confidential and that limit account metadata.
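To make that framing slightly more concrete, the informal idea can be sketched as a bound on an adversary’s advantage: anyone who observes only ciphertexts and routing data should do essentially no better than guessing. The formulation below is an illustrative paraphrase, not the formal definition given in the cited Internet Draft:

```latex
% Illustrative sketch only: \mathcal{A} is an adversary (including the service
% provider) that sees ciphertexts and routing metadata but not the plaintext m.
\[
  \Pr\big[\mathcal{A}(\text{ciphertexts},\ \text{metadata}) = m\big]
  \;\le\;
  \Pr\big[\text{guessing } m \text{ with no observation}\big] \;+\; \varepsilon,
  \qquad \varepsilon \text{ negligible}.
\]
```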
In a 2021 report, the Center for Democracy and Technology evaluated five content moderation techniques[vi] used in both end-to-end encrypted and plaintext systems against the promises of confidentiality and privacy. These techniques include user reporting, traceability, metadata analysis, and two client-side scanning (CSS) techniques that use artificial intelligence (AI): perceptual hashing and predictive models. Of these five techniques, the analysis showed, metadata analysis and user reporting provide effective tools in detecting significant amounts and different types of problematic content on end-to-end encrypted services, including abusive and harassing messages, spam, misinformation and disinformation, as well as CSAM.
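As an illustration of one of those techniques, the sketch below implements a toy “average hash”, the simplest form of perceptual hashing: an image is reduced to a 64-bit fingerprint and compared to a known fingerprint by Hamming distance. The 8×8 size, the threshold of 10, and the Pillow-based implementation are illustrative assumptions, not a description of any deployed detection system.

```python
# Illustrative sketch of perceptual ("average") hashing, one of the content
# moderation techniques analysed in the CDT report. Not a deployed system.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size greyscale, then set one bit per pixel
    depending on whether it is brighter than the image's mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint when size=8

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare an uploaded image against a known fingerprint.
# A small distance means "perceptually similar"; the threshold (10 here) is
# an arbitrary illustrative choice and a major source of false matches.
# known_bad = average_hash("known_image.png")
# candidate = average_hash("uploaded_image.png")
# if hamming_distance(known_bad, candidate) <= 10:
#     print("possible match - would be flagged for human review")
```

Even this toy version shows why experts flag accuracy problems: the match threshold is arbitrary, small benign edits shift the distance, and deliberate manipulation can push similar images apart or dissimilar ones together.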
However, not all approaches to content moderation using metadata are suitable. Metadata is data about data (such as sender, recipient, time stamp, and message size). In most jurisdictions the creation of more metadata contravenes data protection regulations. Encrypted systems deliver on promises of privacy and confidentiality by reducing discoverable account information—one form of metadata. Steps that platforms take to minimise metadata, such as obfuscating users’ IP addresses, reducing non-routing metadata, and avoiding extraneous message headers, can enhance the confidentiality and security features of direct communications systems. Furthermore, limiting metadata analysis to the user’s device reduces the risk of exposure.
Because metadata can be correlated with other data and itself constitutes important information, proposals that leverage traceability in end-to-end encryption systems actually produce more metadata in such systems, and thus expose all people to greater risk. For instance, AI machine learning approaches to metadata-driven content moderation risk exposing and re-aggregating identifiable information about people to third-party large language models.
Any measure to introduce computing on the user agent, or “client”, or user device (such as client-side scanning) will not only break encryption but also become a direct threat to civil liberties the moment a person’s device becomes their adversary. Moreover, how client-side scanning is envisioned to work is notable: there is a turn to novel computational methods, AI and machine learning (AI/ML), in industry as well as government, in the hope that hard societal problems can be solved with advances in technology. When Apple announced changes to its messaging and photo services in 2021, human rights advocates said,[vii] “Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope-creep not only in the U.S., but around the world.”
Many such attempts come from an over-glorification of AI/ML techniques for content moderation, all the while risking exposure of their users to “classic” or unsophisticated hacking by bad actors. Client-side scanning has been flagged and debunked by academics[viii] who argue that “CSS neither guarantees efficacious crime prevention nor prevents surveillance. Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic.” Public interest technologists[ix] and industry alike[x] will continue to research and expose the serious risks associated with deployment of such methods, backed by expert analyses, and warn against a blind faith in technological solutionism, no matter how cutting-edge the technology in question seems.
There is a need for transparent and inclusive discussions between policymakers, technology experts, and civil society to navigate the complex landscape of online harms to children as it is a problem that intersects policing, protocols, platforms, and user privacy. A collaborative and informed approach balances the imperative to combat illegal content with the paramount importance of preserving fundamental rights and the security of communications in an always-online world.
Indeed, a recent report on youth’s experiences online[xi] shows that they themselves would benefit from more control over their tools, supporting the claim that user reporting and other in-app features are more effective and privacy-preserving solutions to content moderation than automated content moderation by the platform.
Tech Can Assist, But Not Control, Children’s Safety
Tech-assisted approaches recognise that the problems of the digital age are much larger than that of sharing CSAM on encrypted platforms. Human rights advocates urge policymakers and platforms alike to go beyond “backdoors” to encryption, and rather take a wider view, as the previous discussion on the principles of a human rights approach has suggested.
In parallel to the human rights complexities of the problem of CSAM, the challenges surrounding the implementation of technology that attempts to solve CSAM are many,[xii] yet all such technologies are insufficient both in guaranteeing less CSAM and in preserving privacy and confidentiality. Systems that involve content detection inherently involve some level of access to content created and shared by users, thus violating the promises of end-to-end encryption.[xiii] However, a nuanced understanding of encryption technologies provides the opportunity for a balanced approach to the larger problem space that prioritises user privacy and security while addressing the challenges associated with illegal content.
Scholar Laura Draper takes an approach that accepts the ubiquitous existence of strong encryption,[xiv] and concludes that its security, privacy and confidentiality features are helpful to victims of abuse. Draper builds an informed and evolved set of recommendations for how to combat online child exploitation and abuse, including preserving strong end-to-end encryption.
For a variety of purposes, governments are focused on how to detect illegal content in private communications, and the technical approaches they suggest are often flawed. For instance, a draft European Commission report[xv] leaked in September 2020 proposed several detection methods that would each break end-to-end encryption, weaken the security and privacy of all users, and present attractive targets for criminals. Again, experts in both human rights and technology responded by breaking the myths that framed the report,[xvi] arguing that “breaking end-to-end encryption to curb objectionable content online is like trying to solve one problem by creating 1,000 more. Insecure communications make users more vulnerable to the crimes we collectively are trying to prevent.”
Cybersecurity experts are in agreement: There is no way to enable a third party to monitor end-to-end encrypted communications without weakening the security and privacy for all of its users, including those most vulnerable and the victims of crimes for whom digital security is especially critical.
Backdoors to encryption render the whole system vulnerable, weaken the security of all components, and put users at risk. If measures such as mandatory detection, reporting, and removal are intended to apply to end-to-end encrypted communications, then regardless of whether the unlawful content is known, platforms would be forced to undermine end-to-end encryption, and to do so for all content, the vast majority of which is lawful.
By minimising the intense focus on end-to-end encrypted systems that would require content detection, there is an opportunity to build alternative methods to combat illegal content without compromising the privacy and security of people, including children and youth, online. In that search for solutions, a respect for user confidentiality and privacy is a must while addressing the challenges associated with illicit content detection and beyond.
Technical interventions like user reporting and metadata analysis are more likely to be implemented consistently across the industry and better preserve privacy and security guarantees for end users. A narrow focus on these improvements could address the problem of CSAM while avoiding the privacy and security nightmares of broader, technocratic approaches. These tools can detect significant amounts of different types of problematic content on end-to-end encrypted services, including abusive and harassing messages, spam, misinformation and disinformation, and CSAM. These tools have known imperfections—including that users sometimes make false accusations via provided reporting mechanisms—thereby necessitating more research to improve these tools and better measure their effectiveness.
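A minimal sketch of what user reporting can look like on an end-to-end encrypted service follows. The field names and the submit_report function are hypothetical; the point is that the report originates with a recipient who already holds the plaintext, so the platform receives only what that user chooses to disclose and never scans messages in transit. Real designs (for example, cryptographic “message franking”) add verifiability that is omitted here.

```python
# Hypothetical sketch of an in-app user report on an end-to-end encrypted
# service: the report is initiated by the recipient, who already has the
# plaintext, so the platform does not scan messages in transit.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AbuseReport:
    reporter_id: str                  # pseudonymous account identifier
    reported_account_id: str
    category: str                     # e.g. "csam", "harassment", "spam"
    reported_messages: list[str]      # plaintext disclosed voluntarily by the reporter
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def submit_report(report: AbuseReport) -> None:
    # In a real client this would be sent over the normal authenticated
    # channel to the platform's trust-and-safety queue for human review.
    print(f"queued report from {report.reporter_id}: {report.category}, "
          f"{len(report.reported_messages)} message(s)")

submit_report(AbuseReport(
    reporter_id="user-123",
    reported_account_id="user-456",
    category="harassment",
    reported_messages=["example of an unwanted message"],
))
```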
Child Safety Begins With Privacy
The complexities of online safety for children and youth stem from novel and pervasive risks at scale. While research indicates a lack of clear cause-and-effect understanding of the internet’s impact on youth broadly, this article has focused on the most egregious, albeit less understood, crimes against child victims of abuse. The controversies and potential drawbacks of policies aimed at devastating but relatively rare crimes include fears of censorship and restrictions on information access for all people, including children and youth. Meaningful policy enhancements to online safety for all children would require stronger privacy legislation, clearer content guidelines, and market regulation that would force better practices and end-user features in social media.[xvii]
It turns out that encryption protects all human rights, including those of children, and especially youth in at-risk communities. Encryption has a vital role in safeguarding the privacy and safety of survivors[xviii] of domestic violence, sexual violence, stalking, and trafficking. As that research explains, secure communication and storage tools are crucial for survivors and those supporting them, with strong encryption being a critical component of the solution. Encryption mitigates technology-facilitated abuse, for example by aiding safety planning and evidence protection. This is critically important to the problem of child abuse, where statistically it is the caretaking adults who are most likely to be the victimisers. Encryption prevents unauthorised access to data, both in transit and at rest, which can safeguard survivors’ case files from privacy breaches and revictimisation.
Encryption’s properties of data integrity can help in maintaining strong evidence when victims or parties to the crime are collaborating with law enforcement and legal professionals. Ultimately, weakening strong encryption practices could compromise the privacy and safety of survivors. Strong encryption can empower survivors with secure communication tools that are crucial for their ability to seek help, safety, and healing.
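As a small sketch of the data-integrity point, a keyed hash computed when evidence is first preserved can later demonstrate that the file has not been altered. This uses only Python’s standard library and is a generic illustration, not a description of any particular evidence-handling workflow.

```python
# Illustrative only: using an HMAC to detect tampering with preserved evidence.
import hashlib
import hmac

def seal(evidence: bytes, key: bytes) -> str:
    """Compute a keyed fingerprint when the evidence is first preserved."""
    return hmac.new(key, evidence, hashlib.sha256).hexdigest()

def verify(evidence: bytes, key: bytes, expected: str) -> bool:
    """Later, confirm the bytes still match the original fingerprint."""
    return hmac.compare_digest(seal(evidence, key), expected)

key = b"shared-secret-held-by-the-custodian"   # placeholder key management
original = b"screenshot or chat export as originally preserved"
fingerprint = seal(original, key)

print(verify(original, key, fingerprint))                 # True
print(verify(original + b" (edited)", key, fingerprint))  # False
```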
Moreover, children and those providing services to child victims can sometimes find themselves in an adversarial relationship with state power. Repressive regimes, military occupation, and migration are all examples of when children are vulnerable to state power and its abuses or excesses.[xix]
Companies have made commitments to user privacy and communications security, including visible changes, such as the use of encryption, that make children and teens safer online.[xx] When companies, whether pressured by law enforcement to do more for children or coerced through legislative restrictions, open the door to privacy threats for all users, regardless of age or jurisdiction, they create new threats for the same young people the changes are meant to protect. Youth and children in abusive homes are especially vulnerable to injury and reprisals, including from their parents or guardians.
Overall, child safety enhancements must involve families, schools, and young people themselves in creating effective strategies, with states responsible for tailoring those strategies appropriately per jurisdiction and cultural environs. Nor must the importance of digital literacy education be forgotten; it aligns child safety with efforts to bolster sustainable development and civil rights.
Conclusion
A principled approach to online child safety uncovers some key takeaways for the debates in parliaments, board rooms and around dinner tables:
- There is no silver-bullet solution to the complex problem of child exploitation. Data-driven methods in computer vision and data analysis at scale are largely overstated, processing intensive, and require human oversight. False confidence in these tools does a disservice to young victims as much as it turns all young people’s privacy into collateral damage.
- Law enforcement and the intelligence communities in rule-of-law democracies have consistently demonstrated a lack of restraint in the use of pervasive monitoring tools. While policy must continue to hold investigatory powers in check, the ubiquitous deployment of strong encryption is another necessary check on this power.
- Children’s threat models are complex because they lack legal standing, they are dependents, and they need to be cared for. They deserve safety from abusive parents, strangers and familiar adults who would hurt them, companies that might exploit them, and states who would neglect them. Technical mechanisms that give children and youth more agency over their digital lives give them the tools to address the threats they might be facing.
On the whole, the safety of already marginalised populations, especially children and youth in at-risk communities, could be seriously endangered in the absence of end-to-end encryption environments. A more holistic and child-centric approach is needed both from policy and technology.
Notes
[i] Arman Khan, “The Validation Spiral: How We Let Others Define Our Self-Worth,” Feminism in India, June 2021, https://feminisminindia.com/2021/06/18/the-validation-spiral-how-we-let-others-define-our-self-worth/.
[ii] Kelli María Korducki, “What the Teen-Smartphone Panic Says About Adults,” The Atlantic, June 14, 2023, https://www.theatlantic.com/newsletters/archive/2023/06/teen-smartphone-social-media-adults/674417/.
[iii] Susan Landau, “Finally Some Clear Thinking on Child Sexual Abuse and Exploitation Investigation and Intervention,” Lawfare, January 24, 2023, https://www.lawfaremedia.org/article/finally-some-clear-thinking-on-child-sexual-abuse-and-exploitation-investigation-and-intervention.
[iv] Joint civil society statement, “Privacy & Human Rights in Cross-Border Law Enforcement,” August 2021, https://www.eff.org/files/2021/08/17/20210816-2ndaddprotocol-pace-ver2-final.pdf.
[v] Mallory Knodel et al., “Definition of End-to-End Encryption,” Internet Draft, Internet Engineering Task Force, June 21, 2023, https://datatracker.ietf.org/doc/draft-knodel-e2ee-definition/.
[vi] Seny Kamara et al., Outside Looking In: Approaches to Content Moderation in End-to-End Encrypted Systems, Center for Democracy & Technology, 2021, https://cdt.org/wp-content/uploads/2021/08/CDT-Outside-Looking-In-Approaches-to-Content-Moderation-in-End-to-End-Encrypted-Systems.pdf.
[vii] Greg Nojeim et al., “Apple’s Changes to Messaging and Photo Services Threaten User Security and Privacy,” Center for Democracy & Technology, August 2021, https://cdt.org/press/cdt-apples-changes-to-messaging-and-photo-services-threaten-users-security-and-privacy/.
[viii] Hal Abelson et al., “Bugs in Our Pockets: The Risks of Client-Side Scanning,” arXiv, October 14, 2021, https://arxiv.org/abs/2110.07450.
[ix] Gurshabad Grover et al., “The State of Secure Messaging,” The Centre for Internet and Society, July 2020, https://cis-india.org/internet-governance/blog/the-state-of-secure-messaging.
[x] “EDPB-EDPS Joint Opinion 04/2022 on the Proposal for a Regulation of the European Parliament and of the Council Laying Down Rules to Prevent and Combat Child Sexual Abuse,” European Data Protection Board, July 28, 2022, https://edpb.europa.eu/our-work-tools/our-documents/edpbedps-joint-opinion/edpb-edps-joint-opinion-042022-proposal_en.
[xi] Michal Luria, “More Tools, More Control: Lessons from Young Users on Handling Unwanted Messages Online,” Center for Democracy & Technology, November 9, 2023, https://cdt.org/insights/more-tools-more-control-lessons-from-young-users-on-handling-unwanted-messages-online/.
[xii] Mallory Knodel, “To the UK: An Encrypted System That Detects Content Isn’t End-to-End Encrypted,” May 25, 2022, https://cdt.org/insights/to-the-uk-an-encrypted-system-that-detects-content-isnt-end-to-end-encrypted/.
[xiii] Anushka Jain et al., “The draft Indian Telecommunication Bill retains its colonial roots,” Internet Freedom Foundation, September 2022, https://internetfreedom.in/the-draft-indian-telecommunication-bill/.
[xiv] Laura Draper, “Protecting Children in the Age of End-to-End Encryption,” Joint PIJIP/TLS Research Paper Series 80, 2022, https://digitalcommons.wcl.american.edu/research/80.
[xv] Internet Society and Center for Democracy & Technology, “Breaking Encryption Myths: What the European Commission’s Leaked Report Got Wrong About Online Security,” Global Encryption Coalition, 2020, https://www.globalencryption.org/wp-content/uploads/2020/11/2020-Breaking-Encryption-Myths.pdf.
[xvi] Internet Society and Center for Democracy & Technology, “Breaking Encryption Myths.”
[xvii] Lauren Leffer, “Here’s How to Actually Keep Kids and Teens Safe Online,” Scientific American, September 18, 2023, https://www.scientificamerican.com/article/heres-how-to-actually-keep-kids-and-teens-safe-online/.
[xviii] Internet Society and National Network to End Domestic Violence, “Understanding Encryption: The Connections to Survivor Safety,” 2021, https://www.internetsociety.org/wp-content/uploads/2021/05/NNEDV_Survivor_FactSheet-EN.pdf.
[xix] Dag Tannenberg, “Political Repression,” in The Handbook of Political, Social, and Economic Transformation, ed. Wolfgang Merkel, Raj Kollmorgen, and Hans-Jürgen Wagener (Oxford, 2019), https://doi.org/10.1093/oso/9780198829911.003.0066.
[xx] Loredana Crisan, “Launching Default End-to-End Encryption on Messenger,” Meta, December 6, 2023, https://about.fb.com/news/2023/12/default-end-to-end-encryption-on-messenger/.