‘Best Interests of the Child’ Paradox: The Social Media Ban Debate in the UK

The UK’s current approach, including the rollout of the Online Safety Act, is still failing to end the widespread harms facing young people online.


By Morgan Briggs

The UK finds itself at a critical juncture in addressing the ongoing harms to children and young people online. Most recently, Grok, the Musk-engineered AI chatbot embedded in X, has been used to create sexualised, non-consensual imagery in which children and young people were digitally undressed. While Ofcom responded by demanding an explanation and soon after launching a formal investigation, the core of the issue remains: the UK’s current approach, including the rollout of the Online Safety Act, is still failing to end the widespread harms facing young people on these platforms.

Often, proposed solutions to the challenge of keeping children safe online default to blanket approaches that place a plaster over the wider issue rather than addressing it at its source. The core problem is not that children use social media, but that the platforms themselves avoid taking responsibility, allowing online harms to young people to persist. Any meaningful solution must start by listening to children and young people to understand what they want from their online experiences. Yet recent events involving Grok have intensified calls within the UK Parliament to ban social media for under-16s. On 21 January 2026, the House of Lords voted 261 to 150 in favour of amending the Children’s Wellbeing and Schools Bill, requiring user-to-user services to employ age assurance measures to prevent under-16s from accessing the platforms.

Policymakers are divided – some see the ban as the best and only path forward, while others fear it allows platforms to evade accountability and harms the young people it is meant to protect. Conservative peer John Nash, who put forward the amendment, said the UK faced “nothing short of a societal catastrophe caused by the fact that so many of our children are addicted to social media.” Meanwhile, opponents such as Baroness Claire Fox, a non-affiliated life peer, warned that the ban would be a quick-fix solution that creates a “whole new raft of difficulties down the line.”

While the UK Government has recently launched a three-month consultation on children’s social media use, it remains to be seen how children and young people will be meaningfully engaged in the process. Comments from several politicians have cast doubt on whether children’s perspectives will be genuinely considered. For example, Kemi Badenoch, Leader of the Conservative Party, remarked, “We tell children what to do all the time. Children are not adults. Freedom is for adults,” highlighting a prevailing attitude that children’s thoughts and opinions are secondary, or need not be taken into consideration. What remains clear is that the voices of children and young people – the group most affected – are frequently sidelined, resulting in policy decisions that are far removed from the best interests of the child.

Children’s Experiences Online

Research demonstrates that platforms have been designed to maximise duration of use, and that young people are especially susceptible to the harmful effects of digital addiction because of their stage of neural development. However, online harms extend well beyond those associated with prolonged screen time. Investigations by the BBC revealed that fictional social media profiles registered as 13- to 15-year-olds were quickly recommended content about bullying, self-harm, violent abuse, and weapons.

The rapid growth of easily scalable and accessible generative AI tools has further amplified the volume of online harm. Text-to-image and video generators have driven a proliferation of deepfake child sexual abuse imagery, with the Internet Watch Foundation citing 2025 as the “worst year on record for online child sexual abuse material.” Investigations have found that AI companies like Meta had internal policies allowing AI chatbots to “engage a child in conversations that are romantic and sensual,” and parents have come forward alleging that online AI companions encouraged suicidal thoughts and ultimately contributed to their children’s deaths.

However, against the backdrop of these harms, young people use social media to find community and connection and to advocate for their political views. Research demonstrates that young people from marginalised communities stress the importance of online support systems, noting the substantial value of finding a sense of belonging. Online spaces have also been documented as giving young people the opportunity to explore their unique interests within a supportive community.

To make decisions that genuinely respect young people’s full set of rights and best interests, we must carefully weigh both the benefits and harms of these platforms, and most crucially, listen to what young people themselves are advocating for.

Are We Getting The ‘Best Interests Of The Child’ All Wrong?

While there remains disagreement on the best ways to tackle online harms to children and young people, discussions of the United Nations Convention on the Rights of the Child (UNCRC) are often missing from these deliberations. The UNCRC is the most widely ratified human rights treaty in the world, containing 54 articles that outline the fundamental rights of every child. The UK ratified the UNCRC in 1991, meaning it has legal obligations to uphold the treaty under international law. And while Scotland has incorporated the UNCRC into domestic law, England, Wales, and Northern Ireland have not; even so, its principles are referenced in domestic legislation, and the UK submits regular reports to the UN Committee on the Rights of the Child.

The principles detailed in the UNCRC are highly relevant to the ongoing debate on a social media ban, but I want to focus specifically on Article 3, the best interests of the child principle, which reads: “In all actions concerning children, whether undertaken by public or private social welfare institutions, courts of law, administrative authorities or legislative bodies, the best interests of the child shall be a primary consideration.” General Comment No. 14 on Article 3 expands on this, stating that “an adult’s judgment of a child’s best interests cannot override the obligation to respect all the child’s rights under the Convention.”

However, despite this clear explanation, the phrase “best interests” is often invoked to justify paternalistic approaches to digital governance. Child rights scholars Mary Mitchell, Laura Lundy, and Louise Hill have warned against this, stating that the best interests of the child “should not be equated with protection from harm, the right to development or education, etc.: it is in a child’s best interest to enjoy all their human rights.” Unsurprisingly, assuming what the best interests of the child are without consulting them also violates UNCRC Article 12, under which children and young people have the right to be heard and taken seriously in all matters affecting them.

What Do Young People Actually Want?

When asked how they would like to see AI systems designed and developed, the children and young people who contributed to the Manifesto for the Future of AI made their priorities clear. In the manifesto, participants called on world leaders to “think about children’s experiences and needs around the world and put things in place to make sure AI is safe for children, including restrictions on social media.” They also urged lawmakers to “put in place new laws that make sure that AI is developed and used ethically” and to “make sure that all children have the opportunity to benefit from AI.” Nowhere in these statements do the contributing children and young people suggest that the best solution is to prevent them from accessing platforms.

Countless studies indicate that many young people view online spaces as important to them, especially when those spaces offer community and room to grow their independence. Young people have frequently said that platform developers should be held accountable and should ensure that social media is safe for children. When we fully apply the best interests of the child principle, young people’s full set of rights must be considered in depth, including those that allow children to create community and find supportive spaces and places of belonging.

Australia’s Approach to a Social Media Ban 

International approaches, such as Australia’s recent amendment, highlight the risks of blanket bans that penalise young people rather than tackling the source of the issue. We are seeing this play out in real time in Australia through the Online Safety Amendment (Social Media Minimum Age), which came into force in December 2025 and introduces a mandatory minimum age of 16 for key social media platforms, including YouTube, X, Facebook, Instagram, TikTok, Snapchat, Reddit, Twitch, Threads and Kick. Australia’s eSafety Commissioner cited the need to protect young Australians from risks to their health and well-being on social media platforms. While these concerns are valid, children are the ones most negatively affected by these policy positions. Rather than focusing on creating more empowering and safer online solutions, the current position mandates that platforms take steps to prevent under-16s from having accounts. In fact, the eSafety Commissioner has been very cautious in describing the Online Safety Amendment, indicating that it should be viewed as a delay to having an account rather than a flat-out “ban.”

Considering all of the rights young people are entitled to, in line with the best interests principle, the Australian Human Rights Commission detailed key human rights and freedoms contained in the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social, and Cultural Rights (ICESCR) that would be affected by a blanket social media ban. These range from the freedom of expression and access to information to the right to culture, leisure, and play. The Commission indicated that the Online Safety Amendment in Australia has the potential to “significantly interfere with the rights of children and young people.”

The Cost of Ignoring the Warning

Treating Australia as a case study, the UK should heed the warning offered by the Australian Human Rights Commission. First and foremost, members of the UK Parliament must understand that a blanket ban does not take into consideration the nuanced lived experiences of young people. Once again, young people’s voices are not being centred in decisions affecting them, denying them a key right under the UNCRC. Additionally, as UNICEF has indicated, there are risks that could cause a social media ban to backfire. UNICEF’s statement, released in December 2025, calls for a broader approach that does not rely solely on age restrictions but instead prioritises children’s rights to privacy and participation, thereby avoiding “pushing them into unregulated, less safe spaces.” It continues, “Laws introducing age restrictions are not an alternative to companies improving platform design and content moderation.”

Instead of working to create safer digital spaces – by ensuring that technologies, especially those present on social media platforms, cannot produce and spread child sexual abuse material (CSAM) – the solution proposed is to prevent young people from using the platforms in the first place or, in some cases, to raise the age of digital consent. Any solution must address the root cause of harm and ask where the legal duty of care should fall. Current approaches shift that responsibility away from where it belongs: the social media platforms. And while holding platforms to account will not be simple, it is a worthwhile endeavour. Otherwise, a ban simply kicks the can down the road, deferring harms until the age of 16 without addressing the core issue.

The fact of the matter is that, time and time again, children are rarely, if ever, the beneficiaries of digital policies: either they remain on the platforms and face harms, or they are restricted from using them. So we must ask ourselves who is determining what the best interests of the child are, and whether they are getting it right. Any consideration of the best interests of the child must take a holistic view of children’s lives and make sincere efforts to incorporate children’s voices. Otherwise, these best interests are merely projections of what adults think. And from what we already know, many young people have expressed a desire to participate safely in digital spaces and to shape their rights online, not to be excluded.

Bans create a false sense of security, leaving the underlying harms unaddressed. Children’s use of social media platforms is not and has never been the core issue. And when we simply penalise young people for adults’ missteps, we not only negate their full set of rights but also remove opportunities for teaching them about responsible digital citizenship.

If we want real progress, the UK must send a clear message that the cost of doing business in the UK is prioritising the well-being of children and young people and listening to their advocacy for empowerment. The choice is clear – either we opt for exclusion, or we demand concrete change and accountability from the platforms perpetuating the harms.

Morgan Briggs is an AI policy expert, data scientist, and ethicist working on human rights and AI, governance, and the ethical considerations of data science and AI methodologies. Morgan has experience leading cross-disciplinary teams and partnering with organisations such as the ICO, UNICEF, the Scottish AI Alliance, UNESCO, the Office for AI, the Council of Europe’s Committee on Artificial Intelligence, and the Global Partnership on AI. She is an author of the first-ever Human Rights, Democracy, and the Rule of Law Risk and Impact Assessment for AI Systems (HUDERIA), and is currently creating a curriculum for young people in the UK on generative AI and children’s rights and conducting research on real-world AI evaluation.


Register interest: Feminist approaches to encryption workshops

We re-ran our popular MozFest session, "Encryption and Feminism: Reimagining Child Safety Without Surveillance," online last Tuesday so more people could participate in the conversation. Again, it was a smash success, full of interesting ideas from Chayn's Eva Blum-Dumontet, Superbloom’s Veszna Wessenauer, Courage Everywhere’s Lucy Purdon, UNICEF’s Gerda Binder, and IX's Ramma Shahid Cheema and me, Audrey Hingle.

What's next?

We want to run a series of small, interactive follow-up workshops focused on crafting narratives grounded in feminist approaches to encryption across policy, organisational change, and product and design. We'd also like to find funding support to grow this work, so if you're a funder, please hit us up. In the meantime, please register your interest in participating in a workshop!

Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber

From the Group Chat 👥 💬

This week in our Signal community, we got talking about:

The House Judiciary Committee’s Foreign Censorship Threat, Part II report lands right in the middle of a growing transatlantic tech and trade fight. It frames EU platform regulation as a coordinated campaign to pressure US companies and reshape global speech rules. For a useful legal explainer of the broader extraterritoriality clash, see this recent Cyberleagle post.

As one IX community member points out, Europeans are currently focused on digital sovereignty and EuroStack, but paying far less attention to what is happening in parallel in tariff negotiations. In DC, trade officials reportedly feel confident they are gaining leverage, with congressional investigations like this one helping to build public political pressure alongside trade talks.

Want to participate in the group chat? Become a paid member.

Internet Governance

Digital Rights

Technology for Society

Privacy and Security

Upcoming Events

Careers and Funding Opportunities

Opportunities to Get Involved

  • A CHI 2026 workshop in Barcelona is seeking position papers and statements of interest on aligning commercial incentives with ethical design to tackle deceptive design (dark patterns). The workshop aims to bring researchers and practitioners together to explore how business goals and user wellbeing can better align. Submissions close February 19. https://chi2026.darkpatternsresearchandimpact.com

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

💡
Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn, and don't forget to forward and share!