See No Evil, Speak No Evil, Hear No Evil
A look at the facts surrounding the debate on social media bans.
By Elisa Lindinger, co-published with SUPERRR.
“Personality deficits and problems in social behavior” – this is what Chancellor Merz claims is caused by excessive social media use among children. The SPD (Social Democratic Party of Germany) proposes comprehensive technical measures that, according to Justice Minister Hubig, will allow children to grow up “without cyberbullying, constant comparison, or beauty ideals” – with the implicit assumption that these problems are unique to the internet. Both politicians agree that a social media ban for children under 14 (or 16) would make many things better. Let’s take a detailed look at the assumptions, facts, and contexts behind the ban debate.
The Thesis: Children are Suffering
Studies clearly show that, on average, children’s mental health is worse today than before the pandemic. The authors of one such study cite a state of “permanent crisis” as the reason, driven by factors such as the climate crisis, social isolation due to school closures and curfews, pressure to perform at school, and constant exposure to news about wars. The cost-of-living crisis, and the resulting lack of financial resources available to parents, also has a major impact on children’s health.
Many structural causes contribute to the fact that, statistically speaking, children today are more likely to suffer from mental illness: financial poverty (almost 15% of children in Germany and 21% of children in the UK grow up in relative poverty), discrimination and inequality, global instability and a growing sense of powerlessness. It’s no wonder they’re struggling.
In addition, while previous generations were often able to move around independently in public spaces even at a young age, the lives of children today are heavily structured, controlled, and regulated by adults. Their range of movement has decreased dramatically in recent decades, and spaces where they can act independently and without supervision are becoming rare.
The Supposed Cause: Online Social Networks
Under these circumstances, it’s hardly surprising that social media has become an important alternative space for children to cultivate friendships and find information and inspiration. Once there, they encounter sensationalist content as well as bad news from around the world, and the ranking algorithms of the big tech platforms give preference to such content because it increases the amount of time users spend on the platforms. More time, more advertising, more profit. And yet some of the information children see there is real and important. Denying them access to information – especially information that will shape their future – is like putting a bandage over their eyes so they can’t see the wound on their hand.
The scientific basis for how exactly social media use affects the psyche is thin, and evidence-based policy has long since given way to the perceived truths of public debate. Popular nonfiction books, most notably Jonathan Haidt’s “The Anxious Generation”, portray social media use as the cause of the mental health crisis of an entire generation. Studies, and scientists’ repeated attempts to explain that this exaggerated diagnosis is neither globally valid nor backed by proof of social media’s causal role, unfortunately receive significantly less attention.
Children themselves want contact with friends, digital participation, and platforms that work better for them. The majority of children are against a ban; the older they get, the more they want to set the rules for how they use social media together with their parents. Fear and helplessness seem to lie primarily with the adults – perhaps it’s parents, not children, who are the real “Anxious Generation”?
But all these points are quickly ignored in discussions about social media bans: children's right to participation and support; their desire to discuss and negotiate rules with their parents as part of growing up; and the fact that social media bans hide the real problems – global and social crises – instead of solving them.
The Fallacy: a Social Media Ban Works
One phrase comes up often in this debate: “We have to do something.” The desire is understandable, but a blanket ban is a knee-jerk reaction that doesn’t address the real problems. A comparison: when there’s a lot of traffic on a road, we don’t lock children up at home for the foreseeable future. Instead, we make the roads safer with traffic lights, crossings and 30 km/h speed limits – thus making it safer for everyone.
The EU had considered the equivalent of some digital crosswalks and speed limits with its data protection and platform regulations, but now it is backtracking as part of its deregulation agenda. Not to mention that we’re letting major social networks off the hook by banning children from social media, instead of forcing them to change their practices.
Much of what is described as harmful to children is actually harmful to us all. Why don't we allow ourselves to demand a digital world that doesn't just want to profit from us and our time? Perhaps because we’re so caught up in the logic and inevitability of the big platforms that we can no longer even allow ourselves to imagine anything else. But this is not enough. It leaves politics lacking in vision and ultimately it is ineffective. It regulates and solidifies the status quo, instead of uplifting better alternatives.
A social media ban also means protecting children without offering them an alternative, all while we adults happily continue to indulge in surveillance capitalism right before their eyes. That simply can't work. A debate about child protection and children's rights online begins with listening: What kind of world do children and young people want today? And what role does technology play in it, if it were up to them?
Listening to their answers, seeing problems in all their complexity, and talking about alternatives—all of this is far more time-consuming than a ban. But it is the only way to give children what they themselves demand and what they are legally entitled to.
Further perspectives:
What I’ve written here focuses on the beliefs driving the social media ban debate and what really lies behind them, but this is by no means the end of the story. Here are a few assessments from technical, digital policy, children’s rights, and social policy perspectives:
- Morgan Briggs writes for Internet Exchange on the social media ban debate in the UK.
- D64 explains why the EUID wallet is not a good solution for age verification and why national social media bans conflict with EU law.
- The German Child Protection Association is against fixed age limits, and the German Social Welfare Association considers them “disrespectful to young people.”
- According to the German Education Foundation, a ban “does not do justice to the complex realities of digital environments and, in the long term, weakens the rights, resilience, and trust of young people.”
- 390 security and privacy researchers from 30 countries call for a moratorium on age assurance checks.
- The German Children's Fund provides constructive approaches for action and calls for “realistic policy approaches that both hold platforms accountable and provide safe digital spaces for young people and strengthen their resilience.”
- The Center for European Policy Analysis explains why social media bans enforced with age verification systems make European platform alternatives almost impossible, thereby strengthening Big Tech.
Elisa Lindinger is the Co-Founder of SUPERRR, a Berlin-based non-profit working to reframe the role of digital technologies, to be in service of justice and hope. Elisa bridges technology, the arts, and the humanities, focusing her research on the social impact of emerging technologies and feminist interventions into power imbalances in tech.
FabRiders' Session Design Lab
Conference and staff retreat season is fast approaching, and many of us are getting ready to design gatherings that genuinely support learning, reflection, and strategy, whether that’s at RightsCon or internal convenings and away days.
FabRiders' Session Design Lab, facilitated by IX contributor Dirk Slater, runs from 24–25 March (3–6 pm UTC). It is a hands-on, online space to turn any kind of session idea – workshops, retreats, coalition meetings, community calls, conferences – into an engaging, participatory design that really works for the people in the room. I took this course ahead of facilitating our successful session on Encryption and Feminism and I highly recommend it!
Want to appear here? Sponsor a newsletter.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
This Week's Links
Open Social Web
- The AT Protocol, used by Bluesky, represents a new hybrid model of decentralisation that combines elements of federation (like ActivityPub) and peer-to-peer systems (like Nostr), writes Bluesky CTO Paul Frazee. https://www.pfrazee.com/blog/practical-decentralization
- A team at New_ Public has released ALF (atproto Latency Fabric, not Alien Life Form), an open-source tool that lets apps built on the AT Protocol create drafts and schedule posts. https://leaflet.pub/p/did:plc:3vdrgzr2zybocs45yfhcr6ur/3mfuiu2yl4k2u
Internet Governance
- The ITU has launched a new focus group to explore embodied AI systems: robots and other physical AI agents that can perceive, learn from, and interact with the real world. https://www.itu.int/en/ITU-T/focusgroups/eai/Pages/default.aspx
- Sarah Shoker, who led the Geopolitics Team at OpenAI, says frontier AI companies still don’t have clear or stable policies governing military uses of their models, leaving the public, and even their own employees, guessing about how these systems are being integrated into defence and intelligence operations. https://sarahshoker.substack.com/p/a-few-observations-on-ai-companies
- The US military is reportedly using Claude AI from Anthropic to analyse intelligence and help identify targets in strikes on Iran, raising new questions about how AI companies’ safety commitments apply once their models enter the national security pipeline. https://futurism.com/artificial-intelligence/claude-anthropic-military-iran
- Google plans to test changes to its search results to give rivals more prominence, seeking to avoid an EU fine for allegedly favoring its own services in searches for hotels, flights and restaurants. https://www.msn.com/en-us/news/technology/exclusive-google-to-test-changes-to-search-results-source-says-as-eu-fine-looms/ar-AA1X55m6
- At the AI Impact Summit in Delhi, policymakers and technologists argued that the future of AI governance will depend less on abstract principles and more on building shared data infrastructure that supports open, public-interest AI, say Creative Commons’ Anna Tumadóttir and Rebecca Ross. https://creativecommons.org/2026/03/04/ais-infrastructure-era
- Debates over banning social media for under-16s should look to the early internet: platforms like Club Penguin and Neopets offered child-focused digital spaces with pseudonymity, creativity, and built-in safety features that today’s algorithm-driven social media lacks, argues Ella Harvey. https://www.linkedin.com/pulse/bring-back-club-penguin-what-early-internet-tells-us-future-harvey-wxkse
- A new mixed-method study introduces “infrastructural anxiety” to explain how Dutch citizens and civil servants experience a loss of control over communication infrastructure, finding that people worry most about data security, feel largely powerless to drive systemic change, and increasingly push responsibility onto the state even as officials say real governance is constrained by global standards bodies and market concentration. https://firstmonday.org/ojs/index.php/fm/article/view/14530
- Meta’s content moderation AI is sending a flood of ‘junk’ tips to the DoJ, US child abuse investigators say. https://www.theguardian.com/technology/2026/feb/25/meta-ai-junk-child-abuse-tips-doj
- Integrity Institute members submitted recommendations to the Oversight Board’s review of Meta’s account disabling policies, arguing that platforms must ensure stronger due process, transparency, and procedural fairness when suspending or banning users. https://www.integrityinstitute.org/research/metas-account-disabling-policies-oversight-board
Digital Rights
- WITNESS has submitted a public comment to the Meta Oversight Board on non-consensual AI sexualized impersonation. The case shows what happens when a platform is told to fix a problem, its refusal is documented, and the predicted harm then materializes. https://www.witness.org/witness-submits-public-comment-to-meta-oversight-board-on-ai-generated-sexual-exploitation
- An open letter signed by 371 security and privacy researchers across 30 countries calls for a moratorium on rolling out age assurance online, arguing that current proposals are easy to circumvent, technically hard to deploy at internet scale, and likely to create major harms. https://csa-scientist-open-letter.org/ageverif-Feb2026
- The US Federal Trade Commission said it will not enforce penalties under the Children’s Online Privacy Protection Act (COPPA), which regulates how websites, apps, and online services collect and use children’s personal data, against companies that use age-verification technologies, provided the data collected is used only to confirm a user’s age, not retained afterward, and protected with appropriate safeguards. https://therecord.media/ftc-says-it-wont-enforce-coppa-age-verification
- The “agentic era” will require a single primary personal AI agent per person: a fiduciary system that mediates identity, consent, and attention as autonomous AI systems increasingly act on users’ behalf across the web, writes Project Liberty CTO Braxton Woodham. https://medium.com/@tminusbraxton/a-primary-agent-for-the-agentic-era-f0099493df3f
Technology for Society
- Prediction markets are being promoted as tools for forecasting the future, but in practice they function more like financialised gambling platforms that extract money from users while amplifying speculation about real-world events, argues Ayesha A. Siddiqi. https://www.tank.tv/magazine/issue-106/features/expected-outcomes-by-ayesha-a-siddiqi
- The Cherokee Nation has created a nine-member task force to study the potential impacts of new data centres proposed in northeastern Oklahoma. https://www.kjrh.com/news/local-news/cherokee-nation-creates-data-center-task-force
- Ben Collins, CEO of the Onion, talks with Julia Angwin, Director of the Independent Media Lab at the Harvard Shorenstein Center, at Knight Media Forum 2026 about satire in public discourse. https://www.youtube.com/watch?v=EI2tOsa9AP0
- A handful of tech giants dominate the internet and harvest vast amounts of user data, but a growing ecosystem of more ethical alternatives offers ways to replace services from companies like Amazon, Google, X, Meta, and Apple. https://www.theguardian.com/technology/2026/feb/26/how-to-replace-amazon-google-x-meta-apple-alternatives
- AI coding tools are rapidly undermining the traditional economics of open source, making it easier for companies to recreate competing software without directly copying the original code. https://john.onolan.org/open-source-in-the-age-of-ai
- Penn’s new Center on Media, Technology and Democracy has published a research compendium that translates its quantitative work on the information ecosystem into something journalists, policymakers, and civil society can more easily use. https://infodem.upenn.edu/research
- The US government’s new Tech Force program aims to recruit 1,000 technologists for short-term public service roles with potential pathways into Big Tech, raising both hopes for modernisation and concerns about conflicts of interest. https://federalnewsnetwork.com/workforce/2026/03/opm-revives-defunct-gov-tech-efforts-with-tech-force-hires
Privacy and Security
- A Greek court has sentenced four people including Intellexa founder Tal Dilian in the Predator spyware scandal after prosecutors said the tool was used to hack and surveil dozens of Greek politicians, journalists, and officials in what became known as “Greek Watergate.” https://www.timesofisrael.com/israeli-spyware-maker-sentenced-to-8-years-in-greeces-predator-case
Upcoming Events
- Spear phishing: how to stay safe and prevent advanced attacks against civil society. March 10. https://www.accessnow.org/event/digital-security-webinars
- Hacks/Hackers AI x Journalism Day. March 16, Austin, TX. https://www.hackshackers.com/hacks-hackers-ai-x-journalism-day-in-austin-march-16
- All Tech Is Human is holding an all-day workshop in Manhattan on building an inter-party trust framework for AI. March 26, New York, NY. https://docs.google.com/forms/d/e/1FAIpQLSeidCs4-ifuG4iZxi5kh3QBAGuh_UQBksPIzhoTzabTDblaHg/viewform
- ATmosphereConf is THE global AT Protocol community conference. March 26-29, Vancouver, Canada and Online. https://atmosphereconf.org
- Palestine Digital Activism Forum (PDAF) 2026 hosted by 7amleh – The Arab Center for the Advancement of Social Media. An amazing speaker lineup! March 30-31, Online. https://events.ringcentral.com/events/pdaf-2026
- Mobilize supporters & build campaign momentum. A one-day online accelerator for NGOs stuck on how to mobilise people, build momentum, and turn ideas into action. April 15, Online. https://www.resource-alliance.org/event/creative-organising-lab
- Take Back Tech 3 is a gathering for organizers, artists, tech workers, academics, lawyers, and more to rally together and strategize our next power-building moves. April 17-19, Atlanta, GA. https://www.takebacktech.com
Careers and Funding Opportunities
- Signal: Director of Major Gifts. Remote. https://jobs.lever.co/signal/68f75269-fe43-4d25-8d82-69439351f14d
- The Center for Democracy & Technology (CDT): AI Governance Fellow. Washington, DC. https://cdt.org/careers/#op-690357-ai-governance-fellow
- Northeastern University’s College of Social Sciences and Humanities: Associate Director - Strategic Initiatives. Boston, MA. https://northeastern.wd1.myworkdayjobs.com/en-US/careers/job/Associate-Director---Strategic-Initiatives_R138846
- Ofcom: Principal, Online Safety Product. London, Manchester or Edinburgh, UK. https://ofcom.wd3.myworkdayjobs.com/Ofcom_Careers/job/London/Principal--Online-Safety-Product_JR2311
- Cloudflare: Trust & Safety - Senior Threat Investigations Analyst. Lisbon, Portugal. https://www.linkedin.com/jobs/view/4354109320
Opportunities to Get Involved
- The Community Privacy Residency is a 3-week residency bringing together researchers, builders, and community organizers to co-design and build open-source tools for community privacy, with a focus this year on Countersurveillance, Privacy & Cryptography in AI, and Community Infrastructure. Apply by March 29. https://cryptpad.fr/form/#/2/form/view/j52xiQwP9Lnq-B7FD-CQEUo0jP1Necp4p9ttjpbN-WA
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.