The UK Online Safety Act: Ofcom’s Age Checks Raise Concerns
New UK rules require “highly effective” age checks for adult content online, but the proposed methods risk undermining privacy, excluding vulnerable users, and expanding surveillance infrastructure.
By Audrey Hingle (with input from Mallory Knodel)
Under the UK’s Online Safety Act, websites and apps that host adult content must implement “highly effective” age checks starting next week to prevent users under 18 from accessing that content. This includes not just pornography websites, but also major social media platforms.
Restricting children’s access to harmful content such as pornography or material promoting self-harm is a noble aim. But as with many seemingly straightforward tech policies, the reality is more complicated. Age verification mandates may look like a common-sense safeguard. In practice, they threaten to create serious privacy, security, and access risks, not just for kids but for everyone. Mandates for strict age verification could ultimately do more harm than good.
Age Checks Raise Technical and Human Rights Concerns
Ofcom has listed seven age verification methods that platforms may use, but each presents serious flaws that undermine privacy, equity, and effectiveness.
- Facial age estimation – you show your face via photo or video, and technology analyses it to estimate your age.
While promoted as a scalable and user-friendly tool, facial age estimation is riddled with accuracy issues. These systems often perform poorly for children, teens, and people of color, especially under variable lighting conditions or with low-quality cameras.
Beyond accuracy, the approach raises deep privacy concerns. Facial scans are considered sensitive biometric data under GDPR. Requiring children to submit this data in order to access online content is not only invasive but could normalize surveillance. It also puts sensitive data at risk of leaks or misuse, potentially compromising children's digital safety for years to come.
- Open banking – you give permission for the age-check service to securely access information from your bank about whether you are over 18. The age-check service then confirms this with the site or app.
With this method, users give an age-check service permission to access their banking data to confirm they are over 18. While secure in theory, this approach creates a troubling precedent: using financial data as a proxy for age. It assumes access to a bank account, excluding unbanked users and those without access to financial services, including many teens, low-income individuals, and undocumented people.
Using banking data for age checks also creates a significant privacy intrusion. Financial records reveal more than just age, and users may not fully understand or consent to how their data is accessed and processed. Once shared, this sensitive information can be misused or breached.
- Digital identity services – these include digital identity wallets, which can securely store and share information which proves your age in a digital format.
While convenient, these tools present major risks. Because they make sharing personal information easier, they may encourage platforms to collect more data than necessary, violating the data minimization principle under GDPR.
They also expand the "threat surface" for breaches. If a phone storing a digital ID is lost, stolen, or compromised by malware, all the sensitive data inside becomes vulnerable. This is particularly risky for children and teens who may not understand the consequences of poor digital hygiene.
- Credit card age checks – you provide your credit card details and a payment processor checks if the card is valid. As you must be over 18 to obtain a credit card, this shows you are over 18.
Credit cards are not reliable indicators of age. Many children use cards issued to them by their parents. From a privacy standpoint, providing credit card information or other financial data just to access a website is an overreach. These details include sensitive information unrelated to age and increase the risk of fraud and identity theft. Low-income users and those without access to credit may also be excluded altogether.
- Email-based age estimation – you provide your email address, and technology analyses other online services where it has been used – such as banking or utility providers – to estimate your age.
This represents a newer, more expansive approach to email-based verification. While less overtly invasive than biometric methods, this technique still raises significant concerns.
It relies on third-party linkages that may be opaque to users, and it grants outsized power to email providers or data brokers who can infer age from digital behavior. This model introduces serious data protection and consent issues: users may not know what data is being collected, how it’s being analysed, or whether inferences are accurate.
Like institutional email requirements, it is both overinclusive and underinclusive. It may exclude users with limited digital footprints – young people, shared-device users, or those who rely on family email accounts – and misclassify others based on incidental associations with adult-linked services. The result is a high risk of false positives and arbitrary denials of access, especially for marginalised or low-income users. Rather than offering trustworthy verification, it may deepen inequities while expanding surveillance infrastructure.
- Mobile network operator age checks – you give your permission for an age-check service to confirm whether or not your mobile phone number has age filters applied to it. If there are no restrictions, this confirms you are over 18.
This might sound like a streamlined solution, but it introduces serious privacy and power concentration issues. It gives telecom companies and operating systems like Apple and Google greater control over user data and online access. It also opens the door to more targeted advertising based on verified age, which undermines user privacy. Since this approach is tied to a specific phone number or device, it may not apply to web-based services or shared devices, resulting in inconsistent enforcement.
- Photo-ID matching – this is similar to a check where you show a physical document. For example, you upload an image of a document that shows your face and age, and an image of yourself at the same time – these are compared to confirm that the document is yours.
While common in some financial and legal settings, this level of identity verification is excessive for general online use. Requiring users to upload government-issued identity documents, credit cards, or other hard documentation to verify age introduces serious privacy, equity, and safety concerns. These documents typically contain much more information than is necessary to confirm a user’s age, such as full name, address, photo, and physical characteristics like height and weight. Processing and storing this data creates an expanded attack surface for data breaches and increases the risk of surveillance or misuse. These risks are not hypothetical. In 2023 alone, over 17 billion personal records were compromised worldwide. It also conflicts with data minimization principles outlined in privacy laws like the GDPR and CPRA. For individuals like migrants, low-income users and youth who do not possess formal ID, this method becomes a barrier to participation online.
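For contrast, a data-minimizing alternative to uploading full ID documents would have a trusted issuer hand the user a signed claim containing nothing but an over-18 flag, which a site can verify without learning who the user is. The sketch below is purely illustrative, not any scheme Ofcom has endorsed: it uses a shared-secret HMAC to stand in for what real deployments would do with public-key signatures or zero-knowledge proofs, and all names in it are hypothetical.

```python
import hmac, hashlib, json

# Hypothetical sketch: an issuer signs a claim carrying only an over-18
# boolean, and a site verifies the signature. No name, address, photo,
# or birthdate is ever shared with the site.
SECRET = b"issuer-demo-key"  # illustrative only; real systems use PKI

def issue_claim(over_18: bool) -> dict:
    """Issuer side: sign a minimal claim."""
    payload = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_claim(claim: dict) -> bool:
    """Site side: reject tampered claims, then read the single flag."""
    expected = hmac.new(SECRET, claim["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, claim["tag"]):
        return False  # forged or modified claim
    return json.loads(claim["payload"])["over_18"]
```

The point of the sketch is the shape of the data, not the cryptography: the site learns one bit, so a breach of its logs exposes nothing that could identify a user.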
There Are Better Solutions than Age Checks
Policymakers and platforms should pursue approaches grounded in privacy, technical feasibility, and human rights. Age verification alone will not eliminate online harms, and when overused, it can introduce new risks to user autonomy, safety, and inclusion. Below are five principles that can help build a safer internet for young people without compromising core digital rights.
- Embrace privacy by design
Reducing harm to children online requires a holistic, incremental, and collaborative approach. This includes designing with privacy in mind from the start. Most general-use social media platforms already include some form of age gating. A common method is self-reported age: when users sign up, they enter their birthdate. This approach is imperfect. People can lie. But it minimizes data collection and reflects a growing industry-wide trend toward privacy-first design. Platforms should continue to support and improve self-declared age systems, combined with strong default protections for teen accounts. For example, limiting data visibility, exposure to recommendations, and interactions from unknown users can provide meaningful protection without requiring intrusive identity checks.
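The combination described above – self-declared age plus strong defaults – can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual sign-up logic; the names (`AGE_BANDS`, `age_band`, `create_account`) and the specific default protections are assumptions chosen to show data minimization: the birthdate is used once to derive a coarse age band, and only the band is retained.

```python
from datetime import date

AGE_BANDS = ("under13", "13-17", "18+")

def age_band(birthdate: date, today: date) -> str:
    """Derive a coarse age band; the exact birthdate is never stored."""
    years = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if years < 13:
        return "under13"
    if years < 18:
        return "13-17"
    return "18+"

def create_account(birthdate: date, today: date) -> dict:
    band = age_band(birthdate, today)
    account = {"age_band": band}  # only the band is retained
    if band != "18+":
        # Strong defaults for minor accounts: private profile, no
        # messages from unknown users, no personalised ads.
        account.update(
            private_by_default=True,
            dms_from_strangers=False,
            personalised_ads=False,
        )
    return account
```

Because only the band survives, a later breach reveals far less than a stored birthdate would, and the protective defaults do the day-to-day safety work.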
Recent transparency reporting from Australia’s eSafety Commissioner reinforces this direction, highlighting that meaningful progress in protecting children online requires coordinated effort across industry, government, families, and communities rather than reliance on rigid enforcement or technical mandates, and that no single solution suits all contexts.
A privacy-respecting model would reserve invasive verification for the most sensitive use cases, where the risks of harm are high and enforcement is proportionate. Even then, platforms should adopt the strongest possible data minimization and security practices.
- Use content controls, not identity checks
User agency is a key pillar of modern safety strategies. Platforms are increasingly being designed around user-controlled content moderation, especially for older teens. These tools can also support younger users, even if they are using the platform in violation of its stated minimum age policy. Filters, message controls, muting, blocking, and flagging mechanisms allow users to tailor their experiences and reduce exposure to harmful content.
Such tools can also be applied to community enforcement, for example by flagging content from creators who violate community guidelines. This strategy improves user safety without expanding surveillance or collecting unnecessary personal data.
- Avoid one-size-fits-all mandates
Blanket laws that require all users to verify their age often harm those already at the margins. Undocumented people, unbanked users, individuals with disabilities, and those who lack formal IDs may be excluded altogether. Age verification systems are often inaccurate, vulnerable to circumvention, and disproportionately impact those with the least access to government services.
In practice, these laws are also difficult to apply consistently across platforms and user histories. Longstanding users who signed up before any regulation was introduced may not respond to new prompts to disclose their age. If platforms deactivate or restrict these accounts, they risk significant harm to users' digital social lives and support networks. Placing such accounts on hold until the user “ages up” is one possibility, but it introduces ethical and legal complications that most technical proposals fail to address.
- Focus on bad actors, not just access
The majority of harm to children online stems not from falsified age information, but from abusive behavior, algorithmically amplified content, or lack of effective moderation. Rather than gatekeeping access based on age, platforms and regulators should invest in detecting and mitigating these underlying risks. Strengthening trust and safety teams, building better reporting tools, and auditing algorithms for systemic harms can all reduce exposure to dangerous content or predatory behavior.
- Address platform equities by restricting ads targeted to youth
Instead of allowing platforms to use age inference to fine-tune ad delivery and increase their profits, we could place restrictions on platforms based on those same inference tools. Specifically, we could prevent companies from targeting ads to users under 18, or under 13. This approach may prove more durable and more compelling to platforms, because each unauthorized minor would then represent a net cost: a user who consumes the service but generates no ad revenue.
Conclusion
Ofcom’s own website advises users to "exercise a degree of caution and judgement" when handing over personal data, yet its framework effectively forces users to share sensitive information in order to use the internet.
Mandatory age checks may be intended to protect young people, but they could backfire. By building systems that collect more personal data, we risk undermining core principles of modern privacy law like user consent, data minimization, and proportionality.
Children should not have to surrender their identity to participate online. If mishandled or breached, these systems could harm not just their privacy, but their long-term trust in digital life.
🎶 IETF 123 (as simple as TCP/IP) 🎶
The Internet Engineering Task Force is back for IETF 123, taking place July 20–26 in Madrid and online. With sessions on post-quantum cryptography, routing, real-time communications, and human rights, it’s one of the best places to see how the technical future of the internet is taking shape.
Things kick off with the Hackathon and registration on Saturday, July 19, followed by newcomer sessions and working groups starting Sunday. Most sessions are open and available online, so it’s easy to drop in and follow the discussions that interest you.
IX’s Mallory Knodel is co-chairing the Human Rights Protocol Considerations Research Group with Sofia Celi. Their session will feature talks from Chantal Joris from ARTICLE 19 on the constraints and considerations of legal frameworks in armed conflict, a speaker from Ainita.net on censorship in Iran, and Maria Farrell, author of "We Need to Rewild the Internet," on Monday, July 21 at 5pm (UTC+2). If you’re attending in person, stop by and say hello, or join remotely.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
This Week's Links
Open Social Web
- A group of European technology entrepreneurs has unveiled the Eurosky initiative, a project to create infrastructure for social media offerings and reduce reliance on US tech giants. It plans to use a decentralized moderation platform, similar to that behind Bluesky. https://www.reuters.com/business/media-telecom/european-project-eurosky-aims-reduce-reliance-us-tech-giants-2025-07-15
Internet Governance
- The European Commission’s proposed Digital Networks Act could jeopardize core principles of internet governance in Europe: net neutrality, regulatory independence, and equitable connectivity, warn civil society groups and ARTICLE 19. https://www.article19.org/resources/eu-the-dna-of-europes-connectivity-at-stake
- A new risk analysis warns that adversaries can indirectly undermine critical infrastructure by weaponizing disinformation to manipulate public behavior. https://onlinelibrary.wiley.com/doi/10.1111/risa.70062?af=R
- On the Tech Won't Save Us podcast, Paris Marx is joined by Laleh Khalili to discuss how the United States uses its control of key technologies to shift global power dynamics, and how that specifically plays out in the Middle East. https://techwontsave.us/episode/284_how_the_us_weaponizes_tech_in_the_middle_east_w_laleh_khalili
- A new report by the Knight-Georgetown Institute shows how the US government’s proposed Google Chrome divestiture is technically feasible. https://kgi.georgetown.edu/research-and-commentary/technical-feasibility-of-divesting-google-chrome
- People with disabilities, those living in poverty, and those with serious health conditions are being left in bureaucratic limbo by digital exclusion caused by the Department for Work and Pensions’ unchecked roll-out of technologies, finds Amnesty International. https://www.amnesty.org/en/latest/news/2025/07/uk-governments-unchecked-use-of-tech-and-ai-systems-leading-to-exclusion-of-people-with-disabilities-and-other-marginalized-groups
- For the fifth time, China has blocked the Wikimedia Foundation’s bid to become a permanent observer at the World Intellectual Property Organization. https://wikimediafoundation.org/news/2025/07/09/china-block-wikimedia-wipo
- Useful! If you’re based in the UK, Ofcom’s mobile checker shows which network offers the best 4G or 5G signal where you need it most. https://www.ofcom.org.uk/mobile-coverage-checker
Digital Rights
- AI Forensics and eight other civil society groups have filed a formal DSA complaint against 𝕏, alleging unlawful use of sensitive personal data for targeted ads. https://mailchi.mp/aiforensics/joint-statement-on-suspected-dsa-violations-by?e=dcad64b917
- India’s growing manosphere is fuelling gendered disinformation, online abuse, and anti-feminist narratives that threaten democratic discourse and women’s rights, warns Rohini Lakshané. https://genderit.org/feminist-talk/challenging-gendered-disinformation-indias-manosphere-requires-systemic-change
- ProPublica has obtained the blueprint for the Trump administration’s unprecedented plan to turn over IRS records to Homeland Security in order to speed up the agency’s mass deportation efforts. https://www.propublica.org/article/trump-irs-share-tax-records-ice-dhs-deportations
- Just hours before delivering her keynote at the UN’s AI for Good Global Summit, AI ethics researcher and friend of the newsletter Abeba Birhane was pressured to censor slides referencing Palestine, genocide, and Big Tech. https://thebulletin.org/2025/07/ai-for-good-with-caveats-how-a-keynote-speaker-was-censored-during-an-international-artificial-intelligence-summit
- A new study from CDT finds that content moderation systems are failing speakers of low-resource languages in the Global South. https://cdt.org/insights/content-moderation-in-the-global-south-a-comparative-study-of-four-low-resource-languages
Technology for Society
- Fed up with ChatGPT, dozens of organizations in Latin America have partnered to develop a large language model that better understands their cultural and linguistic nuances. https://restofworld.org/2025/chatgpt-latin-america-alternative-latamgpt
- AI is being weaponized to enforce austerity and dismantle democracy in the U.S., particularly under the Trump administration’s alliance with Big Tech, warns Kevin De Liban. https://www.techpolicy.press/austerity-intelligence
- Amid growing stigma and internal hierarchies in the digital sex industry, young German OnlyFans creators are developing nuanced strategies to protect their identities and challenge societal norms finds Swana Schuchmann. https://onlinelibrary.wiley.com/doi/10.1111/gwao.70010
- A Columbia dropout who used AI to cheat his way into Big Tech internships is now leading a VC-backed startup that promises to “cheat on everything.” https://www.theatlantic.com/technology/archive/2025/07/ai-radicalization-civil-war/683460
- New research from Maximilian Pieper argues that data isn’t just something digital. It’s a real, material process that reduces people and the planet to useful bits, much like mining or factory work. https://link.springer.com/article/10.1007/s00146-025-02444-1
- Femtech companies and women’s health organisations are routinely censored on social media for using basic anatomical terms, an example of how platform policies continue to sexualise and suppress women’s bodies, writes Lucy Purdon. https://courageeverywhere.substack.com/p/why-cant-you-say-vagina-on-social
- Cory Doctorow and Maria Farrell celebrate Open Rights Group's 20th anniversary in a conversation about surveillance capitalism and the 'enshittification' of digital platforms. https://mas.to/@cubicgarden/114864040608743301
Privacy and Security
- Ireland’s upcoming age verification rules for social media are poorly conceived and dangerously invasive. These measures may require users to upload identity documents or biometric data just to access platforms, posing serious threats to privacy and enabling surveillance capitalism, warns Simon McGarr. https://www.thegist.ie/the-gist-age-verification-is-an-epic-fail
- An elite Chinese cyberspy group hacked at least one state’s National Guard network for nearly a year, the Department of Defense has found. https://www-nbcnews-com.cdn.ampproject.org/c/s/www.nbcnews.com/news/amp/rcna218648
- A recent audit from the US Department of Justice has exposed severe vulnerabilities in the FBI's cybersecurity measures, which directly contributed to the deaths of key informants in the high-profile El Chapo investigation. https://www.secureworld.io/industry-news/fbi-breach-deaths-el-chapo
- The Spanish government is using Huawei to manage and store judicially authorized wiretaps in the country despite concerns about how the Chinese government could compel Huawei to assist Beijing with its own intelligence activities. https://therecord.media/spain-awards-contracts-huawei-intelligence-agency-wiretaps
Upcoming Events
- Deep dive session: Vulnerability Handling. July 22, 1:00pm CET. Online. https://www.stan4cra.eu/event-details/deep-dive-session-vulnerability-handling
- At BlackHat USA 2025 join Ronald Deibert for a keynote on the history of The Citizen Lab, their investigations into the abuse of mercenary spyware and other sleuthing stories, and what keeps him up at night these days (answer: a lot). August 6, 1:30pm PT. Las Vegas, NV. https://www.blackhat.com/us-25/briefings/schedule/#keynote-chasing-shadows-chronicles-of-counter-intelligence-from-the-citizen-lab-48196
- You can now register for W3C TPAC 2025, W3C's major event of the year, which gathers the community for thought-provoking discussions and coordinated work to advance the invaluable work of our groups. 28 October / 10–14 November. Kobe, Japan & online. https://www.w3.org/2025/11/TPAC
Careers and Funding Opportunities
- Cira: Public Affairs Specialist. Ottawa, CA (Hybrid). https://cira.bamboohr.com/careers/292
- Equality Fund: Request for Proposal: Feminist Digital Security and Holistic Protection Training. Remote. https://equalityfund.bamboohr.com/careers/103?source=aWQ9MTA=
- The Alan Turing Institute: Open Source AI Fellowship Call 2025. London, UK. https://www.turing.ac.uk/work-turing/open-source-ai-fellowship-call-2025
Opportunities to Get Involved
- #PrivacyCamp25: Registrations open and call for sessions deadline! CFP closes July 21. Privacy Camp 25 takes place September 30. Brussels, BE. https://privacycamp.eu/2025/07/15/privacycamp25-registrations-open-and-call-for-sessions-deadline
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.