The UK Online Safety Act: Ofcom’s Age Checks Raise Concerns

New UK rules require “highly effective” age checks for adult content online, but the proposed methods risk undermining privacy, excluding vulnerable users, and expanding surveillance infrastructure.


By Audrey Hingle (with input from Mallory Knodel)

Under the UK’s Online Safety Act, websites and apps that host adult content must implement “highly effective” age checks starting next week to prevent users under 18 from accessing that content. This includes not just pornography websites, but also major social media platforms.

Restricting children’s access to harmful content such as pornography or material promoting self-harm is a noble aim. But as with many seemingly straightforward tech policies, the reality is more complicated. Age verification mandates may look like a common-sense safeguard, but in practice they threaten to create serious privacy, security, and access risks, not just for kids but for everyone. Mandates for strict age verification could ultimately do more harm than good.

Age Checks Raise Technical and Human Rights Concerns

Ofcom has listed seven age verification methods that platforms may use, but each presents serious flaws that undermine privacy, equity, and effectiveness.

  1. Facial age estimation – you show your face via photo or video, and technology analyses it to estimate your age.

    While promoted as a scalable and user-friendly tool, facial age estimation is riddled with accuracy issues. These systems often perform poorly for children, teens, and people of color, especially under variable lighting conditions or with low-quality cameras.

    Beyond accuracy, the approach raises deep privacy concerns. Facial scans are considered sensitive biometric data under GDPR. Requiring children to submit this data in order to access online content is not only invasive but could normalize surveillance. It also puts sensitive data at risk of leaks or misuse, potentially compromising children's digital safety for years to come.
  2. Open banking – you give permission for the age-check service to securely access information from your bank about whether you are over 18. The age-check service then confirms this with the site or app.

    While secure in theory, this approach creates a troubling precedent: using financial data as a proxy for age. It assumes access to a bank account, excluding unbanked users and others without access to financial services, including many teens, low-income individuals, and undocumented people.

    Using banking data for age checks also creates a significant privacy intrusion. Financial records reveal more than just age, and users may not fully understand or consent to how their data is accessed and processed. Once shared, this sensitive information can be misused or breached.
  3. Digital identity services – these include digital identity wallets, which can securely store and share information which proves your age in a digital format.

    While convenient, these tools present major risks. Because they make sharing personal information easier, they may encourage platforms to collect more data than necessary, violating the data minimization principle under GDPR.

    They also expand the "threat surface" for breaches. If a phone storing a digital ID is lost, stolen, or compromised by malware, all the sensitive data inside becomes vulnerable. This is particularly risky for children and teens who may not understand the consequences of poor digital hygiene.
  4. Credit card age checks – you provide your credit card details and a payment processor checks if the card is valid. As you must be over 18 to obtain a credit card, this shows you are over 18.

    Credit cards are not reliable indicators of age. Many children use cards issued to them by their parents. From a privacy standpoint, providing credit card information or other financial data just to access a website is an overreach. These details often include sensitive information unrelated to age and increase the risk of fraud and identity theft. Low-income users and those without access to credit may also be excluded altogether.
  5. Email-based age estimation – you provide your email address, and technology analyses other online services where it has been used – such as banking or utility providers – to estimate your age.

    This represents a newer, more expansive approach to email-based verification. While less overtly invasive than biometric methods, this technique still raises significant concerns.

    It relies on third-party linkages that may be opaque to users, and it grants outsized power to email providers or data brokers who can infer age from digital behavior. This model introduces serious data protection and consent issues: users may not know what data is being collected, how it’s being analysed, or whether inferences are accurate.

    Like institutional email requirements, it is both overinclusive and underinclusive. It may exclude users with limited digital footprints (young people, shared-device users, or those who rely on family email accounts) and misclassify others based on incidental associations with adult-linked services. The result is a high risk of false positives and arbitrary denials of access, especially for marginalised or low-income users. Rather than offering trustworthy verification, it may deepen inequities while expanding surveillance infrastructure.
  6. Mobile network operator age checks – you give your permission for an age-check service to confirm whether or not your mobile phone number has age filters applied to it. If there are no restrictions, this confirms you are over 18.

    This might sound like a streamlined solution, but it introduces serious privacy and power concentration issues. It gives telecom companies and operating systems like Apple and Google greater control over user data and online access. It also opens the door to more targeted advertising based on verified age, which undermines user privacy. Since this approach is tied to a specific phone number or device, it may not apply to web-based services or shared devices, resulting in inconsistent enforcement.
  7. Photo-ID matching – this is similar to a check when you show a document. For example, you upload an image of a document that shows your face and age, and an image of yourself at the same time – these are compared to confirm if the document is yours.

    While common in some financial and legal settings, this level of identity verification is excessive for general online use. Requiring users to upload government-issued identity documents, credit cards, or other hard documentation to verify age introduces serious privacy, equity, and safety concerns. These documents typically contain much more information than is necessary to confirm a user’s age, such as full name, address, photo, and physical characteristics like height and weight. Processing and storing this data creates an expanded attack surface for data breaches and increases the risk of surveillance or misuse. These risks are not hypothetical. In 2023 alone, over 17 billion personal records were compromised worldwide. It also conflicts with data minimization principles outlined in privacy laws like the GDPR and CPRA. For individuals like migrants, low-income users and youth who do not possess formal ID, this method becomes a barrier to participation online.
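A common thread running through these flaws is over-collection: every method transmits far more than the single yes/no answer the law actually needs. As a purely illustrative sketch (not a scheme Ofcom has endorsed), an age-check service could instead issue a signed assertion that discloses nothing but an over-18 bit. Here the HMAC is a toy stand-in for the real digital signature a production scheme would use:

```python
import hmac, hashlib, json

# Hypothetical sketch of a data-minimizing age assertion: the issuer signs
# ONLY an over-18 claim -- no name, birthdate, or document scan.
ISSUER_KEY = b"demo-issuer-key"  # shared secret for this toy example only

def issue_assertion(over_18: bool) -> dict:
    """Issuer side: sign a minimal claim, disclosing nothing else."""
    claim = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def verify_assertion(assertion: dict) -> bool:
    """Platform side: check the signature, then read the single boolean."""
    claim = assertion["claim"].encode()
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["tag"]):
        return False  # forged or tampered assertion
    return json.loads(claim)["over_18"]

token = issue_assertion(over_18=True)
print(verify_assertion(token))  # True
```

The point of the sketch is what the platform never sees: no identity document, no face scan, no bank record ever leaves the issuer.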

There Are Better Solutions than Age Checks

Policymakers and platforms should pursue approaches grounded in privacy, technical feasibility, and human rights. Age verification alone will not eliminate online harms, and when overused, it can introduce new risks to user autonomy, safety, and inclusion. Below are five principles that can help build a safer internet for young people without compromising core digital rights.

  1. Embrace privacy by design
    Reducing harm to children online requires a holistic, incremental, and collaborative approach. This includes designing with privacy in mind from the start. Most general-use social media platforms already include some form of age gating. A common method is self-reported age: when users sign up, they enter their birthdate. This approach is imperfect. People can lie. But it minimizes data collection and reflects a growing industry-wide trend toward privacy-first design. Platforms should continue to support and improve self-declared age systems, combined with strong default protections for teen accounts. For example, limiting data visibility, exposure to recommendations, and interactions from unknown users can provide meaningful protection without requiring intrusive identity checks.

    Recent transparency reporting from Australia’s eSafety Commissioner reinforces this direction: meaningful progress in protecting children online requires coordinated effort across industry, government, families, and communities rather than rigid enforcement or technical mandates alone, and no single solution is suitable for all contexts.

    A privacy-respecting model would reserve invasive verification for the most sensitive use cases, where the risks of harm are high and enforcement is proportionate. Even then, platforms should adopt the strongest possible data minimization and security practices.
  2. Use content controls, not identity checks
    User agency is a key pillar of modern safety strategies. Platforms are increasingly being designed around user-controlled content moderation, especially for older teens. These tools can also support younger users, even if they are using the platform in violation of its stated minimum age policy. Filters, message controls, muting, blocking, and flagging mechanisms allow users to tailor their experiences and reduce exposure to harmful content.

    Such tools can also be applied to community enforcement, for example by flagging content from creators who violate community guidelines. This strategy improves user safety without expanding surveillance or collecting unnecessary personal data. 
  3. Avoid one-size-fits-all mandates
    Blanket laws that require all users to verify their age often harm those already at the margins. Undocumented people, unbanked users, individuals with disabilities, and those who lack formal IDs may be excluded altogether. Age verification systems are often inaccurate, vulnerable to circumvention, and disproportionately impact those with the least access to government services. 

    In practice, these laws are also difficult to apply consistently across platforms and user histories. Longstanding users who signed up before any regulation was introduced may not respond to new prompts to disclose their age. If platforms deactivate or restrict these accounts, they risk significant harm to users' digital social lives and support networks. Placing such accounts on hold until the user “ages up” is one possibility, but it introduces ethical and legal complications that most technical proposals fail to address. 
  4. Focus on bad actors, not just access
    The majority of harm to children online stems not from falsified age information, but from abusive behavior, algorithmically amplified content, or lack of effective moderation. Rather than gatekeeping access based on age, platforms and regulators should invest in detecting and mitigating these underlying risks. Strengthening trust and safety teams, building better reporting tools, and auditing algorithms for systemic harms can all reduce exposure to dangerous content or predatory behavior.
  5. Align platform incentives by restricting ads targeted to youth
    Instead of allowing platforms to use age inference to fine-tune ad delivery and increase their profits, we could use those same inference tools to restrict them. Specifically, we could prevent companies from targeting ads to users under 18, or under 13. This approach may be more durable and compelling for platforms, because any unauthorized minor then becomes a cost rather than a revenue source: a user who consumes the service without generating ad income.
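The self-declared model in principle 1 needs remarkably little machinery. The sketch below is hypothetical (the setting names and defaults are invented, not any platform's real API), but it shows how a birthdate alone can drive protective defaults for teen accounts without any identity check:

```python
from datetime import date

# Illustrative protective defaults for self-declared under-18 accounts.
TEEN_DEFAULTS = {
    "profile_visibility": "private",
    "messages_from_unknown_users": False,
    "personalised_recommendations": False,
}
ADULT_DEFAULTS = {
    "profile_visibility": "public",
    "messages_from_unknown_users": True,
    "personalised_recommendations": True,
}

def age_on(birthdate: date, today: date) -> int:
    """Whole years elapsed between birthdate and today."""
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def account_settings(birthdate: date, today: date) -> dict:
    """Apply protective defaults when the self-declared age is under 18."""
    adult = age_on(birthdate, today) >= 18
    return dict(ADULT_DEFAULTS if adult else TEEN_DEFAULTS)

print(account_settings(date(2010, 6, 1), date(2025, 7, 18)))
```

The only datum collected is the birthdate itself; everything else is a default the user (or an adult account holder) can later adjust.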

Conclusion

Ofcom’s own website advises users to "exercise a degree of caution and judgement" when handing over personal data, yet its framework effectively forces users to share sensitive information in order to use the internet.

Mandatory age checks may be intended to protect young people, but they could backfire. By building systems that collect more personal data, we risk undermining core principles of modern privacy law like user consent, data minimization, and proportionality.

Children should not have to surrender their identity to participate online. If mishandled or breached, these systems could harm not just their privacy, but their long-term trust in digital life.


🎶 IETF 123 (as simple as TCP/IP) 🎶

The Internet Engineering Task Force is back for IETF 123, taking place July 20–26 in Madrid and online. With sessions on post-quantum cryptography, routing, real-time communications, and human rights, it’s one of the best places to see how the technical future of the internet is taking shape.

Things kick off with the Hackathon and registration on Saturday, July 19, followed by newcomer sessions and working groups starting Sunday. Most sessions are open and available online, so it’s easy to drop in and follow the discussions that interest you.

IX’s Mallory Knodel is co-chairing the Human Rights Protocol Considerations Research Group with Sofia Celi. Their session will feature talks from Chantal Joris from ARTICLE 19 on the constraints and considerations of legal frameworks in armed conflict, a speaker from Ainita.net on censorship in Iran, and Maria Farrell, author of "We Need to Rewild the Internet," on Monday, July 21 at 5pm (UTC+2). If you’re attending in person, stop by and say hello, or join remotely.

Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber

Open Social Web

Internet Governance

Digital Rights

Technology for Society

Privacy and Security

Upcoming Events

Careers and Funding Opportunities

Opportunities to Get Involved

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

💡
Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn, and don't forget to forward and share!
