Can Licensing Save the Content Economy?
Two views on licensing and the creator economy.
This week, Sonia Hendy and Nick Sullivan offer different perspectives on the role licensing should play in restoring the financial incentives for web content creators.
Hendy, a creator herself and founder of an AI licensing infrastructure platform, is the more bullish of the two, arguing that workable infrastructure already exists and that creators cannot afford to wait for perfect policy.
Sullivan, an applied cryptographer and internet standards veteran who sits on the Internet Architecture Board, is more bearish, warning that licensing alone cannot fix what is fundamentally a governance problem.
Both agree that licensing alone is not enough.
License-first? Mind the Gap!
By Sonia Hendy
After watching independent creators lose control of their work to extractive AI scraping, I realized that licensing infrastructure, not another government consultation, could change that. Even without perfect policy.
Like many people in the creative industries in the UK, I waited for last Wednesday's government report on copyright and AI with both excitement and anxiety. I’d tried to ignore the rumblings in the press about the likelihood of a deferral rather than a workable outcome; I was still poring over the House of Lords report, and glowing from the European Parliament’s Voss vote that validated the licensing-first argument at a major institutional level the week before. Could we expect something extraordinary? Would the economic power of the creative industries triumph against the AI-first dominance of the past two years? Would it deliver something with the gusto and confidence of the UK government’s quantum investment just 24 hours before?
Spoiler alert – it did not. It did dismiss its most contentious positioning (the opt-out text and data mining exception), but it struggled to offer anything more: no legislative commitment, no enforcement timelines, no clear signal of what compliance looks like for AI companies. The 100+ pages merely promised to gather more evidence, review, and monitor.
The report observed that the ‘licensing market continues to grow to the benefit of right holders and AI developers, though limited public information on licensing deals makes it difficult to fully assess their impact’, and didn’t consider there to be ‘sufficient evidence to justify government intervention’. In other words, the spoils continue to go to organizations with the catalog depth, scale, and legal resources to negotiate directly. This sits among a gallery of the usual excuses for not mandating a license-first approach: a lack of agreed technical standards; the argument that sufficient legislation is already in place, paired with promises of further consultations on other issues, notably digital replicas; and the damning acknowledgement of a ‘more limited prospect of a direct licensing market developing for individual right holders and SME right holders’.
It's almost exactly the same stagnancy the creative industries have been floating in since the consultation’s inception, during which creators have continued to lose ground unless they happen to be part of the aforementioned closed-door deals… So how do we proceed? Firstly, the absence of a perfect universal technical standard is not the same as the absence of workable ones; ISCC fingerprinting, C2PA Content Credentials, and RSL (mentioned in the report) already exist and function now. Infrastructure that waits for legislative and technical perfection will still be waiting when the EU AI Act enforcement deadline arrives in August, and far, far beyond.
Secondly, the gravy train of publicly available training data will run dry sometime between now and 2032; training AI on AI-generated content leads to model collapse – human content is the foundation the technology depends on – and deals are being made.
Thirdly, enforcement of existing contracts and legislation varies wildly across the creative industries; as the Creative Industries Policy and Evidence Centre asserted, copyright holders may 'retain little control over their terms and conditions, with the potential for legal rights being overwritten via private contracting'. These analogue approaches don’t enable the transparency needed to license (or prohibit use of) creative outputs in the AI age. This is the gap independently identified by three landmark reports: the House of Lords Communications and Digital Committee called for a licensing-first ecosystem; the European Parliament adopted the Voss report by a landslide, explicitly recognizing collective management organizations as central to any functioning framework; and now Westminster’s report indicates the same direction whilst omitting a timeline (or vehicle) to get there. Some may argue that no policy is better than a bad one – but waiting for the caprices of the government’s vague timelines, the potential volatility caused by cabinet reshuffles, and the uncertainty of legal outcomes (especially the Getty vs. Stability AI appeal) means it could take years for smaller and independent creators to gain any control over their art, consent for its use, and compensation for their craft.
The relentless and passionate campaigns of UK rights organizations undoubtedly shifted the landscape of this report – but campaigns alone do not get creators paid or protect their work from further unfettered use. Creators need infrastructure.
As a creative, I grew tired of these circular arguments and, as a bootstrapped founder, built Magpie Standard in this uncertain landscape to offer a solution at the intersection of legislation, IP, and creator rights. Our infrastructure is research-based (embodying the Access, Control, Consent, Compensation and Transparency (ACCCT) framework – cited multiple times in the House of Lords report); interoperable at its core; able to integrate with existing rights organizations; and offers AI companies a compliant route to machine-readable, licensed human creativity. Most importantly, it offers creators control now – not in three years’ time. The waitlist for our creator-direct platform is already open…
The question remains – are we brave enough to ‘mind the gap’ without waiting for the ‘perfect’ policy? Or will we simply watch the gap widen and let the creative economy be consumed by it?
Sonia Hendy is the founder of The Hendy Collective and creator of Magpie Standard, an AI licensing infrastructure platform for rights organizations. A published poet, songwriter, and fiction writer with bylines in The Guardian and HuffPost UK, she brings two decades across higher education leadership, transformation consultancy, and creative practice to the question of how creators license their work for AI training.
Beyond Licensing: Why the Open Web Needs Governance, Not Just Deals
Licensing deals between AI companies and content publishers can't fix the web's AI governance problem because much of what AI actually trains on can't be licensed in the first place.
Licensing is a rational response to a real problem. Professionally produced content is being copied at scale for model training, often in ways that reduce referral traffic and revenue. Because of this, large publishers and AI firms have already cut deals: Reuters has reported on Reddit's $60 million-a-year agreement with Google and on the Wikimedia Foundation's paid enterprise-access deals with Microsoft, Meta, Amazon, Perplexity, and Mistral. Those arrangements make sense for institutions with bargaining power, recognizable inventories, and ongoing demand. Still, they also reveal the model's limits: a governance system built around licensing will privilege actors already organized to sell access.
There is also a deeper structural problem with a licensing-only approach. As WIPO puts it, "copyright laws are territorial." The Internet is not. For example, in the EU, text and data mining of lawfully accessible works is permitted unless rightsholders have expressly reserved their rights. In the United States, courts are still drawing lines around fair use and market harm in AI training cases. Licensing-first solutions often assume a stable, global legal baseline that does not exist. A meaningful fraction of what AI models actually train on sits in categories where copyright's reach is limited or unclear. The US Copyright Office is direct on this: copyright "does not protect facts, ideas, systems, or methods of operation." User-generated content, community-maintained reference works, and factual material cannot be licensed away because, in many cases, there is nothing to license.
Licensing also risks centralizing the wrong things. A licensing regime turns the Internet from a system that rewards "who gets heard" into one that rewards "who can invoice." Australia's parliamentary review of its News Media Bargaining Code found that the bulk of funding from commercial agreements went to legacy outlets, deepening disadvantages for smaller publishers. Pew found that the most frequently cited sources in Google AI summaries were Wikipedia, YouTube, and Reddit, and only 5% linked to news sites. The biggest publishers and platforms will have the clearest leverage. This compounds with the fact that there is a mismatch between what content is important for AI and who can monetize through licensing. Major foundation model training pipelines rely on web-scale crawls. Much of the material AI systems actually value is collaborative, user-generated, or community-maintained, and it rarely comes packaged in the clean rights bundles collective licensing assumes. Common Crawl alone spans over 300 billion pages across millions of distinct sites. The vast majority of those sites will never be party to a licensing deal, and a system that routes value only through bilateral agreements will deepen the centralization it claims to remedy.
The answer has to be broader than compensation. We need open technical standards that let sites, creators, and users express preferences about automated consumption in a machine-readable way, and we need model providers to respect them. The emerging crawler controls show what a norms-based Internet could look like. OpenAI lets sites allow OAI-SearchBot for search while disallowing GPTBot for training. Apple lets publishers direct it not to use their site content for model training, and lets individuals object to the crawling of URLs that contain their personal data. Google is less clear-cut: AI is now part of Search, but the Google-Extended control still lets sites keep crawled content out of model training.
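To make those mechanics concrete, here is a minimal sketch of what such a split policy could look like in robots.txt, checked with Python's standard-library parser. The crawler tokens are the vendor-documented ones named above; the example.com URL and the blanket paths are placeholders rather than a recommended configuration.

```python
# A minimal sketch: robots.txt rules that welcome search crawling but opt
# out of AI training, validated with Python's built-in robots.txt parser.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: OAI-SearchBot   # OpenAI's search crawler: allowed everywhere
Disallow:

User-agent: GPTBot          # OpenAI's training crawler: blocked everywhere
Disallow: /

User-agent: Google-Extended # Google's AI-training control: blocked everywhere
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("OAI-SearchBot", "GPTBot", "Google-Extended"):
    verdict = "allowed" if parser.can_fetch(agent, "https://example.com/post") else "blocked"
    print(f"{agent}: {verdict}")
```

The catch is that these tokens are honored voluntarily and vary by vendor, so a robots.txt rule is a preference signal rather than an enforcement mechanism.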
The EU framework already points in that direction, and the W3C's TDM Reservation Protocol is designed to express the reservation of mining rights and to ease the discovery of associated policies. The IETF's AIPREF work builds on the current robots.txt standard and is developing a standard vocabulary for AI usage preferences that extends well beyond copyright. Even the UK Parliament committee arguing for a licensing-first approach says government should back "open, globally aligned standards" for rights reservation, provenance, and labelling.
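As an illustration of what a machine-readable reservation looks like under TDMRep, the sketch below writes the JSON file a site would serve at /.well-known/tdmrep.json. The field names follow the W3C community group report, but treat them as an assumption to verify against the current spec; the paths and policy URL are placeholders.

```python
# A minimal sketch of a TDMRep reservation file (/.well-known/tdmrep.json).
# Field names follow the W3C community group report; verify against the
# current spec before deploying. Paths and the policy URL are placeholders.
import json

tdmrep = [
    {
        "location": "/articles/*",   # reserve TDM rights for articles...
        "tdm-reservation": 1,
        # ...and point miners at the licensing terms to negotiate access.
        "tdm-policy": "https://example.com/tdm-policy.json",
    },
    {
        "location": "/*",            # leave the rest of the site unreserved
        "tdm-reservation": 0,
    },
]

with open("tdmrep.json", "w", encoding="utf-8") as f:
    json.dump(tdmrep, f, indent=2)
```

The protocol also allows the same signals to travel as HTTP response headers or HTML metadata, which matters for pages that cannot ship a site-wide well-known file.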
There is still a long way to go to establish these rules and standards universally. Researchers auditing consent signals across 14,000 domains underlying major training corpora describe the current situation as "symptoms of ineffective web protocols," with widespread inconsistencies between what sites intend and what robots.txt and other signals actually convey. Being open to being read, cited, or indexed is not the same as consenting to model training. Licensing should remain one tool in the stack, but a stable equilibrium for the AI web requires transparency, provenance, privacy, and meaningful consent for both people and publishers.
Nick Sullivan is an applied cryptographer, security researcher, and entrepreneur. He founded Cloudflare Research, co-authored several TLS extensions, is a member of the Internet Architecture Board, and holds several roles at the IETF where he works on the standards that shape how the Internet operates.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
This Week's Links
Open Social Web
- Soft launching Eurosky, a personal account for the web. Most people use it today as an entry point to Bluesky, but it's much bigger than that: in the coming years it could become people's main online identity, says its co-lead Sherif Elsayed-Ali. https://www.linkedin.com/pulse/future-social-media-you-app-sherif-elsayed-ali-agg8e
- Loops, built on ActivityPub, introduces Starter Kits: curated collections of accounts grouped around topics, communities, or vibes. https://blog.joinloops.org/introducing-starter-kits
- Bluesky announces $100 million in Series B funding the week after CEO Jay Graber announced she was stepping down. https://techcrunch.com/2026/03/19/bluesky-announces-100m-series-b-after-ceo-transition
Internet Governance
- Google was granted a patent in January 2026 for a system that evaluates your website in real time and, if it judges the page unlikely to perform well for a specific user, replaces it with an AI-generated alternative assembled from your content, the user's search history, and their account context. https://www.forbes.com/sites/joetoscano1/2026/03/06/google-just-patented-the-end-of-your-website
- Related: Google is using AI to rewrite headlines as part of an “experiment.” https://www.theverge.com/tech/896490/google-replace-news-headlines-in-search-canary-coal-mine-experiment
- The FCC says it has decided that all foreign-made consumer-grade internet routers will no longer receive FCC authorization, and therefore cannot be imported for use or sale in the United States. https://infosec.exchange/@briankrebs/116280575943263005
- The US Senate Commerce Committee marked Section 230's 30th birthday with a hearing on whether the liability shield that built the internet still makes sense, with witnesses including Stanford's Daphne Keller, the Knight Institute's Nadine Farid Johnson, Social Media Victims Law Center attorney Matthew Bergman, and Americans for Responsible Innovation's Brad Carson. https://www.youtube.com/live/F8T5vCmlHmA
- An issue brief from Creative Commons considers how attribution works in the context of AI – where systems generate content based on large amounts of existing data – and why attribution still matters, even when it is difficult to implement. https://creativecommons.org/wp-content/uploads/2026/03/Attribution-and-AI-Outputs-Issue-Brief-Mar-2026.pdf
- Minutes are available from the IETF 125 meeting of the working group developing a standard to let websites signal AI training preferences. https://ietf-wg-aipref.github.io/wg-materials/ietf125/minutes.html
Digital Rights
- A Los Angeles jury found Meta and Alphabet negligent for designing social media platforms that are harmful to young people. https://www.reuters.com/legal/litigation/jury-reaches-verdict-meta-google-trial-social-media-addiction-2026-03-25
- A jury found Meta violated New Mexico law in a case accusing it of failing to warn users about the dangers of its platforms and to protect children from sexual predators, and ordered the company to pay $375 million in damages. https://edition.cnn.com/2026/03/24/tech/meta-new-mexico-trial-jury-deliberation
- The Global Network Initiative's submission to the UN human rights office warns that age verification, content monitoring, encryption proposals, internet shutdowns, and AI-generated disinformation are creating threats for human rights defenders. https://globalnetworkinitiative.org/gni-submission-ohchr-consultation-on-protecting-human-rights-defenders-in-the-digital-age
- Last-minute amendments to the Children’s Wellbeing and Schools Bill will have huge implications for freedom of expression and privacy in the UK, Open Rights Group has warned. The group raises concerns that MPs and peers are not being given sufficient time to scrutinise amendments that could have far-reaching consequences if abused, and that amendment 38B could give ministers the power to force anyone over 13 to use unsafe and unregulated age-ID services to access certain internet services. https://www.openrightsgroup.org/press-releases/13-year-olds-could-be-compelled-to-use-unregulated-age-verification
- An investigation finds the UK government has spent over £2 million on VPN technology – with purchases by Ofcom, Ofsted, and the NHS, and multiple MPs expensing consumer VPN subscriptions – while simultaneously consulting on whether to ban or age-restrict children's access to the same technology. https://www.techradar.com/vpn/vpn-privacy-security/investigation-uk-spends-millions-on-vpns-as-government-weighs-ban-for-children
- Thomson Reuters, best known for its media outlet and legal research tools, has a $22.8 million contract to provide an investigative tool to ICE. Its Minnesota employees want that to stop. https://www.nytimes.com/2026/03/11/technology/thomson-reuters-ice-minnesota.html
- As US and Israeli airstrikes continue across Iran, the Iranian government has for weeks blocked internet access for most of the country's 92 million citizens. https://www.nytimes.com/2026/03/18/world/middleeast/iran-internet-shutdown.html
Technology for Society
- AI chatbots are the ‘wild west’ for violence against women and girls. Two new studies reveal how artificial intelligence can be used to encourage gender-based violence and sexual abuse. Analysts and academics are working to make Silicon Valley finally pay attention. https://observer.co.uk/news/science-technology/article/ai-chatbots-are-the-wild-west-for-violent-imagery-of-women-and-girls
- Independent publisher Minor Compositions is moving its 70-book catalogue to the Internet Archive. https://www.minorcompositions.info/?p=1863
Privacy and Security
- Researchers fear that Meta's retreat from its commitments to protect user privacy with end-to-end encryption on Instagram chat could create a problematic precedent in big tech. https://www.wired.com/story/the-danger-behind-metas-decision-to-kill-end-to-end-encrypted-instagram-dms
- Related: Mozilla has a petition calling on Meta to keep encryption for Instagram messages. https://www.mozillafoundation.org/en/petitions/keep-encryption-for-instagram
- Russia is pushing its Max messenger, an unencrypted “super-app”, onto its citizens with a massive promotion campaign and the simultaneous blocking of WhatsApp and Telegram, the country's two most popular messaging apps. https://www.france24.com/en/live-news/20260323-russia-s-max-the-unencrypted-super-app-being-forced-on-citizens
- Researchers from KU Leuven and UGent have broken PhotoDNA, the perceptual hashing system used to detect child sexual abuse material across major platforms, showing it can be bypassed in minutes on a laptop and used to generate false positives that could incriminate innocent users. https://eprint.iacr.org/2026/486
Upcoming Events
- ITU Workshop on Trustable and Interoperable Digital Identities for Human and Agentic AI. March 30-31, Geneva and online. https://www.itu.int/en/ITU-T/Workshops-and-Seminars/2026/0330/Pages/default.aspx
- Palestine Digital Activism Forum (PDAF) 2026 hosted by 7amleh – The Arab Center for the Advancement of Social Media. An amazing speaker lineup! March 30-31, Online. https://events.ringcentral.com/events/pdaf-2026
- Meme-tivism: Rethinking AI's Environmental Impact. A hands-on workshop for AI and ML practitioners exploring meme-making as a tool for climate advocacy and rethinking the environmental footprint of AI systems. April 9, 5:30pm GMT, London, UK. https://luma.com/rkuwsrn6
- IETF AI Preferences Working Group interim meeting, working on a standard to let websites signal whether their content can be used for AI training. April 14-16, Toronto and online. https://ietf-wg-aipref.github.io/wg-materials/interim-26-04/arrangements.html
- Mobilize supporters & build campaign momentum. A one-day online accelerator for NGOs stuck on how to mobilise people, build momentum, and turn ideas into action. April 15, Online. https://www.resource-alliance.org/event/creative-organising-lab
- Take Back Tech 3 is a gathering for organizers, artists, tech workers, academics, lawyers, and more to rally together and strategize our next power-building moves. April 17-19, Atlanta, GA. https://www.takebacktech.com
- SplinterCon returns to host a 0-day event at RightsCon. May 5, Lusaka, Zambia. https://splintercon.net/events/rightscon-lusaka
- Data for Peace Conference, a three-day conference connecting researchers, peacebuilders, policymakers, data providers, humanitarian actors, and peace technologists to strengthen how data and technology support violence prevention, anticipatory action, and crisis response. June 15-17. https://www.uu.se/en/department/peace-and-conflict-research/collaboration/data-for-peace-conference
- Summer School in Digital Human Rights at Lund University. Lund, Sweden. June 22-26. https://www.law.lu.se/study/summer-school-digital-human-rights
Careers and Funding Opportunities
- Tech Coalition: Senior Technical Program Manager, Child Safety Tech and Industry Adoption. Washington, DC. https://technologycoalition.org/careers/senior-technical-program-manager
- The Beeck Center for Social Impact + Innovation at Georgetown University: Researcher. Washington, DC. https://beeckcenter.georgetown.edu/jobs/researcher
- Canva: Trust & Safety Operations Specialist. Makati, Philippines. https://www.linkedin.com/jobs/view/4384572638
Opportunities to Get Involved
- Submit an abstract to give a talk at the upcoming 'Rewilding the Web' workshop at the University of Edinburgh. Due by March 31. https://bsky.app/profile/kathrynnave.bsky.social/post/3mhplih5zz22s
- Data for Peace Conference call for proposals due April 13. https://www.uu.se/en/department/peace-and-conflict-research/collaboration/data-for-peace-conference/call-for-proposals
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.