The UK Struggles to Balance AI Innovation and Creative Protection
The UK, a global hub for both AI and the arts, struggles to balance tech innovation with the protection of a creative sector increasingly threatened by AI trained on copyrighted works.
By Audrey Hingle, also published in techpolicy.press.
Perhaps nowhere is the tension between content creators and generative AI technologies playing out more acutely than in the United Kingdom. Home to a world-renowned creative sector that contributes £126 billion per year to the economy (roughly 5.7% of GDP) and supports 2.4 million jobs, the UK has long punched above its weight in music, publishing, design, film, advertising, and gaming. These industries rely on robust intellectual property protections and human creativity, making them especially vulnerable to AI systems that are trained on vast amounts of copyrighted content, often without permission or compensation.
At the same time, the UK sees leadership in AI as essential to its economic future. The tech sector contributes over £150 billion to the economy and employs 1.7 million people. Policymakers are positioning AI as a cornerstone of digital innovation, foreign investment, and economic growth. But the UK faces a strategic bind: it is difficult to compete with the scale and dominance of US and Chinese tech giants. This puts added pressure on the government to make the UK an attractive destination for AI development. The recent passage of the Data Use and Access Bill without an artist-backed amendment requiring disclosure of copyrighted training data, and the now-completed consultation on AI and copyright, highlight the ongoing challenge of balancing tech-sector growth with protections for the creative industries.
Recent Policy Flashpoints
The Data Use and Access Bill (June 2025)
The Data Use and Access Bill was introduced to modernize the UK’s data infrastructure and encourage innovation across sectors, including AI. During its passage through Parliament, an amendment championed by artists and Baroness Beeban Kidron proposed requiring AI developers to disclose whether copyrighted materials were used in training datasets. Supporters saw this as a basic measure of transparency and accountability. Opponents, including industry lobbyists, argued it would deter AI investment by creating regulatory uncertainty and compliance burdens. Sir Nick Clegg, former president of global affairs at Meta, argued that asking permission from all copyright holders would "kill the AI industry in this country." The amendment was ultimately excluded from the final legislation, a decision that many in the creative sector interpreted as the government siding with tech firms over creators.
The Copyright and AI Consultation (Dec 2024–May 2025)
In parallel, the UK government conducted a consultation to clarify how copyright law applies to AI training. The consultation proposes allowing AI models to be trained on copyrighted material unless individual rights holders choose to opt out, while requiring developers to be transparent and giving creators a way to opt out or license their work:
The consultation also proposes new requirements for AI model developers to be more transparent about their model training datasets and how they are obtained. For example, AI developers could be required to provide more information about what content they have used to train their models. This would enable rights holders to understand when and how their content has been used in training AI.
Many creators argued that this placed the burden of enforcement on individuals, rather than establishing a meaningful consent framework. The consultation attracted widespread public interest, including a silent protest album released by more than 1,000 musicians as a symbolic rejection of the proposals. While the government framed the consultation as an effort to balance competing needs, critics saw it as further evidence that AI growth is being prioritized over creative rights.
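To make the opt-out model concrete, here is a minimal sketch of what honouring such reservations might look like on the developer side, assuming a hypothetical machine-readable registry of reserved works. The registry file, its column names, and the work identifiers below are invented for illustration and are not part of the government's proposals.

```python
# Illustrative only: a hypothetical check a developer might run before
# adding works to a training corpus. The registry format and field names
# are assumptions, not anything specified in the UK consultation.
import csv
from dataclasses import dataclass

OPT_OUT_REGISTRY = "opt_out_registry.csv"  # hypothetical file: one reserved work identifier per row


@dataclass
class Work:
    identifier: str  # e.g. an ISBN, URL, or catalogue number
    title: str


def load_opted_out(path: str) -> set[str]:
    """Load identifiers of works whose rights holders have opted out.

    Returns an empty set if no registry file exists.
    """
    try:
        with open(path, newline="", encoding="utf-8") as f:
            return {row["identifier"] for row in csv.DictReader(f)}
    except FileNotFoundError:
        return set()


def filter_training_corpus(works: list[Work], opted_out: set[str]) -> list[Work]:
    """Keep only works that have not been reserved by their rights holders."""
    return [w for w in works if w.identifier not in opted_out]


if __name__ == "__main__":
    opted_out = load_opted_out(OPT_OUT_REGISTRY)
    corpus = [
        Work("978-0-00-000000-0", "Example Novel"),
        Work("978-0-00-000001-7", "Another Work"),
    ]
    usable = filter_training_corpus(corpus, opted_out)
    print(f"{len(usable)} of {len(corpus)} works cleared for training")
```

Even in this simplified form, the sketch shows why critics worry about the approach: it only works if rights holders know to register each work, and enforcement depends entirely on developers choosing to run the check.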
What Makes This So Difficult: Competing Pressures and Real Trade-Offs
The UK is caught between two strategic priorities that increasingly pull in opposite directions. On one side is the pressure to stay competitive in a global AI race, where firms choose jurisdictions based on regulatory clarity, access to data, and innovation-friendly policies. On the other is a creative sector that generates significant economic value and global cultural influence, but whose future depends on enforceable rights and control over how creative work is used.
Copyright law was not built for machine learning. Terms like “fair use,” “opt-out,” and “transparency” lack consensus and are hard to enforce across borders. With the EU and US pursuing diverging approaches, the UK lacks a clear model to follow. Decisions made now will set precedents not just for who benefits from AI, but for whose work is valued and protected.
Where This Might Go Next
The government is expected to respond to the consultation later this year, a decision that could shape future regulation or guidance around AI training data. Some experts are calling for an opt-in system that centres consent, rather than placing the burden on creators to opt out. Others (most vocally Nick Clegg) warn that anything more restrictive than the current approach could stifle AI innovation in the UK.
The UK has a decision to make: whether to prioritize tech growth at all costs, or to build a more balanced framework that values creativity as much as computation. This could be a chance to lead internationally by developing a workable licensing framework that protects rights holders and provides legal certainty for developers. It will take policy creativity to move beyond the binary of Big Tech vs Big Content, but getting it right could set a meaningful precedent for the future.
Mallory on The PVT Show! 🎙️
IX’s Mallory Knodel joined leadership and mindset coach Poonam Vijay Thakkar to talk about AI, public interest technology, encryption, and internet governance.
They discussed how to set rules for AI, why collaboration matters, and what young people need to know about tech and human rights. Watch below on YouTube, or listen on Apple Podcasts, Spotify, Amazon, Pocket Casts, or wherever you get your podcasts.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
This Week's Links
Internet Governance
- In a landmark decision in Bartz v. Anthropic, Judge William Alsup ruled that while using copyrighted works to train AI models like Claude is fair use, Anthropic’s acquisition and indefinite storage of pirated books from shadow libraries to build a central library is copyright infringement. https://chatgptiseatingtheworld.com/2025/06/24/judge-alsup-grants-partial-summary-judgment-to-anthropic-ruling-training-copies-were-fair-use-but-judge-rules-no-fair-use-in-pirated-copies-of-books-used-to-build-a-central-library-they-are-infring
- This paper examines WITNESS’s participation in the Coalition for Content Provenance and Authenticity (C2PA) to explore how human rights may be meaningfully embedded in technical standard-setting processes. https://www.gen-ai.witness.org/wp-content/uploads/2025/06/Human-Rights-In-Standards.pdf
- Despite Elon Musk’s pledges to prioritize child safety on X (formerly Twitter), the platform is now overrun by automated accounts using hashtags and Communities features to openly sell child sexual abuse material (CSAM). https://www.nbcnews.com/tech/tech-news/x-accounts-peddle-child-abuse-musk-material-thorn-cuts-ties-rcna212107
- The internet is undergoing a seismic transformation. Once hailed as a force for openness and connection, it is increasingly shaped—and constrained—by a handful of powerful actors. https://www.newamerica.org/the-thread/how-to-rebuild-the-internet
- WhatsApp plans to roll out a new advertising model in the coming months, but it won't affect the EU until next year. https://www.politico.eu/article/whatsapp-meta-ads-eu-facebook-instagram-2026
- Censys reports on the near-total internet blackout in Iran starting June 18, 2025, identifying dramatic drops in host visibility across major ISPs. https://censys.com/blog/irans-internet-a-censys-perspective
- As global internet governance reaches a critical crossroads, the Global Digital Justice Forum is rallying civil society to reclaim the digital future—launching the #DigitalJusticeNow campaign to demand inclusive, rights-based digital governance rooted in the experiences of the Global South. https://www.apc.org/en/news/global-digital-justice-forum-building-more-inclusive-digital-future-together
- In sub-Saharan Africa, internet repression is going legal. New research shows that governments are quietly passing restrictive laws, often framed as cybersecurity or privacy protections, to curb digital rights before dissent even begins. https://www.tandfonline.com/doi/full/10.1080/13510347.2025.2503370#abstract
- The United States announces support for the re-election of Doreen Bogdan-Martin as Secretary-General of the International Telecommunication Union (ITU). https://www.state.gov/releases/office-of-the-spokesperson/2025/06/u-s-support-for-the-re-election-of-doreen-bogdan-martin-as-itu-secretary-general
- Freedom House's special report finds that governments around the world are increasingly exerting control over the technology that people depend on to access the free and open internet. https://freedomhouse.org/report/special-report/2025/tunnel-vision-anti-censorship-tools-end-end-encryption-and-fight-free
- Brian J. Chen and Serena Oduro argue that banning state-level AI regulation would stifle innovation, weaken protections, and deepen tech monopolies, ultimately harming US democracy. https://datasociety.net/library/theres-no-reason-to-ban-state-ai-regulation
- Brian Merchant on Sam Altman’s “gentle singularity” as a PR smokescreen for OpenAI’s growing ties to authoritarian regimes, military contracts, fossil fuels, and efforts to block AI regulation. https://www.bloodinthemachine.com/p/this-is-the-gentle-singularity
- A top EU court adviser has recommended upholding a €4.1 billion antitrust fine against Google for abusing Android’s dominance. https://apnews.com/article/google-android-european-union-antitrust-32cefb67817bce21341cbc81dd13e012
Digital Rights
- This Feminist Trade Agenda presents 16 policy proposals aimed at advancing the Gender and Trade Coalition’s efforts toward feminist trade justice. https://gendertradecoalition.org/publications/feminist-trade-agenda
- Neda Atanasoski and Nassim Parvin explore “creepiness” as a feminist lens, revealing how emerging technologies quietly reshape intimacy, power, and politics beyond what can be seen. https://bookshop.org/p/books/technocreep-and-the-politics-of-things-not-seen-neda-atanasoski/21674174?aid=112451&ean=9781478031253&listref=digital-rights
- Smartphones are once again setting the agenda for justice. Across the United States, Latino organizers are raising their phones, not to go viral but to go on record. https://theconversation.com/smartphones-are-once-again-setting-the-agenda-for-justice-as-the-latino-community-documents-ice-actions-258980
- At UN Open Source Week, the Sovereign Tech Agency brought together 12 open source experts to highlight the vital role of maintainers in securing and sustaining global digital infrastructure. https://www.sovereign.tech/news/maintainer-delegation-un-open-source-week
Technology for Society
- Mastodon was added to the Digital Public Goods Alliance’s DPG Registry. https://blog.joinmastodon.org/2025/06/mastodon-dpga
- A queer online zine called New Session is using the retro Telnet protocol to challenge modern internet norms. https://www.404media.co/queer-online-zine-new-session-telnet
- AI isn’t just impacting how we write — it’s changing how we speak and interact with others. (Not that you needed help sounding articulate, obviously). https://www.theverge.com/openai/686748/chatgpt-linguistic-impact-common-word-usage
- Eryk Salvaggio debunks the AI industry's overuse of the "black box" metaphor, arguing that it obscures the real issue: not mystery, but deliberate opacity. https://www.techpolicy.press/the-black-box-myth-what-the-industry-pretends-not-to-know-about-ai
- What can a Latin American philosophy of symbolic resistance teach us about AI and creativity? Grounded in the Antropofagia tradition, this paper explores how Ecuadorean artists confront and reinterpret image-generating AI through culturally rooted, collective, and resistant creative practices. https://journals.sagepub.com/doi/10.1177/13678779251346959
- Silicon Valley executives from Meta, OpenAI and more are joining a new innovation corps in the US Army Reserve. https://www.wsj.com/tech/army-reserve-tech-executives-meta-palantir-796f5360
- Inside the first big UN Wikipedia edit-a-thon: Expanding tech policy knowledge. https://diff.wikimedia.org/2025/06/18/inside-the-first-big-un-wikipedia-edit-a-thon-expanding-tech-policy-knowledge
- Aaron Benanav on why Artificial Intelligence isn’t going to change the world. It just makes work worse. https://www.versobooks.com/en-gb/blogs/news/is-the-ai-bubble-about-to-burst
- The EU’s “twin transition” promises synergy between digital innovation and sustainability, but a closer look reveals how digital solutionism is quietly reshaping environmental governance, often at the expense of truly green outcomes. https://www.tandfonline.com/doi/full/10.1080/1523908X.2025.2515225
- Amsterdam’s struggles with its welfare fraud algorithm show us the stakes of deploying AI in situations that directly affect human lives. https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/
- A new Lower East Side venue called Canyon, led by a museum veteran and a financier, aims to attract younger audiences by focusing on video, audio, and performance art. https://www.nytimes.com/2025/06/13/arts/design/new-york-video-art.html
- 101 guide on fundraising for small grassroots organizations. https://drive.google.com/file/d/1exHBiaj95mFbzHW2VPFGdC3lMK5Sy-n-/view
- Do you really have to subscribe to your friend’s newsletter to prove you care, or can friendship stay free of monthly fees? (If you're my friend and you're reading this, yes.) https://www.nytimes.com/2025/06/25/magazine/substack-newsletter-ethics.html
Privacy and Security
- The Markup caught healthcare exchanges in Nevada, Maine, Massachusetts, and Rhode Island sharing users’ sensitive health data with companies like Google and LinkedIn. https://themarkup.org/pixel-hunt/2025/06/17/we-caught-4-more-states-sharing-personal-health-data-with-big-tech
- Israel says Iranian hackers have been hijacking private home security cameras across Israel to gather real-time visual intelligence. https://www.bloomberg.com/news/articles/2025-06-20/iran-hijacking-home-security-cameras-to-spy-within-israel
- Austria's coalition government has agreed on a plan to enable police to monitor suspects' secure messaging. https://www.reuters.com/world/austrian-government-agrees-plan-allow-monitoring-secure-messaging-2025-06-18
- Authors Akshra Mehla and Lakshay Mehla critique the rollout of family ID cards across several Indian states, arguing they are tools of state surveillance rather than benign welfare measures. https://www.researchgate.net/publication/392089145_Identity_versus_Identification_Surveillance_through_Family_ID
- A sophisticated phishing attack linked to Russian state actors tricked a prominent Russia expert into handing over app-specific passwords, bypassing MFA and exposing a new front in targeted social engineering. https://citizenlab.ca/2025/06/russian-government-linked-social-engineering-targets-app-specific-passwords
- As Canada’s immigration system increasingly relies on AI, a new case study raises alarm over opaque decision-making and the ethical risks of using facial recognition and automated tools in life-altering refugee and immigration cases. https://balsilliecases.ca/case-study/the-challenges-of-accountability-in-ai-for-immigration-the-ircc-and-canadian-ai-governance
- The European Commission plans to build its own decryption tools for law enforcement by 2030, drawing criticism from digital-rights groups who warn that the initiative poses a serious threat to privacy. https://www.thestack.technology/eu-police-decryption-blatant-attack-privacy
- Tiffany C. Li argues that the real solution to disinformation isn’t speech or economic regulation, but privacy law that targets the algorithmic personalization engines driving today’s digital propaganda. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5318708
Upcoming Events
- Join the International Telecommunication Union for an in-depth examination of how space-based technologies are revolutionizing humanitarian response and emergency management worldwide, or watch the recording. June 26, 1:30pm BST. https://www.youtube.com/live/wYOrE9YIv28
- The Future of Hacking: A Discussion with Dr. Laura Scherling from All Tech is Human. July 1, 6pm BST. Zoom. https://events.zoom.us/ev/ApqL0SmpLii0p_rE4T80Ts5LUfgaZW5eNzI7CLXvrr_VggHcRp4g~AnEeAmz25SFzSOMLqC8j2B1Si-Abj88gMNkKtKsWrNumguzaSfeLGP1glA
- DPI in Conversation: A week-long event dedicated to understanding Digital Public Infrastructure (DPI) and the advancement of India’s digital governance agenda. July 7-11, 6pm IST. Online. https://form.sflc.in/dpi-in-conversation
- Cory Doctorow in conversation with Maria Farrell about his writing, surveillance capitalism, the ‘enshittification’ of digital platforms, and how we can fight back against Big Tech monopolies that dominate our lives. July 16, 6pm BST. Zoom. https://www.openrightsgroup.org/events/org-at-20-cory-doctorow-in-conversation-with-maria-farrell
- Privacy Law Scholars Conference (PLSC) Europe is a conference for stimulating work in progress. October 23-24. Leiden, Netherlands. https://www.universiteitleiden.nl/en/events/2025/10/plsc-europe
Careers and Funding Opportunities
United States
- Civic AI Security Program (CivAI): Program Associate. Berkeley, CA. https://civai.org/jobs/program-associate
- AI Futures Project: Policy Researcher. Berkeley, CA or Washington, DC. https://docs.google.com/document/u/0/d/1r5YKOUi6gMUZoZvAEh0tHbbHM9RKYk6ONEkw_Tpizb0/mobilebasic
- Anthropic: International Policy Special Projects Lead. San Francisco, CA. https://job-boards.greenhouse.io/anthropic/jobs/4746738008
- EFF: Legal Internship. San Francisco, CA. https://www.eff.org/about/opportunities/interns
- Meta: Privacy and Data Policy Manager. Washington, DC. https://www.metacareers.com/jobs/696822696041627
- Chamber of Progress: Senior Director, Technology Policy. Washington, DC. https://www.linkedin.com/jobs/view/4253023801
- Information Technology Industry Council: Director of Government Affairs. Washington, DC. https://www.itic.org/about/careers-at-iti?gni=8a7883a896d0a1610196d52e3ace2b8b&gnk=job&gns=LinkedIn%2BLimited
- Third Way: Director of Technology Policy. Washington, DC. https://www.linkedin.com/jobs/view/4247471925
- German Marshall Fund of the United States: Summer 2025 Trainee - GMF Tech. Washington, DC. https://www.linkedin.com/jobs/view/4223900099
- The Foundation for Defense of Democracies: Cyber & Technology Innovation Research Fall 2025 Internship. Washington, DC. https://fdd.applicantpro.com/jobs/3783827
- Council on Foreign Relations: Research Associate, Technology and National Security. Washington, DC. https://careers-cfr.icims.com/jobs/2725/research-associate,-technology-and-national-security/job
- The Center for Strategic and International Studies: Deputy Director and Senior Fellow - Strategic Technologies Program. Washington, DC. https://jobs.silkroad.com/CSIS/Careers/jobs/1635/
- Tech Congress: The Congressional Innovation Fellowship. Washington, DC. https://techcongress.io/apply
- Omidyar Network: Principal, Programs (Technology Policy). Washington, DC. https://job-boards.greenhouse.io/omidyarnetwork/jobs/6952324?gh_src=dOMAfS
- EqualAI: Multiple roles including Director of Policy & Programs, Law Fellow, Interns. Washington, DC. https://www.equalai.org/join-us/careers/
- Carbon180: Director of Policy - Tech. Washington, DC. https://apply.workable.com/carbon180/j/006449689C
- The IEEE Standards Association: Technology Policy Analyst. Piscataway, NJ. https://ieee.taleo.net/careersection/2/jobdetail.ftl?job=250189&tz=GMT-04%3A00&tzname=America%2FNew_York
- Princeton University Center for Information Technology Policy (CITP): Associate Professional Specialist. Princeton, NJ. https://puwebp.princeton.edu/AcadHire/apply/application.xhtml?listingId=39082
- ACLU. New York, NY.
- Special Counsel for Technology and Data Privacy. https://www.aclu.org/careers/apply/?job=7986644002&type=national
- Director, Privacy & Data Governance. https://www.aclu.org/careers/apply/?job=7915268002&type=national
- MITRE: Electromagnetic Spectrum Enterprise Policy & Programs Lead. McLean or Arlington, VA. https://careers.mitre.org/us/en/job/MITRUSR115278EXTERNALENUS/Electromagnetic-Spectrum-Enterprise-Policy-Programs-Lead
- Gates Foundation: Deputy Director, Legal (Global Policy & Advocacy and Communications). Seattle, WA. https://gatesfoundation.wd1.myworkdayjobs.com/Gates/job/Seattle-WA/Deputy-Director--Legal--Global-Policy---Advocacy-and-Communications-_B021030-1
- Adobe: Governance and Policy Manager, Security. San Jose, CA. Austin, TX. Lehi, UT. Seattle, WA. New York, NY. https://careers.adobe.com/us/en/job/ADOBUSR156663EXTERNALENUS/Governance-and-Policy-Manager-Security
- Tech Coalition: Program Manager, Lantern. Remote US. https://www.technologycoalition.org/careers/program-manager-lantern
- RAND. Remote US.
- Technical AI Policy Associate. https://rand.wd5.myworkdayjobs.com/External_Career_Site/job/Washington-DC-DC-Metro-Area/Technical-AI-Policy-Associate_R3217-1
- AI Policy Research Associate. https://rand.wd5.myworkdayjobs.com/External_Career_Site/job/Washington-DC-DC-Metro-Area/AI-Policy-Research-Associate_R3264
- The National Center for the Advancement of Semiconductor Technology (Natcast): Tech Transactions Counsel. Remote US. https://jobs.ashbyhq.com/natcast/1d9ce8e6-227a-43fb-90fa-2e108aa7d23b
Global
- PayPal: Head of EU Government Relations. Brussels, Belgium. https://paypal.eightfold.ai/careers/job?domain=paypal.com&pid=274906324357&query=R0125743&domain=paypal.com&sort_by=relevance&job_index=0
- Forefront Advisers: Associate Director / Director – Emerging Technology Policy. Brussels, Belgium. https://www.linkedin.com/jobs/view/4245361797
- Meta: Public Policy Manager, EU Affairs (12 month contract). Brussels, Belgium. https://www.metacareers.com/jobs/474214662347383
- The Convergence Foundation: Program Manager - Policy. New Delhi, India. https://www.careers-page.com/the-convergence-foundation/job/L58R688R
- Metagov: Public AI Fellow (Japan). Tokyo, Japan. https://metagov.org/join/jobs/public-ai-fellow-japan
- ShieldAI: Director of Government Relations, Europe. Oslo, Norway. https://www.linkedin.com/jobs/view/4223956921
- TikTok: Global Head of Policy Development - Trust and Safety. Singapore. https://lifeattiktok.com/search/7506329299432900871
- Singapore Government: Singapore.
- Policy Planning, Assistant Manager/Manager (Network Project Office). https://sggovterp.wd102.myworkdayjobs.com/en-US/PublicServiceCareers/job/IMD---Mapletree-Business-City-MBC-BLK-10/Policy-Planning--Assistant-Manager-Manager--Network-Project-Office-_JR-10000028572
- Policy Ops Manager, Online Safety. https://sggovterp.wd102.myworkdayjobs.com/en-US/PublicServiceCareers/job/IMD---Mapletree-Business-City-MBC-BLK-10/Policy-Ops-Manager--Online-Safety_JR-10000039476
- Deputy Director, Tech Policy. https://sggovterp.wd102.myworkdayjobs.com/en-US/PublicServiceCareers/job/IMD---Mapletree-Business-City-MBC-BLK-10/Deputy-Director--Tech-Policy_JR-10000033496
- Assistant Director, Tech Policy. https://sggovterp.wd102.myworkdayjobs.com/en-US/PublicServiceCareers/job/Singapore/Assistant-Director--Tech-Policy_JR-10000032497
- Senior Manager / Manager, Product Policy (Singpass). https://sggovterp.wd102.myworkdayjobs.com/en-US/PublicServiceCareers/job/Singapore/Senior-Manager---Manager--Product-Policy--Singpass-_JR-10000039809
- AI Security Institute: Criminal Misuse Workstream Lead. London, UK. https://job-boards.eu.greenhouse.io/aisi/jobs/4399317101?gh_src=CB7g8k
- Privacy International: Tech Advocacy Officer. London, UK. https://privacyinternational.org/opportunities/5594/tech-advocacy-officer
- UK Department for Science, Innovation & Technology: Head of Legislation, Secure Technologies Team. Birmingham, Cardiff, Darlington, Edinburgh, London, or Salford, UK. https://www.civilservicejobs.service.gov.uk/csr/jobs.cgi?jcode=1954982
- ActiveFence. Remote UK.
- Violent Extremism Researcher. https://www.activefence.com/careers/?comeet_pos=3B.552
- Child Safety Researcher. https://www.activefence.com/careers/?comeet_pos=88.151
- Wikimedia: Senior Trust & Safety Policy Manager. Remote. https://job-boards.greenhouse.io/wikimedia/jobs/6937753?gh_src=94cfded01us
- The Safe AI Forum: Fellowship. Remote. https://saif.org/opportunities/research-fellow-senior-research-fellow-special-projects-fellow/
- The Surveillance Technology Oversight Project (S.T.O.P.): Executive Director. Remote. https://www.stopspying.org/staff-openings
- Open Briefing: Director of Digital and Information Security. Remote. https://openbriefing.org/blog/job-director-of-digital-and-information-security
- Lantern: User Support, Engagement and Partnerships Specialist (Iran, Part-Time). Remote. https://lantern.io/careers/user-support-partnerships-specialist
Opportunities to Get Involved
- Bread&Net 2025: call for proposals now open! Bread&Net is a regional forum on digital rights and social justice in West Asia and North Africa, with sessions on everything from AI and surveillance to movement-building, wellbeing, and public interest tech. Submissions due July 15. https://breadandnet.secure-platform.com/site
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.