Concrete Steps towards AI in the Public Interest from Abeba Birhane
Cognitive scientist Abeba Birhane reflects on the Paris AI Action Summit, and critiques the dominant narratives around “public interest AI.” In our main article today, she explores how powerful corporations, militarization, and technosolutionism can stall real progress, and lays out concrete steps to push technology toward the public good.
But first...
Social Web Foundation at RightsCon 2025
In less than a week, the SWF is at RightsCon in Taiwan and hosting some exciting events. Below are several sessions that we are involved in. Please reach out if you’ll be attending and want to meet up!
- How we build a new social web, Tuesday 25 February @ 9 am
SWF will moderate a panel with leaders from Threads, DSNP, WordPress, and Mastodon to discuss open standards, user agency, and the fediverse’s potential as a progressive evolution of social media.
- Roundtable with privacy regulators, Wednesday 26 February @ 11:30 am
We’re hosting a conversation with Carly Kind (Australia), Lina Khan (U.S.), Trevor Callaghan (U.K.), and a Canadian regulator on safeguarding data privacy in an era of tech monopolies and global surveillance.
- Accountability on the net: building tools for a meaningful multistakeholder cooperation, Thursday 27 February @ 9 am
SWF is invited to speak on building an “Internet Accountability Compass” so governments and companies can be held to their commitments for an open, safe, and rights-respecting internet.
Lastly, Mallory will be at Splintercon, a satellite event. She’ll moderate “Infrastructure, Policy, and the Geopolitics of Internet Fragmentation” and give a talk on censorship circumvention without VPNs.
This Week's Links
Internet Governance
- European Parliament urges MEPs to use Signal for secure messaging, highlighting concerns over encryption and digital security in policymaking. https://www.politico.eu/article/european-parliament-urge-mep-message-signal-encryption/
- Public interest technologist Tara Tarakiyee reflects on the EuroStack initiative and what more is needed to achieve true digital sovereignty without reinforcing global inequalities. https://tarakiyee.com/we-need-more-than-the-eurostack/
- The crypto industry claims it is being ‘debanked,’ but new reports suggest this is a smokescreen for deeper financial regulatory issues. https://www.citationneeded.news/crypto-industrys-debanking-smokescreen/
- The geopolitical stakes of 5G: how global infrastructure decisions shape power dynamics, security concerns, and digital sovereignty. https://pitg.network/news/2025/02/03/geopolitics-5g.html
- End-to-end encryption and interoperability: why ensuring secure, cross-platform communication is crucial for digital rights and governance. https://pitg.network/news/2025/02/04/e2ee-interoperability.html
Digital Rights
- Meta fired its fact-checkers: what happened, and what you can do about it. https://www.colorado.edu/today/2025/01/10/why-meta-fired-its-fact-checkers-and-what-you-can-do-about-it
- Investigative report reveals invasive Israeli software harvesting data from Samsung users in West Asia and North Africa. https://smex.org/invasive-israeli-software-is-harvesting-data-from-samsung-users-in-wana/
- The US government is erasing historical records, disproportionately affecting marginalized communities and their digital archives. https://funcrunch.medium.com/the-u-s-government-is-erasing-our-history-e3be0776ee67
- The Trump administration has stopped funding practically all U.S. government work supporting democracy, human rights, and press freedom around the globe. https://www.npr.org/2025/02/16/nx-s1-5297844/trump-musk-democracy-usaid-authoritarian-human-rights-funding-freeze
Technology and Society
- Electricity demand rose by 4.3% in 2024, but the growth isn’t coming just from AI and data centers. Three things to know about electricity in 2025. https://www.technologyreview.com/2025/02/20/1112119/electricity-demand-2025/
- The US Commerce Department is ordering mass layoffs in the AI and semiconductor sectors. https://www.bloomberg.com/news/articles/2025-02-19/commerce-agency-to-order-mass-firing-of-chips-ai-staffers
- Tinder, Hinge, and Match Group face regulatory scrutiny over safety concerns, data privacy, and user protections. https://www.theguardian.com/us-news/2025/feb/13/tinder-hinge-match-investigation
- For those leaving Twitter/X, should they choose Mastodon or Bluesky? More importantly, can competition open the door for scrappy, diverse challengers to disrupt social media giants? https://www.ianbrown.tech/2025/02/18/fleeing-the-hellsite-mastodon-vs-bluesky/
- Recording of the FOSDEM 2025 talk explores how journalists are using the Fediverse for long-form publishing and networked journalism. https://ftp.belnet.be/mirror/FOSDEM/video/2025/ud2208/fosdem-2025-4673-networked-journalism-bringing-long-form-publishing-to-the-fediverse.av1.webm
- Openspeaks Archives launches a new Wikimedia project dedicated to preserving endangered languages and ensuring their digital longevity. https://diff.wikimedia.org/2025/02/18/openspeaks-archives-a-language-archive-for-wikimedia-projects/
- A crowdfunding campaign aims to save Bluestockings, a radical bookstore and activist space fostering community organizing. https://www.gofundme.com/f/qe29bh-save-bluestockings-support-radical-spaces
Upcoming Events
- UC Berkeley School of Information is hosting an online lecture on the challenges and opportunities in making security and privacy practices more inclusive. February 20, 2025 11:15 PST https://www.ischool.berkeley.edu/events/2025/toward-more-inclusive-security-and-privacy-practices-challenges-and-opportunities
- ITU’s SpaceConnect 2025 explores satellite technology’s role in global connectivity, with discussions on policy, regulation, and innovation. February 27, 2025 14:00 CET https://www.itu.int/space-connect/february-2025/
Careers and Funding Opportunities
- Freedom of the Press Foundation is hiring a Senior Digital Security Trainer in Brooklyn, NY. https://freedom.press/careers/job/?gh_jid=4526817005
- Superrr is hiring a Community and Events Manager in Berlin, Germany. https://superrr.net/en/blog/hiring-community-events
- Last chance! Help shape the Internet Exchange community: answer our poll to tell us where you'd like to connect. https://tally.so/r/mDAOqE
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.
Bending The Arc Of AI Towards The Public Interest
By Abeba Birhane. This article was first published by the AI Accountability Lab and has been adapted for our newsletter.
Following the first at Bletchley Park in 2023 and the second in Seoul in 2024, the third AI Action Summit took place in February 2025 in Paris. Compared with previous Summits, the French AI Action Summit marked progress in the right direction, with a diverse participation pool and a focus on several of the most timely issues: 1) Public Interest AI, 2) Future of Work, 3) Innovation and Culture, 4) Trust in AI, and 5) Global AI Governance. Yet a major gap remained: the Summit overlooked how AI is increasingly entangled with global market concentration, corporate power consolidation, state-backed surveillance, and an arms race toward AI-powered military domination. There was hardly any discussion of the geopolitically regressive moves, enabled by big tech corporations and powerful governments, that are undoing decades of progress toward fundamental rights, peace, and democracy. In the larger context, this silence on the elephant in the room takes us two steps backwards.
I was part of the panel titled “Bending the arc of AI towards the public interest” at the Summit, under the “Public Interest AI” theme. This piece reflects my thoughts on “Public Interest AI” and its potential to meet actual public needs. To realise that potential, governments, policy makers, and international intergovernmental and philanthropic bodies need to be clear about what both “AI” and “public interest” (not to be confused with “public utility”) are and are not.
What AI in the public interest is/isn’t
One of the most common framings, and the biggest sin in this space, is the false dichotomy that presents AI only in terms of “opportunities and risks”. This doctrine sees the world as consisting only of those building AI and thus “unlocking opportunities” versus those uncovering and mitigating risks, who are cast as “slowing progress”. While “innovation”, “advancement”, and “economic” gains fit squarely in the former category, anything that calls for responsibility, accountability, and critical thinking tends to be sidelined as outside the purview of “opportunities”, “innovation”, and “advancement”. This framing hinges on the deeply mistaken assumption that AI technologies are inherently good and that mass adoption inevitably leads to public benefits. In reality, none of this is true. There is nothing that makes AI systems inherently good. Without intentional rectification and proper guardrails, AI often leads to surveillance, manipulation, inequity, and the erosion of fundamental rights and human agency, while concentrating power, wealth, and influence in the hands of AI developers and vendors.
In the context of this framing, “public interest AI” often comes down to three broad and overlapping forms. First is the idea of equipping public institutions, such as universities and civil society organisations, with the resources to build their own versions of AI systems, often replicating existing industry trends, such as training large language models for European languages. Second are “AI for good”-type initiatives that focus on leveraging AI to “solve” numerous social, cultural, historical, and political issues, such as tackling the UN’s Sustainable Development Goals (SDGs) with AI. Setting aside whether these efforts actually “work”, deploying AI in complex socio-political systems often amounts to technosolutionism that creates the illusion of progress.

Finally, public interest AI is sometimes framed as improving existing systems: making them less biased, more representative, and better suited for “everyone”, which sounds great in theory. Some of the initiatives in this category focus on curating good quality training data, based on the assumption that better data leads to improved models. This is true to some extent. However, without a clear vision grounded in rigorous audits and empirical evidence, and geared towards shifting power, agency, and benefits from tech oligopolies towards the public, such initiatives can end up producing “more accurate” and “better-performing” models that power surveillance, aid genocide, or revive pseudoscience. In the larger scale of things, the better-data assumption is simplistic. Generative models, for example, encode racism and stereotypes in nuanced ways, even with Reinforcement Learning from Human Feedback (RLHF). And improving outputs doesn’t reveal real-world harms; it can instead obscure them, giving the illusion of a solution. Large-scale, high-quality, and representative health data in the hands of tech oligopolies or profit-first AI companies, for example, can still result in massive benefit for these companies and no meaningful benefit, or even harm, to the public.
At the core of these approaches to “public interest AI” is a common goal: giving the “public” more AI, or feeding existing corporate models with more or “better” data; in other words, mitigating the numerous issues that arise from technology with more technology. Contrary to this belief, AI technologies, whether in the hands of big tech or civil society, are inherently conservative, exploitative, and inextricably entangled with capitalism.
Although it might be a mistake to cast all of these initiatives as unproductive and unhelpful, they fall largely within the technosolutionist paradigm: the belief that all or most problems can be solved through technology. This approach is unlikely to bring about any meaningful change for the public, as these techno-centric solutions are never developed outside of corporate silos. Any meaningful intervention that aims to centre the interest of the “public” needs to go beyond aligning with the “innovation” narrative.
What does the public want?
It would be difficult, if not impossible, to arrive at a clear consensus on what the public wants, how best it should be represented, or even who the “public” is. However, it would be even more absurd to assume that government interests or corporate practices align with those of the public, or that these bodies hold useful insight into public needs and interests. If there were a shred of doubt that the tech industry might be interested in anything other than amassing unprecedented power, control, influence, and revenue at any cost, recent events should put that to rest. In the wake of politically fuelled fascism, it has become crystal clear that slogans like “benefiting humanity” are nothing but a facade and empty corporate PR. Big tech corporations, along with large social media and AI companies, including Google, Microsoft, Amazon, Meta, and OpenAI, are bending over backwards to align with fascism. One by one, they have willingly and proactively abandoned their pledges to fact-check, prevent misinformation, respect diversity and equity, and refrain from using AI for weapons development.
Centring the vast interest in AI on the public is indeed one way to bring some balance, as it allows us to recognise the massive concentration of technological and market power. This growing imbalance is increasingly at odds with democratic and social structures, moving us away from a public-driven approach and towards techno-feudalism. But what are the most effective and productive ways to do this?
Parsing through the “About Us” pages of notable civil society organisations working at the intersection of AI technologies and public interest (e.g., Ada Lovelace Institute, Irish Council for Civil Liberties (ICCL), Data & Society, The AI Now Institute, Access Now, Research ICT Africa) makes it clear that none advocate for approaches that simply ask for “more AI to fix problems caused by AI”. In other words, the “more AI to mitigate existing issues from AI” narrative is squarely outside their purview. Despite the diversity and multiplicity in their missions and objectives, most civil society organisations have overlapping interests: defending and extending fundamental rights, justice, equity, and human dignity by tackling root causes such as power imbalances, resource asymmetry, corporate monopolies, and other structural obstacles.
It should be clear by now that technology in the public interest is not about adding more AI, or equipping the public, civil society, and research institutes with more compute, data, and other resources to replicate corporate AI trends. Nor is generative AI a “magical solution” to a multitude of problems, as the tech industry has sold it. Instead, these are problems tech companies have dumped on us. Meaningfully assessing and evaluating the merits, usefulness, and responsible development of AI through regulation, and cleaning up the mess in the aftermath, has become a task for policy makers, academics, and civil society and rights groups. In this, the public has little agency, power, or influence over the technology that is pushed down their throats, and no meaningful mechanism to opt out. At the same time, we have been experiencing unprecedented technological change over the past couple of decades, a pace that has also normalised a somewhat fabricated fear that the public is being left behind by the “new thing”.
In light of all of this, advancing technology in the public interest requires us to work across many forums and dimensions to address the multitude of problems and challenges from every possible angle. Below are some concrete examples that embody this approach from my perspective. Genuine intergovernmental, policy-making, and funding efforts aimed at fostering and supporting a public-interest technology ecosystem might consider these and similar initiatives. This list is by no means comprehensive or representative but reflects my familiarity, domain expertise, experience, and positionality.
Concrete steps forward:
- Accountability
- Through nurturing independent and reliable knowledge ecosystems that feed into AI literacy and realistic public understanding of AI.
Current AI technologies have captured the public not because these systems are reliable, necessarily useful, or beneficial to the public, but because the tech industry holds a monopoly over the public narrative and controls media and information. This has created a massive gap between public perception and the actual capabilities of AI systems, both as 1) consumer products, which often fail to meet their promises, and 2) scientific research, which frequently falls short of basic scientific principles such as openness, a prerequisite to meaningful independent evaluation, reproducibility, and replicability.
Coordinated, cross-sector efforts that embrace the principle “critique is service” (the same way critique, for instance via the peer-review process, strengthens and “steel-mans” scientific work) can reduce hype, counter mis/disinformation, and shift control of AI narratives from industry players to investigative journalists, academics, civil society, and other public interest expert groups.
Initiatives that train the next generation of tech journalists (such as the Pulitzer Center) and organisations at the forefront of investigative AI reporting (e.g., ProPublica, The Markup, 404 Media) are essential for auditing AI technologies and disseminating knowledge. In the face of corporate dominance over AI narratives and the information ecosystem, initiatives like these keep the lights on.
- Tools of accountability via audits and evaluation.
Holding powerful actors answerable for the systems they choose to develop and deploy via rigorous, independent audits grounded in theories of change (see here and here for detailed strategies and a roadmap for fostering an ecosystem of independent third-party audits).
- Rigorous testing and examination of AI artefacts, and of their downstream impacts on individuals, groups, and society, are essential for understanding, diagnosing, and surfacing harms, risks, and rights violations, and thus key to meaningful mitigations and changes.
- Audit repositories and databases should make audit results accessible to accountability actors, ensuring they feed directly into a process of accountability.
- Tools of transparency.
Transparency and open source/access are prerequisites not only for audits but also for fostering larger accountability ecosystems. Below is an initial list of information necessary for shedding light on the practices of developers, vendors, and institutions using AI in public services. This list is by no means comprehensive or representative, and each item by itself may not be transformative. But together they serve as prerequisites for meaningful accountability efforts.
- Government AI inventory databases
- Data on procurement practices - including details on AI funding, tendering processes, and assurances that AI systems are fit for purpose when procured
- AI Incident databases
- Data-visualisation, documentation, and storytelling - highlighting harmful AI practices (for example the Anti-Eviction Mapping Project and Surveillance Watch)
- Benchmarking results and impact assessments
- Transparency indexes and reports
- Transparency databases - this might include data on:
- Data centre operations and infrastructure
- Energy, water, and other resources consumption
- Breakdown of energy use for training vs inference
- Carbon emissions
- Government and military contracts
- Government use of AI in defense and surveillance
- Lobbying spending and political donations
- Counter-power measures
- Tools of resistance
Resistance, when rooted in in-depth knowledge and a realistic, practical understanding of AI systems, allows the public both to participate in these impactful systems and to challenge and reject them. Rejecting from a place of knowledge preempts the lazy dismissal that they “don’t understand the tech”.
Alongside the “Tools of transparency” mentioned above, here are some examples of tools of resistance as starting points (though not exhaustive or comprehensive).
- Support a thriving community of technologists and researchers developing non-conventional tools of resistance and refusal (for example, the Glaze Project and data poisoning tools)
- Missing data - This can be further categorised into:
- Public service data - Information that may not have direct monetary value, such as climate data, pollution levels, wildlife tracking, and crisis data (e.g. disaster response)
- Data for transparency - see Tools of transparency above.
- Local initiatives developing technologies based on “nothing about us without us” principles - Examples include Māori language technologies and Kabakoo Academies
- Unions, safe harbors, and coordinated convenings play a crucial role in collective action. Unions have historically been powerful in pushing back on harmful AI use, especially in the workplace, where they have driven concrete improvements in working conditions. Spaces and convenings are essential for coordination, organising, and forming allyships amongst all the efforts and initiatives mentioned in this piece (and more), as power comes from standing together. Examples of collective action initiatives include:
- African data workers’ unions
- UNI Global Union
- Worker-owned cooperative structures - for companies, and worker-owned IP (like Karya)
- Data and technology as public commons - along with institutions for safeguarding and cultivating these commons
- Legal actions (for example Foxglove)
My deepest gratitude to the scholars, journalists, technologists, and civil society groups, especially the underfunded and underappreciated among them, tirelessly working towards more equitable technological futures. I am grateful for the various conversations I took part in during the Summit in Paris (as well as ongoing conversations with friends and allies in this space), which have enriched this piece. Special thanks to Thomas Laurent, Jackie Key, Janet Haven, Deb Raji, Udbhav Tiwari, Harshvardhan Pandit, and Nabiha Syed for feedback on an earlier version of the draft.