AI, Power and Governance
This week, in our main story, Internet Exchange's Mallory Knodel examines how artificial intelligence is reshaping governance, arguing that while AI promises efficiency, it often deepens societal inequalities. She explores the risks of AI reinforcing the power of the privileged while diminishing civil liberties for others, and calls for more inclusive, multistakeholder approaches to ensure technology serves the public good.
But first...
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
This Week's Links
Internet Governance
- The DOJ has disbanded its crypto crimes unit under Trump’s pro-crypto push, shifting focus to prosecuting individual fraud while backing off enforcement against exchanges and mixers. https://fortune.com/crypto/2025/04/08/doj-ncet-disbands-memo-todd-blanche-trump
- 21 nations have signed on to the Pall Mall Process Code of Practice for States, a voluntary agreement that outlines good practices for the responsible use, regulation, and development of commercial cyber intrusion capabilities (CCICs). Read it: https://www.gov.uk/government/publications/the-pall-mall-process-code-of-practice-for-states/the-pall-mall-process-code-of-practice-for-states#section-2-voluntary-good-practice-for-states
- AFRINIC’s governance crisis has triggered a court-appointed intervention, raising concerns about internet stability in Africa and the resilience of global internet governance. https://www.icann.org/en/announcements/details/icann-update-on-afrinic-receiver-appointment-09-03-2025-en
- Matt Blaze, McDevitt Professor of Computer Science and Law at Georgetown, testified before the House on how Salt Typhoon was enabled by telecom wiretap mandates dating back to the Communications Assistance for Law Enforcement Act of 1994. https://federate.social/@mattblaze/114275146367312386
- A new report commissioned by DACS and PICSEL proposes how the UK government could tackle the ongoing challenges around copyright and generative AI. https://www.dacs.org.uk/news-events/considering-creatives-copyright-and-ai
- Civil society orgs, companies and cybersecurity experts urge the Swedish Riksdag to reject a bill that would weaken encryption and endanger public security. https://www.globalencryption.org/2025/04/joint-letter-on-swedish-data-storage-and-access-to-electronic-information-legislation
- The UN adopted a resolution to better protect human rights defenders from risks posed by emerging technologies. https://ishr.ch/latest-updates/hrc58-states-adopt-substantive-resolution-on-human-rights-defenders-emerging-technologies
- Sen. Ted Cruz is probing the Future of Privacy Forum for allegedly pushing “left-wing AI laws” into federal regulation. https://thetexan.news/federal/sen-ted-cruz-letter-probes-future-of-privacy-forum-over-ai-regulation-advocacy/article_bed59d57-6c9d-44f2-b3e2-57a54758f5cf.html Outside of the US? Read it: https://archive.is/J2Pyi#selection-2231.0-2239.124
- Mapping Tech Design Regulation in the Global South explores how design incentives shape digital regulation in the Global South, highlighting regional differences, current trends, and barriers to accountability. https://toda.org/policy-briefs-and-resources/policy-briefs/mapping-tech-design-regulation-in-the-global-south.html
Digital Rights
- Thousands of newly obtained documents show that Clearview AI’s founders always intended to target immigrants and the political left. Now their digital dragnet is being used by ICE and the FBI under the Trump administration. https://www.motherjones.com/politics/2025/04/clearview-ai-immigration-ice-fbi-surveillance-facial-recognition-hoan-ton-that-hal-lambert-trump/
- A Kenyan court will hear a $1.6 billion lawsuit that alleges Facebook helped incite genocide in Ethiopia. https://www.compiler.news/kenya-lawsuit-meta-ethiopia-tigray-zuckerberg
- Social media platforms are spreading violent warmongering content encouraging all-out war between Ethiopia and Eritrea, again. https://www.dair-institute.org/blog/tigray-social
- Understanding how regimes criminalize dissent using both high and low tech tactics is essential for movement technologists, writes Afsaneh Rigot. https://the-decenter.ghost.io/dehumanized-by-design-autocratic-tech-and-immigration
- ARTICLE 19 warns that Kenya’s recently introduced Computer Misuse and Cybercrimes (Amendment) Bill 2024 contravenes international human rights standards. https://www.article19.org/resources/kenya-withdraw-computer-misuse-and-cybercrimes-bill-and-protect-freedom-of-expression
Technology and Society
- The European Parliament’s new AI archive tool can’t answer basic questions, like who the first president of the European Commission is. It turns out they didn’t test it properly before launch. 😬 https://www.iccl.ie/press-release/how-not-to-deploy-generative-ai-the-story-of-the-european-parliament
- Tech giants are building data centres that use vast amounts of water in some of the world’s driest areas, with many more planned, an investigation by SourceMaterial and The Guardian has found. https://www.source-material.org/amazon-microsoft-google-trump-data-centres-water-use
- Microsoft fires employee protestor who called AI boss a ‘war profiteer’ at the company’s 50th-anniversary event. https://www.theverge.com/news/644769/microsoft-fires-employee-protestor-war-profiteer
- A new model shows how misinformation on social media can significantly worsen disease spread, highlighting the urgent need for public health strategies that address both contagion and online falsehoods. https://www.nature.com/articles/s44260-025-00038-y
- A new experiment shows that people tend to overestimate how closely generative AI aligns with human decision-making. https://arxiv.org/abs/2502.14708
- A new study finds that people view AI art as co-created by the artist and AI user, with implications for copyright laws. https://arxiv.org/abs/2407.10546
- In The New Socialist, Gareth Watkins argues that the far right embraces AI art not despite its poor quality but because of it, using it to signal disdain for labour, aesthetics, and truth. https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism
- A new Pew study finds AI experts are far more optimistic than the U.S. public about AI’s benefits, especially for jobs and daily life. https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence
- Burcu Kilic writes that DeepSeek’s open-source AI model reflects a broader trend of governments using industrial policy to build local AI ecosystems and reduce reliance on tech giants. A new paper examines how the national innovation system framework can guide AI industrial policy to foster innovation. https://www.cigionline.org/publications/ai-innovation-and-the-public-good-a-new-policy-playbook/
- New research on China's "big data-based governance" model: https://link.springer.com/article/10.1007/s41111-025-00279-1
- Machine-generated cultural content is reshaping ideas of authenticity and consciousness, requiring fresh analytical tools to understand AI’s societal impact, argues a new paper. https://arxiv.org/abs/2503.18976
- Superbloom published a report from the Design for Internet Shutdowns Open Source Software (OSS) workshops at the Conference for Open Source Coders 2023. https://superbloom.design/learning/blog/report-launch-design-for-internet-shutdowns-in-taiwan-taxi-drivers-satellites-and-tech-the-surprising-heroes-in-taiwans-fight-against-internet-shutdowns/
- U.S. AI and tech policies often overlook rural communities, perpetuating inequality beyond the digital divide, writes Jasmine McNealy. https://academic.oup.com/edited-volume/59762/chapter-abstract/508609253?login=false
- Want to be a Responsible AI Practitioner? Here are 3 takeaways from techUK’s event. https://www.techuk.org/resource/want-to-be-a-responsible-ai-practitioner-here-s-3-takeaways-from-techuk-s-event.html
- NYU Law experts explain how space law tackles orbital debris and outline responses to threats like incoming asteroids. https://www.nyu.edu/about/news-publications/news/2025/march/how-space-law-aims-to-regulate--space-junk--and-protect-earth.html
- Elsewhere in space, there are plans to put data centres in orbit and on the Moon. https://www.bbc.co.uk/news/articles/cjewvpkw7weo
- Tariffs may raise tech prices, but repair offers a resilient, cost-saving alternative by extending device lifespans. https://www.ifixit.com/News/109277/tariffs-make-repair-an-even-better-choice
- Digital misinformation threatens stability and democracy in West Africa. This briefing analyses current governance and proposes a policy model aligned with UNESCO’s Guidelines to promote responsible information integrity. https://researchictafrica.net/research/policy-approaches-to-information-integrity-in-west-africa
- How key thinkers have approached the question of AI creativity, tracing efforts to replicate novelty, surprise, and value in machines, and reflecting on the broader societal implications of AI-generated creativity. https://compass.onlinelibrary.wiley.com/doi/10.1111/phc3.70030?af=R
Privacy and Security
- What does DOGE know about you? A new quiz from New America highlights all of the sensitive personal data DOGE has access to. https://www.newamerica.org/oti/briefs/quiz-what-does-doge-know-about-you
- Don’t want to take a quiz to find out what DOGE knows about you? The NYT put it into a convenient 314-item list. https://www.nytimes.com/2025/04/09/us/politics/trump-musk-data-access.html
- ICE’s ICM database, built by Palantir and used to identify and deport people, lets agents filter individuals by hundreds of traits such as visa status, tattoos, or license plate data. https://www.404media.co/inside-a-powerful-database-ice-uses-to-identify-and-deport-people
- A UK tribunal ruled that Apple’s legal fight with the government over encrypted data access must be public, rejecting the Home Office’s push for secrecy. https://www.bbc.com/news/articles/cvgn1lz3v4no
- A new study by Consumer Reports and Wesleyan University finds that many companies are ignoring universal opt-out signals like Global Privacy Control. https://innovation.consumerreports.org/new-report-many-companies-may-be-ignoring-opt-out-requests-under-state-privacy-laws/
- How do you know that a product respects your privacy? Superbloom has created a framework to measure the way people actually experience privacy in tech products. https://superbloom.design/learning/blog/measuring-the-privacy-experience
- IPv6 makes it harder to find devices on a network because addresses are more private and random. FScan6 is a new tool that finds way more devices than older methods by listening quietly and scanning smartly—without slowing things down. https://www.nature.com/articles/s41598-025-95680-w
- CISA and allies warn of rising “fast flux” DNS attacks used by cybercriminals and nation-states to hide malware infrastructure. They urge orgs to improve DNS visibility, detect anomalies, and consider Protective DNS services. https://www.theregister.com/2025/04/03/cisa_and_annexable_allies_warn
- Travel already involves intense surveillance, but the US is now pouring resources into even more invasive monitoring methods. Other governments may follow suit. https://privacyinternational.org/news-analysis/5552/us-border-surveillance-expansion-has-global-implications
- Creating technology in 2025? Superbloom has compiled feature wish lists from activists, journalists, and human rights defenders. https://superbloom.design/learning/blog/feature-guide-for-high-risk-contexts-2025
Upcoming Events
- Berlin Tech Workers Coalition will host the largest English-speaking, tech-worker-led conference in Germany, together with their trade unions ver.di B+B and IG Metall Berlin. April 11, 9am CET. Berlin, DE. https://techworkersberlin.com/events/tech-conference-2025
- Beautiful Solutions comes to Denver! Celebrate a new book on the solidarity economy with local leaders building community food systems. May 1, 6pm MDT. Denver, CO. https://www.eventbrite.com/e/book-launch-for-beautiful-solutions-a-toolbox-for-liberation-tickets-1316548374629
Careers and Funding Opportunities
- AI Accountability Lab, School of Computer Science & Statistics, Trinity College Dublin is looking for a postdoctoral researcher in algorithmic accountability. https://www.adaptcentre.ie/careers/postdoctoral-researcher-in-algorithmic-accountability
- There are five open vacancies at the Center for Countering Digital Hate, including Chief Development Officer, Head of Public Affairs, Director of U.S. Public Affairs, EU Policy Officer and Development Manager. https://counterhate.com/jobs
- The Wikimedia Foundation is looking for…
- An Engineering Manager to join and lead the Trust and Safety Product Team. https://job-boards.greenhouse.io/wikimedia/jobs/6669470?gh_src=1b9a160c1us
- And a software engineer to join the Trust and Safety Product team. https://job-boards.greenhouse.io/wikimedia/jobs/6443078?gh_src=bc0363231us
- The Gates Foundation is hiring an Artificial Intelligence Portfolio and Platform Lead. https://gatesfoundation.wd1.myworkdayjobs.com/Gates/job/Seattle-WA/Artificial-Intelligence-Portfolio-and-Platform-Lead--2-Year-LTE--_B020895-2
- AI Security Institute is hiring a Criminal Misuse Workstream Lead. https://www.aisi.gov.uk/careers/apply?gh_jid=4399317101
- World Economic Forum is hiring a Specialist, AI Technology and Innovation – AI Governance Alliance. https://weforum.wd3.myworkdayjobs.com/en-US/Forum_Careers/job/Specialist--AI-Technology-and-Innovation---AI-Governance-Alliance_R3332
- Microsoft is hiring a Research Sciences Resident (AI for Good) Intern. https://jobs.careers.microsoft.com/global/en/job/1812471/Research-Sciences-Resident-(AI-for-Good)-Intern
- Security bounty fund to sponsor contributors who responsibly disclose security vulnerabilities in popular open source Fediverse software. https://nivenly.org/blog/2025/04/01/nivenly-fediverse-security-fund
- auDA Churchill Fellowships to support internet research and development. https://www.auda.org.au/news-insights/statements/auda-churchill-fellowships-to-support-internet-research-and-development
Opportunities to Participate
- Common Edge is seeking submissions on how digital power is shaped, challenged, and reimagined in Asia. Contributions can explore state and corporate control, grassroots resistance, and alternative tech futures. https://www.commonedge.asia/digitalpower
- Solid, an open-source initiative led by Sir Tim Berners-Lee, is looking to recruit members for the Solid Advisory Committee. https://theodi.org/news-and-events/news/solid-advisory-committee
- The next issue of Branch magazine will explore the theme Attunement: Designing in an Era of Constraint. This open call invites the Branch community to suggest article ideas. https://branch.climateaction.tech/open-call-for-contributions
- The UN Special Rapporteur on cultural rights is calling for input on how AI and generative AI impact creativity, ahead of a report for the 80th General Assembly. https://www.ohchr.org/en/calls-for-input/2025/call-contributions-artificial-intelligence-and-creativity
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.
Shaping AI: How data redefines the obligations and responsibilities to our future
By Mallory Knodel
"AI is the future" reflects a belief that data will be a key driver of future developments and changes in society, the economy, and daily life. In that sense, AI is the past and AI is the present, too. It is perhaps far more interesting to outline the shape of our collective futures not in terms of what roles the technology will have, but in terms of how it proposes to change our own roles, responsibilities and relationships to a future society, future economies and our own future lives. The following essay begins by narrowing the scope of AI technology to its practical applications in society, and it ends with an analysis of obligations and responsibilities of modern governance in an AI future.
Introduction
In the 2021 book What We Owe Each Other: A New Social Contract for a Better Society, author Minouche Shafik predicts that automation will not replace jobs but change them, adding that “people who have skills that are complementary to robots will fare the best. Those complementary skills include things like creativity, emotional intelligence and an ability to work with people.” If automation is to have this impact on individuals, we are led to wonder: what will automation do to government and governance?
Even at its best, automation cannot replace governmental institutions any more than it could replace human creativity and emotional intelligence. At its most mediocre, and most prone to failure, is AI whose purpose is to replace government functions in the name of austerity. An overall programme of governance, say extending digital identity to all citizens, requires national budgets to make an upfront investment, not to expect a windfall.
At its worst, AI serves as a cover for power. Perceptual AI applications, wielded by the state, can conceivably create pretexts and excuses for all manner of human rights abuses, just as we have seen from the private sector. AI that “generates” will always be error-prone, especially for the most marginalised. These vectors for failure occur not just at the national level but globally, perpetuating neocolonial dominance. For example, when nearly all of Africa's internal internet traffic still transits through Europe, it creates a dependency that limits the continent's digital sovereignty and control over its own data, potentially exacerbating existing global inequalities and power imbalances.
Seeing is not knowing
AI encompasses a range of automation techniques, including computer vision and natural language processing. With the latest developments in machine learning, the latter has gained prominence in the era of big data and neural networks and is referred to as “generative AI,” explained in the next section. Machine learning grew out of the need to manage large data systems and to make predictions in complex domains such as financial investment, supply chains and environmental management, ostensibly with the aim of influencing outcomes and taking action. Today there is more data than ever, and that is not a coincidence: data is what fuels modern-day automation. Modern ML systems leverage vast computational infrastructure to process extensive data, and because they require ever more data to improve their models, they have hastened datafication and digitalisation in every sector. Yet AI has not developed without discontents along the way: scholars have long pointed out the implications of increasing automation and computerization for society. These critiques highlight the human aspect of AI, particularly in design and supervision, where biases can be inadvertently introduced.
That governments are now turning to data-driven methods comes as no surprise. In the current landscape, datafication is central to value creation, with national strategies and budgets increasingly relying on AI's perceived societal benefits. The effectiveness of ML models largely depends on the quantity of data they process, operating under the assumption that more data leads to more accurate outputs. These models translate quantified phenomena of the world, aiming to approximate truth based on the volume of knowledge (data) they process. This perspective, influenced by epistemic cultures, assumes that a comprehensive understanding of the world can be achieved with sufficient data. Yet, the primary contribution of modern ML AI lies in its ability to discern patterns from vast unstructured data, with the improvement of its "sense-making" ability hinging on the availability of more data.
UN member states, failing to meet the Sustainable Development Goals (SDGs), see AI as a potential savior, hoping it will aid in their development. However, this overlooks the trade-offs for citizens' rights and civil liberties. More unfortunately, governments investing in AI solutions for sustainable development are also neglecting the people power that the SDGs actually require. High-level UN efforts serve not only to build capacity and create guardrails, but also to give member states some political cover, via governance, for the provision of neocolonial multinational services such as digital public infrastructure (DPI). The biggest beneficiaries of more widespread capacity to collect civic data within well-defined compliance parameters are the existing big tech companies, none of which have established suitable human rights track records.
Without stronger institutions, a datafied and digitized approach to governance is merely a cost-effective derivative of the state administered by unaccountable, all-seeing megacorporations.
Generative is extractive
Generative artificial intelligence involves the extrapolation, interpolation, and completion of patterns, whether real or imagined, found in large datasets. The success or failure of generative AI, much like perceptual AI, largely depends on both the quantity and quality of the real-world data available to the AI model. This relationship underscores how generative AI is inherently extractive, involving data collection about individuals, group behavior, records, and various social and economic "signals," including texts, speech, music, images, and art.
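To make "completion of patterns found in large datasets" concrete, here is a deliberately tiny sketch. The corpus and the bigram approach are illustrative assumptions, far simpler than any production model, but they show the extractive relationship in miniature: generation amounts to replaying transitions pulled out of training data, so output can only be as good, and as fair, as that data.

```python
# Toy sketch of generative "pattern completion": a bigram table is
# extracted from a training corpus, and generation simply replays
# those extracted transitions. (Illustrative only; real systems use
# neural networks, but the extractive dependence on data is the same.)
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Extract every observed word -> next-word transition from the data.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def complete(word: str, length: int = 5, seed: int = 0) -> list[str]:
    """Continue from `word` by sampling the observed transitions."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:  # nothing was extracted for this word: stop
            break
        out.append(rng.choice(options))
    return out

generated = complete("the")
# Every generated word was extracted from the corpus; the model can
# produce nothing its training data did not already contain.
assert set(generated) <= set(corpus)
```

The same dependence holds at scale: whatever is missing from, or skewed within, the collected data is missing from, or skewed within, the model.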
Embedded within current generative AI tools, which have garnered significant attention, are complex intellectual and property rights issues. Additionally, generative AI's potential for misuse, particularly in ways that undermine trust in the information ecosystem, is a significant concern. Social issues such as bias and fairness in machine learning model decisions and actions pose technical challenges, especially when the available large datasets at best reflect our biased and unfair world.
These issues become particularly pertinent when AI is applied to governance. The widespread creation and dissemination of misinformation can erode trust in institutions, and efforts to control the flow of information risk becoming repressive. When states employ automation for tasks like delivering social services or managing civil rights processes, such as voting and identity verification, the inherent biases in generative AI call for enhanced transparency and accountability in governance mechanisms.
Governance obligations
Establishing a set of principles and a common language is crucial for AI governance, much as it has been for internet governance. Successful international cooperation in other domains, such as space exploration and environmental conservation, has drawn inspiration from the internet governance community’s model of collective, multistakeholder stewardship of shared resources. However, it is critical to acknowledge that applying the governance models of these relatively technical fields to the complex social, economic, and cultural debates surrounding automation could undermine human rights and civil liberties.
Governance, by its nature, requires setting norms through definitions and principles, yet many of the essential nuances in AI governance extend beyond mere technicalities. A key objective is to involve more countries from the global south in internet governance, a sector where enhanced cooperation has previously faltered. Although internet access remains a pressing concern, governments are now showing a renewed interest in investment, a change from the less engaged stance observed during the period of 2003-05. Focusing on the Sustainable Development Goals (SDGs) rather than solely on automation will likely bring a more diverse group of stakeholders into AI governance, compared to the current composition in internet governance.
It is important to recognize that AI in itself will not solve the challenge of rolling out internet connectivity. Corporate interests, driven by profit and market dependence, often hinder access expansion. The United Nations General Assembly (UNGA) assumes that current governance institutions cannot be developed to be more inclusive, effective, and efficient. This is a crucial issue, as there is no existing multilateral framework for digital cooperation in service of the SDGs, despite efforts like the CSTD and the STI Forum in New York. The introduction of AI has intensified these discussions, which were already contentious within the internet governance community. Governance over technology must be a collaborative effort across multiple stakeholders.
Transforming global governance mechanisms from multilateral to multistakeholder models is another significant goal. This approach would provide civil society and academics with a vital advisory role, functioning as a resource commission to assist industries and states, and could also offer support during emergencies. The scope of ambition for principles, objectives, actions, and actionable deliverables is yet to be defined, particularly in terms of how proposals might differ between the global south and the global north.
Domain- and application-specific AI should be confined to its intended area of application. However, this limitation can significantly narrow market opportunities, especially in developing countries, where data exploitation is a major concern. Schemes for trustworthiness, authenticity, and content provenance have limited scope of application, and those requiring data to be sorted into verified/trusted and unverified/untrusted categories are unlikely to succeed.
Shared responsibility
The stellar achievement of global internet governance has shown that a decentralized approach is necessary. For AI governance, multistakeholder governance envisions a world in which citizens have control over their data, granting permissions as needed, while governments responsibly manage civic duties, meet their social responsibilities with respect and security, and uphold their obligations to human rights.
The ability for an “AI revolution” to achieve Sustainable Development Goals (SDGs) is debated. Countries must not rely solely on AI for sustainable development; broader efforts and substantial investments in people power and institutions are required.
Various intergovernmental governance bodies, such as economic groupings, the WTO, WHO, and UNESCO, are essential in discussions about shared responsibility to govern our digital futures. Coordinating with these organizations maximizes resources and impact, particularly in developing capacity-building frameworks for policymakers in every jurisdiction. However, there is a risk of “governance washing,” where consultation appears robust but occurs at the expense of meaningful action. Governance should proactively address potential harms, emphasizing the decision of “whether” to implement certain AI technologies, not just “how” to do so. Perhaps more immediate and likely is the outcome that, without a mandate or direct application, any high-level principles will simply have little impact.
At the country level, states, as the primary holders of human rights obligations, should play a central role in sustainable development and how AI technologies are implemented to assist. Exploring open-source approaches and high-level validations of best practices could be particularly beneficial. However, such strategies must be tailored to individual country contexts to avoid compromising human rights commitments.
The foundation of AI application in critical sectors must be a strong human rights framework. Without this, AI is likely to exacerbate, rather than alleviate, problems, especially in sectors already violating human rights. Concerns about privacy and security—including data breaches, misuse of AI for surveillance, and other vulnerabilities—must be addressed, as AI, like any tool, has the potential to violate human rights.
Conclusion: The future
Well-worn dystopian predictions of a world governed by AI are plentiful but rarely instructive. Some would argue our imperfect world of today is already assisted by digital technologies that are put to work to maintain inequality between the frictionless existence of the powerful, and the loss of civil liberties and human rights for the rest. The truth will arrive somewhere in the middle.
Technocratic rule of the social compact disempowers people, who are unable to hold accountable the privatized extensions of the state: algorithms that take decisions about their lives.
This is an untenable disempowerment, whose fault line is economic inequality. Privileged classes will face few borders and retain their agency globally. Today, average economic privilege means having one’s needs met without want; in the future, those needs will only be met and delivered with digital technology, leaving the poor fully behind in a state that can no longer provide those in need with services that do not rely on it.
Instead we could invest in technology that supports life on earth sustainably. We should be in partnership with technology in a way that enhances the generative capabilities of humans. We must demand of our governments that problem-solving tech solutions extend, not abdicate, their obligations to our shared social compact.