Leading by Example: Civil Society Use of AI

This week, in our main story, IX's Ramma Shahid Cheema, a feminist media and advocacy expert, discusses how civil society organizations are increasingly turning to AI tools to boost their efficiency in policy, advocacy, and communications, while also confronting the ethical challenges that come with their use.
But first...
Artificial Intelligence, Bias and the Courts
IX’s Mallory Knodel is giving a talk next week at the Michigan Judges Conference on the evolving relationship between artificial intelligence and the justice system, focusing on how courts can respond to the increasing use of AI in decision-making processes.
As AI becomes increasingly embedded in public life, from pretrial risk assessments and bail decisions to housing applications, it is shaping outcomes in ways that raise important questions about bias, fairness, and transparency. Mallory will explore how these systems can reflect and reinforce existing inequalities, and what role courts can play in identifying and addressing those harms.
While the talk isn’t public, you can read the paper it’s based on: Artificial Intelligence and the Courts: Materials for Judges, co-authored with Michael Karanicolas.
For more on how data and automation are reshaping public institutions and governance, I also recommend Mallory’s piece for IX, Shaping AI: How data redefines the obligations and responsibilities to our future.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
This Week's Links
From the Group Chat 👥 💬
This week in our Signal community, we got talking about two main stories:
- The group was pleased to see that NSO Group lost in court to WhatsApp over its role in deploying the Pegasus spyware to target users, marking a significant victory for digital rights and accountability. Below are some of the angles we discussed:
- Access Now traces the fallout from a landmark $168M ruling against NSO Group, spotlighting how spyware built to target activists and journalists finally met accountability in court, and why this case could reshape global surveillance norms. https://www.accessnow.org/press-release/whatsapp-v-nso-case-damages-decision
- The Washington Post notes that, given NSO's $50 million annual R&D budget and recent financial losses, the $167 million damages award is a significant financial burden for the company. https://www.washingtonpost.com/technology/2025/05/06/nso-pegasus-whatsapp-damages
- Courthouse News Service details the courtroom battle’s unique focus on the financial damages awarded and the broader implications for accountability in global surveillance. https://www.courthousenews.com/meta-wins-168-million-in-damages-from-israeli-cyberintel-firm-in-whatsapp-spyware-scandal
- And as Casey Newton points out, it's about more than money. The ruling establishes that spyware companies can be held liable under the Computer Fraud and Abuse Act, and that they can't claim immunity for their actions solely because their clients are government agencies. https://www.platformer.news/whatsapp-nso-group-judgment-spyware
- The Signal saga continues: the group shared several in-depth looks at TeleMessage, the company providing a modified version of Signal to the U.S. government.
- Unicorn Riot reverse engineered the tech stack behind a Signal clone used by U.S. officials, uncovering a bizarre mix of Israeli surveillance software and WordPress plugins. https://unicornriot.ninja/2025/signalgate-meets-wordpress-outgoing-national-security-advisers-phone-dumps-messages-via-israeli-app
- Micah Lee details how TeleMessage SGNL archives Signal messages, including disappearing messages, which may be stored in non-secure locations like Gmail. https://micahflee.com/tm-sgnl-the-obscure-unofficial-signal-app-mike-waltz-uses-to-text-with-trump-officials
- Not surprisingly, given the above, 404 Media found that TeleMessage was hacked, exposing sensitive communications. https://www.404media.co/the-signal-clone-the-trump-admin-uses-was-hacked
- Want to join the conversation? Paid subscribers have access to our Signal community. (Which we promise doesn't use TeleMessage.)
Internet Governance
- The Wikimedia Foundation is suing the UK government over new Online Safety Act rules, warning they could threaten Wikipedia’s integrity and put its volunteers at risk. https://medium.com/wikimedia-policy/wikipedias-nonprofit-host-brings-legal-challenge-to-new-online-safety-act-osa-regulations-0f9153102f29
- The National Institute of Standards and Technology (NIST) has released a draft update to its Secure DNS Deployment Guide (SP 800-81 Rev. 3) and is seeking public comments, addressing new threats and cloud-era challenges to DNS infrastructure security. https://federalnewsnetwork.com/technology-main/2025/04/nist-looks-to-improve-security-of-a-foundation-pillar-of-the-internet
- Straightforwardly named, Third-Party Cookies Must Be Removed argues that third-party cookies harm the web and must be replaced with more secure, purpose-built technologies. https://www.w3.org/2001/tag/doc/web-without-3p-cookies
- A new article explores QUIC's standardization process, highlighting Google's role in shaping the Internet's communication infrastructure and the power dynamics between Big Tech, the Internet industry, and states. https://journals.sagepub.com/doi/10.1177/14614448251336438
- APNIC Chief Scientist Geoff Huston says Bangladesh’s internet future is being held back by low IPv6 adoption, fragmented ISP market structure, and limited routing security deployment. https://www.thedailystar.net/tech-startup/news/bangladeshs-internet-future-hampered-fragmentation-says-apnic-chief-scientist-3882861
- Chinmayi Arun's paper “The Silicon Valley Effect” describes how powerful AI companies shape transnational legal systems to avoid accountability and entrench harm, calling for globally coordinated regulation in the public interest. https://law.stanford.edu/wp-content/uploads/2025/04/SJIL_61-1_Arun.pdf
- Senate Democrats propose banning presidents from hawking crypto after Trump’s meme coin contest promised White House dinners to top holders. https://www.theverge.com/news/662313/meme-coin-stablecoin-genius-act-corruption-trump
- Trump is said to have made millions, both for his campaign and personally through his involvement with the $TRUMP meme coin, which could allow anonymous donors, including potentially foreign nationals, to buy access to him. https://www.cnbc.com/2025/05/05/trump-crypto-memecoin.html
- The U.S. government also registered TheTrillionDollarDinner.Gov and two other domains around the time of the announcement, suggesting a potential connection between Trump's memecoin dinner and official government infrastructure. https://www.404media.co/email/f527199d-c300-4a2e-96ad-6ba89d02aee0
- Over 10,000 public comments on the Trump AI Action Plan reveal Americans are deeply engaged with AI policy, voicing concerns about copyright, labor, and transparency. https://www.compiler.news/trump-ai-action-plan
- The App Store Freedom Act would require Apple and Google to allow third-party app stores and prevent them from blocking developers from using third-party payment systems. https://www.theverge.com/news/662180/app-store-freedom-act-apple-third-party-app-stores
- A new report from Pollicy explores how digital assets are managed and inherited in Africa, revealing how rapid tech adoption in Malawi, Ghana, and Tanzania is colliding with traditional practices and gaps in the evolving legal frameworks. https://pollicy.org/resource/mine-is-yours
- The U.S. antitrust lawsuit against Google now raises concerns that the company could use its search monopoly to dominate the AI space, with potential consequences for competition. https://www.nytimes.com/2025/05/01/technology/google-antitrust-trial-ai.html
- Wikimedia Foundation’s new AI strategy focuses on supporting human volunteers and enhancing Wikipedia's knowledge integrity. https://wikimediafoundation.org/news/2025/04/30/our-new-ai-strategy-puts-wikipedias-humans-first
- A U.S. federal judge questioned Meta's fair use defense for AI training. https://www.pcgamer.com/software/ai/us-federal-judge-on-metas-ai-copyright-fair-use-argument-you-are-dramatically-changing-you-might-even-say-obliterating-the-market-for-that-persons-work
- "Techno-diplomacy" plays a crucial role in managing the global radio-frequency spectrum, where decisions shaped by corporate lobbying and geopolitical interests often undermine public access and control of the internet. https://pitg.network/news/2025/05/02/connectivity-spectrum.html
- At summits and in speeches, African leaders promise to harness AI for development. But without investment in power, connectivity, and people, the continent risks replaying old failures in new code, says Daniel Ekonde. https://africasacountry.com/2025/04/africa-and-the-ai-race
- This paper argues that protecting human autonomy in AI requires focusing on infrastructures, designing for collectives, and adopting realist approaches to empower communities through democratic governance and technological self-determination. https://osf.io/preprints/osf/uaewn_v2
- ARTICLE 19 presents Guess Who!, an interactive tool lifting the lid on which actors hold the most power in the standardization of internet infrastructure. https://www.article19.org/resources/internet-standards-almanac-whos-really-shaping-the-internet
Digital Rights
- Lebanese NGO Social Media Exchange (SMEX) has called on Samsung to stop pre-installing AppCloud, an unremovable, data-harvesting app developed by Israeli firm ironSource, on devices sold in West Asia and North Africa. https://smex.org/open-letter-to-samsung-end-forced-israeli-app-installations-in-the-wana-region
- Reddit announced plans to implement user verification to keep out bots while preserving anonymity. https://techcrunch.com/2025/05/06/reddit-will-tighten-verification-to-keep-out-human-like-ai-bots
- This came after it was found that Reddit users were subjected to an AI-powered experiment without their consent. https://www.newscientist.com/article/2478336-reddit-users-were-subjected-to-ai-powered-experiment-without-consent
- New manifesto from Pollicy says digital technologies are reshaping work across Africa, but many workers face low wages, algorithmic exploitation, and lack of legal protections, especially women and marginalized groups. These challenges are often ignored by current policy frameworks. https://pollicy.org/resource/fair-digital-kazi-manifesto
- Google is rolling out its Gemini AI chatbot to kids under 13. Who wouldn’t want their child chatting with an AI that might get things slightly wrong? https://www.nytimes.com/2025/05/02/technology/google-gemini-ai-chatbot-kids.html
- This paper explores how third-party monitoring in the humanitarian sector inadvertently reinforces power asymmetries. https://journals.sagepub.com/doi/10.1177/20539517251328250
Technology for Society
- The Social Web Foundation’s 2024 Annual Report highlights its progress since launching in September 2024, including key partnerships with Mastodon, Ghost, and Automattic, and contributions to protocol development for a decentralized, user-centric social web. https://socialwebfoundation.org/2025/05/06/reflecting-on-our-first-year-the-social-web-foundations-2024-annual-report
- “Project Ludicrous”: a $500 billion plan by Oracle, OpenAI, and partners to dominate AI through massive U.S. data centers, speculative funding, and nuclear power. https://thebaffler.com/latest/project-ludicrous-northwood
- “On Algorithms and Propaganda”: Renée DiResta, author of Invisible Rulers: The People who Turn Lies into Reality, in conversation with Audrey Borowski, historian and philosopher of science. https://www.youtube.com/watch?v=sQa-ekKmC1s
- Ajeya Cotra and Arvind Narayanan discuss whether AI progress has a natural speed limit, how transfer learning might leapfrog real-world constraints, and if we’ll get enough warning before AI becomes dangerous. https://asteriskmag.com/issues/10/does-ai-progress-have-a-speed-limit
- Ștefan Baghiu’s paper “Loony Platform Politics: The Romanian Far-Right Performance and the Digital Dystopia of 2024” analyzes how platform capitalism shaped viral, gig-like campaigns by far-right candidates in Romania’s presidential race. https://www.tandfonline.com/doi/full/10.1080/25739638.2025.2482400
- In this Dutch-language interview, climate and tech researcher Fieke Jansen warns that the growing energy demands of data centers require urgent societal action to rethink our extractive digital economy. https://www.nrc.nl/nieuws/2025/04/13/we-moeten-anders-nadenken-over-onze-vervuilende-digitale-economie-a4889457
- How The Green Web Foundation’s carbon.txt and the Technology Carbon Standard (TCS) work together to make digital emissions data open, machine-readable, and verifiable across the web. https://www.thegreenwebfoundation.org/news/using-carbon-txt-with-tcs/
- A Neuralink patient with ALS is using Musk’s brain implant and Musk’s AI chatbot, Grok, to communicate, raising questions about authorship, agency, and the merging of brains with branded tech. https://www.technologyreview.com/2025/05/07/1116139/this-brain-implant-gets-a-boost-from-generative-ai
- New report from Pollicy highlights how digital platforms across Africa are reshaping work, with a focus on the inequalities faced by workers, especially women and marginalized groups. https://pollicy.org/resource/2024-datafest-africa-report
- If you've used GPT-4o recently, you may have noticed it was in full-on kiss-up mode. It wasn't intentional, says OpenAI. https://openai.com/index/expanding-on-sycophancy
- ICYMI, this is a known problem where human feedback (RLHF) can lead AI models to prioritize responses that align with a user’s beliefs, sometimes at the expense of truthfulness. https://www.anthropic.com/research/towards-understanding-sycophancy-in-language-models
- Casey Newton points out that this trend risks deepening misinformation and user manipulation. https://www.platformer.news/meta-ai-chatgpt-glazing-sycophancy
- The Protocol Oral History Project shares the stories of protocol artists who shape rules, standards, and norms in areas like technology, diplomacy, Indigenous traditions, and radical subcultures. https://protocol.ecologies.info
- Western donors are cutting budgets, but the aid model they built—rooted in control, dependency, and depoliticization—still shapes Africa's development, says writer, researcher, and PhD student Marjorie Namara Rugunda. https://africasacountry.com/2025/04/as-aid-ends-empire-endures
- What 2,000 years of Chinese history reveals about today’s AI-driven technology panic and the future of inequality. https://theconversation.com/what-2-000-years-of-chinese-history-reveals-about-todays-ai-driven-technology-panic-and-the-future-of-inequality-254505
- Arvind Narayanan and Sayash Kapoor argue that AI should be seen as a normal technology, like electricity or the internet, rather than as a separate, superintelligent entity, challenging both utopian and dystopian views. https://knightcolumbia.org/content/ai-as-normal-technology
- The Social Web Foundation’s Long-form Text project is working to improve the representation of multi-paragraph text across the Fediverse, addressing issues with misformatted or abbreviated text on platforms like Mastodon and Threads. https://socialwebfoundation.org/2025/05/01/steps-forward-in-long-form-text
Privacy and Security
- Researchers use reverse engineering and formal modeling to evaluate how secure WhatsApp’s multi-device group messaging really is, highlighting both its protections and its potential gaps. https://eprint.iacr.org/2025/794
- Something to share with friends who ask, “What’s the big deal with encryption backdoors?”—this primer from Internet Society breaks it down, with plant-watering analogies and all. https://www.internetsociety.org/blog/2025/05/what-is-an-encryption-backdoor
- The curl project says it’s under siege by AI-generated “slop” vulnerability reports that waste time, clog bug bounty systems, and threaten open source security workflows. https://arstechnica.com/gadgets/2025/05/open-source-project-curl-is-sick-of-users-submitting-ai-slop-vulnerabilities
Upcoming Events
- Green IO New York offers actionable insights on tech sustainability and hands-on perspectives from experts scaling green IT. May 14-15. New York, NY. https://www.w3.org/events/conferences/2025/green-io-new-york
- Digital Resilience Forum explores Canada's role in internet freedom and cybersecurity with keynote addresses, expert panels, and a technology showcase. May 21, 12:30pm ET. Ottawa, Canada. https://equalit.ie/digital-resilience-forum
- Hands-on DSA workshop explores official APIs and independent tools like SOAP for auditing systemic risks on major platforms, blending role-play with legal and ethical insights. May 22, 5:20pm CEST. Lausanne, Switzerland. https://www.cpdpconferences.org/workshops/auditing-social-media-platforms-public-non-public-and-alternative-data-access-methods-under-the-dsa-gdpr-for-public-interest-research
- The Democracy Alive Summit will explore the intersection of digital technologies and democracy, and the profound implications for the EU's fundamental values. May 22, 9:00am. Brussels, Belgium. https://europeanmovement.eu/event-list/democracy-alive-2025
- Estimating digital carbon emissions workshop from Green Web Foundation. June 12, 3:00pm CET. Online. https://pretix.eu/greenweb/ws-12-06-25
- Applications are open until May 31st for the 2025 Global Gathering. It will bring together civil society, technologists, cybersecurity experts, and policy advocates to discuss urgent digital rights challenges. September 8–10. Estoril, Portugal. https://www.digitalrights.community/blog/2025-global-gathering-application
- Tickets are now available for MozFest Barcelona. This year’s theme is Unlearning. November 7-9. Barcelona, Spain. https://www.mozillafestival.org
Careers and Opportunities
- UNICEF Venture Fund offers up to US$100K for early-stage startups using AI, data science, or blockchain to improve healthcare and socio-economic participation for women and girls. Deadline: May 8. https://www.unicef.org/innovation/FemTechCall
- The Ford Foundation and Omidyar Network are funding applied research that explores the link between responsible technology practices—particularly in AI—and financial performance. Apply by May 16. https://www.fordfoundation.org/wp-content/uploads/2025/04/Ford-ON-RFP-April-2025-Tech-Capital-Markets.pdf
- Submit a proposal for MozFest Barcelona. This year’s theme is Unlearning. Submit your proposal by May 21. https://pretalx.com/mozilla-festival-2025/cfp
- Three fully-funded interdisciplinary PhD positions on governance by data infrastructure in Amsterdam. Apply by May 31. https://www.academictransfer.com/en/jobs/351725/three-fully-funded-interdisciplinary-phd-positions-on-governance-by-data-infrastructure
- Submit a concept note for the Spyware Accountability Initiative. This initiative seeks proposals from civil society organizations, journalists, and others working to limit spyware proliferation or engage in regulatory, litigation, or investigative efforts. Submit by June 13. https://stopspyware.fund/apply
- New_Public seeks local partners to co-design a new social product for local communities, aiming to improve online public spaces. Interested communities can collaborate with New_Public on pilot projects to enhance local digital stewardship. https://newpublic.substack.com/p/exploring-a-new-social-product-for
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.
Civil Society: Leading by Example on AI
By Ramma Shahid Cheema
Civil society sits in a dual role: it is both a user of artificial intelligence tools and an advocate for responsible AI governance. This gives it a unique opportunity to lead by example when it implements these new technologies. As organizations turn to AI to speed up workflows, from analyzing policy documents to generating campaign content, they also confront the ethical risks that come from these systems: bias, opaque decision-making, and the potential to reinforce structural inequalities like gender and racial discrimination.
Civil society groups increasingly use AI tools to boost efficiency in policy, advocacy, and communications. However, this accelerated adoption may be outpacing thoughtful understanding of the tradeoffs. Advocates now face urgent questions:
- Does the pursuit of efficiency undermine trust and depth?
- How do civil society organizations (CSOs) address the stark ethical and equity concerns surrounding AI implementation in advocacy work?
Civil society plays a crucial role in shaping public discourse, especially around justice, rights, and equity, and there is a growing consensus that it must lead by example in modeling ethical AI use while also pushing for equitable governance frameworks.
Practical Uses of AI in Civil Society Organizations
Document Analysis: Making Sense of Complex Information
Many civil society organizations no longer rely solely on manual analysis of documents and meeting transcripts. These organizations report comfort with using AI for reading, summarization, and document analysis, tools that can reduce the burden on small or under-resourced teams.
AI helps organizations process:
- Legislative proposals and regulatory documents
- Meeting transcripts and public hearing records
- Research papers and technical reports
- Stakeholder position papers and public comments
Policy teams are also using AI tools to model complex policy areas such as environmental or economic reform. Large language models (LLMs) can generate scenarios to evaluate the potential impacts of proposed policies, helping to identify both intended and unintended consequences. This approach supports more informed advocacy and policy development by letting teams assess the likely effects of interventions before they are implemented.
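To make the document-analysis workflow concrete, here is a minimal sketch of LLM-assisted summarization. It assumes the OpenAI Python client purely for illustration; any hosted or local model would work similarly, and the model name and prompt are placeholders, not recommendations.

```python
# A minimal sketch of LLM-assisted summarization, assuming the official
# OpenAI Python client (pip install openai). Any hosted or local model
# would work similarly; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarize_document(text: str, audience: str = "policy advocates") -> str:
    """Draft a short summary of a policy document for a given audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; pick per your org's AI policy
        messages=[
            {"role": "system",
             "content": f"You summarize policy documents for {audience}. "
                        "Flag any passages you are unsure about."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Usage: feed in a legislative proposal, then review the draft by hand.
# print(summarize_document(open("draft_regulation.txt").read()))
```

The key design choice is the one stressed throughout this piece: the output is a draft for a human to review, not a finished product.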
Monitoring Public Sentiment
Tools like Meltwater, Cision, or Talkwalker use natural language processing (NLP) to monitor news cycles, track public sentiment, and detect emerging issues. These tools have made real-time media monitoring faster, more scalable, and more accessible to smaller organizations, which can now identify opportunities for intervention.
But algorithmic misinterpretation remains a risk, especially when issues involve marginalized communities or nuanced language. Without human review, these errors can go unchecked.
The Center for Democracy & Technology's (CDT) report on AI & Machine Learning explains that algorithms often replicate past biases and can perpetuate racial, economic, and social prejudices.
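Commercial monitoring platforms are proprietary, but the underlying classify-then-review step can be sketched with open-source parts. The sketch below assumes the Hugging Face transformers library; the 0.8 confidence threshold is an arbitrary illustration of routing uncertain cases, like sarcasm or nuanced language, to a person rather than trusting the algorithm.

```python
# A minimal sketch of sentiment triage with a human-review gate, assuming
# the Hugging Face transformers library (pip install transformers).
# The 0.8 confidence threshold is arbitrary and purely illustrative.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

def triage_mentions(mentions: list[str], threshold: float = 0.8):
    """Label mentions automatically, routing low-confidence ones to a human."""
    auto, needs_review = [], []
    for text, result in zip(mentions, classifier(mentions)):
        if result["score"] >= threshold:
            auto.append((text, result["label"]))
        else:
            needs_review.append(text)  # nuanced or sarcastic language
    return auto, needs_review

auto, review = triage_mentions([
    "The new housing policy is a huge win for our community.",
    "Sure, another 'consultation'. We've seen how those go.",  # sarcasm
])
print(f"{len(review)} mention(s) flagged for human review")
```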
Boosting Advocacy and Raising the Stakes
AI now plays a growing role in civil society advocacy: from analyzing complex policy proposals to generating initial response frameworks, scenario planning, and outcome forecasting. AI can aid in developing personalized campaigns and advocacy messages, and translation tools have removed significant barriers to cross-border collaboration. However, as the Ada Lovelace Institute notes, these tools require careful review to avoid cultural mistranslations on sensitive topics.
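In the same spirit, translation can be wired so that sensitive topics are never published without a bilingual reviewer, addressing the Ada Lovelace Institute's caution above. This sketch assumes a Helsinki-NLP model via the transformers library; the watchlist of terms is purely illustrative.

```python
# A minimal sketch of translation with a review flag for sensitive topics,
# assuming the Hugging Face transformers library and a Helsinki-NLP model
# (pip install transformers sentencepiece). The watchlist is illustrative.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

SENSITIVE_TERMS = {"gender", "asylum", "indigenous"}  # hypothetical list

def translate_with_review(text: str) -> tuple[str, bool]:
    """Translate text and flag it if it touches topics needing human review."""
    translation = translator(text)[0]["translation_text"]
    needs_review = any(term in text.lower() for term in SENSITIVE_TERMS)
    return translation, needs_review

translated, flag = translate_with_review(
    "Our coalition supports gender-responsive budgeting."
)
if flag:
    print("Route to a bilingual reviewer before publishing:", translated)
```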
AI Disruption in Civil Society Organizations
Advocates for ethical AI believe it should serve the common good, but that raises an important question: who decides what "good" looks like? Civil society leaders are also grappling with two further questions: who is accountable for content produced by AI, and what safeguards are needed to ensure these tools uphold human rights?
These questions are especially relevant in the context of coalition building and public engagement, where organizations must communicate complex policy positions to diverse audiences while representing the interests of the communities they serve.
But deciding what "good AI" looks like isn't happening on equal ground. The people and institutions with the most resources, infrastructure, and technical fluency are often the ones shaping the narrative, building the tools, and setting the norms. When smaller or under-resourced organizations can't access these same tools or influence how they're developed, they risk being excluded from the decisions that matter most.
The growing "AI capability gap" between well-resourced international NGOs and smaller community-based organizations threatens to further concentrate power among already privileged institutions. Providing access to AI training and literacy can help close the gap, but the ongoing funding crises, especially in the Global South, make it difficult for many organizations to keep up. Rachel Adams, the CEO of the Global Centre on AI Governance and author of The New Empire of AI: The Future of Global Inequality, notes that the cost of closing the skill gap may be too high for struggling economies to bear.
If civil society is to remain a credible advocate for equity and rights-based AI governance, it must also embody those principles in how it adopts and applies these technologies. Modeling inclusive, transparent AI use isn’t just a best practice; it’s an imperative.
What Now? Guidance for Civil Society Organizations
As mentioned earlier, the capability gaps between civil society organizations require tailored approaches to ensure equity. The widening digital and AI adoption divide risks excluding marginalized groups and under-resourced communities, reinforcing the very power imbalances many CSOs seek to challenge.
I recommend CSOs implement the following strategies to bridge the capability gap while maintaining their advocacy integrity:
- Start small: If you're an organization facing resource or skills barriers, begin with low-risk, clearly scoped AI use cases like summarization or translation. Join coalitions or peer networks to share tools, training, and support, especially if resources are limited.
- Establish clear usage policies: Organizations should develop guidelines that define when and how AI tools can be used, outlining acceptable use cases, approval processes, and accountability structures aligned with organizational values (see the sketch after this list for one way to encode such a policy).
- Develop an internal AI integration policy: Establish a clear, values-aligned framework for how AI tools are adopted and governed across the organization. This should include ethical safeguards, oversight, and alignment with community needs and values to ensure AI reinforces rather than undermines the organization’s mission.
- Prioritize human oversight: AI should support, not replace, professional judgment and community voices. Whether drafting advocacy materials, monitoring sentiment, or responding to policy developments, human review remains critical.
- Invest in staff training: Teams need practical, ongoing training in the ethical and strategic use of AI. This includes understanding the limitations of current tools, recognizing bias, and knowing when to escalate to a human.
- Disclosure and transparency: Advocates should proactively discuss AI usage with coalition partners and stakeholders, especially in contexts where synthetic content is involved.
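One way to keep a usage policy operational rather than aspirational is to encode it in a form internal tools can check. Below is a minimal sketch; every use case and rule in it is a hypothetical example, not a recommendation.

```python
# A minimal sketch of an AI usage policy encoded as data so internal tools
# can check proposed uses against it. All use cases and rules here are
# hypothetical examples, not recommendations.
from dataclasses import dataclass

@dataclass
class UsePolicy:
    allowed: bool
    requires_human_review: bool
    requires_disclosure: bool

POLICY = {
    "summarization": UsePolicy(allowed=True, requires_human_review=True,
                               requires_disclosure=False),
    "translation": UsePolicy(allowed=True, requires_human_review=True,
                             requires_disclosure=True),
    "fundraising_appeals": UsePolicy(allowed=False, requires_human_review=True,
                                     requires_disclosure=True),
}

def check_use(use_case: str) -> str:
    """Report whether a proposed AI use is approved and under what conditions."""
    policy = POLICY.get(use_case)
    if policy is None or not policy.allowed:
        return f"'{use_case}': not approved; escalate to the AI policy lead."
    conditions = []
    if policy.requires_human_review:
        conditions.append("human review required")
    if policy.requires_disclosure:
        conditions.append("disclose AI use to partners")
    return f"'{use_case}': approved ({'; '.join(conditions) or 'no conditions'})."

print(check_use("translation"))
print(check_use("fundraising_appeals"))
```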
As AI adoption accelerates, civil society must approach these tools with both openness to innovation and a critical eye. The real question isn't just how CSOs can use AI more effectively, but how they can use it in ways that uphold democracy and human dignity. How will civil society not only adopt AI ethically in its own work, but also step up to shape AI governance systems that reflect our highest ideals rather than merely our efficiency imperatives?