Leading by Example: Civil Society Use of AI


This week, in our main story, IX's Ramma Shahid Cheema, a feminist media and advocacy expert, discusses how civil society organizations are increasingly turning to AI tools to boost their efficiency in policy, advocacy, and communications, while also confronting the ethical challenges that come with their use.

But first...

Artificial Intelligence, Bias and the Courts

IX’s Mallory Knodel is giving a talk next week at the Michigan Judges Conference on the evolving relationship between artificial intelligence and the justice system, focusing on how courts can respond to the increasing use of AI in decision-making processes.

As AI becomes increasingly embedded in public life, from pretrial risk assessments and bail decisions to housing applications, it is shaping outcomes in ways that raise important questions about bias, fairness, and transparency. Mallory will explore how these systems can reflect and reinforce existing inequalities, and what role courts can play in identifying and addressing those harms.

While the talk isn’t public, you can read the paper it’s based on: Artificial Intelligence and the Courts: Materials for Judges, co-authored with Michael Karanicolas.

For more on how data and automation are reshaping public institutions and governance, I also recommend Mallory’s piece for IX, Shaping AI: How data redefines the obligations and responsibilities to our future.

Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber

From the Group Chat 👥 💬

This week in our Signal community, we got talking about two main stories: 

  1. The group was pleased to see that NSO Group lost its court case to WhatsApp over its role in deploying the Pegasus spyware to target users, a significant victory for digital rights and accountability.
  2. The Signal saga continues: the group shared several in-depth looks at TeleMessage, the company providing a modified version of Signal to the U.S. government.

Internet Governance

Digital Rights

Technology for Society

Privacy and Security

Upcoming Events

Careers and Opportunities

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

💡
Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn.

Civil Society: Leading by Example on AI

By Ramma Shahid Cheema

Civil society sits in a dual role: it is both a user of artificial intelligence tools and an advocate for responsible AI governance. This gives it a unique opportunity to lead by example when it implements these new technologies. As organizations turn to AI to speed up workflows, from analyzing policy documents to generating campaign content, they also confront the ethical risks that come from these systems: bias, opaque decision-making, and the potential to reinforce structural inequalities like gender and racial discrimination.

Civil society groups increasingly use AI tools to boost efficiency in policy, advocacy, and communications. However, this accelerated adoption may be outpacing thoughtful understanding of the tradeoffs. Advocates now face urgent questions: 

Does the pursuit of efficiency undermine trust and depth? 

How do civil society organizations (CSOs) address the stark ethical and equity concerns surrounding AI implementation in advocacy work?

Civil society plays a crucial role in shaping public discourse, especially around justice, rights, and equity, and there is a growing consensus that it must lead by example in modeling ethical AI use while also pushing for equitable governance frameworks.

Practical Uses of AI in Civil Society Organizations

Document Analysis: Making Sense of Complex Information

Many civil society organizations no longer rely solely on manual analysis of documents and meeting transcripts. These organizations report comfort using AI for reading, summarization, and document analysis, tasks where the tools can reduce the burden on small or under-resourced teams.

AI helps organizations process:

  • Legislative proposals and regulatory documents
  • Meeting transcripts and public hearing records
  • Research papers and technical reports
  • Stakeholder position papers and public comments

Policy teams are also using AI tools to explore complex policy areas like environmental or economic reform. Large language models (LLMs) can generate scenarios to evaluate the potential impacts of proposed policies, helping to identify both intended and unintended consequences. This approach supports more informed advocacy and policy development by enabling teams to assess the likely effectiveness of interventions before they are implemented.
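To make this concrete, here is a minimal sketch of the document-summarization workflow described above. It assumes the `openai` Python package and an API key in the environment; the model name, prompt, and filename are illustrative, and other providers work much the same way.

```python
# Minimal sketch: summarizing a policy document with an LLM.
# Assumes the `openai` package and OPENAI_API_KEY in the environment;
# the model name, prompt, and filename are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_policy_document(text: str) -> str:
    """Ask the model for a short, plain-language summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable model works
        messages=[
            {"role": "system",
             "content": "Summarize policy documents in plain language, "
                        "flagging provisions that affect marginalized groups."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


with open("legislative_proposal.txt") as f:  # hypothetical file
    # Very long documents may need to be split into chunks first.
    print(summarize_policy_document(f.read()))
```

The same pattern extends to meeting transcripts, hearing records, and public comments; the main design choice is keeping a staff member responsible for checking each summary against the source.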

Monitoring Public Sentiment

Tools like Meltwater, Cision, and Talkwalker use natural language processing (NLP) to monitor news cycles, track public sentiment, and detect emerging issues. These tools have made real-time media monitoring faster, more scalable, and more accessible to smaller organizations, which can now identify opportunities for intervention.

But algorithmic misinterpretation remains a risk, especially when issues involve marginalized communities or nuanced language. Without human review, these errors can go unchecked.
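The commercial platforms above are closed, but the underlying human-in-the-loop pattern is easy to see with open-source tools. Below is a minimal sketch using Hugging Face's transformers library; the 0.8 confidence threshold is an arbitrary assumption, and a real deployment would route sensitive topics to review regardless of score.

```python
# Minimal sketch: sentiment tagging with a human-review queue.
# Uses the open-source Hugging Face `transformers` library; the 0.8
# confidence threshold is an arbitrary assumption for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

mentions = [
    "The new housing bill finally gives tenants a voice.",
    "Great, another 'reform' that quietly guts legal aid.",  # sarcasm
]

for text in mentions:
    result = classifier(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    if result["score"] < 0.8:
        # Low model confidence: send to a human instead of acting on it.
        print(f"REVIEW ({result['label']}, {result['score']:.2f}): {text}")
    else:
        print(f"AUTO   ({result['label']}, {result['score']:.2f}): {text}")
```

Note that sarcasm like the second mention can still be misclassified with high confidence, which is exactly why confidence thresholds alone are not a substitute for human review.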

The Center for Democracy & Technology's (CDT) report on AI & Machine Learning explains that algorithms often replicate past biases and can perpetuate racial, economic, and social prejudices.

Boosting Advocacy and Raising the Stakes

AI now plays a growing role in civil society advocacy: from analyzing complex policy proposals to generating initial response frameworks, scenario planning, and outcome forecasting. AI can aid in developing personalized campaigns and advocacy messages, and translation tools have removed significant barriers to cross-border collaboration. However, as the Ada Lovelace Institute notes, these tools require careful review to avoid cultural mistranslations on sensitive topics. 
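One lightweight review aid for translated campaign material is a round-trip ("back-translation") check: translate the text out and back, and flag strings that drift. Here is a minimal sketch using open-source Helsinki-NLP models via transformers; the models and the crude word-overlap score are illustrative assumptions, an aid to, not a replacement for, review by native speakers.

```python
# Minimal sketch: round-trip ("back-translation") check that flags
# machine translations for human review. Models and the 0.6 threshold
# are illustrative assumptions.
from transformers import pipeline

to_es = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")


def round_trip_check(text: str, threshold: float = 0.6) -> None:
    spanish = to_es(text)[0]["translation_text"]
    back = to_en(spanish)[0]["translation_text"]
    # Crude drift score: fraction of original words surviving the trip.
    orig, rt = set(text.lower().split()), set(back.lower().split())
    overlap = len(orig & rt) / len(orig)
    status = "OK    " if overlap >= threshold else "REVIEW"
    print(f"{status} overlap={overlap:.2f} | {text!r} -> {spanish!r}")


round_trip_check("Our coalition demands a seat at the table.")
round_trip_check("This bill throws tenants under the bus.")  # idiom-heavy
```

Idiom-heavy lines like the second example tend to drift the most, which is where the cultural mistranslations the Ada Lovelace Institute warns about usually hide.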

AI Disruption in Civil Society Organizations

Advocates for ethical AI believe it should serve the common good, but that raises an important question: Who decides what "good" looks like? Civil society leaders must also answer two critical questions: who is accountable for content produced by AI, and what safeguards are needed to ensure these tools uphold human rights?

These questions are especially relevant in the context of coalition building and public engagement, where organizations must communicate complex policy positions to diverse audiences while representing the interests of the communities they serve.

But deciding what "good AI" looks like isn't happening on equal ground. The people and institutions with the most resources, infrastructure, and technical fluency are often the ones shaping the narrative, building the tools, and setting the norms. When smaller or under-resourced organizations can't access these same tools or influence how they're developed, they risk being excluded from the decisions that matter most.

The growing "AI capability gap" between well-resourced international NGOs and smaller community-based organizations threatens to further concentrate power among already privileged institutions. Providing access to AI training and literacy can help close the gap, but the ongoing funding crises, especially in the Global South, make it difficult for many organizations to keep up. Rachel Adams, the CEO of the Global Centre on AI Governance and author of The New Empire of AI: The Future of Global Inequality, notes that the cost of closing the skill gap may be too high for struggling economies to bear.

If civil society is to remain a credible advocate for equity and rights-based AI governance, it must also embody those principles in how it adopts and applies these technologies. Modeling inclusive, transparent AI use isn’t just a best practice; it’s an imperative. 

What Now? Guidance for Civil Society Organizations

As mentioned earlier, the differences and capability gaps between civil society organizations require tailored approaches to ensure equity. The widening digital and AI adoption divide risks excluding marginalized groups and under-resourced communities, reinforcing the power imbalances many CSOs seek to challenge.

I recommend CSOs implement the following strategies to bridge the capability gap while maintaining their advocacy integrity:

  • Start small: If you're an organization facing resource or skills barriers, begin with low-risk, clearly scoped AI use cases like summarization or translation. Join coalitions or peer networks to share tools, training, and support, especially if resources are limited.
  • Establish clear usage policies: Organizations should develop guidelines that define when and how AI tools can be used, outlining acceptable use cases, approval processes, and accountability structures aligned with organizational values.
  • Develop an internal AI integration policy: Establish a clear, values-aligned framework for how AI tools are adopted and governed across the organization. This should include ethical safeguards, oversight, and alignment with community needs and values to ensure AI reinforces rather than undermines the organization’s mission.
  • Prioritize human oversight: AI should support, not replace, professional judgment and community voices. Whether drafting advocacy materials, monitoring sentiment, or responding to policy developments, human review remains critical.
  • Invest in staff training: Teams need practical, ongoing training in the ethical and strategic use of AI. This includes understanding the limitations of current tools, recognizing bias, and knowing when to escalate to a human.
  • Disclosure and transparency: Advocates should proactively discuss AI usage with coalition partners and stakeholders, especially in contexts where synthetic content is involved.

As AI adoption accelerates, civil society must approach these tools with both openness to innovation and a critical eye. The real question isn’t just how CSOs can use AI more effectively, but how they can use it in ways that uphold democracy and human dignity. How will civil society not only adopt AI ethically in its own work but also step up to shape AI governance systems that reflect our highest ideals rather than merely our efficiency imperatives?

💡
Please forward and share!

Subscribe to Internet Exchange

Don’t miss out on the latest issues. Sign up now to get access to the library of members-only issues.
Subscribe