AI, Power and Governance


This week, in our main story, Internet Exchange's Mallory Knodel examines how artificial intelligence is reshaping governance, arguing that while AI promises efficiency, it often deepens societal inequalities. She explores the risk that AI reinforces the power of the privileged while diminishing civil liberties for others, and calls for more inclusive, multistakeholder approaches to ensure technology serves the public good.

But first...

Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber

Internet Governance

Digital Rights

Technology and Society

Privacy and Security

Upcoming Events

Careers and Funding Opportunities

Opportunities to Participate

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.


Shaping AI: How data redefines the obligations and responsibilities to our future

By Mallory Knodel

"AI is the future" reflects a belief that data will be a key driver of future developments and changes in society, the economy, and daily life. In that sense, AI is the past and AI is the present, too. It is perhaps far more interesting to outline the shape of our collective futures not in terms of what roles the technology will have, but in terms of how it proposes to change our own roles, responsibilities and relationships to a future society, future economies and our own future lives. The following essay begins by narrowing the scope of AI technology to its practical applications in society, and it ends with an analysis of obligations and responsibilities of modern governance in an AI future.

Introduction

In the 2021 book What We Owe Each Other: A New Social Contract for a Better Society, author Minouche Shafik predicts that automation will not replace jobs but change them, adding that “people who have skills that are complementary to robots will fare the best. Those complementary skills include things like creativity, emotional intelligence and an ability to work with people.” If automation will have this impact on individuals, what will it do to government and governance?

Even at its best, automation cannot replace governmental institutions any more than it can replace human creativity and emotional intelligence. At its most mediocre, and most prone to failure, is the AI deployed to replace government functions in the name of austerity. An overall programme of governance, say extending digital identity to all citizens, requires national budgets to make an upfront investment, not to expect a windfall.

At its worst, AI serves as a cover for power. Perceptual AI applications, wielded by the state, can conceivably create pretexts and excuses for all manner of human rights abuses, just as we have seen from the private sector. AI that “generates” will always be error-prone, especially for the most marginalised. These vectors for failure occur not just at the national level but globally, perpetuating neocolonial dominance. For example, when nearly all of Africa's internal internet traffic still transits through Europe, it creates a dependency that limits the continent's digital sovereignty and control over its own data, potentially exacerbating existing global inequalities and power imbalances.

Seeing is not knowing

AI encompasses a range of automation techniques, including computer vision and natural language processing. With the latest developments in machine learning, the latter has gained prominence in the era of big data and neural networks and is referred to as “generative AI,” explained in the next section. Machine learning grew from the need to manage large data systems and to predict outcomes in complex domains like financial investment, supply chains and environmental management, ostensibly with the aim of influencing outcomes and taking action. Today there is more data than ever, and that is not a coincidence: data is what fuels modern-day automation. Modern ML systems leverage vast computational infrastructure to process extensive data, require ever more data to improve their models, and have hastened datafication and digitalisation in every sector. AI has not, however, developed without its discontents along the way: scholars have long pointed out the implications of increasing automation and computerisation for society. These critiques highlight the human aspect of AI, particularly in design and supervision, where biases can be inadvertently introduced.

That governments are now turning to data-driven methods comes as no surprise. In the current landscape, datafication is central to value creation, with national strategies and budgets increasingly relying on AI's perceived societal benefits. The effectiveness of ML models largely depends on the quantity of data they process, operating under the assumption that more data leads to more accurate outputs. These models translate quantified phenomena of the world, aiming to approximate truth based on the volume of knowledge (data) they process. This perspective, influenced by epistemic cultures, assumes that a comprehensive understanding of the world can be achieved with sufficient data. Yet, the primary contribution of modern ML AI lies in its ability to discern patterns from vast unstructured data, with the improvement of its "sense-making" ability hinging on the availability of more data.

UN member states, failing to meet the Sustainable Development Goals (SDGs), see AI as a potential savior, hoping it will aid in their development. However, this overlooks the trade-offs for citizens' rights and civil liberties. More unfortunately, governments investing in AI solutions to sustainable development are also neglecting the people power that the SDGs actually require. High-level UN efforts are not only attempts to build capacity and create guardrails, but also to give member states some political cover, in the form of governance, for the provision of neocolonial multinational services (digital public infrastructure, or DPI). The biggest beneficiaries of more widespread capacity to collect civic data within well-defined compliance parameters are the existing big tech companies, none of whom has established a suitable human rights track record.

Without stronger institutions, a datafied and digitized approach to governance is merely a cost-effective derivative of the state administered by unaccountable, all-seeing megacorporations.

Generative is extractive

Generative artificial intelligence involves the extrapolation, interpolation, and completion of patterns, whether real or imagined, found in large datasets. The success or failure of generative AI, much like perceptual AI, largely depends on both the quantity and quality of the real-world data available to the AI model. This relationship underscores how generative AI is inherently extractive, involving data collection about individuals, group behavior, records, and various social and economic "signals," including texts, speech, music, images, and art.

Embedded within current generative AI tools, which have garnered significant attention, are complex intellectual property rights issues. Additionally, generative AI's potential for misuse, particularly in ways that undermine trust in the information ecosystem, is a significant concern. Social issues such as bias and fairness in machine learning model decisions and actions pose technical challenges, especially when the available large datasets at best reflect our biased and unfair world.

These issues become particularly pertinent when AI is applied to governance. The widespread creation and dissemination of misinformation can erode trust in institutions, and efforts to control the flow of information risk becoming repressive. When states employ automation for tasks like delivering social services or managing civil rights processes, such as voting and identity verification, the inherent biases in generative AI call for enhanced transparency and accountability in governance mechanisms.

Governance obligations

Establishing a set of principles and a common language is crucial for AI governance, much as it has been for internet governance. Successful international cooperation in other domains, such as space exploration and environmental conservation, has drawn inspiration from the internet governance community’s model of collective, multistakeholder stewardship of shared resources. However, it is critical to acknowledge that applying the governance models of these relatively technical fields to the complex social, economic, and cultural debates surrounding automation could undermine human rights and civil liberties.

Governance, by its nature, requires setting norms through definitions and principles, yet many of the essential nuances in AI governance extend beyond mere technicalities. A key objective is to involve more countries from the global south in internet governance, a sector where enhanced cooperation has previously faltered. Although internet access remains a pressing concern, governments are now showing a renewed interest in investment, a change from the less engaged stance observed during the period of 2003-05. Focusing on the Sustainable Development Goals (SDGs) rather than solely on automation will likely bring a more diverse group of stakeholders into AI governance, compared to the current composition in internet governance.

It is important to recognize that AI in itself will not solve the challenge of rolling out internet connectivity. Corporate interests, driven by profit and market dependence, often hinder access expansion. The United Nations General Assembly (UNGA) assumes that current governance institutions cannot be developed to become more inclusive, effective, and efficient. This is a crucial issue, as there is no existing multilateral framework for digital cooperation in service of the SDGs, despite efforts like the CSTD and the STI Forum in New York. The introduction of AI has intensified these discussions, which were already contentious within the internet governance community. Governance over technology must be a collaborative effort across multiple stakeholders.

Transforming global governance mechanisms from multilateral to multistakeholder models is another significant goal. This approach would provide civil society and academics with a vital advisory role, functioning as a resource commission to assist industries and states, and could also offer support during emergencies. The scope of ambition for principles, objectives, actions, and actionable deliverables is yet to be defined, particularly in terms of how proposals might differ between the global south and the global north.

Domain- and application-specific AI should be confined to its intended area of application. However, this limitation can significantly narrow market opportunities, especially in developing countries, where data exploitation is a major concern. Schemes addressing trustworthiness, authenticity, and content provenance have limited scope of application, and those requiring data to be sorted into verified/trusted and unverified/untrusted categories are unlikely to succeed.

Shared responsibility

The stellar achievement of global internet governance has shown that a decentralized approach is necessary. For AI governance, multistakeholder governance envisions a world in which citizens have control over their data, granting permissions as needed, while governments responsibly manage civic duties, meet their social responsibilities with respect and security, and uphold their obligations to human rights.

Whether an “AI revolution” can achieve the SDGs is debated. Countries must not rely solely on AI for sustainable development; broader efforts and substantial investments in people power and institutions are required.

Various intergovernmental governance bodies, such as economic groupings, the WTO, WHO, and UNESCO, are essential in discussions about shared responsibility to govern our digital futures. Coordinating with these organizations maximizes resources and impact, particularly in developing capacity-building frameworks for policymakers in every jurisdiction. However, there is a risk of "governance washing," where consultation appears robust but comes at the expense of meaningful action. Governance should proactively address potential harms, emphasizing the decision of "whether" to implement certain AI technologies, not just "how" to do so. Perhaps more immediate, and more likely, is the outcome that, without a mandate or direct application, any high-level principles will simply have little impact.

At the country level, states, as the primary holders of human rights obligations, should play a central role in sustainable development and in deciding how AI technologies are implemented to assist it. Exploring open-source approaches and high-level validation of best practices could be particularly beneficial. However, such strategies must be tailored to individual country contexts to avoid compromising human rights commitments.

The foundation of AI application in critical sectors must be a strong human rights framework. Without this, AI is likely to exacerbate, rather than alleviate, problems, especially in sectors already violating human rights. Concerns about privacy and security—including data breaches, misuse of AI for surveillance, and other vulnerabilities—must be addressed, as AI, like any tool, has the potential to violate human rights.

Conclusion: The future

Well-worn dystopian predictions of a world governed by AI are plentiful but rarely instructive. Some would argue our imperfect world of today is already assisted by digital technologies that are put to work to maintain inequality between the frictionless existence of the powerful, and the loss of civil liberties and human rights for the rest. The truth will arrive somewhere in the middle.

Technocratic rule of the social compact disempowers people, who are unable to hold accountable the privatized extensions of the state: algorithms that take decisions about their lives.

This is an untenable disempowerment, and its fault line is economic inequality. Privileged classes will face few borders and retain their agency globally. Today, average economic privilege means having one's needs met without want; in the future, those needs can only be met and delivered with digital technology, leaving the poor fully behind in a state that can no longer provide those in need with services that do not rely on digital technology.

Instead we could invest in technology that supports life on earth sustainably. We should be in partnership with technology in a way that enhances the generative capabilities of humans. We must demand of our governments that problem-solving tech solutions extend, not abdicate, their obligations to our shared social compact.

💡 Please forward and share!

Subscribe to Internet Exchange

Don’t miss out on the latest issues. Sign up now to get access to the library of members-only issues.