Big Tech Redefined the Open Internet to Serve Its Own Interests

Big Tech companies have redefined terms like “openness” and “free expression” to support business models built on centralization and data monetization.


By Burcu Kilic and Mallory Knodel

What if the internet as you know it is already gone? “Open” was once a public interest principle at the heart of the internet’s design. But Big Tech companies have managed to redefine terms like “openness” and “free expression” to support business models built on centralization and data monetization. Once a decentralized system designed for interoperability and equal access, the internet is now a collection of closed, tightly controlled platforms. The open internet used to allow different parts of the network to work together easily, with openly shared information and transparent behavior.

Today, that wide-open internet is effectively gone; true openness and genuine interoperability have all but disappeared. Providers limit connectivity, hide their systems behind proprietary walls, and treat people and society as data resources to be extracted. Reclaiming the internet’s original purpose will mean challenging Big Tech’s use of public interest language to justify enclosure. Protecting the open internet requires aggressive enforcement of anti-monopoly regulation, and it will require new policies that support true openness and user freedom.

When “Open” Stopped Meaning Open

The internet’s foundational ideals of openness, interoperability, and free expression once united civil society and tech innovators in opposition to incumbent, monopolistic telecommunications companies. Early champions of a free and open web aligned themselves with digital rights advocates to protect user choice, information access, and civil liberties. But over time, these former innovators have co-opted that very language to entrench their own monopoly platforms.

Terms like “free expression” and the “free flow of information,” once used to push back against centralized telecom control, are now deployed by Big Tech to justify commercial surveillance, extractive data practices, and an increasingly closed web. Civil society, too, has struggled to keep up. Groups that formerly acted as watchdogs have sometimes uncritically echoed the language and positions of powerful platforms, failing to question how openness is being selectively redefined.

Take, for instance, the recurring call for a “free and open internet” in public advocacy letters like those from the ACLU, the Internet Society, or Freedom House. These appeals rightly warn against state-driven censorship, data localization, and discrimination against foreign services. Yet the same vocabulary of free data flows, non-discrimination, and openness is frequently repurposed by Big Tech to resist regulations, in the US and abroad, intended to protect competition, privacy, and the public interest. Meta and Apple claim, for example, that the EU’s Digital Markets Act threatens user privacy and security by forcing interoperability and limiting their ability to maintain “open” and “trusted” platform experiences.

This distortion becomes especially clear in the current debate over AI training. Large language model developers argue that scraping vast swaths of the internet is justified under the banner of “fair use” or the “right to research.” OpenAI made exactly this argument in response to the lawsuit from The New York Times, which alleges that OpenAI and Microsoft used millions of copyrighted articles without permission to train their models, reproducing Times content verbatim or in close paraphrase without attribution or compensation. These aren’t neutral acts of innovation. They are commercial strategies, pursued by a handful of powerful firms, to build proprietary models with little oversight, minimal benefit to creators, and no meaningful consent.

The same companies that once claimed to democratize knowledge now argue that using copyrighted works, news archives, or public forums without permission is necessary to fuel AI development. In this framing, openness is no longer about interoperability or decentralization; it is about unregulated extraction. Meanwhile, independent creators and the public remain powerless in the face of sweeping commercial appropriation.

From Open Systems to Walled Gardens

The original design of the internet was radically open. It was built on a decentralized technical architecture: a mesh of hosts and routers connected through open protocols like TCP/IP, HTTP, and DNS. These protocols weren’t owned or controlled by any single company. They enabled systems to interoperate, communicate, and evolve without permission. Anyone could build on top of this foundation. As long as you followed the protocols, your website, service, or idea could participate in the network.

This openness fostered “permissionless” innovation and experimentation. It allowed communities to form across borders and supported countless small-scale projects that could grow organically. Transparency wasn’t just a principle; it was embedded in the system’s design. People could inspect how things worked, build their own versions, and contribute to the ecosystem.
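To make “permissionless” concrete, here is a minimal sketch, in Python using only the standard library, of what it looks like to participate in the network simply by following the open protocols named above: resolve a name over DNS, open a TCP connection, and speak plain HTTP. No API key, platform account, or gatekeeper approval is involved; the hostname example.com is only an illustrative placeholder.

```python
import socket

HOST = "example.com"  # illustrative placeholder; any public web host works
PORT = 80

# DNS: an open, distributed naming system translates the name into an address.
address = socket.gethostbyname(HOST)

# HTTP: a plain-text request that any conforming server will understand.
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# TCP: any client that follows the protocol can connect; no permission needed.
with socket.create_connection((address, PORT)) as conn:
    conn.sendall(request.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

# The status line (e.g. "HTTP/1.1 200 OK") is readable by anyone.
print(response.split(b"\r\n", 1)[0].decode())
```

Nothing in that exchange depends on a particular vendor; the protocols themselves are the only gatekeepers, which is what made the early web’s growth possible.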

The rise of surveillance capitalism in the early 2010s marked a turning point. Google was the first to monetize user data at scale through behavioral advertising, a model it cemented with its acquisition of DoubleClick. It wasn’t long before Facebook followed, transforming itself into a data-harvesting machine. These companies built empires not on open infrastructure, but on extracting insights from user behavior to sell targeted ads. The web’s openness was no longer the point. It was an obstacle.

What emerged instead was platformization: a model where services restrict interoperability, control access through proprietary APIs, and lock users into closed ecosystems. The user isn’t a participant in a network; they’re a captive audience. App stores gatekeep software distribution. YouTube disables certain features unless you’re logged in to a Google account. Google Search shows you less of the open web and more of Google’s own products, like Maps, Shopping, or YouTube videos. Each platform draws a perimeter around itself, limiting how other services or users can interact with it.

Nowhere is this enclosure clearer than in what some call the “Google Web.” As Microsoft CEO Satya Nadella recently said in an antitrust hearing, Bing can’t compete with Google because the search engine’s market share is so large that the internet has basically become the “Google web.” Google has positioned itself as the default interface to the internet. It controls what results people see, how data is prioritized, and even which standards get adopted. Developers design for Google. Content creators write for Google. Even the advertising industry relies on its tools and metrics.

Google’s infrastructure—from its login systems and analytics tools to its advertising stack and, increasingly, its AI APIs—has become deeply embedded in the fabric of the web. This integration gives Google the power to shape the behavior of users, businesses, and even regulators, all while cloaking its dominance in the language of open access and innovation.

Failing and False Solutions

This distortion of “openness” goes beyond internal platform decisions. It shapes the policies these companies promote and the narratives they use to defend them. Take, for example, calls for “data free flows” in trade negotiations. These are often framed as defending openness, yet they protect the global reach of surveillance capitalism: under the banner of digital trade, corporations in the global North extract data from users and infrastructures in the global South, reinforcing asymmetries in power, oversight, and economic benefit.

Even data protection laws like the GDPR, while important, have left dominant business models untouched. Data brokers continue to amass and sell personal information, while the online advertising industry remains rife with potential for abuse. By placing the burden on individuals to manage endless consent requests, the GDPR reinforces a system where large-scale data collection remains the norm. Rather than challenging the surveillance-based business models that drive the digital economy, it legitimizes them through a veneer of user choice.

Meanwhile, some of the most important decisions about digital policy, especially around AI and data governance, are being made behind closed doors with minimal public input and maximum corporate influence. Civil society groups and independent researchers are often excluded in favor of industry stakeholders. In the European Union, recent moves to delay the Digital Services Act, drop the AI Liability Directive, and carve out security exemptions from the AI Act have taken place with little transparency or public debate. These shifts reflect growing pressure from tech industry lobbyists and geopolitical concerns rather than democratic consultation.

Real Alternatives for Reclaiming Openness

Reclaiming an open internet means confronting the systems that allow corporate concentration and unchecked data extraction. Policy must go beyond symbolic gestures to drive structural change.

Antitrust enforcement should focus not just on narrow measures of consumer harm, but on reducing the market dominance that limits competition and innovation. Data minimization should be a core principle, limiting what data can be collected in the first place. Stronger enforcement tools, such as a private right of action, class actions, and tort-based remedies, can help shift power back toward users and civil society.

We also need public investment in digital infrastructure. That includes expanding the number of internet exchange points, investing in alternative AI models such as the Swiss AI Initiative (a Swiss effort to release a large language model developed on public infrastructure), and supporting startups that build around open standards. True interoperability must be enforced, not optional.

Finally, we need to restore digital civic spaces, like the Fediverse, where people can communicate, organize, and create without being surveilled or monetized. An open internet should be a public good, governed with transparency, participation, and accountability.

Conclusion

The real threat to an open internet is not trade barriers or state censorship. It is Big Tech. These companies have redefined “openness” to justify their own dominance, turning the internet into a system of surveillance and control. Trade policy and deregulation have reinforced this shift, protecting business models built on data extraction rather than public benefit.

To reclaim the internet’s original purpose, we need to rethink what openness really means. It should not be about unrestricted data flows for corporate use, but about enabling people to connect, create, and communicate freely and safely.

This will take coordinated action across local, national, and global levels. Regulators must challenge concentrated power, not accommodate it. And policymakers must put the public interest ahead of corporate influence. The internet still has the potential to serve as a space for democratic participation and shared progress. But that will only happen if we fight to make it serve people, not platforms.


Publication Spotlight: Tech Accountability in Focus

Tech Accountability in Focus is a bulletin from the Business & Human Rights Resource Centre that brings you the latest on tech accountability three times a year. Each edition covers the biggest human rights stories linked to the sector, regulatory developments, investor updates, and insights from direct engagement with companies facing allegations of abuse.

Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber


Upcoming Events

  • Protocols for Publishers is a series that brings together publishers, developers, and researchers to explore open protocols for a sustainable agentic web. Join them for an evening of presentations and open discussion, where protocol builders and publishers share their thinking about the future of the web. August 20, 6:00 EDT. New York, NY. https://lu.ma/gb050dwt
  • Digital Rights Asia-Pacific Assembly 2025 (DRAPAC25) will bring together activists, technologists, and advocates to confront rising digital authoritarianism. August 26-27. Kuala Lumpur, Malaysia. https://www.apc.org/en/event/digital-rights-asia-pacific-assembly-2025 
  • The 2025 Team CommUNITY Global Gathering (GG) will bring together participants from over 140 countries to focus on grassroots tech innovation, circumvention tools, secure communication, and organizational sustainability. September 8-10. Estoril, Portugal. https://www.apc.org/en/event/team-community-global-gathering 
  • Forum on Internet Freedom in Africa 2025 (#FIFAfrica25) brings together diverse stakeholders from Africa and beyond to explore gaps and opportunities in advancing privacy, free expression, inclusion, civic participation, and innovation online. September 24-26. Windhoek, Namibia. https://internetfreedom.africa 

Careers and Funding Opportunities

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

💡 Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn, and don't forget to forward and share!
