Jul 03, 2025

Where Government and Big Tech Clash in the Global South: A blog post

Introduction: The Rise of Digital Governance

Digital platforms have evolved into modern public spheres, wielding unprecedented influence over global discourse and information flow. As these virtual communities expand in scale and reach, they increasingly blur the boundaries between private enterprise and public governance. This shift has positioned private corporations as de facto global regulators, raising critical questions about public interest, accountability, and regulatory oversight.

While digital platforms are privately owned, their vast user networks function as public arenas of interaction. Industry self-regulation has effectively conferred upon big tech firms a quasi-governmental role, allowing them to shape societal norms, control access to information, and dictate digital interactions through proprietary policies and algorithms. The dominance of these corporations has created a governance paradox: private companies now perform regulatory functions traditionally reserved for governments, setting rules, enforcing compliance, and mediating online interactions—often with limited transparency or democratic oversight.

The governance of digital platforms is shaped by a complex interplay between states and corporations. While governments seek to establish legal frameworks, tech companies have historically set the terms of engagement—dictating how users interact, what information circulates, and how power is distributed online. The rapid evolution of technology and business models has further entrenched their influence, giving them the ability to design, implement, and enforce private regulatory regimes that often supersede national laws.

The concentration of economic power within a small group of dominant tech firms reflects the logic of surveillance capitalism, where data collection and algorithmic control drive market dominance. This oligopolistic structure grants major platforms an unprecedented ability to shape digital norms in alignment with their private interests. As media scholar Tarleton Gillespie aptly notes, “While part of the question must be how platforms are governed, an equally important question is how platforms govern.”

This post examines the growing tensions between governments and big tech, analysing key areas of conflict and diverse approaches to platform governance. As digital platforms continue to expand their influence, the challenge remains: How can governance structures evolve to ensure accountability, transparency, and the protection of public interests in an increasingly privatised digital landscape?

 

Historical Context

The early internet was defined by its decentralised architecture, presenting significant regulatory challenges for governments. However, the notion of an unregulated cyberspace has rapidly faded. As legal scholar Lawrence Lessig observed, the era of an entirely open and ungoverned internet belongs to the past. Over the 21st century, both states and technology companies have worked to assert control over cyberspace—though through distinct mechanisms and for divergent purposes.

For governments, the internet has increasingly been recognised as a domain that, like physical territories, can and must be regulated. Policymaking in this sphere is often driven by influential figures within government, from heads of state and cabinet ministers to key policymakers and advisors. High-profile interventions, such as Emmanuel Macron’s address at the Paris Internet Governance Forum or Donald Trump’s executive order targeting social media platforms, underscore the role of political leadership in shaping platform governance. Similarly, influential legislators—such as Damian Collins in the UK or Heiko Maas in Germany—have played pivotal roles in defining national and regional regulatory approaches.

Beyond governments, a broader network of actors influences the evolving governance landscape. Civil society organisations, journalists, academics, activists, and even platform users contribute to discussions on regulation and digital rights. However, while various stakeholders participate in this ecosystem, states and intergovernmental organisations remain the primary drivers of formal regulatory frameworks. Their efforts, however, have emerged only recently and vary significantly across jurisdictions.

Democratic and authoritarian governments have adopted markedly different approaches to platform governance, reflecting the broader divergence in the interpretation and implementation of human rights in the digital space. While democratic nations tend to emphasise user rights and corporate accountability, more authoritarian regimes leverage regulation to reinforce state control over online discourse. Despite frequent references to global governance, regulatory frameworks remain deeply regionalised, shaped predominantly by North American, European, and Western legal traditions, while countries like China and Russia pursue distinct, state-centric models of control.

As cyberspace continues to evolve, both states and private corporations will remain central to its governance. Whether through legislation, corporate policies, or algorithmic enforcement, the digital domain is increasingly “tamed”—not by a single authority, but through a complex and often contested interplay of public and private power.

Decoding the Clash: Key Concepts and Areas of Conflict

Understanding the current landscape of platform governance requires a clear distinction between key terms such as platform governance, platform responsibility, and platform regulation. These concepts, while often used interchangeably, capture different dimensions of the legal, political, and technical frameworks that shape digital platforms and their role in society. As online intermediaries have become deeply intertwined with global issues ranging from climate change to human rights, their regulatory and ethical responsibilities have come under increasing scrutiny.

One of the most contentious aspects of platform governance is content moderation. Once seen as a technical function, it is now at the centre of political and policy debates. The decisions platforms make about whether to remove, amplify, or suppress content have far-reaching consequences, particularly when they involve high-profile users. However, moderation is about more than just policy enforcement—it involves a complex set of rules, architectures, and norms that structure online behaviour. As legal scholar James Grimmelmann notes, content moderation should be understood as a mechanism for shaping digital communities, fostering cooperation, and preventing abuse.

The power dynamics of the platform ecosystem further complicate this landscape. A small group of dominant corporations wields disproportionate influence over online interactions, effectively creating an oligopoly. This concentration of power allows big tech firms to shape not only the user experience but also the broader regulatory environment. Their ability to set rules, enforce compliance, and control digital interactions grants them significant leverage in global governance—often operating beyond the reach of traditional legal frameworks.

At the same time, states and intergovernmental organisations play a crucial role in shaping platform governance, but their approaches vary widely. Traditional regulatory mechanisms, such as command-and-control policies, have long been used to hold corporations accountable through legal and financial penalties. However, regulatory efforts face significant challenges, including industry lobbying, jurisdictional complexity, and enforcement difficulties. The approaches taken by governments can be broadly categorised into three models: the liberal approach, exemplified by the United States, which prioritises market-driven solutions; the more conservative European approach, which seeks a balance between corporate freedom and regulatory oversight; and the interventionist model, adopted by authoritarian states, where platforms are tightly controlled to serve state interests.

Finally, global asymmetries further contribute to a fragmented and unequal platform governance landscape. Powerful states and economic blocs, such as the European Union, China, and the United States, have the leverage to regulate transnational tech firms, often forcing them to comply with local laws. In contrast, weaker states, particularly in parts of Africa, South America, and Asia, struggle to assert regulatory control, leaving them vulnerable to the influence of foreign corporations. As a result, platform governance is not truly global but remains shaped by regional power imbalances, leading to an uneven and often dysfunctional regulatory environment.

The ongoing struggle to define and enforce governance in the digital space reflects broader tensions between corporate interests, state authority, and public accountability. As platforms continue to shape global discourse, the challenge remains: who should govern the digital world, and how can governance structures ensure fairness, transparency, and democratic accountability?

The Public Interest in Digital Platforms

At the heart of the platform governance debate lies a fundamental question: who defines the public interest in digital spaces, and what does it entail for virtual communities? While the internet was once envisioned as a global public domain, the reality is far more fragmented. Normative frameworks governing digital platforms are shaped inconsistently by both state actors and private corporations, leading to a landscape where public interest is neither universally defined nor uniformly protected.

As explored in the previous section, the regulatory approaches adopted by different states reflect varying conceptions of public interest. The United States, with its market-driven philosophy, has historically resisted government intervention, allowing Silicon Valley giants to operate with minimal oversight—an environment that fuelled their rapid expansion. In contrast, European authorities have taken a more regulatory approach, seeking to balance innovation with consumer protections in areas such as data privacy, competition, and user rights. Meanwhile, authoritarian states have embraced an interventionist model, where the concept of public interest is defined by state authorities, and digital platforms must comply with strict government controls over content, data, and user activity.

The disparities in regulatory approaches are further compounded by the global imbalance of power between states. Wealthy and influential governments—such as those in the EU, the US, China, and other major economies—possess the leverage to impose regulations on transnational tech firms. In contrast, many smaller or less economically powerful nations lack the institutional capacity to regulate big tech effectively. As a result, digital governance remains highly asymmetrical, with weaker states often ceding control to foreign corporations that dominate their digital ecosystems. This dynamic perpetuates a fragmented and uneven global regulatory landscape.

The immense power of private corporations in shaping public discourse has become increasingly evident. The restrictions imposed on Donald Trump’s social media accounts, for instance, underscored the ability of a few tech companies to unilaterally determine the boundaries of political speech. Similarly, key individuals within both governments and tech firms play outsized roles in influencing platform policies—whether through executive orders, regulatory decisions, or internal company policies.

At the same time, the challenge of passing and enforcing platform regulations remains formidable. The lobbying power of tech giants often slows or weakens legislative efforts, while jurisdictional complexities make it difficult to hold multinational corporations accountable. Even when regulations are enacted, ensuring compliance across borders presents ongoing challenges, further reinforcing the dominance of a few powerful entities in shaping the digital landscape.

As state intervention in digital governance increases and global internet freedom declines, the question remains: how can governance structures evolve to ensure that the public interest—however defined—is effectively safeguarded in the digital age? The answer will likely depend on the continued interplay of political power, corporate influence, and the evolving role of civil society in shaping the future of online spaces.

Regulatory Flashpoints: Key Clashes Between Big Tech and Governments

India and X

Elon Musk’s X Corp, formerly known as Twitter, is challenging the Indian government over content regulation and censorship, marking yet another high-profile clash between tech giants and national governments. The lawsuit, filed in the Karnataka High Court, argues that New Delhi has been issuing arbitrary and inconsistent takedown orders, compelling the platform to remove content without clear legal justification. At the heart of the dispute is India’s evolving regulatory framework for digital platforms, which X Corp contends is being misapplied to suppress online expression.

A key point of contention is the Indian government’s Sahyog portal, which X Corp has labelled a “censorship portal.” This system allows a wide range of government agencies—from federal ministries to local police stations—to issue content-blocking orders using a standardised template. X argues that such broad authority risks excessive and opaque censorship, undermining both user rights and platform integrity.

The legal challenge centres on Section 69A of the Information Technology (IT) Act, 2000, which grants the government the power to block access to online content for reasons related to national security, public order, and sovereignty. While the Shreya Singhal vs. Union of India (2015) ruling established safeguards against misuse, X Corp argues that authorities are bypassing these protections. The platform also disputes the government’s invocation of Section 79(3)(b), which removes intermediary liability protections if platforms fail to comply with takedown requests. X contends that blocking orders should strictly adhere to Section 69A’s procedural framework rather than being enforced through broader intermediary liability provisions.

Musk’s lawsuit underscores the growing friction between global digital platforms and India’s increasingly assertive regulatory stance. In recent years, New Delhi has introduced stringent laws targeting social media companies, even threatening criminal liability for noncompliance. This battle is not the first between X and Indian authorities—prior to Musk’s acquisition, the Karnataka High Court fined Twitter for failing to adhere to previous takedown orders. As India continues to position itself as a digital superpower, this case will serve as a critical test of how far governments can go in controlling online speech—and how forcefully global tech firms will resist.

Nigerian Government vs. Twitter (Now X)

The Nigerian government’s clash with Twitter (now X) in 2021 was a defining moment in the global debate over platform governance and state sovereignty in the digital space. The conflict erupted after Twitter deleted a post by President Muhammadu Buhari on June 1, 2021, in which he warned regional separatists of a potential crackdown, stating it would be done “in the language they understand.” Twitter removed the tweet for violating its policies, though it provided no further details, sparking outrage within the Nigerian government.

Nigeria’s Information Minister, Lai Mohammed, accused Twitter of double standards, arguing that the platform had failed to take similar action against inciting tweets from other groups. This sentiment was amplified by the government’s longstanding concerns about Twitter’s role in amplifying domestic political movements—particularly the End SARS protests, which gained momentum in 2020 after generating over 48 million tweets in just ten days. The administration had previously considered regulating social media, citing Twitter’s influence in mobilising dissent and opposition.

In response to Twitter’s takedown of Buhari’s tweet, the Nigerian government banned the platform nationwide on June 4, 2021. Minister Mohammed justified the decision by arguing that Twitter was being used to undermine Nigeria’s stability. The government later outlined conditions for lifting the ban, including requiring Twitter to obtain a local license, register with Nigerian authorities, and comply with national broadcasting regulations to ensure that its platform was not used for activities that threatened the country’s unity.

By late June 2021, Twitter expressed willingness to negotiate with Nigerian authorities, and talks formally began in July 2021; the ban was eventually lifted in January 2022, after the platform agreed to the government’s conditions. The ban marked a turning point in Nigeria’s approach to digital governance, reinforcing concerns about the growing trend of governments asserting control over online platforms. It also underscored the delicate balance between national sovereignty, corporate responsibility, and digital freedoms—an issue that continues to shape global platform governance debates today.

Brazil vs. Twitter (Now X)

The clash between Brazil’s Supreme Federal Court and Twitter (now X) in 2024 was a defining moment in the global debate over government authority versus corporate autonomy in digital governance. The conflict began when Justice Alexandre de Moraes ordered a temporary nationwide ban on X from August 30 to October 8, 2024, citing the platform’s failure to comply with legal requirements. The decision came after Elon Musk refused to appoint a legal representative in Brazil, a move that led the court to freeze Starlink’s financial assets and impose substantial fines on the company.

The dispute revolved around content moderation and the legal obligations of tech companies operating within national borders. Brazil’s Supreme Court had previously mandated the removal of far-right accounts linked to the 2023 Brazilian Congress attack, arguing that they played a role in spreading misinformation and inciting unrest. However, X defied the order, reinstating the accounts in April 2024—an action that prompted the court to investigate Musk for obstruction of justice and incitement.

Brazil’s 2014 Internet Bill of Rights states that platforms are not liable for user-generated content unless they refuse to comply with a court-ordered removal request. X’s resistance to these takedown orders brought it into direct conflict with the law. On April 6, 2024, the company confirmed it had received an official removal order but publicly rejected the ruling, with Musk even encouraging users to bypass restrictions using VPNs. In response, Justice Moraes launched a criminal investigation into Musk, alleging his actions undermined the judiciary and fuelled disinformation.

By August 17, 2024, the situation escalated further when Moraes threatened to arrest X’s legal representative in Brazil. In retaliation, X closed its offices in the country and withdrew its staff, deepening the standoff. On August 28, the Supreme Court issued an ultimatum, demanding that X appoint a local representative within 24 hours or face an indefinite ban. When the company failed to comply, authorities proceeded with the nationwide suspension.

Ultimately, after weeks of financial penalties and legal pressure, X conceded. The platform agreed to pay 28 million reais ($5.1 million) in fines and comply with Brazilian regulations, leading to the lifting of the ban on October 8, 2024.

Uganda vs. Meta

In a move that underscored the growing tensions between governments and global tech platforms, Uganda banned Facebook (Meta) in January 2021, just days before its general elections. The decision came after Facebook removed several accounts linked to the ruling National Resistance Movement (NRM), alleging they were engaged in spreading misinformation and manipulating public discourse.

Museveni swiftly condemned Facebook’s decision, accusing the platform of interfering in Uganda’s internal affairs and exhibiting bias against his administration. He framed the ban as a necessary retaliation for what he saw as an unwarranted and arrogant intervention by an American tech giant. According to Museveni, Facebook had no right to determine which voices should or should not be heard in Uganda.

The trigger for this escalation was Facebook’s takedown of pro-Museveni accounts, including those of prominent government supporters known locally as Bajjo and Full Figure. The Ugandan government had previously attempted to pressure Facebook into removing opposition-linked accounts it deemed a threat to national security, but the platform refused. This deepened tensions, reinforcing the government’s belief that Facebook was actively working against it.

While Museveni justified the ban as a measure to protect Uganda’s sovereignty, many observers saw it as a deliberate effort to suppress political opposition and limit the free flow of information during a highly contentious election period. The move was part of a broader pattern of state-controlled internet restrictions in Uganda, where authorities have frequently resorted to social media shutdowns and internet blackouts to maintain political dominance.

The Uganda-Meta standoff exemplifies the complex power struggle between governments seeking to control digital spaces and platforms asserting their role in moderating content and preventing disinformation. It also underscores the challenges of enforcing global platform governance in politically sensitive environments, where tech companies are often forced to navigate national sovereignty, free expression, and corporate responsibility in an increasingly fragmented digital landscape.

The Future of Platform Governance

As the debate over platform governance intensifies, the role of civil society emerges as a crucial yet underdeveloped force in shaping digital norms. While big tech companies continue to dominate decision-making in online spaces, their ability to serve the public interest has been repeatedly called into question. Civil society organisations, alongside democratic governments and other public actors, have the potential to enhance the legitimacy and accountability of platform governance. However, their influence remains constrained by the overwhelming power of private corporations. Moving forward, the growing push for greater transparency and public participation could lead to a shift in the governance landscape, particularly in the United States and the European Union, where regulatory momentum is gaining traction.

The regulatory landscape itself is undergoing rapid evolution. Increasing scrutiny from policymakers, legal challenges, and public pressure are forcing tech giants to reassess their governance structures. In both the US and the EU, new legislative initiatives seek to curb corporate dominance, ensure user rights, and impose stricter accountability measures. From the Digital Services Act in Europe to antitrust lawsuits in the United States, the coming years may witness a recalibration of power between governments, corporations, and civil society.

Ultimately, the future of platform governance will likely hinge on a more balanced approach—one that fosters collaboration between states, tech companies, and civil society. Striking the right balance between innovation, free speech, and public interest will be one of the defining challenges of digital governance in the years ahead. The question remains whether this evolving system can effectively safeguard democratic values while addressing the power asymmetries that continue to shape the digital world.

 

Notes

  1. The Shifting Definition of Platform Governance
  2. The platform governance triangle: conceptualising the informal regulation of online content
  3. https://nilepost.co.ug/opinions/245851/is-it-time-for-uganda-to-lift-the-ban-on-facebook-after-metas-new-policies
  4. Twitter accused of censorship in India as it blocks Modi critics
  5. Anti-social Media Bill (Nigeria) – Wikipedia
  6. Brazilian Civil Rights Framework for the Internet – Wikipedia
  7. Medrano, Maria (7 May 2015). “Brazil’s Internet Bill of Rights”. Americas Quarterly. Archived from the original on 10 September 2024. Retrieved 10 September 2024.
  8. Kenya’s President Wades Into Meta Lawsuits | TIME
  9. Alvim, Leda (6 April 2024). “Musk Lifts Restrictions on X Accounts in Brazil in Challenge to Courts”. Bloomberg News. Archived from the original on 30 August 2024. Retrieved 30 August 2024.
  10. Mendonça, Yasmin Curzi de (9 September 2024). “Elon Musk’s feud with Brazilian judge is much more than a personal spat − it’s about national sovereignty, freedom of speech and the rule of law”. The Conversation. Retrieved 10 September 2024.
