Introduction
The desire of governments to control the message continues unabated. According to Mariana Olaizola Rosenblat, a policy advisor on technology and law at the NYU Stern Center for Business and Human Rights:
“There is a growing international consensus that governments should take a more active role in overseeing digital platforms. As 2025 began, this was no longer a theoretical discussion: the past few years brought a surge of legislative action across major economies. The European Union’s Digital Services Act (DSA) and Digital Markets Act (DMA) are now in full force, transforming how major tech platforms are allowed to operate in Europe. Meanwhile, the United Kingdom, Ireland, Australia, and many other countries have passed robust online safety laws that are now entering the enforcement stage.”
The NYU Stern Center for Business and Human Rights undertook a global survey and analysis of what it calls online safety regulations, entitled “Online Safety Regulations Around the World”. Some 26 laws in 19 jurisdictions are analysed to discern common themes in platform regulation.
In this final article in the series about controlling the message – a two-sided coin that considers both how Governments control messaging systems and how the message that such control is necessary is promulgated – I shall consider the NYU survey.
First I shall consider the messaging that is put forth to justify Internet or platform regulation.
Then I shall consider the ways in which such regulation may be achieved. There are four common approaches which have been teased out of the laws examined in the NYU discussion paper.
The Need for Online Content Control
There are five major reasons advanced for why Internet-based content should be controlled. None of these are new. Indeed, some of the arguments echo those that were advanced when Trevor Rogers introduced the Technology and Crimes Reform Bill in 1995.
The first set of reasons involves the risks and harms associated with online content. The argument goes that online platforms can facilitate harmful behaviors such as cyber-harassment, compulsive usage, misinformation, and exposure to harmful content, with consequential outcomes such as self-harm, terrorism, and child exploitation. Regulation, it is argued, aims to mitigate these risks. The question becomes one of whether such regulation should be reactive – as is the case with New Zealand’s Harmful Digital Communications Act – or proactive – as was proposed in the Safer Online Services and Web Platforms proposals.
The second category of reasons involves the protection of vulnerable groups. This argument holds that children and minors are particularly susceptible to online harms, necessitating safeguards like age-appropriate design and content filtering. This reasoning lies behind the proposed Social Media Users Age Restrictions Bill.
The third category is to limit or mitigate abuse. This is associated with the first reason involving risks and harms. The argument holds that platforms can be misused for illegal activities such as cyberbullying, image-based abuse, and incitement to violence. Content control helps prevent such abuses.
The fourth reason involves the promotion of accountability by platforms. This holds that platforms wield immense influence over society and individuals, often without sufficient oversight. Regulation ensures accountability for their actions and policies. Thus, control of the messaging technology is seen as necessary to counter this problem.
The final reason is a recognition that freedom of expression needs to be moderated. While protecting free speech (or paying lip-service to it), governments must also prevent harmful or illegal content from spreading unchecked. The problem becomes one of defining such content.
The Justification for Regulatory Structures
To address these needs, a number of regulatory structures are proposed, and justifications for those structures are advanced based on a number of principles. To a certain degree these principles overlap with the justifications advanced in the preceding section.
The first consideration is that of Human Rights Standards. This proposes that regulations must align with international human rights law, particularly the principles of legality, legitimacy, necessity, and proportionality.
Freedom of expression is protected under the International Covenant on Civil and Political Rights (ICCPR), but governments can restrict speech to protect public order, health, or morals. In New Zealand this restriction is based on the principle of a justified limitation of the right of free speech in the New Zealand Bill of Rights Act.
The second consideration is that of accountability and transparency. We have already seen that one of the reasons for regulation is to hold platforms accountable. One way of achieving this is by way of transparency requirements to ensure platforms disclose their operations, algorithms, and moderation practices. These in turn foster accountability and enable scrutiny by regulators, researchers, and the public.
The third justification (which is also a method which I shall discuss shortly) is that of Safety-by-Design. Design-based regulations focus on upstream harm prevention by mandating safer platform architectures and empowering users to customize their online experience.
Fourthly there is the justification of fairness and procedural safeguards. Procedural requirements ensure platforms operate fairly, live up to their terms of service, and provide mechanisms for users to report violations and appeal moderation decisions.
Fifth there is a requirement for global regulatory coherence. Regulatory fragmentation across jurisdictions complicates compliance for platforms operating internationally. Multilateral initiatives like the Global Online Safety Regulators Network (GOSRN) aim to enhance global coherence.
Sixth there should be evidence-based regulation. Highly prescriptive mandates should be based on empirical research to ensure effectiveness and proportionality. For example, regulating algorithmic recommendation systems should target content-neutral design aspects rather than content-dependent determinations.
Seventh, there is the protection of vulnerable users (which overlaps with one of the categories of need above). Regulations often prioritize protections for children and minors, such as age assurance mechanisms and restrictions on harmful content targeting.
Thus, to sum up the argument in these two sections: the messaging for Internet-based content control holds that the need for such control stems from the risks posed by unregulated platforms, while the regulatory structures aim to balance safety, accountability, and human rights compliance.
That statement, in referring to “unregulated platforms”, treats the existence of such bodies as inherently wrong or potentially harmful, with regulation offered as the remedy. The suggestion that there may be some form of self-regulation does not enter into the equation.
Regulatory Approaches to Online Safety
There are four main regulatory models that are deployed for controlling the narrative on online platforms.
The first model involves content-based requirements.
These involve establishing classes of prohibited content that platforms must remove.
These duties can be reactive (triggered by takedown orders or user reports) or proactive (requiring ongoing monitoring and removal). Variants include:
· Reactive or proactive obligations for illegal content only.
· Reactive or proactive obligations for illegal and harmful content.
· "Must-carry" provisions preventing platforms from removing certain content.
· Requirements for platforms to communicate specific content.
The problem lies in the definition of prohibited content. This can be either illegal (objectionable content under the Films, Videos, and Publications Classification Act) or harmful. In the context of the Harmful Digital Communications Act, that is content which causes serious emotional distress. However, a “take down” order is not automatic and is made by a Court.
The second model is design-based requirements.
These mandate changes to platform architecture and features to prevent harm upstream. Examples include:
· Specific design features like privacy settings, push notifications, and algorithmic feeds.
· User customization tools and self-help features.
· Prohibition of manipulative designs ("dark patterns").
· General duty to implement features with user safety in mind.
The third model involves transparency requirements.
These compel platforms to disclose information about their operations, algorithms, moderation processes, and user data. Examples include:
· Transparency reports on content moderation.
· Algorithmic disclosures.
· Independent audits and researcher access to platform data.
Finally there are procedural requirements.
These focus on platform processes to ensure fairness and accountability. Examples include:
· Clear and accessible terms of service.
· Mechanisms to enforce terms of service.
· Points of contact and legal representatives.
· Risk and impact assessments to identify and mitigate human rights risks.
Each approach addresses different aspects of online safety regulation, aiming to balance user protection, platform accountability, and compliance with human rights standards.
Examples in Different Jurisdictions
Different jurisdictions adopt varied approaches to online safety regulations, reflecting their legal frameworks, priorities, and societal contexts. Below are some examples of how jurisdictions approach online safety:
Content-based Requirements
· European Union: The Terrorist Content Online Regulation (TCOR) mandates hosting services to remove terrorist content within 1 hour of receiving an official order.
· Singapore: Platforms must block access to "egregious content" (e.g., self-harm, sexual violence, terrorism) upon orders from the Infocomm Media Development Authority (IMDA).
· Australia: Platforms must comply with removal notices for cyber abuse, child cyberbullying, and other harmful content issued by the eSafety Commissioner.
· Texas (USA): Platforms must proactively filter harmful content for minors using technology and human moderation.
Design-based Requirements
· United Kingdom: The Age Appropriate Design Code mandates default privacy settings for children, such as disabling geolocation tracking.
· California (USA): The Addiction Act prohibits sending notifications to minors during specific hours without parental consent.
· Singapore: Platforms must provide tools for users to manage safety, such as restricting visibility of harmful content and location sharing.
· Louisiana (USA): Prohibits direct messaging between adults and minors unless they are already connected.
Transparency Requirements
· European Union: The Digital Services Act (DSA) requires platforms to disclose algorithmic parameters, moderation practices, and provide researcher access to data.
· Texas (USA): Platforms must detail how algorithms rank, filter, and present content to minors.
· Singapore: Platforms must disclose safety features and provide local safety resources.
Procedural Requirements
· European Union: Platforms must publish clear terms of service, conduct risk assessments, and provide reporting and appeal mechanisms.
· United Kingdom: Platforms must specify how children are protected from harmful content in their terms of service.
· Australia: Platforms must provide accessible tools for users to report harmful content and ensure timely responses.
· New Zealand: Courts enforce takedown orders for harmful digital communications.
Global Fragmentation
It can be seen from the varied approaches set out above that, although there are a number of regulatory models deployed, there is no international standard.
Regulatory approaches vary significantly across jurisdictions, leading to challenges in global compliance and coherence. For example, what is considered harmful content in one country may not be illegal in another, complicating enforcement for platforms operating internationally.
These differences highlight the need for international cooperation to harmonize online safety regulations while respecting local contexts.
Improving Content Regulation
Although there are a number of approaches being deployed, there is a hunger for greater regulatory control, which is no more and no less than a greater appetite for controlling the narrative on online platforms.
Justification for Future Content Regulation
We have already examined existing arguments for narrative control that have been common since the 1990s. Over the years the music has remained the same, but the lyrics to the song may have been modified. In many respects, the justifications for future content and message regulation reflect existing calls.
These justifications can be set out as follows:
Protecting Freedom of Expression:
By focusing on explicitly illegal content and avoiding vague definitions of harmful content, future regulations can safeguard freedom of expression while addressing legitimate risks.
Preventing Overreach:
Vague or overly broad content regulation risks government overreach and suppression of legitimate speech. Clear standards ensure proportionality and necessity.
Accountability and Transparency:
Transparency requirements, such as independent audits and researcher access, ensure platforms are held accountable for their moderation practices and impacts.
Global Consistency:
Harmonized regulations reduce compliance burdens for platforms operating across borders and promote a safer, more coherent online environment.
Evidence-Based Regulation:
Future regulations should be grounded in empirical research to ensure effectiveness and proportionality, avoiding arbitrary or overly prescriptive mandates.
Human Rights Standards:
Aligning regulations with international human rights law ensures that restrictions on content are lawful, necessary, and proportionate, protecting users' rights.
Adapting to Platform Diversity:
Tailoring regulations to different types and sizes of platforms ensures fairness and avoids imposing unnecessary burdens on smaller or less risky services.
Future Content Regulation Models
If we take the above justifications, it appears that there are seven possible avenues which can be followed in the future, some of which echo existing proposals or models. These avenues reflect the rationales advanced for control of content online. They can be grouped as follows:
Focus on Explicitly Illegal Content:
Regulations should target content that is explicitly illegal or meets the "legality" standard under international human rights law.
Governments should avoid requiring platforms to remove vaguely defined "harmful" or "awful but lawful" content, as this risks overreach and suppression of legitimate speech. This involves a narrowing of a broader model of speech or expression control and is probably one that would be palatable to those following the Western democratic tradition.
Proactive and Reasonable Measures:
Platforms should implement proactive measures to detect and remove illegal content, but these measures must be reasonable, respect data privacy rights, and avoid undermining encryption or introducing systemic weaknesses.
Transparency in Content Moderation:
Platforms should disclose detailed information about their content moderation systems, including metrics on enforcement actions and the impact of moderation policies.
Independent audits and researcher access to platform data should be mandated to ensure accountability.
Global Regulatory Coherence:
Regulators should work towards international cooperation to harmonize content regulation across jurisdictions, reducing fragmentation and compliance challenges for platforms operating globally.
Human Rights Compliance:
Content regulation must align with international human rights standards, ensuring restrictions are lawful, necessary, and proportionate.
Governments should avoid indirect bans on legitimate speech by requiring platforms to remove vaguely defined harmful content.
Tailored Approaches:
Regulations should differentiate between platforms based on their size, type, and risk profile, imposing stricter requirements on larger platforms with greater societal impact.
Multilateral Initiatives:
Participation in initiatives like the Global Online Safety Regulators Network (GOSRN) can help regulators share best practices and tools to enhance global coherence.
A Universal Standard for Content Regulation
As has been noted, there is an issue regarding harmonising content control internationally. Different jurisdictions have different standards. However, there are some possible solutions to the issue of universal standards for content regulation, emphasizing global coherence and alignment with human rights principles. These solutions reflect and repeat some of the approaches already discussed.
Alignment with International Human Rights Law:
Regulations should adhere to the principles of legality, legitimacy, necessity, and proportionality under the International Covenant on Civil and Political Rights (ICCPR).
Restrictions on content must be clearly defined in law, pursue legitimate aims (e.g., protecting public order, health, or morals), and be narrowly tailored to avoid overreach.
Focus on Explicitly Illegal Content:
Universal standards should target content that is explicitly illegal, avoiding vague definitions of harmful or undesirable content that could lead to overbroad enforcement.
Global Regulatory Coherence:
Regulators should work towards harmonizing online safety regulations across jurisdictions to reduce fragmentation and compliance challenges for platforms operating internationally.
Participation in multilateral initiatives like the Global Online Safety Regulators Network (GOSRN) is recommended to share best practices, tools, and experiences.
Transparency Requirements:
Universal standards should mandate platforms to disclose key information about their operations, algorithms, and moderation practices.
Independent audits and researcher access to platform data should be included to ensure accountability and foster global consistency.
Tailored Approaches:
Regulations should differentiate between platforms based on their size, type, and risk profile, imposing stricter requirements on larger platforms with greater societal impact.
Prohibition of Vague Content Mandates:
Universal standards should avoid requiring platforms to remove vaguely defined categories of harmful but lawful content, as this risks infringing on freedom of expression.
Safety-by-Design Principles:
Platforms should be required to implement design features that prioritize user safety and allow customization of the online experience.
Universal standards should incentivize platforms to test the safety of their design features before rollout.
International Cooperation:
Regulators should engage with civil society organizations, academic researchers, and affected communities to ensure that universal standards reflect diverse perspectives and real-world impacts.
A Rationale for International Co-operation
The problem with a common international approach is that of a partial surrender of sovereignty. In the fragmented world of the Twenty-Twenties this is an issue for many. So what justification is there for a regime of International Standards?
The following arguments may be advanced:
Consistency Across Jurisdictions:
A universal standard reduces regulatory fragmentation, making it easier for platforms to comply with rules across borders and ensuring consistent protections for users worldwide.
Human Rights Compliance:
Aligning regulations with international human rights law ensures that restrictions on content are lawful, necessary, and proportionate, safeguarding freedom of expression and privacy.
Accountability and Transparency:
Universal transparency requirements foster accountability and enable regulators, researchers, and the public to scrutinize platform practices effectively.
Global Collaboration:
Multilateral initiatives like GOSRN promote cooperation among regulators, enabling the sharing of best practices and tools to enhance global regulatory coherence.
Adaptability to Platform Diversity:
Tailored approaches ensure fairness by accounting for differences in platform size, type, and risk profile, avoiding unnecessary burdens on smaller or less risky services.
Evidence-Based Regulation:
Universal standards should be grounded in empirical research to ensure effectiveness and proportionality, avoiding arbitrary or overly prescriptive mandates.
Thus the proposals for universal standards aim to create a consistent, human rights-compliant framework for content regulation that balances safety, accountability, and freedom of expression across jurisdictions.
What About Bans?
The issue of bans has been conspicuous by its absence in this discussion, with the exception of bans on illegal content.
Bans should be seen as a measure of last resort in online safety regulations. While outright bans are rare, some jurisdictions have provisions allowing for platform access restrictions under specific circumstances:
Blocking or Access Restriction Orders:
Some regulations empower enforcement authorities to block access to platforms that fail to comply with legal requirements or pose significant risks to users.
For example, Singapore and Fiji have provisions allowing courts or regulatory bodies to issue blocking orders for noncompliance.
Crisis Situations:
The EU’s Digital Services Act (DSA) allows the European Commission to require platforms to change their policies or restrict access during periods of crisis. However, these measures must be "strictly necessary, justified, and proportionate."
Noncompliance with Court Orders:
In jurisdictions like Fiji, courts can impose criminal penalties on platforms that fail to comply with specific orders, including blocking access to those platforms.
National Security Concerns:
In the U.S., the Protecting Americans from Foreign Adversary Controlled Applications Act includes provisions to ban platforms like TikTok if they are deemed threats to national security. However, enforcement of this law has been delayed.
Justifying Bans
What justifications are advanced for bans?
The first argument advanced involves noncompliance with legal obligations. Platforms that repeatedly fail to meet regulatory requirements, such as removing illegal content or implementing adequate safety measures, may face access restrictions as a penalty.
Secondly there is the issue of the protection of public safety. Blocking platforms may be justified if they facilitate illegal activities, such as child exploitation, terrorism, or cyber abuse, and fail to take corrective action.
The third argument involves national security threats. Platforms controlled by foreign adversaries or those that pose risks to national security may be banned to protect citizens and critical infrastructure.
Fourth, there is the issue of crisis management. Temporary bans may be imposed during emergencies to prevent the spread of harmful content or misinformation that could exacerbate the crisis.
Accountability is a common theme in the regulatory discussion. Blocking access serves as a deterrent for platforms that disregard their obligations under online safety regulations. This is associated with non-compliance with legal obligations.
Platform bans should be used sparingly and only under strict conditions.
Bans must be narrowly tailored to address specific harms and avoid infringing on user rights. Blocking entire platforms is often considered disproportionate under international human rights law, as it restricts freedom of expression and access to information. Finally, sweeping bans should only be considered when all other enforcement measures have failed.
Conclusion
This has been a lengthy study which has considered a number of issues surrounding the control of messaging. To control the means of messaging requires a justification, and that justification is in itself a form of messaging control. In time, if the message is repeated often enough, it will become accepted and a part of an orthodox narrative that is complacent about control of the Internet as a medium of communication.
In the past the State was involved to a considerable degree in communications technology. Broadcasting in New Zealand was for many years owned and controlled by the State. One wonders whether today there would be public acceptance of the nationalisation of broadcasting or a similarly heavy-handed approach to control of the Internet. Certainly the messaging surrounding greater regulatory control of the Internet has continued and has intensified. Not only is content the target but the delivery systems – the platforms – have come in for scrutiny.
One wonders whether there is a large element of economic envy in the desire to control the platforms. Certainly the platforms seem to be the principal targets in the Australian approach to Internet regulatory activity.
This series has traced regulatory messaging and the control of the message by regulatory means over the years. It is by no means a comprehensive study. That would cover volumes. What I have tried to do is provide some examples and illustrations of approaches to Internet regulation and the way that it has been justified. Some of those efforts have fallen by the wayside. Perhaps the most comprehensive review of possible regulatory activity was the Law Commission study. Those recommendations went nowhere.
Perhaps the most interesting area of proposals for Internet control has been in the International arena and the harmonisation of international regulatory activity has been addressed in this final article. Governments have tried on one hand. International organisations and NGOs have tried on the other. Sure, there are technical standards which impose some element of regulation but they dictate how the Internet works rather than what it puts out.
In New Zealand the sweeping proposals of the Safer Online Services and Web Platforms discussion paper would, if put into practice, have extended State control of Internet content in a dramatic fashion, enabling censorship and the potential stifling of online content. Although those proposals are not being advanced, there is every likelihood that they will be revived at a later date and under a different Administration.
Finally I have considered in this piece some of the ways in which States have addressed Internet regulatory issues and have attempted to develop a number of themes around those activities.
But this is a continuing story. It has not ended by any means and it will go on. Hopefully this series of articles may have helped to identify some of the issues and strategies that are deployed in controlling the message, both in justifying message control and in the tools and mechanics employed to control the message itself.