The Holy Grail of Internet Content Control
Joining the Dots to Discern a Picture
Introduction
The Holy Grail of the first quarter of the twenty-first century is the regulation and control of Internet content. Governments the world over have long sought to master the circulation of information, but the global, decentralised character of the Internet has made that project both more tantalising and more elusive.
I have argued before that the surest path to control lies not in chasing content but in mastering the technology itself — the code, the protocols, the underlying architecture. Whoever commands the pipes and the platforms will, sooner or later, determine what may flow through them.
Yet this is a truth too subtle for the bureaucratic imagination. Most regulators prefer the tangible and immediate: the content, the visible symptom rather than the system. As McLuhan warned, the content of a medium is the “juicy piece of meat carried by the burglar to distract the watchdog of the mind”, a morsel that occupies attention while the real power works elsewhere.
Still, content control has always been the visible expression of deeper impulses. From the Star Chamber to the Stationers’ Company, those in authority have sought to contain the unruly spread of words and ideas. The early history of the printing press is not only the story of technological revolution but also of the establishment’s attempt to domesticate it — licensing printers, banning seditious books, and burning pamphlets deemed dangerous to order or faith. The new medium had to be tamed.
That same instinct persists in every age that experiences a communications upheaval. The modern equivalents of the Star Chamber are committees of digital safety, policy reviews, and regulatory “consultations.”
The struggle is continuous, the methods adaptive, and the rhetoric invariably moral: safety, decency, harm-reduction, trust. The object, however, remains unchanged — the assertion of authority over what people may read, see, and share.
The latest phase in this centuries-long saga is New Zealand’s attempt to bring the sprawling, borderless domain of online communication within the reach of domestic regulation. The Broadcasting Standards Authority’s recent claim of jurisdiction over internet-based radio is merely the newest expression of that impulse. To understand how we arrived here, we must trace the incremental steps — the dots that, once joined, reveal the larger picture.
The Early Phase (2013–2018)
The first serious attempt to grapple with online regulation came from the Government’s 2011 request that the Law Commission examine how existing media regulation might adapt to the digital environment. The result was the 2013 report The News Media Meets ‘New Media’, a thoughtful if ambitious proposal for a single, converged framework covering both traditional and online outlets. The idea was simple: the same standards of accuracy, fairness, and balance should apply whether the content appeared in a newspaper or on a website.
Before any legislation could follow, however, the industry pre-empted the move. It established the Online Media Standards Authority (OMSA), a voluntary body for digital publishers, which later merged with the long-standing Press Council to form the New Zealand Media Council. In essence, the sector opted for self-regulation, seeking to preserve editorial independence while forestalling statutory intervention.
Parallel to this, the Law Commission was asked to examine the rising phenomenon of cyber-bullying. Its recommendations led to the Harmful Digital Communications Act 2015 (HDCA) — a statute designed to curb harassment and abuse online, particularly on social media. The HDCA marked a new frontier: the first legislative incursion into the content of digital communications.
After the HDCA’s enactment, momentum paused. For several years, the machinery of regulation seemed content to rest. The Internet remained, by and large, self-governing — guided by platform policies and public norms rather than central authority.
But the calm was deceptive. The regulatory imagination was merely waiting for a catalyst, and that catalyst came in 2019.
2019: The Christchurch Catalyst
The relative calm that followed the Harmful Digital Communications Act ended abruptly in March 2019. The Christchurch mosque massacres shocked New Zealand and the world, and the horror of the event spilled instantly onto digital platforms.
Within hours, the gunman’s livestream and manifesto had spread globally. The technological architecture that allowed families to connect and citizens to speak directly had, in that moment, also enabled the circulation of evil.
The Government’s response was immediate and resolute. The instinct to control content — the ancient reflex to contain the uncontainable — re-emerged with a new moral intensity. Within two months, Prime Minister Jacinda Ardern launched the Christchurch Call, a joint initiative with French President Emmanuel Macron aimed at eliminating terrorist and violent extremist content online. The Call was endorsed by sympathetic leaders such as Justin Trudeau and by major platforms anxious to demonstrate social responsibility.
For many, the Call represented an ethical imperative. For others, myself included, it marked the revival of a familiar pattern: the use of a tragedy to accelerate the long-deferred quest for control over digital communication.
Among those standing beside Ardern at Call meetings was Jordan Carter, then Chief Executive of InternetNZ. InternetNZ had been founded to preserve an “open and uncaptured Internet.” Yet following its alignment with the Call, its mission statement subtly changed.
The organisation added a commitment to promoting a “safe Internet,” signalling a philosophical shift — from a rights-based conception of openness to a relativistic notion of safety that implicitly licensed judgments about content. The word safe — benign on its face — became the rhetorical bridge between technical governance and moral regulation.
Domestically, the machinery of censorship moved quickly. Chief Censor David Shanks declared both the shooter’s manifesto and the livestream video “objectionable publications” under the Films, Videos, and Publications Classification Act 1993.
The decision was justified; the materials were vile. Yet the speed and scope of the action were unprecedented, extending the Office’s jurisdiction deep into the online sphere. That single decision created the precedent for later assertions of authority over digital content.
Thus, in 2019, the Grail was glimpsed again. The quest, dormant since 2015, was revived — and this time it was driven not by the slow deliberations of law reform but by moral urgency, international solidarity, and the politics of crisis.
Expansion and Enforcement (2021–2024)
1. Takedown Notices and the New Censorship Mechanism
The next decisive step came in February 2022 with amendments to the Films, Videos, and Publications Classification Act 1993. These amendments empowered the Department of Internal Affairs (DIA) — the enforcement arm of New Zealand’s censorship regime — to issue takedown notices compelling online content hosts to remove or block access to “objectionable” publications.
The definition of “objectionable” was broad and inherited from earlier moral panics: depictions of torture, sexual violence, child abuse, necrophilia, and degrading or dehumanising acts. In theory, the measure targeted only the most extreme material. In practice, it normalised the idea that the State could compel private platforms, domestic or foreign, to remove content from the global network.
Hosts were required to act “as soon as reasonably practicable,” facing civil penalties of up to NZD 200,000 for non-compliance. However, the enforcement power stopped at the border. Of the eight formal notices issued by early 2023, seven were ignored; all had been sent to overseas operators. The DIA could not compel compliance beyond New Zealand’s jurisdiction.
Faced with this impotence, officials relied instead on informal diplomacy — “trusted flagger” relationships with large platforms such as Facebook, YouTube, and Twitter. These private mechanisms, stricter than domestic law, became the practical instruments of enforcement. The State thus outsourced its censorship function to global corporations, aligning its policy interests with their proprietary moderation systems.
The Christchurch video remained the touchstone. The DIA and Internet Service Providers cooperated to blacklist websites hosting the footage. Facebook removed 1.5 million copies in the first twenty-four hours, yet variants re-emerged for months, even years. The episode revealed both the technical limits of deletion and the enduring temptation of control: the more elusive the content, the stronger the desire to suppress it.
2. Filtering the Network
Alongside takedown powers, the DIA operated the Digital Child Exploitation Filtering System (DCEFS), a hidden list of over 7,000 URLs blocked through a NetClean WhiteBox server. Introduced in 2009 to combat child-sexual-abuse material, it functioned quietly until proposals in 2020–21 sought to expand it. Those proposals met rare unanimity in opposition from technical experts, digital-rights advocates, and every political party except Labour. The filtering system survived but remained confined to its narrow remit — at least officially.
In substance, the DIA had become the State’s operational censor for the digital age. Its formal mandate was limited to “objectionable” material, but its informal role reached further: coordinating with platforms, shaping reporting protocols, and normalising the infrastructure of network-level control.
3. The Content Regulatory Review and the “Safer Online Services” Programme
After the Christchurch Call and the DIA’s new powers, the next logical step was a comprehensive review of the entire media-content regime. The Content Regulatory Review, launched in 2019, evolved into the Safer Online Services and Media Platforms project. Its stated aim was to modernise New Zealand’s “fragmented and outdated” system and to address the “harms” of online content.
The context was telling. The review was presented as one part of a suite of “safety” initiatives: the Christchurch Call, amendments to the Classification Act, and the Keep It Real Online campaign. Together they formed a coherent policy architecture centred on safety — a value whose vagueness made it politically irresistible and conceptually elastic.
In November 2019, while the review was gestating, the Broadcasting Standards Authority (BSA) published Application of the Broadcasting Act to Internet Content.
The paper asserted that the BSA’s jurisdiction extended to online radio and television. The claim was bold — some might say opportunistic — but consistent with the wider pattern: a slow, almost imperceptible extension of traditional regulatory logics into the online sphere.
I have discussed that document in my article “The Broadcasting Standards Authority and Jurisdiction”.
4. Research and Problem Definition (2021–2022)
When the project formally commenced in June 2021, the DIA assumed leadership. The Department commissioned two major academic studies from Victoria University of Wellington:
• Associate Professor Peter Thompson and Dr Michael Daubs examined international developments in regulating harmful content (July–November 2021).
• Professor Miriam Lips and Dr Elizabeth Eppel produced a conceptual framework of “online harm” (June 2021–September 2022).
The research framed the problem as threefold:
• A fragmented system of overlapping complaint bodies;
• Inadequate protection for children and consumers; and
• A regulatory gap for social-media platforms.
This diagnosis was not wrong, but it was incomplete. It assumed that “harm” could be defined objectively and that regulation could remedy it without chilling expression. The analysis implicitly treated online communication as a domain of risk rather than of freedom.
5. The 2023 Proposal and Public Consultation
In June 2023, the DIA released the draft Safer Online Services and Media Platforms proposal for public consultation. It envisioned:
• A new independent regulator — widely expected to be an expanded BSA;
• Enforceable codes of practice developed with industry; and
• A threshold capturing any platform with more than 100,000 annual users or 25,000 account holders in New Zealand.
The rhetoric emphasised “platform regulation,” but the substance remained content control. The regulator would ensure that platforms kept users “safe,” balancing harm reduction against rights such as freedom of expression and the press.
Over 20,000 submissions were received by July 2023. The response was sharply divided. Campaigns led by the Free Speech Union and Voices for Freedom warned that the proposals threatened free expression and democratic accountability. Industry bodies such as InternetNZ, NZ On Air, NZTech, and major telecommunications firms expressed cautious support for the stated objectives.
Critics highlighted three flaws:
• The DIA’s overreach and lack of clarity about purpose;
• The futility of enforcing domestic law on global platforms; and
• A troubling suggestion that content undermining “trust in public institutions” might be targeted — a formulation perilously close to viewpoint regulation.
The history of the project, including the various documents, papers, and Cabinet Papers, can be found here.
6. Collapse and Retreat (2024)
By early 2024, the project was faltering. On 29 April 2024, the DIA released a summary report concluding the review. Internal Affairs Minister Brooke van Velden announced that the initiative was not a ministerial priority.
She observed, pointedly, that illegal content was already policed and that concepts such as “harm” and “emotional wellbeing” were subjective. The three-year project ended without legislation — a quiet burial of an ambitious but overreaching scheme.
Yet, as every student of bureaucracy knows, policy ideas rarely die; they hibernate. The Grail had not been lost — only set aside for the next expedition.
Aftershocks and Revival (2024–2025)
The embers of the abandoned review soon glowed again. In June 2024, the DIA upgraded its DCEFS web filter by integrating the Internet Watch Foundation database, expanding the blocklist from around 700 URLs to roughly 30,000, refreshed daily. The new system, updated by artificial intelligence, was justified as a child-protection measure. In practice, it normalised algorithmic, real-time filtering at the national level.
In May 2025, National MP Catherine Wedd introduced the Social Media Age-Restricted Users Bill, modelled on Australian law. It would require platforms to verify that users were at least 16 years old before granting them access.
The proposal, supported by several advocacy groups and cautiously by the Prime Minister, Christopher Luxon, extended the logic of State intervention from content to access. The Education and Workforce Committee began hearings later that year. If enacted, it would mark the next tightening of the regulatory web — “Safer Online Services Lite,” as I called it.
Meanwhile, the Government launched a Media Reform Consultation proposing new requirements for local-content prominence on smart TVs, quotas for New Zealand programming, and increased accessibility standards.
Buried within the February 2025 Discussion Document was the most consequential idea: the modernisation of professional-media regulation.
The proposal acknowledged that the broadcasting standards regime, created in the late 1980s, no longer matched the realities of digital consumption. It suggested extending the Broadcasting Act framework to cover all professional media, with the BSA (or its successor) shifting from complaint resolution to ensuring “positive system-level outcomes.” In plain terms, this meant a centralised regulator for all content that resembled journalism or broadcasting — regardless of medium.
The proposals contain the following interesting comments:
“The Broadcasting Act established the broadcasting standards regime. This includes programme standards (including classifications) and codes of practice, processes for making and dealing with audience complaints about broadcast content, and the Broadcasting Standards Authority (the BSA) to oversee the regime independently from government. The regime also requires radio and TV broadcasters with more than $500,000 annual revenue to pay a levy to support the BSA’s operations.
In the late 1980s, the broadcasting standards regime was designed to help ensure media content met accepted industry principles and reflected community values. However, as the regime is framed around broadcasting, it only covers linear TV and radio content – which New Zealanders are engaging with less and less as online and streaming platforms become increasingly the source of choice for media content.”
The document goes on to state:
“The proposal is to modernise the broadcasting standards regime to cover all professional media operating in New Zealand, not just broadcasters. The role of the regulator (currently performed by the BSA) would be revised, with more of a focus on ensuring positive system-level outcomes and less of a role in resolving audience complaints about media content.”
It would seem that the BSA has overlooked this document and its contents, which can be found here. The full Discussion Document, released in February 2025, can be found here.
Submissions closed later in 2025; the matter now rests with Minister Paul Goldsmith.
The BSA, however, did not wait for political direction. It moved first.
The Latest Iteration and the Pattern Revealed
The BSA’s recent assumption of jurisdiction over internet-based platforms is not an isolated development. It is the latest dot in a pattern stretching back more than a decade — a continuation of the 2019 claim that online “radio” and “TV” fall within its remit. The timing, so close to the 2025 discussion document, is unlikely to be coincidence.
Perhaps the BSA is attempting to hasten governmental decision-making by creating a fait accompli. Or perhaps it is simply following the trajectory established since 2019, re-animating old policy under the guise of interpretation.
The decision, labelled “preliminary” and “interlocutory,” nonetheless carries symbolic weight: it cloaks a political aspiration in quasi-judicial authority.
Were I to suggest that this forms part of an orchestrated plan to extend regulatory control over Internet content, some would call me a conspiracy theorist.
Yet the evidential trail speaks for itself. Each initiative — the Christchurch Call, DIA takedown powers, filtering systems, the Safer Online Services review, the Media Reform consultation, and now the BSA’s assertion — forms a coherent sequence.
The same agencies recur; the same language recycles: safety, harm, trust, resilience. The metaphors change, but the objective remains. The Grail — complete control of online content under the banner of safety — glimmers ever closer.
If the Safer Online Services proposal had proceeded, even this modest Substack, with sufficient readership, might have qualified for regulation. The framework defined “regulated platforms” by audience size rather than by function. A platform with 100,000 annual visitors or 25,000 account-holders would fall within scope. By that measure, countless independent writers and commentators could have been drawn under bureaucratic supervision.
When that project was “deep-sixed” in 2024, it seemed the quest had ended. But by 2025, through the BSA’s revived claims and the new Media Reform agenda, the same ideas resurfaced. The proposals for content regulation, discreetly buried within Part 4 of the reform paper, were quite literally hiding in plain sight.
Given the number of coincidences — the alignment of timelines, the repetition of actors, the persistence of rhetoric — it is improbable that these events are disconnected. They are constituent parts of a larger design: a long, determined pursuit of the digital Grail.
And so the quest continues. The Castle of Corbenic, where the Grail is said to rest, seems suddenly closer than ever.