Regulating Speech by Proxy
Will the Department of Internal Affairs (DIA) Safer Online Services proposals attract the attention of the New Zealand Bill of Rights Act?
Update: The US Supreme Court will hear an appeal against the decision in Missouri v Biden
Introduction
This article looks at the issue of whether or not Internet platforms will have to apply the provisions of the New Zealand Bill of Rights Act 1990 (NZBORA) in decisions they may be required to make about content moderation in the event that the Department of Internal Affairs Safer Online Services proposals become a reality.
The article starts with an overview of the way in which Online platforms were seen as a proxy for Government censorship in the case of Missouri v Biden. Although that case is not directly applicable, as will be demonstrated it illustrates when State interference with and coercion of private organisations means that those organisations become a proxy for, or an arm of, the State.
I proceed to outline briefly the relevant proposals under the DIA Safer Online Services Discussion Paper. For the purposes of this discussion I shall assume that those proposals will be adopted.
I then look at the way in which the provisions of NZBORA may apply to organisations by virtue of the application of section 3(b) NZBORA, and conclude with a discussion of whether or not the proposals cross the threshold of section 3(b) and render Online Platforms amenable to the application of the provisions of NZBORA.
Missouri v Biden
The case of Missouri v Biden No 23-30445 (5th Cir, 8 September 2023) provides an insight into how interference with freedom of expression rights by private operators can become a form of State interference with those rights.
For the last few years—at least since the 2020 presidential transition—a group of federal officials had been in regular contact with nearly every major American social-media company about the spread of “misinformation” on their platforms.
In their concern, those officials—hailing from the White House, the CDC, the FBI, and a few other agencies—urged the platforms to remove disfavored content and accounts from their sites. And, the platforms seemingly complied. They gave the officials access to an expedited reporting system, downgraded or removed flagged posts, and deplatformed users. The platforms also changed their internal policies to capture more flagged content and sent steady reports on their moderation activities to the officials.
The plaintiffs in Missouri v Biden were private citizens as well as the States of Missouri and Louisiana. The plaintiffs had material they had published removed or downgraded by platforms. The argument was that the Government coerced, threatened and pressured social media platforms to censor them. Although the platforms stifled their speech, it was argued that the government officials were pulling the strings.
The officials argued that they only “sought to mitigate the hazards of online misinformation” by “calling attention to content” that violated the “platforms’ policies,” a form of permissible government speech.
The lower Court agreed with the plaintiffs and granted preliminary injunctive relief. The matter went on appeal to the 5th Circuit Court of Appeals, which upheld the injunction in part.
What is of particular interest is the way in which the Court approached the activities of the various officials in obtaining the co-operation of the platforms.
The First Amendment protection of freedom of speech does not apply to private actors; it operates only to prevent Government interference with the freedom. But there may be occasions where Government interaction with a private organization to interfere with free speech rights may be attributable to the Government. In such a case the Court may intervene.
The facts in the case make it clear that there were numerous communications between Government agencies and social media platforms where the plaintiffs had posted material.
The White House and the office of the Surgeon-General communicated with the social media companies. They requested that flagged content be taken down and later monitored the platforms' moderation policies.
Officials also demanded details on Facebook’s internal policies at least twelve times, including to ask what was being done to curtail “dubious” or “sensational” content, what “interventions” were being taken, what “measurable impact” the platforms’ moderation policies had, “how much content [was] being demoted,” and what “misinformation” was not being downgraded.
From the beginning the platforms co-operated. The officials were often unsatisfied. They continued to press the platforms on the topic of misinformation throughout 2021, especially when they seemingly veered from the officials’ preferred course. When Facebook did not take a prominent pundit’s “popular post” down, a White House official asked “what good is” the reporting system, and signed off with “last time we did this dance, it ended in an insurrection.”
To ensure that problematic content was being taken down, the officials—via meetings and emails—pressed the platforms to change their moderation policies. The platforms apparently yielded. They not only continued to take down content the officials flagged, and provided requested data to the White House, but they also changed their moderation policies expressly in accordance with the officials’ wishes.
Even when the platforms did not expressly adopt changes, though, they removed flagged content that did not run afoul of their policies.
Still, White House officials felt the platforms were not doing enough. One told a platform that it “remain[ed] concerned” that the platform was encouraging vaccine hesitancy, which was a “concern that is shared at the highest (and I mean highest) levels of the [White House].” So, the official asked for the platform’s “road map to improvement” and said it would be “good to have from you all . . . a deeper dive on [misinformation] reduction.”
The officials’ frustrations reached a boiling point in July of 2021. That month, in a joint press conference with the Surgeon General’s office, the White House Press Secretary said that the White House “expect[s] more” from the platforms, including that they “consistently take action against misinformation” and “operate with greater transparency and accountability.” Specifically, the White House called on platforms to adopt “proposed changes,” including limiting the reach of “misinformation,” creating a “robust enforcement strategy,” taking “faster action” because they were taking “too long,” and amplifying “quality information.”
The Surgeon-General labeled social-media-based misinformation an “urgent public health threat[]” that was “literally costing . . . lives.” He asked social-media companies to “operate with greater transparency and accountability,” “monitor misinformation more closely,” and “consistently take action against misinformation super-spreaders on their platforms.”
The platforms responded with total compliance. They capitulated to the officials' demands, changed their internal policies, began taking down content and deplatforming users they had not previously targeted, and continued to amplify or assist the officials' activities, such as a vaccine "booster" campaign.
Accounts run by state officials were often subject to censorship, too. For example, one platform removed a post by the Louisiana Department of Justice—which depicted citizens testifying against public policies regarding COVID—for violating its “medical misinformation policy” by “spread[ing] medical misinformation.” In another instance, a platform took down a Louisiana state legislator’s post discussing COVID vaccines.
The actions by officials had a chilling effect on some commentators, who had to be careful with the content they posted to avoid being banned. This chilling of their First Amendment rights was a constitutionally sufficient injury.
It was noted that although the platforms had modified some of their policies regarding COVID, they continued to enforce a robust misinformation policy. However, the complaint was not with the platforms' policies but with Government interference in the independent application of those policies.
The officials’ attorney conceded at oral argument that they continue to be in regular contact with social-media platforms concerning content-moderation issues today.
Was the Harm Traceable to the Actions of Government Officials?
The argument for the Government was that the censorship of the Individual Plaintiffs was the result of independent decisions by the social media companies. There was no causal link.
The platforms' content moderation policies were not challenged. The issue was whether the censorship could be traced to Government-coerced enforcement of those policies.
There was evidence that the censorship that was undertaken aligned with Government preferred viewpoints and the social-media platforms’ censorship decisions were likely attributable at least in part to the platforms’ reluctance to risk the adverse legal or regulatory consequences that could result from a refusal to adhere to the government’s directives.
The Court pointed to a distinction between censorship as a result of social-media platforms’ independent application of their content-moderation policies, on the one hand, and censorship as a result of social-media platforms’ government-coerced application of those policies, on the other. The focus of the plaintiffs was upon the latter proposition.
The Court then went on to consider the availability of injunctive relief.
The starting point was that under the First Amendment the government cannot abridge free speech. A private party, on the other hand, bears no such burden—it is "not ordinarily constrained by the First Amendment."
That changes when a private party is coerced or significantly encouraged by the government to such a degree that its "choice" - which if made by the government would be unconstitutional - "must in law be deemed to be that of the State." This is known as the "close nexus test".
Elements of the Close Nexus Test
1. What is the conduct of which the plaintiff complains?
2. Did the government sufficiently induce that conduct? Not just any coaxing will suffice - the government is entitled to advocate for and defend its policies. The line lies between persuasion on the one hand and coercion or significant encouragement on the other.
The Court discussed the nature of encouragement and emphasized that there must be a close nexus that renders the Government responsible for the decision of the private party.
The clear thread for “encouragement” in the US caselaw is that there must be some exercise of active (not passive), meaningful (impactful enough to render them responsible) control on the part of the government over the private party’s challenged decision.
Whether that is
(1) entanglement in a party’s independent decision-making or
(2) direct involvement in carrying out the decision itself,
the government must encourage the decision to such a degree that the Court can fairly say it was the state’s choice, not the private actor’s.
If the government compels a private party’s decision the result will be considered state action – that is clear coercion.
The Court referred to authority from the Second Circuit which starts with the premise that a government message is coercive—as opposed to persuasive—if it “can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official’s request.”
To distinguish such “attempts to coerce” from “attempts to convince,” courts will look at four factors, namely
(1) the speaker’s “word choice and tone”;
(2) “whether the speech was perceived as a threat”;
(3) “the existence of regulatory authority”; and, perhaps most importantly,
(4) whether the speech "refers to adverse consequences".
As for perception, it is not necessary that the recipient “admit that it bowed to government pressure,” nor is it even “necessary for the recipient to have complied with the official’s request”—“a credible threat may violate the First Amendment even if ‘the victim ignores it, and the threatener folds his tent.’”
The finding as far as the White House and the Surgeon-General were concerned was that there was coercion - urgent, uncompromising demands to moderate content.
Officials made express threats and, at the very least, rested upon the inherent authority of the President’s office. The officials made inflammatory accusations, such as saying that the platforms were “poison[ing]” the public, and “killing people.” The platforms were told they needed to take greater responsibility and action. Then, they followed their statements with threats of “fundamental reforms” like regulatory changes and increased enforcement actions that would ensure the platforms were “held accountable.”
The officials also significantly encouraged the platforms to moderate content by exercising active, meaningful control over those decisions. And, the officials’ campaign succeeded. The platforms, in capitulation to state-sponsored pressure, changed their moderation policies.
In all the circumstances, therefore, it was clear that the decisions to remove or moderate content flowed from the coercive pressure that was brought to bear by the Government in such circumstances that the decision to remove or moderate the content could be attributed to the State rather than to the policies of the platform.
It should be noted that the platforms involved had their own content moderation policies which they could enforce. These formed part of the contractual relationship that the platforms had with their users or subscribers, and had been settled by the platforms themselves without interference from the State.
I now will turn to the proposals of the Department of Internal Affairs made in its Safer Online Services Discussion Paper. An essential part of their proposals involves the settling of Codes of Compliance in which a Government department may be involved to a greater or lesser degree. And these Codes of Compliance will contain limits upon the nature of content that may be made available by social media platforms.
The Safer Online Services Proposals
In summary, the Safer Online Services proposals are these.
Online and other media platforms would be brought into one cohesive framework with consistent safety standards. DIA wants to make sure that platforms are “safe” for users, but it does not want to over-regulate them.
This will be done by creating codes of practice that set out specific “safety” obligations for larger or riskier platforms. These codes will be enforceable and approved by an independent regulator. The codes will cover things like how platforms should respond to complaints and what information they should provide to users.
Because the platforms are built upon the Internet, their primary purpose is communicative. What is proposed is that the Codes of Practice will ensure that the communications environment is a "safe" one - whatever that means. In essence it is to ensure monitoring and control of content.
By a code, or code of practice, is meant a set of standards or requirements that platforms would have to meet to be responsible providers of access to digital and traditional media content.
The new independent regulator would ostensibly be separate from the government and would promote safety on online and media platforms. This new regulator would work with platforms to create a "safer" environment and would require larger or high-risk platforms to comply with codes of practice.
The codes would set out the standards and processes platforms need to manage risks to consumer safety, such as protecting children and dealing with illegal material.
Platforms would have to have operating policies in place to meet these requirements but will have flexibility to decide how to achieve them. Industry groups will develop the codes with input from and approval by the regulator. This approach leaves editorial decision-making in the hands of platforms while ensuring users have greater transparency and protection.
The new regulator would make sure social media platforms follow codes to keep people “safe”. Media services like TV and radio broadcasters would also need to follow new codes tailored to their industry. The regulator would have the power to check information from platforms to make sure they follow the codes and could issue penalties for serious failures of compliance. This would ensure everyone is playing by the same rules and that consumer safety is prioritised.
The DIA acknowledges that there will probably be some deliberate non-compliance by smaller players, but it expects the biggest platforms to participate willingly - including the biggest social media companies.
The regulator would also have powers to require illegal material to be removed quickly from public availability in New Zealand. These powers exist already for objectionable material. The DIA is proposing that the regulator should also have powers to deal with content that is illegal for other reasons, such as harassment or threats to kill.
As far as the concept of “safety” is concerned, that word is not defined. Rather the DIA casts the purpose of the reforms in this way:
“As a result, New Zealanders will have a better online experience. Unintentional exposure to the most harmful content on online platforms should be far less common.
New Zealanders will be provided with more relevant information on risks, keeping them better informed about the content they choose to consume. It will be easier for New Zealand consumers to get help or make a complaint, when this becomes necessary.”
“The objective of this review of New Zealand’s regulatory system for media and online platforms (the Review) is to enhance protection for New Zealanders by reducing their exposure to harmful content, regardless of delivery method. The aim is to provide better protection for vulnerable groups and achieve better consumer protection for all New Zealanders.
To accomplish this objective, the system must incorporate “safety” measures into platforms’ management systems and processes. Transparency and proportionality are critical, and decisions must align with New Zealand’s democratic values and human rights.
The government’s role should be limited to dealing with illegal material, and a regulator will take a more proactive approach to consumer protection."
As is so often the case, the devil in these proposals lies in the detail.
As far as the Codes are concerned it is anticipated that industry groups will develop their own Codes. That is all well and good. However, the Codes must be approved by the Regulator. So the final say rests with a Government appointed official despite the qualification of independence.
However, if a platform or group of platforms cannot settle upon a Code the regulator will draw one up. This clearly puts control of the Code not in the hands of the platform but in the hands of the regulator.
Compliance and penalties – Government Involvement
Compliance and the imposition of penalties will be in the hands of the Regulator.
The Safer Online Services Discussion Document is contradictory on this point. At Page 22 Para 26 it states:
“The regulator would not have any powers over individual content creators who use platforms to share legal content and would not be involved in moderating individual pieces of legal content.”
However, at Page 30 para 36 it states:
“In the new framework, authors, creators, and publishers of content would need to comply with the requirements of the platforms they use – but are not directly subject to the regulator (unless the publisher is also a platform).
Failure to comply with the requirements could lead to authors, creators, and publishers being suspended, removed, or prevented from accessing the platforms’ services. They may also be blacklisted if they show repeated harmful behaviour.”
I have broken the paragraph into two segments. It would be splitting hairs to suggest that the word "directly" in para 36 means that the statement in para 26 is still valid. Clearly non-compliance with the requirements of a platform (and the Code to which it is subject) would result in a consequence imposed by the Regulator.
No procedural safeguards are put in place. There is no detail of the process that would be employed to establish non-compliance nor, in the event that non-compliance is proved, to determine an appropriate penalty.
Interference with a right of freedom of expression should not be at the whim of a regulator but should be the subject of a proper judicial process.
Another difficulty arises with the use in para 26 of the term “legal content”. If content is legal it cannot be the subject of a sanction as proposed in para 36.
What the assertion in para 36 really means is that content could become other than “legal” (that is “unlawful”) simply because it did not comply with the terms and conditions of the platform. Those terms and conditions presumably would include the applicable Code although that is not made clear.
Thus the Regulator, in addition to “soft” lawmaking powers inherent in the creation and approval of Codes would also have the power to declare that which was “legal” to be “other than legal” (or unlawful).
It could be argued that the Regulator, like the Chief Censor under the Films, Videos and Publications Classification Act 1993, will be independent and will not be subject to persuasion or influence by the Government of the day. Yet there is an example of an absence of independence on the part of the Chief Censor of the time in the release of the "Edge of the Infodemic - Challenging Misinformation in Aotearoa" paper, which was clearly supportive of the Information Wars being waged around the Government's COVID-19 information strategy.
Whilst it is unlikely that Government officials in New Zealand would be as blatant in their directive, coercive and persuasive activities as they were in Missouri v Biden, there can be no doubt that the content, application and enforcement of the Codes would be at the direction or behest of the Regulator. As a member of a (proposed) statutorily created entity, the Regulator would be a person performing a public function, power, or duty conferred or imposed on that person by or pursuant to law under section 3 of the New Zealand Bill of Rights Act 1990.
This means that, like the Chief Censor under the FVPCA, the Regulator would have to consider and apply the provisions of the New Zealand Bill of Rights Act 1990.
Similarly, I would suggest that the various platforms subject to the oversight of the Regulator would be bodies performing a public function imposed on them under section 3. In such a case, they too would have to consider and apply the provisions of NZBORA.
In the next section I shall explain why the platforms would fall within section 3 NZBORA. Although the discussion differs from the approach in Missouri v Biden, the consequence is ultimately the same: in Missouri the question was the applicability of the First Amendment and whether it was breached; in New Zealand the question is the applicability of NZBORA and its role in determining whether or not there can be interference with content present on platforms.
Applicability of New Zealand Bill of Rights Act
The NZBORA applies only to acts that are done by the legislative, executive or judicial branches of the New Zealand Government. Its application may extend, as I have suggested above, to a person or body in the performance of a public function conferred on that person or body by or pursuant to law.
Section 3 - Recent Application
Legislative, executive or judicial acts are easily identified. The difficulty arises when one of those branches is not readily identifiable. What are the circumstances necessary to determine whether the threshold of performing a public function by or pursuant to law has been crossed?
The recent Supreme Court decision in Moncrief-Spittle v Regional Facilities Auckland Limited [2022] NZSC 138; [2022] 1 NZLR 459 is of assistance.
The facts in the case were these.
Regional Facilities Auckland Ltd (RFAL) was a "Council-Controlled Organisation" (CCO) owned by Auckland Council. One of its main functions was to operate venues owned by Auckland Council's predecessor authorities. Axiomatic Media Pty Ltd, an event promoter based in Australia, contacted RFAL about hiring a venue for a speaker presentation. Axiomatic gave RFAL the names of the planned speakers (Lauren Southern and Stefan Molyneux) but did not tell RFAL that they had attracted controversy on tour in Australia. Nor did Axiomatic tell RFAL of the precautions it had taken to prevent disruption of its event, which included not revealing the venue until the day before the event.
RFAL sent Axiomatic a standard-form venue hire agreement. This agreement required Axiomatic to prepare a health and safety plan and gave RFAL the right to cancel the agreement if there were risk of danger or injury to any person or of damage to any property. RFAL received complaints about the proposed event and a protest group issued a press statement announcing that it would blockade the event.
RFAL obtained further information and decided to cancel the event.
The plaintiffs, Mr Moncrief-Spittle and Dr David Cumin, who had purchased tickets to the event, brought judicial review proceedings against RFAL.
The High Court found that the matter was a contractual one involving a commercial property owner and dismissed the application for judicial review. The High Court held that RFAL had not been exercising a public power and the decision was not susceptible to judicial review or challenge under the New Zealand Bill of Rights Act 1990.
There was an appeal to the Court of Appeal which disagreed with the High Court and held that the decision of RFAL was amenable to judicial review and that NZBORA applied. However the Court of Appeal was of the view that the decision to cancel was reasonable.
Mr Moncrief-Spittle and Dr Cumin appealed to the Supreme Court. On the issue of whether or not RFAL fell within the scope of section 3 NZBORA, the Supreme Court held that RFAL effectively stood in the shoes of Auckland Council in providing a service intended for the social well-being of the community. Thus there was a governmental aspect to its functions.
The venue was publicly owned property available for public hire for expressive activities. RFAL was a public body, had an important role in providing facilities for expressive activities, had some public funding, did not exist for private profit and was subject to governance by Auckland Council.
The relevant functions of the Council had effectively been devolved to RFAL so that when cancelling the contract with Axiomatic, it was exercising functions which otherwise would have been Council functions. The statutory scheme provided something of a “governmental” flavour to RFAL’s decisions.
The fact that the decision was governed by and effected through contractual arrangements was not determinative.
In the course of its decision the Court considered the approach to determining the applicability of section 3(b) NZBORA which had been set out by Randerson J in Ransfield v Radio Network Ltd [2005] 1 NZLR 233. Randerson J's analysis is described as the Ransfield approach or the Ransfield test, and I shall now turn to that case.
The Ransfield Test
The background in Ransfield was this.
The plaintiffs, Areta Ransfield and Patricia Tui McLeod, alleged that on or about 7 February 2002 they were banned from participation in talkback radio programmes operated by the first defendants, The Radio Network Ltd, Canwest NZ Radio Holdings Ltd and Uma Broadcasting Ltd. They alleged a breach of their rights under NZBORA and particularly a breach of their right to freedom of expression under section 14 of that Act. They sought damages and injunctive relief.
The broadcasters responded with an application to strike out the proceedings on the grounds that no tenable causes of action had been pleaded.
The principal issue was whether or not the provisions of NZBORA applied to the broadcasters and whether they were performing a public function, power, or duty conferred or imposed by law within the meaning of s 3(b) of the NZBORA.
It was argued by Mr Ransfield that the state had control over the radio stations in two principal ways. The first was in relation to the issue of a licence and the second in controlling the registration of the companies under the Companies Act 1993. In accordance with what Mr Ransfield submitted were the principles of agency, he submitted that if the state could not ban people from taking part in talkback radio, then the radio stations, as the agents of the state, could not do so either.
The Court considered the regulation of commercial radio stations in New Zealand and the scope of a licence granted under the Radio Communications Act 1989. It was observed that control of programme standards was exercised through the Broadcasting Act and the responsibilities of a broadcaster to maintain programme standards under that Act were considered.
The first issue that the Court considered was whether there was a statutory duty requiring radio stations to permit the plaintiffs access to talkback radio. The answer was no. There was no statutory duty requiring radio stations to permit the plaintiffs access to talkback radio, nor any corresponding statutory duty not to ban them from taking part. There was nothing in the terms or conditions of the spectrum licences issued to the radio stations which required them to provide programmes of any particular type or which would prevent a ban on access to talkback radio.
The second issue - and the important one for the purposes of this discussion - was whether the radio stations were performing a public function, power, or duty within the meaning of s 3(b) of the NZBORA - in short, whether the provisions of NZBORA were engaged.
If NZBORA applied to the radio stations then prima facie, the plaintiffs’ right to freedom of expression was abridged or diminished by their banning from talkback radio.
They were still free to express their views, but not via the medium of talkback radio. Their audience was thereby restricted.
But before reaching that point, and whether the right could be subject to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society under section 5 NZBORA, it had to be established that NZBORA applied to broadcasters in the first place.
The Court observed that there was little doubt that the broadcasters were exercising a function conferred by law in broadcasting programmes. They had to have licences. Without licences they would be broadcasting unlawfully. Furthermore, there was a duty to conform to programme standards under the Broadcasting Act.
The question then became whether or not the broadcasters were exercising a public function or power. Depending upon the context a body could be exercising a private or a public power. One factor was that a function must be “governmental” in nature to fall within section 3(b) NZBORA as a public function, power or duty.
Randerson J considered the authorities and formulated the following criteria:
(a) The fact that the entity in question is performing a function which benefits the public is not determinative. If it were, anyone delivering goods or services to the public under licence or other authority conferred by law, would fall within the section. That could not have been intended.
(b) Whether the function, power, or duty is carried out in public is immaterial. A public function, power, or duty under s 3(b) may be performed in private.
(c) Whether the entity is amenable to judicial review is not necessarily decisive and some care needs to be taken in applying decisions from that context for the reasons I have set out.
(d) The primary focus of inquiry under s 3(b) is on the function, power, or duty rather than on the nature of the entity at issue. Nevertheless, the nature of the entity may be a relevant factor in determining whether the function, power, or duty being exercised is a public one for the purposes of s 3(b).
(e) A person or body may have a number of functions, powers, or duties, some of which may be public and some private. It is essential to focus on the particular function, power, or duty at issue.
(f) Given the many and varied mechanisms modern governments utilise to carry out their diverse functions, no single test of universal application can be adopted to determine what is a public function, duty, or power under s 3(b). In a broad sense, the issue is how closely the particular function, power, or duty is connected to or identified with the exercise of the powers and responsibilities of the state. Is it “governmental” in nature or is it essentially of a private character?
(g) Non-exclusive indicia may include:
(i) whether the entity concerned is publicly owned or is privately owned and exists for private profit;
(ii) whether the source of the function, power, or duty is statutory;
(iii) the extent and nature of any governmental control of the entity (the consideration of which will ordinarily involve the careful examination of a statutory scheme);
(iv) whether and to what extent the entity is publicly funded in respect of the function in question;
(v) whether the entity is effectively standing in the shoes of the government in exercising the function, power, or duty;
(vi) whether the function, power, or duty is being exercised in the broader public interest as distinct from merely being of benefit to the public;
(vii) whether coercive powers analogous to those of the state are conferred;
(viii) whether the entity is exercising functions, powers, or duties which affect the rights, powers, privileges, immunities, duties, or liabilities of any person (drawing by analogy on part of the definition of statutory power under s 3 of the Judicature Amendment Act 1972);
(ix) whether the entity is exercising extensive or monopolistic powers; and
(x) whether the entity is democratically accountable through the ballot box or in other ways.
In the particular case Randerson J concluded that the broadcasters were performing a private function. He drew a distinction between a state broadcaster like Radio NZ on the one hand and private commercial broadcasters on the other. The State funded RNZ but did not fund private radio, nor did it supervise the content of talkback programmes broadcast by private radio. The provision of talkback radio could not be described as a governmental function. In the radio industry, governmental functions were pursued through public radio.
Thus he held that NZBORA did not apply.
It is important to note that although the Supreme Court in Moncrief-Spittle approved the Ransfield test, it observed that the indicia in Ransfield should not be treated as the sole determinant, as that case itself made clear. The Supreme Court noted that it will always be necessary to step back and ask whether, overall, s 3(b) is engaged. The emphasis must be on the nature of the function rather than the body performing it.
In that regard the Supreme Court illustrated the point by reference to the Court of Appeal decision in Low Volume Vehicle Technical Assoc Inc v Brett [2019] NZCA 67; [2019] 2 NZLR 808. In that case the Court of Appeal drew from Ransfield that in a broad sense the issue is how closely the particular function, power or duty is connected to or identified with the exercise of the powers and responsibilities of the State.
Differing Approaches - Missouri v Biden and Ransfield v Radio Network Ltd
It is frequently observed in New Zealand Courts that little assistance can be obtained from United States decisions in the application of the rights guaranteed under the US Constitution. Certainly in Moncrief-Spittle that observation was made in dismissing the applicability of an argument about the availability of the “heckler’s veto”.
What is of assistance is the way in which the different jurisdictions attribute governmental actions to private organisations, thereby engaging the First Amendment in the US or NZBORA in New Zealand.
In Missouri the issue was about the level of Government interference in the activities of the private social media platforms. The censorship that was imposed by the social media platforms was attributable at least in part to the platforms’ reluctance to risk adverse legal or regulatory consequences that might result from a refusal to adhere to the Government’s directives. The federal officials ran afoul of the First Amendment by coercing and significantly encouraging social-media platforms to censor disfavored speech including by threats of adverse government action like antitrust enforcement and legal reforms. In such a situation the actions of the private party in law are deemed to be that of the State - what is known as the close nexus test.
This is quite different from the test that appears in section 3 NZBORA. That section sets out two avenues whereby the NZBORA will be engaged. The Act applies to the legislative, executive and judicial branches. That is clear and applies to direct governmental activity. It then applies to any body in the performance of any public function, power or duty that is conferred or imposed on that body by or pursuant to law.
Although it is tempting to suggest that the use of the word "imposed" could be applicable to threats, coercion or undue influence by governmental agencies, the word "imposed" relates to the function, power or duty and to the fact that the function, power or duty is applied by law. The word "imposed" means that the source of the function, power or duty must be the law.
However, in Moncrief-Spittle the Supreme Court looked at the fact that RFAL “stood in the shoes” of the Council, that the premises that it supervised had a public character to them, that it had some public funding, did not exist for private profit and was subject to governance by Auckland Council and was exercising powers that would otherwise have been exercised by the Council.
One factor that was not raised in any of the hearings in Moncrief-Spittle was that the Mayor of Auckland, Mr Phil Goff, sent a clear message that Lauren Southern and Stefan Molyneux were not welcome at Council venues. He said:
"I just think we've got no obligation at all - in a city that's multicultural, inclusive, embraces people of all faiths and ethnicities - to provide a venue for hate speech by people that want to abuse and insult others, either their faith or their ethnicity,"
Whether or not Mr Goff’s strong pronouncements influenced the decision of RFAL is difficult to determine. As an editorial in the New Zealand Herald noted:
Mr Goff insisted on Friday that the decision to bar them from using any council-owned venue had been made by Regional Facilities Auckland, but that takes some believing.
Since when did council minions start making decisions regarding who is politically acceptable and who isn't? The organisers claimed on Friday that the facilities managers had not raised any concerns when the booking was made….
…it was Goff who announced the decision (by Twitter, very Donald Trumpish), not the faceless Regional Facilities Auckland. And whoever made it, he was more than comfortable with it. Auckland, he said, was a multicultural, inclusive, tolerant city, and Southern and Molyneux were not welcome.
Although RFAL was found to "stand in the shoes" of the Council, one wonders what weight the Courts would have placed on the pronouncements of the Mayor in a case where the freedom of expression rights under NZBORA were engaged, and whether the subsequent decision by RFAL to cancel the event was truly reasonably justified or was a bowing to pressure from a powerful political figure.
Although it is doubtful that the approach in Missouri v Biden attributing Government action to private bodies by way of the close nexus test could apply in New Zealand, certainly, in my view, the exercise of political power by way of public statement could well have relevance to a consideration of whether or not there were reasonable grounds to limit the exercise of freedom of expression.
The discussion now turns to whether or not the provisions of NZBORA are applicable to the DIA Safer Online Services proposals.
NZBORA and the Safer Online Services Proposals
For the purposes of this discussion I am assuming that the proposals under the Safer Online Services Discussion Paper are passed into law.
Clearly the Independent Regulator would be subject to the provisions of NZBORA, because the Regulator would be exercising a public function conferred by law in much the same way as the Chief Censor and the Classification Office under the Films, Videos and Publications Classification Act are subject to NZBORA. The Independent Regulator is a substitute for the Chief Censor under the Safer Online Services proposals. There is ample authority to support the proposition that the Chief Censor, and therefore the Independent Regulator, is subject to the provisions of NZBORA.
This would mean that in considering the provisions of the Codes of Conduct, the Regulator would have to be careful to ensure that the provisions of NZBORA received proper attention. That is clear from the decision in Moonen v Film and Literature Board of Review [2000] 2 NZLR 9 (CA).
In Moonen the Court of Appeal proposed the following approach which it thought could be helpful, but acknowledged other approaches could also be used and could lead to the same result.
Once the scope of the relevant right or freedom has been determined a court should follow four steps.
First, it should identify the different interpretations of the words of the statute. If only one meaning is open it must be adopted.
Second, where there is more than one possible interpretation it must identify that meaning which least infringes the right, as it is this meaning that s 6 aided by s 5 requires a court to adopt.
Third, it must then identify the extent to which that meaning limits freedom of expression. In doing so any court must give careful consideration to the extent to which that limitation can be demonstrably justified in a free and democratic society in terms of s 5.
The final step arises after a court has made the necessary determination under s 5. The court must indicate whether the limitation is justified. If the limitation is not justified there is an inconsistency with s 5 and the court must declare this to be so.
Given that Internet based platforms are communications platforms it is clear that the freedom of expression would be engaged and the Moonen analysis would have to be undertaken in the development of the Codes. This would include ensuring that if there are to be any interferences with the freedom of expression such interferences should result in the least infringement of the right and the Codes should reflect that decision. Clearly something more precise than “safety” would have to be formulated.
But there is another right which may be engaged and it is the freedom of association guaranteed under section 17 of NZBORA. This is by no means a clear issue. The Supreme Court in Moncrief-Spittle suggested that the freedom of association was not engaged in that the right dealt with formalised relationships such as clubs, unions or other forms of common (lawful) purpose gatherings of people. The right was directed towards the right to form or participate in an organisation, to act collectively, rather than simply to associate as individuals.
That is very much an interpretation that relies upon a traditional kinetic paradigm approach to the concept of association and ignores the fact that association within the Digital Paradigm involves a lot more than a weekly meeting at a club room for a common purpose. It is more about relationships and the way that people group in online communities be it by joining Facebook or LinkedIn Groups. In the Digital Paradigm the contacts and associations that one enjoys can be as strong as those formed in the kinetic paradigm. Individuals can belong to a group, act collectively and participate in a group just as meaningfully as they might in a more formalised kinetic structure. My view is that this right under NZBORA may need to be reassessed.
The next question is whether or not the platforms themselves would have to apply the provisions of NZBORA.
The starting point is that these are private organisations. The Safer Online Services proposal is that these platforms will be subject to State (Regulator) approved Codes of Conduct.
There is little difference between this proposal and the way in which broadcasters were subject to programme standards under the Broadcasting Act in Ransfield.
It could not be argued that the Internet platforms were “standing in the shoes” of any Government agency as was the case with RFAL in Moncrief-Spittle.
The fact that they were providing services for the public would not be determinative. (Ransfield para [69] (a))
It is not material that their functions are carried out in private or in public. (Ransfield para [69] (b))
It is unlikely that under the present proposals the Internet platforms would be amenable to judicial review. (Ransfield para [69] (c))
What is the nature of the function or power being exercised? The nature of the entity may be taken into account but primarily the function of the platform is to provide communication services for subscribers. In this respect the function or power differs from that of the broadcasters in Ransfield where the opportunity to participate in talk-back was not available to the world. Although Internet platforms may have moderative powers by and large they are open to all. In this respect the function of the service provided is more weighted towards a public rather than a private one. (Ransfield para [69] (d) and (e))
How closely is the particular function, power, or duty connected to or identified with the exercise of the powers and responsibilities of the state. Is it “governmental” in nature or is it essentially of a private character? (Ransfield para [69] (f))
The factors that may be taken into account are:
(i) whether the entity concerned is publicly owned or is privately owned and exists for private profit;
Privately owned and exists for private profit.
(ii) whether the source of the function, power, or duty is statutory
Not statutory.
(iii) the extent and nature of any governmental control of the entity (the consideration of which will ordinarily involve the careful examination of a statutory scheme)
At the present time, as proposed in the Safer Online Services materials, there is a level of governmental control in the imposition of Codes of Conduct, in the fact that the Regulator may prescribe a Code of Conduct, and in the fact that the Regulator has enforcement powers where there is non-compliance with a Code. These enforcement powers may apply to the platforms and may require platforms to block or cancel content or the accounts of individual subscribers. Thus there is a level of governmental control which at its least invasive could be termed supervisory and at its most invasive could be termed directive.
(iv) whether and to what extent the entity is publicly funded in respect of the function in question;
Internet platforms are not publicly funded.
(v) whether the entity is effectively standing in the shoes of the government in exercising the function, power, or duty;
As I have earlier suggested, this is not the case under the proposals as they stand. However, should the State or the Regulator engage in conduct akin to that in Missouri v Biden, it is arguable that the platform, by complying, would become a proxy for Government action.
(vi) whether the function, power, or duty is being exercised in the broader public interest as distinct from merely being of benefit to the public;
The Codes and their rationale address a public interest issue (a "safer" online environment) rather than merely being of benefit to the public.
(vii) whether coercive powers analogous to those of the state are conferred;
Clearly the answer is yes in terms of Code enforcement. Should the State apply pressure in the manner demonstrated in Missouri v Biden, the use of that pressure to influence platform decision-making would indicate a level of State control.
(viii) whether the entity is exercising functions, powers, or duties which affect the rights, powers, privileges, immunities, duties, or liabilities of any person (drawing by analogy on part of the definition of statutory power under s 3 of the Judicature Amendment Act 1972);
The answer to this proposition is yes, given that the State-dictated or State-approved Codes will have an impact upon the freedom of expression and association rights of platform subscribers.
(ix) whether the entity is exercising extensive or monopolistic powers;
Some of the Internet platforms number subscribers in the billions and therefore have a significant "market weight". However, because of the principle of "permissionless innovation", any person can bolt a platform onto the Internet.
(x) whether the entity is democratically accountable through the ballot box or in other ways
The answer to this proposition must be no.
As both Ransfield and Moncrief-Spittle made clear, the list of factors above is neither exclusive nor determinative. As far as the Safer Online Services proposals are concerned, it must be remembered that they are directed to the control not so much of platforms as of content. It must also be remembered that the Internet is fundamentally a communications system and that social media platforms enable and facilitate communication. The reality is that for the first time in human history everyone potentially has a voice. The Internet is probably the greatest force for the democratisation of communication ever devised - probably an unintended consequence, but a consequence nevertheless. Given that the Safer Online Services proposals potentially interfere with communication, and from that with the freedoms of expression and association, a careful examination of the applicability of NZBORA to decisions which have the effect of monitoring, moderating or interfering with those freedoms is necessary and critical.
It is difficult at this stage to predict with any certainty whether or not the proposals will mean that platforms may have to consider the provisions of NZBORA in enforcing their Codes. However, the factors which lean towards an affirmative view are as follows:
1. The extent and nature of governmental control as presently proposed, which in my view is extensive.
2. Depending upon the level of interference with platform independence in decision-making, should the State engage in the type of interference demonstrated in Missouri v Biden, it could be argued that the platform was standing in the shoes of the State if it interfered with the content or accessibility of subscribers.
3. What is proposed is in the public interest and consequently carries with it a higher level of State interest.
4. There is provision for coercive powers in the event of non-compliance.
5. The Codes, dictated or approved by the State in the form of the Regulator, have an impact upon the rights of freedom of association and freedom of expression.
Although determining the applicability of section 3(b) NZBORA is not a "box-ticking" exercise, there is certainly a sufficient basis for advancing the proposition that the platforms will be amenable to section 3(b) NZBORA, especially given the communicative nature and democratic importance of the Internet.
Conclusion
This article demonstrates how different "rights based" jurisdictions approach the question of when State interference with private or apparently private organisations may make those organisations amenable to the application of rights-based rules.
In Missouri v Biden, State interference by some agencies was sufficient to justify injunctive relief. It is unclear whether equivalent relief would be available in New Zealand in the event of State interference or non-compliance with NZBORA under the Safer Online Services proposals.
But in the final analysis this article demonstrates that the State and in particular the DIA should proceed with a high level of care in its consideration of the Safer Online Services proposals. Ideally the project should be abandoned. But if it is not the development of the project should be subject to careful scrutiny, and whatever form finally eventuates - if it eventuates - it will be subject to the oversight of the Courts in the application of the principles of the NZBORA.
In closing I should observe that the proposals are an example of Internet Exceptionalism, where conduct on the Internet is treated differently from conduct in the “kinetic” environment. A current example of Internet Exceptionalism may be found in the Harmful Digital Communications Act 2015 where remedies for online speech are available which would not be similarly available in the “kinetic” environment.
In terms of consistency and certainty in the law such a discriminatory approach should be avoided.
Given that the "Independent Regulator" is likely to be someone like Kate Hannah of the Disinformation Project or her ilk, if implemented, this will be a farce. The very slipperiness of the terms harmful and safe in the context of the internet means it would be open season for censorship! One more thing for me to lose sleep over.
I'm trying very hard to be mindful and live in the present, which means being very choosey about who I follow and what I read, but I've come to the conclusion that if I'm to be true to myself and my principles I have to engage with a lot of issues I'd rather put my head in the sand about because at 75 I really thought I might be able to look forward to a serene and comfortable old age. The last six years have well and truly blown that hope out of the water.
So once again David, thanks for an edifying exposition on what we have to look forward to, as I don't imagine a coming government will think any differently from the last after three years of insidious propaganda by TDP et al.