Censorship - Plus ça Change
Part 2 - Expanding the Scope of Censorship - The Ghost of Sir Roger L'Estrange
Introduction
This is the second part of my two-part series considering censorship. In this part I look towards the future in light of the Department of Internal Affairs’ proposals for what are described as “Safer Online Services and Media Platforms”. I identify two areas where I foresee real problems with these proposals as they stand. My overall view is that there is little justification for the proposals and that the systems in place are largely satisfactory, although, as will be suggested in a future article, the Harmful Digital Communications Act could be reviewed and strengthened.
The Safer Online Services and Media Platforms discussion document from the Department of Internal Affairs (DIA) proposes a fully centralised content control system – another term for censorship. This is in keeping with the Labour Government’s desire for centralisation exemplified by the (now abandoned) merger of TVNZ and Radio NZ, the so-called Three Waters proposals and the centralisation of the Health system.
What the DIA proposes is to collapse many of the powers of the Censor’s office, the Broadcasting Standards Authority (BSA) and the New Zealand Media Council (NZMC) into one system with a single regulator. The various platforms for content distribution – social media, broadcasting, gaming and others – will settle upon standards or Codes of Practice relating to the way that they operate and, more importantly, the content that they deliver.
If the representative bodies cannot agree upon a Code of Practice, the regulator will impose standards upon them. Thus the absence of an agreed set of standards will result in a diktat from a centralised authority.
The regulator would be able to force their members to comply with their codes, and if a complainant was not satisfied with the outcome they received from, say, a Media Council complaint, they would be able to “appeal to a quasi-judicial body associated with or approved by the regulator”.[1] One wonders if this will be akin to a body being a judge in its own cause.
The language of the Discussion Paper uses the words “safety” and “harm” interchangeably. This must give cause to wonder precisely what the objective actually is.
Certainly content control is an objective, if not the main objective. One of the papers providing background information is entitled “Content Regulatory Review”.
The DIA suggests the model proposed is one designed to regulate platforms. That is naïve and disingenuous. Let us not forget that content control is an anodyne term for censorship.
Leading into the DIA proposals is the Report of the Cabinet Social Wellbeing Committee dated 21 May 2021. The following specific types of harmful media content affecting New Zealanders were listed:
1. adult content that children can access, for example online pornography, explicit language, violent and sexually explicit content;
2. violent extremist content, including material showing or promoting terrorism;
3. child sexual exploitation material;
4. disclosure of personal information that threatens someone’s privacy, and promotion of self-harm;
5. mis/disinformation;
6. unwanted digital communication;
7. racism and other discriminatory content;
8. hate speech.
The clear implication is that these categories of communication will be the targets of Codes of Practice.
It is difficult to understand the necessity for this. Items 1–3 already fall within the “objectionable” ambit of the Films, Videos, and Publications Classification Act 1993. Item 4 falls within the scope of the Harmful Digital Communications Act 2015. Item 6 falls within the scope of the Unsolicited Electronic Messages Act 2007.
Item 5 is probably content of which the Government disapproves. Much of what passes for misinformation or disinformation is opinion – misguided maybe but in a free and democratic society citizens hold differing, wide-ranging and often conflicting opinions. That one politician seemed to be of the view that misinformation was being weaponised (and therefore should be dealt with) is her opinion. To try to include mis/disinformation within the ambit of harmful media content would be a significant intrusion upon the freedom of thought and the freedom of expression.
Item 7 is difficult. It is already covered to some degree by the Human Rights Act, but that only comes into play if the speech is likely to incite hostility against or bring into contempt any group of persons in, or who may be coming to, New Zealand on the ground of the colour, race, or ethnic or national origins of that group of persons. Speech that does not go that far is still called out as “racist”, which it may or may not be. But to broadly categorise racist content as harmful may well be an unnecessary and unwarranted intrusion upon the freedom of expression.
Hate speech, in my view, has never been properly defined. I prefer the term “dangerous speech” – speech that incites physical violence or harm against a person or group of people based on their defining characteristics.
The proposed reforms of the law dealing with “hate speech” were part of the new Prime Minister’s policy bonfire. It seems to me that the Safer Online Services Discussion Paper is a back door means by which this debate may be reopened.
“Harm” and “Safety” as Censorship Measures
The concept of “safe” or “safety” has difficulties in the field of information communication. Keeping people safe involves the reduction of the risk of harm.
The definition of harmful is problematic. I include it here and then comment upon it.
Content is considered harmful where the experience of content causes loss or damage to rights, property, or physical, social, emotional, and mental wellbeing.
As things stand the proposition seems to be that if content fulfils this definition it should be censored or access to it restricted.
Let me unpack the definition and discuss it. My argument is that content that merely may be harmful should not qualify for a regulator’s interference. The argument, as will become apparent, has a different nuance where the content has actually caused harm. Censorship on the basis that content “may be harmful” is prospective. Dealing with content that has been proven to be harmful is retrospective.
The use of the word “experience” of content is highly subjective and it is doubtful that the word is in fact needed.
The element of content being causative of loss or damage to rights, property or physical, social, emotional and mental wellbeing introduces some difficulties.
From the outset I acknowledge that content can have an effect upon emotional and mental wellbeing. In the Harmful Digital Communications Act 2015 (HDCA) harm is defined as serious emotional distress. I discuss this below.
I find it difficult to accept that content in and of itself can be causative of loss or damage to rights, property or physical wellbeing. It may prompt action that results in loss or damage, but in and of itself information is passive.
The example of loss of money arising from a fraudulent scam which originates from false or misleading information comes to mind. However the discussion document makes it clear that scams are not a target of regulation.
The definition in the HDCA clearly anticipates that a particular actual consequence has occurred. In that respect it is retrospective.
In that Act remedies are available where a digital communication causes harm. There can be no doubt that the Act applies to platforms. They are involved in digital communications.
As I have said, harm is defined as “serious emotional distress”. It should be noted that it is not an offence nor actionable in the kinetic environment to say or write something that causes serious emotional distress. In that respect the HDCA is an example of “internet exceptionalism”.
There are various tests or yardsticks in the HDCA which assist in assessing whether harm (as defined) has been suffered. For example, under section 22, which creates the offence of causing harm by posting a digital communication, three elements must be proven:
(a) a person must post a digital communication with the intention of causing harm;
(b) posting the communication would cause harm to an ordinary reasonable person in the position of the victim; and
(c) posting the communication caused harm to the victim.
From this it is clear that there is a mixed objective and subjective test. The likelihood of serious emotional distress is measured against whether the communication would cause serious emotional distress to an ordinary reasonable person [the objective element] in the position of the victim [the subjective element].
In assessing whether a post would cause harm a court may take into account a number of factors listed in section 22(2) which are non-exclusive. These factors are:
(a) the extremity of the language used:
(b) the age and characteristics of the victim:
(c) whether the digital communication was anonymous:
(d) whether the digital communication was repeated:
(e) the extent of circulation of the digital communication:
(f) whether the digital communication is true or false:
(g) the context in which the digital communication appeared.
The HDCA also provides a framework for remedial action in the case of electronic communications that do not meet the threshold to bring the communication within the scope of section 22.
To qualify for the remedial orders set out in section 19 HDCA, which include take-down of the material, there must be harm caused and a breach of one or more of the communication principles set out in section 6 of the Act.
These principles are:
1 A digital communication should not disclose sensitive personal facts about an individual.
2 A digital communication should not be threatening, intimidating, or menacing.
3 A digital communication should not be grossly offensive to a reasonable person in the position of the affected individual.
4 A digital communication should not be indecent or obscene.
5 A digital communication should not be used to harass an individual.
6 A digital communication should not make a false allegation.
7 A digital communication should not contain a matter that is published in breach of confidence.
8 A digital communication should not incite or encourage anyone to send a message to an individual for the purpose of causing harm to the individual.
9 A digital communication should not incite or encourage an individual to commit suicide.
10 A digital communication should not denigrate an individual by reason of his or her colour, race, ethnic or national origins, religion, gender, sexual orientation, or disability.
In deciding whether or not to make a remedial order section 19(5) requires the Court to take into account the following:
(a) the content of the communication and the level of harm caused or likely to be caused by it:
(b) the purpose of the communicator, in particular whether the communication was intended to cause harm:
(c) the occasion, context, and subject matter of the communication:
(d) the extent to which the communication has spread beyond the original parties to the communication:
(e) the age and vulnerability of the affected individual:
(f) the truth or falsity of the statement:
(g) whether the communication is in the public interest:
(h) the conduct of the defendant, including any attempt by the defendant to minimise the harm caused:
(i) the conduct of the affected individual or complainant:
(j) the technical and operational practicalities, and the costs, of an order:
(k) the appropriate individual or other person who should be subject to the order.
The HDCA is a piece of legislation that addresses and interferes with the freedom of expression. The first point to note is that section 6(2)(b) HDCA requires a Court to act consistently with the rights and freedoms contained in the New Zealand Bill of Rights Act 1990. That means that any interference with freedom of expression must be subject to the justified limitation test contained in section 5 NZBORA.
The second point is that all of the tests, restrictions, limitations and definitions in the HDCA have been the subject of legislative scrutiny. Indeed the Act derived from a Ministerial Briefing Paper authored by Professor John Burrows and Ms Cate Brett of the Law Commission, upon which I consulted. Although the communication principles may have the flavour of a Code, they have all been the subject of legislative examination and scrutiny. They are the subject of an Act of Parliament and not the result of a delegated or “soft” rule-making power.
The final point is that the harm that is the subject of the Act is retrospective – that is, the harm must have been suffered before the provisions of the Act are engaged. This is consistent with the law addressing acts that have had a consequence rather than adopting an anticipatory approach.
When we look at the definition of “safety” or “unsafe content” we are looking at an anticipatory or prospective consequence. This is incorporated in the phrase “risk of harm”. Thus the harm need not have occurred.
Once again the definition includes a highly subjective element – “if the content was experienced by a person”. The use of the word “experienced” should be avoided in this context.
Furthermore in a prospective situation the Discussion Paper acknowledges that everyone’s risk profile is different and that safeguards can be put in place to help reduce risks.
This “unsafe content” anticipates that harm might occur. This is quite different from the situation where harm has occurred and a remedy is sought. Although there are elements of law that are designed to reduce the likelihood or risk of harm – say from a badly manufactured tool or appliance – to apply that model to the communication of information is fraught with problems.
In my view it would be extremely difficult to bring a risk of harm within a section 5 NZBORA analysis unless it was clearly demonstrable that harm would occur. The best example is the use of “objectionable” as a threshold for interference under the Films, Videos, and Publications Classification Act, in respect of which there is a gateway under section 3(1) of that Act – see Living Word Distributors v Human Rights Action Group [2000] 3 NZLR 570.
The issue of risk of harm is the subject of a graphic table which appears at page 50 of the discussion document. This classifies the risk of harm from low to extreme and suggests various interventions which may apply to each level.
The question that this raises is whether or not the proposed framework will be applicable to ALL levels of risk of harm or whether interventions will only apply to the most severe risks of harm. It is apparent from the material on page 50 that the former proposition seems to be applicable.
This introduces grave difficulties in establishing the level of risk. One problem that arises is whether a subjective or an objective test should be applicable or whether, like the test in section 22 HDCA, a mixed objective/subjective test should apply.
In addition there is a difficulty in ascribing the level of risk and how it is to be assessed. Simply to leave the matter as a low risk of harm or an extreme risk of harm lacks clarity and certainty. Both those elements are essential when it comes to an interference with the right of freedom of expression.
One way of approaching the matter may be to introduce a foreseeability test, so that the harm the subject of the risk must be foreseeable. In tort law the word “foreseeable” is often preceded by the word “reasonably”, and a “reasonably foreseeable” risk introduces an objective test.
A further issue becomes apparent. At what level of risk of harm should the law intervene? The remedies suggested for the lower levels of risk set out on page 50 of the Discussion Paper are low level indeed and hardly justify the intervention of the State. Indeed it could be suggested that at the two lower levels the interference with content creation and dissemination is invasive and indicative of a “nanny State” approach. This undermines the integrity of the process and public acceptance of it.
A prospective risk of harm approach may be perfectly acceptable for problems in consumer appliances or buildings, which are the subject of clear and well-understood design and engineering principles. The inability properly to crystallise what in fact amounts to a risk of harm in the field of information makes this approach suspect, unclear, uncertain and difficult to measure against the guarantees of freedom of expression in NZBORA.
Therefore the prospective or anticipatory “unsafe content” approach should be abandoned and a retrospective, actual-harm approach adopted.
Soft Rule-Making
It will have become clear from the discussion of the definitions of harm and safety that distancing the rule-making powers from the legislature and from legislative scrutiny creates a number of problems.
Under the HDCA the various principles and tests have been embedded in legislation. What is suggested in the Paper surrounding the settling of Codes will not be. These will be settled by the various industry bodies and/or the Regulator.
They will not be subject to legislative scrutiny nor to the oversight of the Attorney General under section 7 NZBORA.
The establishment of Codes is an example of soft lawmaking, and it is unclear whether this rule-making power will be one granted by legislation or a delegated one.
Although it is understandable that the creation of a censorship regime should be distanced from the partisan political process, distancing it from the legislative process as suggested raises matters of concern in terms of objective scrutiny.
An example can be seen in the proposals for Code creation. It is suggested that Codes would be settled by industry bodies in tandem with the Regulator and that the final form of the Code would be subject to the approval of the Regulator. Thus, the Regulator has the final say on the shape of any proposed Code.
If, however, a Code cannot be created by industry or platform bodies or there is an absence of agreement about the Code, the Regulator will go ahead and settle a Code regardless. This would amount to the imposition of a Code of Practice not by legislative enactment subject to Select Committee and Parliamentary Counsel scrutiny, but by unfettered, untrammeled regulatory fiat.
In addition it is proposed that there should be different Codes to cover different forms of Content Platforms – online/social media, gaming and professional media.
These Codes could well set different standards for different platforms which makes compliance with such rules confusing and uncertain. If there were to be Codes – and that is not in my view justified – there should be one set of standards applicable to all platforms in much the same way as the HDCA targets electronic communications in a way that is platform neutral or non-specific.
The Discussion Document points to Codes that are in place already – especially those operated by the Broadcasting Standards Authority, the Advertising Standards Authority and the New Zealand Media Council. The BSA is legislatively authorised to develop Codes (sections 2, 4(1)(e), 21(1)(f) and 21(1)(g) Broadcasting Act 1989). The Media Council operates on a Code inherited from the Press Council, which came into effect by agreement between media organisations in 1972. Likewise the Advertising Standards Authority, which is a voluntary organisation.
It is to be noted that section 21(6) of the Broadcasting Act 1989 makes it clear that a code of broadcasting practice under section 21(1)(f) or (g) is secondary legislation (as defined in section 2 of the Legislation Act 2019) and is subject to the publication requirements set out in Part 3 of that Act.
It is perhaps significant that advertising and news media have arrived at their Codes without any State interference. There are strong historical and economic reasons for State supervision of broadcasting and I shall deal with the issue of the importance of separate media treatment at item 10 of this commentary.
Suffice to say at this stage that news and broadcasting media occupy a discrete position in the media landscape that does not make them amenable to the “one size fits all” approach advanced in the Discussion Document.
Drifting Towards Increased Censorship – the Context
So what does all this tell us? The censorship regime of the Stuarts in the latter Seventeenth Century was about maintaining power by controlling information flows. Sir Roger L’Estrange and those who enforced the Licensing Act were well aware of the power of an information technology (the printing press) and of the way that the spread of information – particularly contrarian information, of which there was plenty in England at the time – could threaten established authority.
Of course it was quite a sensitive time for those in power. The memories of the English Revolution and the execution of the King were still fresh. There were times when Charles II’s grip on power was less than strong. His reign was characterised by a number of problems and difficulties. After his death his brother and successor James II was deposed as the English Revolution stuttered on. It was not until 1695, when the power elites were sufficiently confident that they could loosen censorship restrictions, that they let the Licensing Act lapse. These decisions were uncomplicated by concepts such as freedom of expression or freedom of the press.
The focus now has shifted. Freedom of expression and the associated freedom of the press are incorporated into international instruments and domestic legislation. Censorship regimes in the latter part of the twentieth century focused upon protecting the community from what was seen to be harmful, indecent and “immoral” material that offended against community standards. Indeed one of the leading advocacy groups before the then Indecent Publications Tribunal was the Society for the Promotion of Community Standards. It was founded in 1970 and is still active.
The approach then, as now, is more about the protection of society from material that is considered objectionable. The current legislation represents continued attempts to give an objective criterion for determining whether something should be censored or not, and there is already a centralised body to do that. The DIA proposals would increase that level of centralisation. It is generally recognised that the current legislation has moved censorship in a more liberal direction consistent with changing community expectations and the influence of the New Zealand Bill of Rights Act 1990.
This does not mean that the desire for greater control of information has abated. Behind the censorship regime is a suggestion that those responsible for making censorship decisions know what is best for the community. The adoption of objective standards has the effect of reducing this protective and paternalistic approach, but it is still present nevertheless.
The appetite for control of information and for the marginalising of contrarian points of view increased over the period of the COVID pandemic. The government trumpeted that it was the sole source of truth. Contrarian views were dismissed and characterised as misinformation and disinformation.
Perhaps the most egregious example of the paternalistic approach of the censorship authorities was the activity of the then Chief Censor, David Shanks. He called for a widening of the censorship brief, a call which is reflected in the DIA proposals.
At an Otago University conference on ‘Social Media and Democracy’ in March 2021, Mr. Shanks said that the way we regulate media is not fit for the future.
As part of an overall review of regulatory structures surrounding the dissemination of harmful information the Government released a discussion paper on hate speech, and at the same time the Chief Censor released a paper entitled “The Edge of the Infodemic: Challenging Misinformation in Aotearoa”, which is in essence a survey of how concerned citizens are about misinformation. The internet and social media are identified as key sources of misinformation, while experts and government are trusted more than the news media.
The Chief Censor said it shows the need for urgent action. But the question must be asked – why? Do we need the government or some government agency to be the arbiter of truth? Are we so uncritical that we cannot discern misinformation from empirically based conclusions?
The concerns about new media are not new. Many of the criticisms of the Internet and social media levelled by the Chief Censor have been articulated in the past. Speaking of newspapers Thomas Jefferson expressed an acidic concern that editors “fill their newspapers with falsehoods, calumnies and audacities”.
The concerns that the “Infodemic” report advanced were derived from an extensive survey. The findings of the survey lead inexorably to the conclusion that “something must be done”, and I would suggest that the “something” involves the control or monitoring of information. It must be of concern that the self-described and statutorily designated censor was driving this.
But there seems to be a deeper issue, and that surrounds calls that have been made to regulate the Internet or at least impose some restraints on the activities of social media platforms. Part of the problem with social media platforms is that they allow a variety of opinions or interpretations of facts to proliferate – opinions which may be unacceptable to many and downright opposed to the beliefs of others.
Governments and politicians, although they are great users of social media platforms, cannot abide a message contrary to their own. In a democracy such as New Zealand this is something with which they must live, although there is little hesitation in nibbling away at the edges of expressions of contrary opinions. There still seems to be an element of control of information that is emerging from the past – perhaps the ghost of Sir Roger L’Estrange still haunts the corridors of power.
Characterising contrary views as “misinformation” is a start down the road of demonisation of those points of view. At the same time, following the 15 March massacre, the Prime Minister of New Zealand instituted the “Christchurch Call” – an attempt to marshal international support for some form of Internet regulation. No laws have been passed as yet, and social media organisations, seeing which way the wind is blowing, have made certain concessions. But it is, in the minds of many, still not enough.
In New Zealand the review of media regulatory structures lies behind the “misinformation” study along with the ill-considered and contradictory proposals about “hate speech”.
Conclusion
The assault on freedom of expression or contrarianism is not a frontal one – it is subtle and gradual but it is there nonetheless. It is my opinion that the real target of the Chief Censor’s “misinformation” study is not “misinformation” but rather the expression of contrary points of view – however misguided they might be. And that is a form of censorship and it is therefore not surprising that this move should come from the Chief Censor. The DIA seems to have followed this lead, advocating as it has a “safety” based prospective model.
This must give cause for concern about the current Discussion Document. In some respects the decisions about “acceptable” content will be several times removed from public gaze. Whereas the Films, Videos, and Publications Classification Act and the Harmful Digital Communications Act have both involved legislative scrutiny over the tests for objectionable or harmful content, the DIA proposals remove this to the Codes of Practice.
These will be developed not by the Legislature but by organisations representing platforms. If agreement cannot be reached on a Code the Regulator will decide their content. Once again – removed from Parliamentary scrutiny.
The lack of transparency behind the proposed content control and censorship suggestions should give grave cause for concern.
The suggestion is that the current law is not fit for purpose. That is an assertion frequently made, more frequently unsupported, to justify change when change is not in fact required. My contention is that the existing law works and has worked in the past. It need not be consigned to the scrap heap nor swallowed up into the maw of a centralized creation. A few tweaks to existing legislation are all that is required.
There are voices calling for increased control of information. Generally these voices come from those who do not like what they are hearing and want to silence the speakers. Sadly, although by having their say they are the beneficiaries of freedom of expression, they would, in another breath, deny it to others. That is not the essence of democracy.
[1] Tom Pullar-Strecker, “Tackling Harmful Content Never Going to be a Simple Discussion”, Stuff, 6 June 2023: https://www.stuff.co.nz/business/opinion-analysis/132218182/tackling-harmful-content-never-going-to-be-a-simple-discussion