Problems With "Safer Online Services"
A Critique of InternetNZ's Approach to the Department of Internal Affairs Discussion Paper
Introduction
Freedom of expression enables the exchange of ideas. It enables agreement. It enables disagreement. It enables the spread of ideas. It enables those who have not heard new ideas to hear and contemplate them.
The exchange of ideas has always been seen by some as dangerous or harmful. Differences over religious dogma prompted the most outrageous and at times horrifying responses. The development of the printing press allowed for the wider dissemination of ideas in a more permanent form than word of mouth and in greater numbers than handwritten or copied manuscripts. And the spread of ideas that threatened the established order became the subject of various attempts at censorship and elimination – with limited degrees of success.
In the Twentieth Century particularly, the censorship of ideas and “unacceptable” content was practised by dictatorial totalitarian States – Hitler’s Germany, Franco’s Spain, Stalin’s Russia and, more recently, Putin’s Russia and Xi’s China. In all these regimes the State attempted, or attempts, to control the message to some degree.
Which brings us to the Internet: a revolutionary communications system that is paradigmatically different from other communications systems (although with similarities in some respects). But one of the innovations that sits behind the Internet is the quality of “permissionless innovation” – the ability of developers to bolt an application on to the Internet backbone and see how it flies.
Permissionless innovation is what allowed Internet-based companies like Google, Facebook, Twitter and numerous other communications platforms to get started, develop and grow.
But it must be remembered that these are communications systems. They are used to communicate ideas. They allow users to exchange information. Some of that information may be banal. Some of it may be intensely personal. Some of it may be interesting. Some of it may be confronting. Some of it may be intolerable to read. Some of it may make the reader or viewer feel uncomfortable.
Existing Content Control Provisions
Some content may be so objectionable that it should be censored, and there is a role for censorship even in the Internet space. That role is administered in New Zealand under the provisions of the Films, Videos, and Publications Classification Act 1993 (FVPCA).
That legislation was enacted before the Internet went public. But it has managed to keep up. Various amendments have dealt with video classification, streaming content and the like. The legislation can even deal with digital text content, as was demonstrated when the manifesto of the Christchurch terrorist, Brenton Tarrant, was declared objectionable.
When content is declared objectionable the FVPCA creates certain offences for the possession or distribution of that content. And the penalties can be very severe.
The scope of the FVPCA is quite extensive. As its title suggests, it covers film, video and publications, and although it was enacted to deal with kinetic variants of these items, it applies with equal force to digital variants.
Because the Act gives the censor the power to ban content by deeming it objectionable, that power is circumscribed. Censorship of material is an extreme response and should be employed only in the most egregious of circumstances.
There are other ways of monitoring and controlling questionable content. The Broadcasting Act 1989 sets standards for broadcasters that are subject to its jurisdiction. There is a complaints system that has been set up and the Broadcasting Standards Authority hears and determines those complaints.
There is no legislatively created body that covers the Press. The New Zealand Media Standards Association (NZMSA), formerly the Press Council, is a voluntary body that entertains complaints about what could broadly be described as press content. All the mainstream media organisations have subjected themselves to the jurisdiction of the NZMSA. There are advantages to doing so, including the ability of member organisations to access Parliament and the Courts to report proceedings.
Members of the NZMSA include some online content producers and bloggers. It is advantageous for them to be subject to its jurisdiction.
But not all online content producers are subject to the Broadcasting Act or the NZMSA. Some Internet-based “radio” stations that do not use broadcast spectrum are not subject to the Broadcasting Act. And online platforms like Facebook and Google, together with blogs and other content platforms, are not subject to the NZMSA.
But this does not mean that they are above the law. The normal law relating to reputational harms is applicable to these platforms and to the producers of content. The law relating to harassment is applicable to those who use online services to harass others.
And there are the provisions of the Harmful Digital Communications Act 2015, which apply to any form of electronic communication that may be causative of harm (serious emotional distress) and that may breach any one of a number of communications principles set out in the Act. In addition, the most egregious examples of harmful communication, accompanied by an intention to cause serious emotional distress, may be the subject of criminal proceedings.
The Safer Online Services and Media Platforms Proposal
But despite all these provisions in the law and the remedies that they provide, the Department of Internal Affairs (DIA) has produced a Discussion Paper entitled “Safer Online Services and Media Platforms”.
This paper proposes a complete revamp of the regulation of content with a particular focus upon online content. It proposes to do this by requiring online platforms that qualify, and the qualification threshold is not high, to sign up to Codes of Conduct. These Codes of Conduct will provide for what is acceptable and unacceptable online content or behaviours.
The current provisions of the Broadcasting Act and the FVPCA will be part of the new proposal. Because so much of Mainstream Media is online, Mainstream Media will be subject to the proposals and will have to sign up to Codes of Conduct. As will radio and TV. Internet radio – as an online platform – will also be subject to the new proposals. The scope is very wide ranging indeed.
Although the DIA suggests that this is a platform regulatory proposal, it is far more than that. It is designed to regulate content. And the type of content that will be regulated is not just objectionable content. It may include harmful or “unsafe” content. These terms are not clearly defined in the DIA proposal. The risk of widely increased powers of censorship or content control, exercised without Parliamentary oversight or public consultation, is high and poses a threat to freedom of expression.
In bringing the Discussion Document forward the DIA consulted with a number of organisations, some of which are part of what is called civil society. One of these organisations was InternetNZ.
InternetNZ manages the .nz domain name space and is dedicated to supporting a free and open Internet. InternetNZ supports the DIA proposals and has advocated that as many people as possible make their point of view known to the DIA.
I am a member of InternetNZ and have been for many years. I support its objective of a free and open Internet. I do not support its endorsement of the DIA proposals and, because we live in a free and democratic society, I feel obliged to express my disagreement and debate the issues.
The Position of InternetNZ
InternetNZ has made its position clear in two publications. On 15 July 2023 an article was published in Stuff headlined “A once-in-a-generation opportunity to make a safer internet”. The article was by Ms. Vivian Maidaborn, the Chief Executive of InternetNZ. The article does not disclose that Ms. Maidaborn was writing in her personal capacity, so I can only assume that the views expressed are those of InternetNZ. I shall refer to that piece as “The Stuff Article”.
The second publication was on InternetNZ’s website. It is undated, but the link was circulated in an InternetNZ newsletter during the week commencing 17 July 2023. I shall refer to it as “The Website Article”.
The Stuff Article
There are three main themes that make up the Stuff Article. Those themes focus upon:
1. The inadequacy of the present law
2. That the DIA proposals present an opportunity for a “safer Internet”
3. That the proposals do not have a negative effect upon freedom of expression.
I shall take each of these themes in turn, set out the portion of the article I wish to debate and then present my comments.
The Inadequacy of the Present Law
In the Stuff Article the following claims are made:
Right now, when we see harmful online content, like bullying, harassment, violence, racism, and extremism, we don't have reliable ways to report it and get it investigated. This is why regulation is so important.
This may be the most important opportunity in a generation to create a safer Internet. It’s hard to fathom that the Broadcasting Act from 1993 is what governs our regulation of social media and other online content. An Act that was written before New Zealand even had its first internet service provider. Or that our censorship and classification laws were developed before there was online content and people could only access it by going to the movies, reading a paper, or visiting a video store.
The inadequacy of the present law is a theme advanced by the DIA, but beyond saying that the various pieces of legislation predate the Internet, no other relevant or persuasive argument is put forward.
The suggestion ignores the fact that there have been amendments to the FVPCA and to the Code of Conduct under the Broadcasting Act to keep the law up to date. Should further amendments be necessary, they can be made.
But beyond repeating the DIA position, the statement in the Stuff Article is incorrect.
The Harmful Digital Communications Act 2015 provides civil remedies to take down all this content, and criminal prosecution where people post content intending to cause serious emotional distress and such distress is caused.
The reliable way to report it is either to the Police or to Netsafe. We already have regulatory tools available. The model that is proposed by the Department of Internal Affairs is not necessary. The present laws are perfectly adequate.
The HDCA could probably do with some modifications, such as the establishment of a Communications Tribunal as originally proposed by the Law Commission, and the inclusion of an opportunity for groups to apply for relief. I shall discuss a possible model based on the HDCA in a future article.
That the DIA Proposals Present an Opportunity for a “Safer Internet”
The following remarks appear in the Stuff Article:
Well-designed regulation can actually protect freedom of expression by making participation on platforms safer for everyone, especially marginalised and at-risk groups. Those disproportionately affected by online discrimination and abuse, which discourages them from participating, could have their freedom of expression enhanced.
Through our research, we also know that these groups are put off from engaging online because of the huge number of deterrents they face relating to their safety. Māori, for example, are particularly impacted by harmful content and are more likely to disengage. It is vital that, as a Treaty partner, the Government engages and develops this regulation alongside Māori. The internet should be safe for everyone – not just safe for most.
Harmful content online such as misinformation is also threatening the safety of New Zealanders. It is the platforms that hold the key to fixing this situation but so far it is clear they will act most decisively when they are pushed to it through regulation.
Recently, we’ve seen increased misogyny, transphobia, xenophobia, and hate speech. This could be minimised by making the platforms take responsibility for it.
The first and final paragraphs quoted above deal with aspects of freedom of expression as well as “the safe internet”. The suggestion is that there may be an unwillingness to engage in open discussion online because the atmosphere or environment is perceived by the user as unsafe.
This overlooks or ignores the fact that not everything that one reads, not everything that is discussed, is polite and anodyne, even though we may wish it to be so. Often there are disagreements. Occasionally there may be hostility. There may even be speech we hate to hear – as opposed to speech which incites physical violence based on characteristics, which is what hate speech really is.
Of course the Internet should ideally be free and open to everyone. But the reality is that not everyone wants to engage. In the same way that not everyone wants to read the same book or watch the same movie, not everyone may want to engage with online platforms. It is a matter of choice.
There is a reference in the passages above to misinformation. This is a slippery word, often bandied about indiscriminately. What is often described as misinformation is another person’s opinion. What is often described as misinformation is contestable information. Misinformation, in and of itself, does not automatically justify a wider-ranging censorship regime. A preferable solution may be to increase education in the critical analysis of the information that is presented – essentially I am arguing for a more informed public.
The platforms are being held responsible for all the supposedly “unsafe” elements of the Internet. It is in this area of “safety” that I have some difficulty with the proposals in the Stuff Article. Safety is not defined. What I think is being talked about is “risk free”. But that is an unrealistic objective. Nothing in life is risk free. Or is what is being discussed and proposed a form of risk management? That leads into a discussion about the nature of safety versus harm as a threshold for regulation. This is a complex topic. In essence I see safety as a prospective concept and harm as a retrospective one. This is my explanation.
The discussion document acknowledges that the terms “harm” and “safety” are used. The concept of “safe” or “safety” falls within the proposed consumer protection model, which itself has difficulties in the field of information communication. Keeping people safe involves the reduction of the risk of harm.
The definition of harmful is problematic. I include it here and then comment upon it.
“Content is considered harmful where the experience of content causes loss or damage to rights, property, or physical, social, emotional, and mental wellbeing.”
The phrase “experience of content” is highly subjective, and it is doubtful that it is in fact needed.
The element of content being causative of loss or damage to rights, property, or physical, social, emotional and mental wellbeing introduces some difficulties.
From the outset I acknowledge that content can have an effect upon emotional and mental wellbeing. In the Harmful Digital Communications Act harm is defined as serious emotional distress. I discuss this below.
I find it difficult to accept that content in and of itself can be causative of loss or damage to rights, property or physical wellbeing. It may prompt action that results in loss or damage, but in and of itself information is passive.
The example of loss of money arising from a fraudulent scam which originates in false or misleading information comes to mind. However, the Discussion Document makes it clear that scams are not a target of regulation.
The definition clearly anticipates that a particular actual consequence has occurred. In that respect it is retrospective. That falls within the concept of harm which engages the provisions of the Harmful Digital Communications Act.
In that Act remedies are available where a digital communication causes harm. There can be no doubt that the Act applies to platforms. They are involved in digital communications.
As I have said, harm is defined as “serious emotional distress”. It should be noted that it is neither an offence nor actionable in the kinetic environment to say or write something that causes serious emotional distress. In that respect the Harmful Digital Communications Act (HDCA) is an example of “internet exceptionalism”.
There are various tests or yardsticks in the HDCA which assist in assessing whether harm (as defined) has been suffered. For example, in section 22, which creates the offence of causing harm by posting a digital communication, three elements must be proven:
a) A person must post a digital communication with the intention of causing harm
b) Posting the communication would cause harm to an ordinary reasonable person in the position of the victim
c) Posting the communication caused harm to the victim
From this it is clear that there is a mixed objective and subjective test. The likelihood of serious emotional distress is measured against whether the communication would cause serious emotional distress to an ordinary reasonable person [the objective element] in the position of the victim [the subjective element].
In assessing whether a post would cause harm a court may take into account a number of factors listed in section 22(2) which are non-exclusive. These factors are:
(a) the extremity of the language used:
(b) the age and characteristics of the victim:
(c) whether the digital communication was anonymous:
(d) whether the digital communication was repeated:
(e) the extent of circulation of the digital communication:
(f) whether the digital communication is true or false:
(g) the context in which the digital communication appeared.
The HDCA also provides a framework for remedial action in the case of electronic communications that do not meet the threshold to bring the communication within the scope of section 22.
To qualify for the remedial orders which are set out in section 19 HDCA, and which include takedown of the material, there must be harm caused and a breach of one or more of the communications principles set out in section 6 of the Act.
These principles are:
Principle 1 A digital communication should not disclose sensitive personal facts about an individual.
Principle 2 A digital communication should not be threatening, intimidating, or menacing.
Principle 3 A digital communication should not be grossly offensive to a reasonable person in the position of the affected individual.
Principle 4 A digital communication should not be indecent or obscene.
Principle 5 A digital communication should not be used to harass an individual.
Principle 6 A digital communication should not make a false allegation.
Principle 7 A digital communication should not contain a matter that is published in breach of confidence.
Principle 8 A digital communication should not incite or encourage anyone to send a message to an individual for the purpose of causing harm to the individual.
Principle 9 A digital communication should not incite or encourage an individual to commit suicide.
Principle 10 A digital communication should not denigrate an individual by reason of his or her colour, race, ethnic or national origins, religion, gender, sexual orientation, or disability.
In deciding whether or not to make a remedial order section 19(5) requires the Court to take into account the following:
(a) the content of the communication and the level of harm caused or likely to be caused by it:
(b) the purpose of the communicator, in particular whether the communication was intended to cause harm:
(c) the occasion, context, and subject matter of the communication:
(d) the extent to which the communication has spread beyond the original parties to the communication:
(e) the age and vulnerability of the affected individual:
(f) the truth or falsity of the statement:
(g) whether the communication is in the public interest:
(h) the conduct of the defendant, including any attempt by the defendant to minimise the harm caused:
(i) the conduct of the affected individual or complainant:
(j) the technical and operational practicalities, and the costs, of an order:
(k) the appropriate individual or other person who should be subject to the order.
The HDCA is a piece of legislation that addresses and interferes with the freedom of expression. Section 6(2)(b) HDCA requires a Court to act consistently with the rights and freedoms contained in the New Zealand Bill of Rights Act 1990. That means that any interference with freedom of expression must be subject to the justified limitation test contained in section 5 NZBORA.
The second thing is that all of the tests, restrictions, limitations and definitions that are in the Act have been the subject of legislative scrutiny. Indeed, the Act derived from a Ministerial Briefing Paper authored by Professor John Burrows and Ms Cate Brett of the Law Commission, upon which I consulted. Although the Communications Principles may have the flavour of a Code, they have all been the subject of legislative examination and scrutiny. They are the subject of an Act of Parliament and not the result of a delegated rule-making power.
The final point is that the harm that is the subject of the Act is largely retrospective – that is, the harm must have been suffered before the provisions of the Act can be engaged. This is consistent with the law addressing acts that have a consequence rather than adopting an anticipatory approach.
When we look at the definition of “safety” or “unsafe content” we are looking at an anticipatory or prospective consequence. This is incorporated in the phrase “risk of harm”. Thus the harm need not have occurred. A prospective consequence that requires censorship or takedown is what is called “prior restraint”.
Once again the definition includes a highly subjective element – “if the content was experienced by a person”. The use of the word “experienced” should be avoided in this context.
Furthermore in a prospective situation the Discussion Paper acknowledges that everyone’s risk profile is different and that safeguards can be put in place to help reduce risks.
This “unsafe content” anticipates that harm might occur. This is quite different from the situation where harm has occurred and a remedy is sought. Although there are elements of law that are designed to reduce the likelihood or risk of harm – say from a badly manufactured tool or appliance – to apply that model to the communication of information is fraught with problems.
In my view it would be extremely difficult to bring a risk of harm within a section 5 NZBORA analysis unless it was clearly demonstrable that harm would occur. The best example is the use of “objectionable” as a threshold for interference under the Films, Videos, and Publications Classification Act, in respect of which there is a gateway under section 3(1) of that Act – see Living Word Distributors v Human Rights Action Group [2000] 3 NZLR 570.
The issue of risk of harm is the subject of a graphic table which appears at page 50 of the discussion document. This classifies the risk of harm from low to extreme and suggests various interventions which may apply to each level.
The question that this raises is whether or not the proposed framework will be applicable to ALL levels of risk of harm or whether interventions will only apply to the most severe risks of harm. It is apparent from the material on page 50 that the former proposition seems to be applicable.
This introduces grave difficulties in establishing the level of risk. One problem that arises is whether a subjective or objective test should be applicable or whether, like the test in section 22 HDCA, a mixed objective/subjective test should apply.
In addition there is a difficulty in ascribing the level of risk and how it is to be assessed. Simply to leave the matter as a low risk of harm or an extreme risk of harm lacks clarity and certainty. Both those elements are essential when it comes to an interference with the right of freedom of expression.
One way of approaching the matter may be to introduce a foreseeability test, so that the harm that is the subject of the risk must be foreseeable. In tort law the word foreseeable is often preceded by the word reasonably, and a “reasonably foreseeable” risk introduces an objective test.
A further issue becomes apparent. At what level of risk of harm should the law intervene? The lower levels of risk that are set out on page 50 of the Discussion Paper, and the remedies that are suggested for them, are low level indeed and hardly justify the intervention of the State. Indeed it could be suggested that at the two lower levels the interference with content creation and dissemination is invasive and indicative of a “nanny State” approach. This undermines the integrity of the process and public acceptance of it.
A prospective risk-of-harm approach may be perfectly acceptable for problems in consumer appliances or buildings, which are the subject of clear and well-understood design and engineering principles. The inability to properly crystallise what in fact amounts to a risk of harm makes this approach suspect, unclear, uncertain and difficult to measure against the guarantees of freedom of expression in NZBORA.
Therefore the use of “unsafe content” and the prospective or anticipatory approach should be abandoned and a retrospective actual-harm approach adopted.
It is for those reasons that I reject a “safety”-based prospective risk-avoidance approach in favour of the actual-harm approach discussed above.
That the Proposals Do Not Have a Negative Effect upon Freedom of Expression
I have already made reference to the way in which freedom of expression has been conjoined with the concept of risk avoidance or safety.
A passage from the Stuff Article confronts the issue directly and states as follows:
But what about freedom of speech? Our research showed 59% of people living in Aotearoa are either extremely, or very concerned, that the internet is used as a forum for extremist material and hate speech. The proposal from the Department of Internal Affairs is aware of the need for balance here, and says that freedom of expression “should be constrained only where, and to the extent, necessary to avoid greater harm to society”.
The quote from the DIA does not properly express the test that is required to provide a justified limitation of the rights contained in the New Zealand Bill of Rights Act – especially section 14, which guarantees the freedom of expression.
At no stage of the Discussion Document did the DIA carry out a proper analysis of the necessity test under section 5 NZBORA. This is a significant failing in the Document, and the analysis is also absent from the InternetNZ approach.
Furthermore, it is easy to characterise content as “extremist” or “hate speech”, but in the minds of many – including one leading academic – hate speech is speech that you hate to hear. Freedom of expression does not apply only to anodyne speech. It encompasses the right to express ideas that the listener finds discomforting or confronting. In that respect speech may seem extreme when in fact it is not. The paragraph quoted demonstrates what I have on another occasion referred to as a relativistic approach to the freedom of expression, an approach which seems to be present in many areas of the debate in New Zealand.
I develop the argument in this way.
There is no clearly identified necessity for the proposals in the Discussion Document that demonstrates a requirement for change. Such changes as may be necessary – such as a more responsive system – do not require a wholesale restructure of media regulation and an increased scope for a censorship model.
Beyond the preference for centralisation there is no identified purpose. Apart from a brief mention at p. 69 of the discussion paper about the importance of rights and freedoms, there is no detailed discussion of the analysis required to bring the proposals within section 5 of the New Zealand Bill of Rights Act 1990.
The discussion paper is somewhat vague about an identified need and purpose other than addressing harmful content and providing a “safer” online experience for users.
The use of these words seems to suggest the consumer protection model that is mentioned at an early stage of the document. I would observe that such an approach is probably better suited to a product liability model covering items such as consumer goods. The “product” that is made available online is information.
The term “consumer protection” seems to have been deployed to justify the development of a Code-based approach to a content and platform regulatory framework.
Although the thrust of the model is claimed to be the regulation of platforms, to limit the enquiry in this way is disingenuous. It is quite clear as one reads the discussion document that content providers and content authors will in some way be responsible under the model or may have their access to platforms the subject of interference. In addition, the proposals ignore the reality of the Internet.
The Internet is a communications system. Properly stated, the word “internet” describes the backbone. “Bolted on” to the backbone are platforms and other information services that are limited only by the operating protocols that have been designed by the various bodies responsible for setting Internet standards.
What is communicated via the Internet and its various platforms and protocols is information. Any interference with the way in which that information is conveyed engages an enquiry about whether that interference is justified.
The way in which the proposal is expressed suggests a distance between the primary rule-making body for New Zealand – Parliament – and the development of Codes by the Regulator and the various platform providers. The fact that the proposals give the Regulator the power to settle Codes him- or herself means that the way in which content may be regulated or judged “harmful” or “unsafe” is separated from Parliament.
I make this point because the Films, Videos, and Publications Classification Act 1993 (erroneously referred to in the Discussion Document as the Classification Act) makes it clear that Parliament has clearly defined “objectionable” in section 3.
In so doing Parliament would have been aware of the fact that such a definition would constitute a limit on freedom of expression under the New Zealand Bill of Rights Act 1990 but that the definition constituted a reasonable limit that could be demonstrably justified in a free and democratic society.
The problem that is encountered by the provision of various Codes is that the level of NZBORA scrutiny is not present as it would be for Parliamentary legislation with attendant protections provided by Select Committee processes and the obligation on the Attorney-General to report to Parliament where a provision of a Bill may be inconsistent with NZBORA.
The settling of Codes amounts to a process of “soft” or departmental rule making that might have significant consequences both for platform operators, for those using them and for the creators of content. The level of scrutiny present for legislation is significantly compromised by these proposals.
Although the discussion document mentions – almost in passing – the importance of the rights and freedoms guaranteed by NZBORA – see Page 69 – there is no detailed analysis which clearly articulates why the rights and freedoms under NZBORA – and especially the right to impart and receive information under section 14 of NZBORA (a right which importantly emphasizes the two way flow that constitutes the communication of information) - should be limited.
There is no discussion as to why, if there are to be limitations, those limitations should be imposed by soft or departmental rule-making rather than by the legislature. There is no identification of the elements that would justify a limitation of the freedom of expression above and beyond what is already permissible by law. There is no consideration of whether those limitations can be demonstrably justified in a free and democratic society.
The level of analysis necessary in a consideration of a limitation on freedom of expression has been clearly set out in the case of Moonen v Film and Literature Board of Review [2000] 2 NZLR 9 (CA) where it was held that censorship provisions must be interpreted so as to adopt such tenable construction as constitutes the least possible limitation on the freedom of expression.
Censorship legislation is an abrogation of the right to freedom of expression, the rationale being that other values predominate, and it is inevitable that in a censorship context some limitation will be placed on freedom of expression.
The Court of Appeal in Moonen proposed a five-step approach to be followed when weighing the relevant provisions of the Bill of Rights against the censorship legislation, although it did point out that other approaches could be used. No such analysis has been attempted or considered.
There can be no doubt that the provisions of the Codes, if adopted, will be the subject of litigation and scrutiny by the Courts, especially if, by their operation, they constitute an unreasonable and unjustifiable limitation on the freedom of expression.
I shall now turn to the Website Article.
The Website Article
The Website Article states as follows:
The Safer Online Services and Media Platforms document asked for feedback on a supportive (focus on collaboration and partnership with industry) or prescriptive (more directive and stronger powers of the independent regulator) approach. InternetNZ supports a prescriptive approach to this regulation instead of a supportive one. A supportive approach has been shown in other countries not to work; platforms don't have enough incentive to implement policies that minimise harm. An approach that gives platforms more latitude to regulate themselves has proven ineffective, both abroad and here in Aotearoa, as evidenced by the Code of Practice for Online Safety and Harms. A prescriptive approach will provide the independent regulator more power to ensure compliance. Jurisdictions that use a prescriptive approach include the European Union.
It is InternetNZ’s opinion that there is harmful content that does not, but should meet the threshold to be classified as ‘objectionable’. We believe that if takedown powers were expanded to include other laws, such as incitement laws under the Human Rights Act, then those communities most affected by harmful content would be better protected. We don’t think all illegal content should be in this basket. However, there may be some (such as infringing copyright material) that does not rise to the level necessitating this remedy.
We would like to see a structure that includes embedded and resourced input from the communities most affected by harmful content and legal, tech, and subject matter experts. We would like to see a structure that includes a separate recourse entity to objectively assess the Regulator's decisions.
The first paragraph refers to a prescriptive approach to regulation rather than a supportive one. My objection to this proposal is covered above in the discussion about “soft” lawmaking. The prescriptive approach has elements of State diktat to it, which runs into difficulties with the freedom of expression and the moderation of communications platforms.
Reliance is placed upon the EU model, but what must be remembered is that the legal traditions and values that underpin the Continental system are quite distinct from those that underpin the Anglo-American and Westminster systems. Traditionally in Continental systems there is a greater role played by the State, and one must remember that censorship regimes in Europe were, historically, far more severe than those in England. One must be careful about buying in, uncritically, to a model whose philosophical underpinnings may differ in many respects from our own.
It is the second paragraph above that is particularly disturbing. I repeat it here:
It is InternetNZ’s opinion that there is harmful content that does not, but should meet the threshold to be classified as ‘objectionable’. We believe that if takedown powers were expanded to include other laws, such as incitement laws under the Human Rights Act, then those communities most affected by harmful content would be better protected. We don’t think all illegal content should be in this basket. However, there may be some (such as infringing copyright material) that does not rise to the level necessitating this remedy.
Basically what is proposed is that there should be a form of “unlawful” content that justifies takedown but does not meet the threshold for “objectionable” content, although the wording is ambiguous. The suggestion is that the content does not presently come within the definition of objectionable, and that the definition should be expanded to include this currently unclassified content. The suggestion is also that not all illegal content should fall within the scope of the proposal: infringing copyright material should be excluded. (There are remedies under the Copyright Act, but I assume that the writer of the copy is unaware of those.)
This is a direct threat to freedom of expression for two reasons.
First, it widens the scope of what is objectionable (illegal) material. As I have already discussed, there has been examination by both the Legislature and the Courts of what is needed for content to qualify as objectionable. What is proposed by InternetNZ is a significant expansion of that definition. It is unclear whether this type of material would be included in the Codes of Conduct (soft lawmaking) or in statute.
Secondly, takedown is no more and no less than a form of cancelling speech. There are circumstances where material may be taken down. The Harmful Digital Communications Act (discussed above) is one means. The provisions of the Contempt of Court Act provide another.
I make the following observations about takedown orders, referring to the Discussion Document.
At page 56 is the heading “The Regulator should have the power to order a platform to take down illegal material”.
An example follows at paragraph 106:
“For example, under the current regime if someone was convicted of a threat to kill delivered publicly online, the online threat is unlikely to meet the threshold of being ‘objectionable’ and the current takedown power would be unavailable if a platform chose not to remove it.”
Although the content may not be “objectionable”, a threat to kill still amounts to an offence under either the Crimes Act or the Summary Offences Act.
In such a case the person responsible could be prosecuted, and a Court could make a takedown order collateral to the prosecution – possibly as a condition of bail, to ensure that there is no repeat of the offending.
In the event that the author of the threat could not be identified, there would still remain a takedown power under the provisions of section 19 of the Harmful Digital Communications Act.
Paragraph 107 states:
“Similarly, offences under the Harmful Digital Communications Act, such as online bullying and harassment, would likely not meet the current threshold for a takedown notice issued by the Department of Internal Affairs (although the District Court can potentially order a takedown under that legislation).”
Both of these paragraphs demonstrate the absence of a takedown power on the part of the Department of Internal Affairs.
As matters stand, takedown powers have been regulated by statute. A reason for this is that there are freedom of expression implications, which the authors of the Discussion Document have overlooked.
It is incorrect for paragraph 107 to say that a Court may “potentially” order a takedown. Under section 19 the Court is empowered to make a takedown order.
It is clear that the takedown proposals are designed to widen the scope of the powers available to the Regulator in respect of material that may not be objectionable but may be illegal or unlawful under statutes other than the FVPCA. This is a significant overreach of censorship powers; such powers should be vested in the Courts or a Communications Tribunal.
The final paragraph of the Website Article contains a recommendation with which I whole-heartedly agree.
If this new censorship model is to be adopted, which it is hoped will not happen, then there should be “a structure that includes a separate recourse entity to objectively assess the Regulator's decisions.”
The language is opaque, but the suggestion seems to be that the Regulator’s decisions will be subject to review by the Courts – described as a “separate recourse entity”. That would ensure that objective assessment of the Regulator’s decisions could take place within the context of a proper and rigorous Bill of Rights Act assessment.
Conclusion
There are some jurisdictions where this discussion would not be possible. We are fortunate that we enjoy the benefits of a liberal democracy and a statutorily guaranteed freedom of expression so that we can have this debate. We do not and will not agree on everything. But we can debate matters. We can attempt to persuade others to our point of view. We can place our views within the marketplace of ideas to see if they have currency. And all this flows from an open and robust ability to engage in discussion.
The Internet facilitates those discussions. It is not without risk. But it has allowed for the true democratisation of debate and discussion. To interfere with that in the manner proposed by the DIA and supported by InternetNZ would be an unfortunate outcome for open discussion and for the freedom, which we currently possess, to express and to hear ideas.