Safer Online Services - A Thematic Commentary Part 2
Continuing the thematic overview of the DIA Safer Online Services and Web Platforms discussion document
Introduction to Part 2
In the first part of this series I began with an overview of some of the issues surrounding rule making in the new information communication paradigm. It cannot necessarily be assumed that old style rules and models will be applicable in a paradigmatically different technical environment. I have dealt with these issues at some length in my book Collisions in the Digital Paradigm – Law and Rulemaking in the Internet Age (Hart Publishing, Oxford 2017).
That was followed by the first two of nine separate discussion heads. The first was a high-level commentary on the proposals in general. The second was a substantive discussion criticizing the absence of a proper and rigorous New Zealand Bill of Rights Act analysis.
In this second part of the commentary on the Department of Internal Affairs Safer Online Services Discussion Document I continue my consideration of the discussion heads identified.
The issues of harmful content and on-line safety are the subject of examination. Essentially the distinction is between an objective or a subjective approach to viewing on-line content, and whether the law is capable of providing a remedy for a range of subjective perceptions.
I then move to a discussion of the Codes and the concerns that must attend a model which proposes to regulate on-line content by means of Codes that are subject to no legislative scrutiny.
This is followed by a discussion of whether or not a “product liability” model is appropriate for the regulation of content that engages issues of freedom of expression. It is argued that this model is ill-suited to the approach proposed.
The Guiding Principle – Harm or Safety, Actual or Prospective
The document acknowledges that the terms “harm” and “safety” are used. The concept of “safe” or “safety” falls within the proposed consumer protection model, which itself has difficulties in the field of information communication. Keeping people safe involves the reduction of the risk of harm.
The definition of harmful is problematic. I include it here and then comment upon it.
Content is considered harmful where the experience of content causes loss or damage to rights, property, or physical, social, emotional, and mental wellbeing.
The use of the word “experience” of content is highly subjective, and it is doubtful that the word is in fact needed.
The element of content being causative of loss or damage to rights, property, or physical, social, emotional, and mental wellbeing introduces some difficulties.
From the outset I acknowledge that content can have an effect upon emotional and mental wellbeing. In the Harmful Digital Communications Act harm is defined as serious emotional distress. I discuss this below.
I find it difficult to accept that content in and of itself can be causative of loss or damage to rights, property, or physical wellbeing. It may prompt action that results in loss or damage, but in and of itself information is passive.
The example of a loss of money arising from a fraudulent scam that originates in false or misleading information comes to mind. However, the discussion document makes it clear that scams are not a target of regulation.
The definition clearly anticipates that a particular actual consequence has occurred. In that respect it is retrospective. That falls within the concept of harm which engages the provisions of the Harmful Digital Communications Act.
In that Act remedies are available where a digital communication causes harm. There can be no doubt that the Act applies to platforms. They are involved in digital communications.
As I have said, harm is defined as “serious emotional distress”. It should be noted that in the kinetic environment it is neither an offence nor actionable to say or write something that causes serious emotional distress. In that respect the Harmful Digital Communications Act (HDCA) is an example of “internet exceptionalism”.
There are various tests or yardsticks present in the HDCA which assist in assessing whether harm (as defined) has been suffered. For example, in section 22, which creates the offence of causing harm by posting a digital communication, three elements must be proven:
a) the person posted a digital communication with the intention of causing harm to a victim;
b) posting the communication would cause harm to an ordinary reasonable person in the position of the victim; and
c) posting the communication caused harm to the victim.