During the week commencing 29 April 2024 the Department of Internal Affairs released a summary of the responses it had received to the proposals contained in its discussion paper “Safer Online Services and Media Platforms”. I have written on the discussion paper within the context of the regulation of Internet platforms and some of the issues confronting those who would try to regulate aspects of the Digital Paradigm.
The centrepiece of the reforms would have been a new regulator, at arms-length from the Government, that would have had the power to fine large social media and internet-based content platforms for breaching rules designed to tackle harmful content.
Media firms would have had to abide by mandatory codes of practice also overseen by the new regulator, replacing self-regulatory schemes such as those run by the Media Council and the Broadcasting Standards Authority.
The regulator would have been given the power to force online platforms to take down more kinds of illegal content, such as threats to kill people.
According to Internal Affairs, the overall objective of the initiative was to “improve consumer safety for all New Zealanders”.
The summary of responses made interesting reading. A total of 20,281 submissions were received from individuals and organisations.
The key themes that emerged from the submissions were:
• the need for clearer definitions,
• the role of the regulator in code development,
• inclusion of diverse stakeholders,
• focus on online platforms and social media,
• protection of freedom of expression,
• recognition of children and young people as stakeholders,
• accessible complaints process,
• adequately funded education initiatives, and
• alignment with existing frameworks and international policies.
There were also concerns about the regulatory framework which echoed some of these themes.
Threats to Freedom of Expression:
Concerns about potential infringement upon freedom of expression, censorship, and restriction of public discourse.
Lack of Clarity and Definitions:
Need for clearer definitions of terms like "harmful content" and "platforms" to avoid ambiguity and inconsistent enforcement. Submitters saw a problem with "harmful content" and with references to "safety", which suggest a reduction or possible elimination of risk. The definition of "content" also requires more work.
Regulatory Capture:
Concerns about dominant industry players shaping regulations to their advantage and suppressing competition.
Compliance Burden on Smaller Platforms:
Worries that compliance requirements may disproportionately burden smaller platforms, leading to reduced competition and barriers to entry.
Lack of Democratic Oversight:
Concerns about the lack of democratic oversight in the regulatory framework and the potential for unaccountable regulatory bodies.
Impact on International Platforms:
Challenges of enforcing regulations on international platforms, potentially disadvantaging local media platforms and reducing diversity.
A Prevention-Focused System?
Some submitters emphasised the need for a prevention-focused system because they believe that proactive measures are essential in addressing the issues of harmful content and online safety.
They argue that by focusing on prevention, it is possible to create a safer online environment for all users, particularly vulnerable populations such as children and young people.
A prevention-focused system aims to identify and mitigate risks before they cause harm, rather than solely relying on reactive measures after harm has occurred.
This approach aligns with the goal of promoting a culture of respect, consent, and empathy online and can help safeguard the welfare and well-being of individuals.
The aim of preventive measures – such as education and awareness initiatives, early intervention, and the promotion of responsible online behaviour – is to minimise the occurrence of harmful content and behaviours in the first place.
There were also concerns about what amounted to harm. These concerns included:
Subjectivity: Submitters argued that the definition of 'harm' is subjective and lacks clarity, potentially leading to inconsistent decision-making and restricting freedom of expression.
Overreach: Some submitters were concerned that the definition of 'harm' is too broad and could encompass content that may not necessarily be harmful, potentially restricting legitimate speech.
Differentiation from illegal content: Submitters highlighted the importance of distinguishing between harmful content and illegal content, emphasising the need to address illegal content while preserving freedom of expression for legal but potentially harmful content.
Lack of specificity: Submitters believed that the definition of 'harm' lacks specificity and fails to identify specific types of harm, suggesting that a more detailed and nuanced definition would provide clarity and ensure effective targeting of intended harms.
Lack of clarity and uncertainty: Submitters argued that the definitions of harm are subjective, lack clarity, and are ambiguous, leading to uncertainty about what content will be deemed harmful or unsafe.
Implications for free speech: Concerns were raised about the potential chilling effect on free speech and public discourse due to the subjective definitions of harm, leading platforms to restrict a wide range of content to avoid potential breaches.
Inconsistent enforcement: Submitters argued that the lack of clarity and specificity in the definitions of harm can result in inconsistent enforcement and arbitrary decision-making by the regulatory body.
Impact on truth and expression: Some submitters expressed concerns that the proposed regulations do not provide a defence of "truth", potentially criminalising factually accurate content and impacting freedom of expression.
Lack of inclusion of psychological harm: Submitters highlighted the omission of psychological harm from the definition of harm, emphasising the need to explicitly include it to protect individuals from cyberbullying, harassment, and exposure to explicit or disturbing material.
Content encouraging self-harm: Concerns were raised about the lack of clarity regarding content that encourages self-harm, suggesting the need for explicit regulation of content that promotes or glorifies self-harm behaviours.
According to the document, the majority of individual submissions (approximately 438 out of 667) opposed the proposals or certain aspects of them. These submitters expressed concerns about the protection of free speech and the potential narrowing of people's right to freely express themselves.
On the other hand, approximately 28 individual submissions supported the proposals. These submitters emphasised the need to minimise content harm, especially for children and young people, and advocated for a prevention-focused system.
Approximately 202 submissions did not specify support or opposition for the proposals. While these submitters agreed on the need for a new regulatory framework, they also mentioned that the proposals needed further refinement and stronger definitions to have a more significant impact.
While I was analysing the submissions report I tried to access the website which references all the consultation documents. It was not at the usual address; the materials had been relocated. This seemed strange, but I thought nothing of it. However, there was silence about what was going to happen next. I found out that all the submissions that were filed – presumably including the template submissions filed via the Free Speech Union and Voices for Freedom – would be made available, anonymised, on 31 May 2024.
The reason for the silence became clear on Friday 10 May, when a spokesperson for the DIA said that the Department was not continuing with the project. There was no indication from the Department that any of the proposed work programme would continue in other guises.
Tom Pullar-Strecker, writing in The Post of 10 May 2024, reported:
“The Department of Internal Affairs will not be progressing with the Safer Online Services and Media Platforms programme,” its spokesperson subsequently told The Post.
“Content regulatory reform of the scale proposed by the Safer Online Services and Media Platforms work is not a ministerial priority for the Minister of Internal Affairs, Brooke van Velden,” he said.
And thus, with something of a whimper, the Safer Online Services project and the broad regulatory model it proposed have come to an end.
There were mixed feelings about the outcome. The Free Speech Union was pleased with the result. In an email, Jonathan Ayling, CEO of the FSU, said:
“Just last week, we saw the summary report on the discussion paper, 'Safer Online Services and Media Platforms'; over 93% of submissions were from FSU supporters opposing this proposal.
These were ambiguous, subjective proposals, like 'hate' speech laws always are. They wouldn't have reduced the harm they intended to, and would have been abused to simply silence 'unpopular' speech.
Kiwis around the country shared our concerns, and together, we spoke up. And that made all the difference. Today, we wrote to Minister Brooke van Velden, thanking her for listening to your response and abandoning these Orwellian proposals.”
On the other hand, InternetNZ was disappointed with the outcome. Mr. Pullar-Strecker reported comments by InternetNZ CEO Vivien Maidaborn:
“Stating it is not a priority means it is unlikely there would be anything that would replace or advance this area of work, which is also deeply disappointing,” she said.
But this is not an end to the matter.
There are a number of initiatives that could still impact upon digital platforms. The Fair Digital News Bargaining Bill has yet to be reported back from the Select Committee. There are calls for a total review of the regulatory instruments dealing with news media, and it is certain that these proposals will go beyond recognised mainstream media platforms and extend to other online services.
An example of this can be seen in a recent paper by Dr. Gavin Ellis, an honorary research fellow with Koi Tū: The Centre for Informed Futures. He is a well-known media commentator and expert. A former editor-in-chief of the New Zealand Herald, he lectured on media and communications at the University of Auckland for a decade. He was made an Officer of the New Zealand Order of Merit in 2015 for services to journalism and is the recipient of the Commonwealth Astor Award for Press Freedom. His paper, entitled “If Not Journalists, Then Who?”, was released on 1 May 2024.
The paper paints a picture of an industry facing existential threats and held back by institutional underpinnings that are beyond the point where they are merely outdated. It suggests sweeping changes to deal with the wide impacts of digital transformation and alarmingly low levels of trust in news.
My first impression – and it is only a first impression – is that the paper may contain elements of the Safer Online Services proposals. I am currently analysing it and will write on it in the near future.
Curiously, in all of this, the demise of the Safer Online Services programme has been reported in the Post, the Press and the Waikato Times. It doesn’t rate a mention by that doyen of media, Mr. Shayne Currie of the NZ Herald, or anyone else from that august publication (as at midday, 11 May 2024).
But State interference with content on online platforms is, for now, in abeyance.
I am trying to get my head around censorship versus the supporting or enabling of mis/disinformation.
Do you think they must be connected?
Two of my 'old lady friends' were exclaiming the other day that it was so good that the video of the stabbing of a 'Muslim priest' in Australia was taken down by the Aussie news services, because no one wants to see such violence, and thus censorship is good. They had been listening to NZ MSM reporting about a video that was not shown to them because apparently it was too awful.
So I asked are you sure it was a Muslim being stabbed, and not a Christian priest fronting a service in a church? Absolutely adamant it was a Muslim. 'Those poor people'.
I had actually seen that video on X, and I believe the priest himself was quite OK with the video of the attack being published. It seems it was removed because Muslims were attacking Christians, and that is not a good look!
I didn't see the MSM report my friends got their 'facts' from, but I can probably guess their thinking has been biased over some time. I suspect they will be supportive of the Christchurch Call because the MSM are. The FSU (I am a member) may have won a battle, but has not yet won the war.