Safer Online Services and Media Platforms
The Safer Online Services and Media Platforms (SOSMP) review, formerly known as the Content Regulatory Review, was an initiative led by the Department of Internal Affairs (DIA) with support from the Ministry for Culture and Heritage. The project started in June 2021 and finished in May 2024.
The scope of the proposals was extensive and breathtaking. The aim of the project was to develop a new framework to regulate what can be published on online platforms and other forms of media (such as news) in New Zealand.
It addressed the sharing of harmful online content such as child sexual exploitation, age-inappropriate material, bullying and harassment, the promotion of self-harm, and so on. It also aimed to generally improve the regulation of online services and media platforms.
I do not intend to discuss SOSMP in detail. I have done that in other posts. In summary, what was proposed was a set of codes of practice, governed by an independent regulator, to control online harm and protect public safety. The safety standards would have applied to online and other media platforms.
The key elements were as follows:
1. Parliament would pass legislation setting expectations for the safety-based outcomes platforms must achieve, as well as the mandate and scope of the new independent regulator. The new legislative framework would repeal the Classification Act, but it would carry over existing provisions on illegal “objectionable” material. A code-based regime would replace the current classification regime for legal content. Thus censorship would fall within the scope of the proposals.
2. Codes of practice would cover: processes for platforms to remove content and reduce the distribution of unsafe content; accessible processes for consumer complaints about particular content; support for consumers to make informed choices about potentially harmful content; and how platforms would report on these measures, including on transparency, on how they are reducing the impact of harm from content, and on their performance against the codes.
3. A new independent regulator would be responsible for approving the codes of practice, overseeing compliance with the codes, and education and outreach. The DIA indicated the regulator would be focused on areas with the highest risk (e.g. harm to children, the promotion of terrorism or violent extremism) and would not have any powers over editorial decision-making or individual users who share content. The government would only intervene with individual pieces of content if they are, or could be, illegal – a power that already exists. The DIA emphasised that the regulator would not have the power to require platforms to take down content that is not illegal.
4. The scope of regulated entities would extend beyond traditional media services like TV and radio broadcasters to include digital media platforms, social media platforms, and other online service providers. For example, there could be rules requiring the responsible and transparent design of “ranking algorithms” such as social media newsfeeds, metrics for reporting on harm, and limits on the ability of users who post harmful content to reach wide audiences.
Most of the obligations in the proposed framework would apply to “regulated platforms” whose primary purpose is to make content available. The indications were that a platform or service would be captured if it had either an expected audience of 100,000 or more annually or 25,000 or more account holders annually in New Zealand. Alternatively, the regulator could designate a platform as a regulated platform if it was unclear whether the threshold had been met, or if the risk of harm from that platform was significant.
5. The DIA was not proposing to change the definitions of what is currently considered illegal in New Zealand. The new regime would retain powers of censorship for the most extreme kinds of content (called ‘objectionable’ material, defined in section 3 of the Classification Act). The new regulator would have powers to require illegal material to be taken down quickly from public availability. Criminal and civil penalties would still apply and prosecutions could continue to be undertaken by government agencies.
The DIA was proposing that the regulator should also have powers to deal with material that is illegal for other reasons (for example, harassment or threats to kill) and was seeking feedback on what other kinds of illegal material the regulator should have powers to deal with. The proposed enforcement powers included directing platforms to take remedial action, issuing formal warnings, seeking civil penalties for significant regulatory non-compliance, and requiring platforms to take down illegal material quickly when directed, with liability for not meeting specified timeframes.
The work on the project was stopped by the Coalition Government. Internal Affairs Minister Brooke van Velden argued that illegal content was already being policed, and the concepts of “harm” and “emotional wellbeing” were subjective and open to interpretation. She also said it was a matter of free speech.
“The principle of free speech is important to this coalition government and is an essential factor to consider in the digital world. On this basis, the Department will not be progressing with work to regulate online content.”
The Narrative for the Need for the SOSMP Proposals
The DIA programme had been in place for some time. It initially started as a Content Review programme and stuttered along until the Discussion Document was released by the DIA and public “consultation” was called for.
The DIA programme was premised on issues of harmful content and a claimed need to provide greater regulation of Internet platforms. Although the technology has progressed by leaps and bounds since 1995, the messaging that formed the arguments for the Technology and Crimes Reform Bill has continued.
The Discussion Document dated May 2023 summarised what was seen as the “problem”:
“Everyone consumes or uses content, from books, films, and radio to social media, blogs, and everything in between. However, our rapidly evolving and growing environment means that New Zealand’s existing regulatory systems for content are no longer as responsive or effective as we would like them to be. Because of this, New Zealanders are being exposed to harmful content and its wider impacts more than ever before”
When we look closely at this statement it is clear that there is an automatic assumption that regulatory systems for content are necessary. There is neither discussion nor justification for the regulation of content.
The biggest interference with New Zealanders’ access to content lies in the Films, Videos and Publications Classification Act. That Act bans objectionable content that is strictly defined, and I have discussed that system in an earlier article in this series. I have no difficulty with restrictions on access to objectionable content.
But the Safer Online Services proposals were designed to introduce a significantly greater interference with content than that administered by the Censor.
The DIA described these as “robust consumer protection measures”, which is a very diluted euphemism for censorship. It is perhaps a classic piece of official messaging which attempts to cast an authoritarian proposal in the most favourable and friendly light.
The need for urgent action is emphasised.
“If we do nothing, New Zealand is at risk of falling behind the protections that other like-minded nations are providing. The proposals in this paper are aligned to the changes being made in other countries to better protect their citizens and their human rights.”
Once again there is an assumption, without justification, that regulation is needed, together with a suggestion that New Zealand would fall behind other countries. But as for the protection of citizens and their human rights, that is an outright misrepresentation. The SOSMP proposals were for a censorship regime that would have restricted New Zealanders’ freedom to express and access information – the freedom of expression guaranteed under the New Zealand Bill of Rights Act.
As is the case in most messaging of this nature – especially from the State – a nod is given to the freedom of expression. But there is rarely any justification given that the proposal is a necessary and reasonable limitation of the freedom of expression in accordance with the New Zealand Bill of Rights Act 1990. It seems to me, as we progress through the third decade of the twenty-first century, that the New Zealand Bill of Rights Act is treated as an inconvenience and an obstruction to the continued growth of the State and its interference in the lives of its citizens.
An example may be seen in the Social Media Age Restricted Users Bill (SMARUB), which significantly interferes with the rights of parents to set standards for their children and allows the State to assume responsibility for deciding which social media platforms under-16s would be allowed to access.
One of the narrative themes that accompanies proposals like SOSMP and the SMARUB is that of fear.
An example appears in the SOSMP Discussion Document.
“In a June 2022 research report from the Classification Office, 83% of respondents reported being concerned about harmful or inappropriate content on social media, video-sharing sites, or other websites”
The Document also considers:
“widespread concerns about the harm some content is causing children and young people. Many of these concerns were about social media and other online platforms, but we also heard concerns about other types of platforms such as broadcasters. This risky content includes age-inappropriate material, bullying and harassment, and promotion of self-harming behaviours.
Instances of harmful content on mainstream social media sites, such as influencers promoting dangerous disordered eating to teenage girls, have become too common.
Internet NZ’s 2022 Internet Insights report also found that respondents were most concerned about the internet enabling young children to access inappropriate content”
Thus a climate of fear is generated to provide a basis for public acceptance of restrictions on individual liberties. And the trouble is that it works, partly because of a lack of critical analysis by the majority of the public, but also because of a growing lack of resilience in confronting uncomfortable content – almost an unwillingness to turn away or operate the Off switch. Rather, it is easier to ban this stuff so that we don’t have to deal with it.
The narrative of fear of harm continues. The “justification” continues:
There have been well-documented cases where young people have been seriously harmed by distressing material that has been actively recommended to them by platforms. (In this discussion document we use the word ‘platforms’ to refer to providers of content and services – for example, social media companies or broadcasters.)
Two things are obvious from this comment. The first is that the scope of the SOSMP proposal is wider than Internet platforms and includes broadcasters. Thus the entire ecosystem of communicated information is suddenly subsumed into the proposal.
The second point is that the allegations of harm – and these come from a number of sources, including the B416 group that supports the SMARUB – are incorrect and in fact amount to something of a selective generalisation.
Kerry Gibson, Sarah Hetrick and Terryann Clark suggest that research about the negative effects of social media is being used to scare parents. Their article “Social Media not the bogeyman of all kids’ ills” states:
Although it is correct that there are rising rates of mental health problems, researchers have recognised a range of social trends that might account for this, including broad shifts in family structure and living circumstances, increased academic pressure, housing and economic insecurity, inequity and discrimination, political polarisation, and climate change.
Social media is likely to be one of many factors that have an impact on the lives of young people – but is certainly not the sole contributor. A meme doing the rounds in young people’s social media networks captured the absurdity of adults blaming social media as the cause of young people’s distress: ‘Climate change is destroying the earth, we can’t get jobs, and we won’t own houses – but parents be like: Social media is making our kids anxious.’
Yet the “harm” meme is relentlessly pushed to justify some form of regulation of Internet communications systems and platforms. Indeed the very name of the DIA proposal – “Safer Online Services” – carries with it the implication that the online space is unsafe and therefore requires regulation in some supposed “public interest”.
The argument then moves to the inadequacy of existing systems. In a shift away from harm, the focus moves to consumer protection and in particular consumer safety protection. The proposal states:
Consumer safety protections on media and online platforms are not as strong as they are for many other services that New Zealanders use, and they are not consistent across all platforms. Most platforms set standards for the content they will carry, but the standards do not always reflect the expectations of the society they are operating in. These standards are also not always met. It can be very hard to resolve a complaint when a platform does not deliver on its commitments to its users.
What becomes clear from this messaging is that the Internet environment is unsafe, that it is causing harm, and that the platforms are not doing their job. Not once – not even once – is individual action or individual responsibility raised.
That inconvenient aspect of the matter is swept to one side as a paternalistic state moves in to assume responsibility for the perceived risks that may occur from Internet use.
The proposal acknowledges that there are powers available to deal with the worst content.
Our current system has legal powers to deal with the most awful and illegal content like child sexual exploitation and promotion of terrorism, regardless of whether it is delivered online or through traditional forms of media such as printed publications. But sometimes content that includes other illegal actions (such as threatening to injure) can be taken less seriously or even amplified online.
Note the use of emotive language – “the most awful and illegal content” when “the worst content” or “the most objectionable content” would have done.
The argument then moves on to the perceived inadequacy of existing legislation. The first argument advanced – the legislation is old – is a real classic. There is a concession that many parts of those laws are relevant, but they have neither the reach nor the tools to deal with the online world. It is too complex when people want to complain – and believe me, we are a nation of complainers, off to some official body to whinge about some upset or offence that has occurred.
The proposal states:
New Zealanders must figure out which of five industry complaint bodies to go to if they feel content is unsafe or breaches the conditions of the platform it is on. On top of that, not all forms of content are covered by those bodies. The system is also very reactive because it relies mainly on complaints about individual pieces of content. For most forms of content, we do not have the tools and powers to ensure that platforms are doing what they should to manage the risks of harmful content.
It is interesting to note that there is an implied criticism of a system that is reactive – that is, one that only engages when a complaint is made. The other option – unstated of course – is that the system should be proactive and “protect” citizens from their folly by stemming the tide of content that the State, by some Code of Conduct, decrees to be harmful.
The justification process concludes:
It is important that our laws reflect our digitalised environment, including clear avenues where consumers can influence the content they see and respond to content they feel is harmful. While the development of this legislation rests with government, the implementation and practice sit with platforms. These safety practices need clear oversight to ensure effective and appropriate implementation.
Thus: an enhanced complaints process, plus a responsibility on the platforms to implement government policy.
But behind this is the spectre of the State as a regulator of content. And there is a word for that – censorship. And the platforms must become complicit in this State-driven content control. If they do not comply, sanctions will be imposed upon them.
That, therefore, is an overview and analysis of the messaging from the DIA in support of its regulatory proposals.
Submissions were received by the DIA from a large number of bodies. The Free Speech Union, unsurprisingly, was opposed to the proposals and mounted a campaign, providing submitters with a template. The summary of submissions notes this, and the implicit suggestion is that those submissions – which reflect the concern of citizens who, after all, pay the DIA’s wages through their taxes – may not be taken as seriously as those that were “handcrafted” by commentators.
But the fact of the matter is that, irrespective of how a submission is put together, citizens were prepared to raise their voices – put their heads above the parapet – against the proposal. That their weight should be diminished in such a way is an insult from a group of arrogant bureaucrats who clearly did not like what they saw.
Augmenting the Messaging
In addition, that messaging was augmented by branches of the mainstream media and also by InternetNZ, whose submission was largely in support of the proposals, thus amplifying and adding to the control of the narrative.
Subsequent articles published by InternetNZ CEO Ms. Vivien Maidaborn continue to lament the abandonment of the project and call for its revival. This is a further example of the way in which an NGO can support the messaging around a project that, for the moment, is going nowhere.
That said, given the support of people like Ms. Maidaborn and InternetNZ, given the fact that SOSMP was a Labour Party initiative, and given the acid criticisms voiced by Mr. David Parker on Q & A on Sunday 6 April 2025, it is likely that the SOSMP proposals will be revived if a Labour Government is re-elected, notwithstanding Mr. Parker’s departure for greener pastures.
David Parker Enhances the Narrative
Mr Parker claimed in the Q & A interview that what social media companies allow on their platforms is “ruining civilisation” and “ruining our democracies.” He admitted they were strong words but said he believed them to be true. (As an aside, matters of belief are generally matters of faith and ipso facto lack evidential support. Mr. Parker could have been more precise in his language.)
He went on to give some examples of harms which he attributed to social media platforms:
“Youth suicide, mental health problems with young people, people getting ripped off all of their savings from scams, and those companies are doing nothing to prevent it – in fact they’re selling services to the people doing it.”
Mr Parker claimed that the problem could be addressed by removing the exclusion of liability for social media companies, leaving it to the courts to sort out the balance between freedom of expression and the duty not to sell a harmful product. Then, having said that a form of censorship was to be imposed but would be monitored by the Courts, he claimed: “I don’t like the idea of government being a censor, it should be up to those social media companies to prevent the harm they’re causing.”
He then revealed his true Labour/Socialist politics-of-envy colours by suggesting that the world was becoming increasingly sick of “these selfish tech billionaires who are ruining things – how much money do they need?”
“Those unfair settings, they can’t last. The wild west of social media has to end. And I really do think they’re megalomaniacal tax avoiding tech billionaires.”
So Parker was using his session on Q & A – a few minutes towards the end of a discussion on Pacific security and tariffs – to ramp up the narrative on the evils of the Internet and to insist that something must be done.
Although he did not explicitly say so, it is clear that the subtext of his remarks was the cry that is so familiar in New Zealand – The Government must do something. And that something is to regulate.
Conclusion
The actions of the B416 Group supporting the SMARUB make it clear that there is a continuing appetite for some form of internet regulation. As I write this, the SMARUB proposals clearly require further work, and the Private Member’s Bill will be more closely scrutinised.
I would be prepared to suggest that the further work may result in a more extensive proposal – one that is broader than a prescribed restriction on under-16s accessing certain social media platforms.
It may well be that a somewhat diluted form of the SOSMP proposals will emerge – zombie-like – and it will be interesting to examine and parse the messaging that accompanies those proposals.
And it is guaranteed that if the Left recovers the Treasury Benches they will move swiftly to implement some form of control – probably in the form of the SOSMP proposals – over the way that we communicate online.