Introduction
Is freedom of expression under threat? It would seem so, but for the most benign of reasons – the protection of the public. The investigation into the shape of legislation about “hate speech” has been stopped, although “stalled” would be a better term because, like a zombie, it may have new life breathed into it by a different government. But that said, and inexplicably, the Law Commission’s work on “hate crimes” will continue, but only when resources are available.
Linda McIver, the Law Commission’s general counsel, said:
“In relation to the hate crime aspects of the project, the minister has requested the Commission considers a narrower review on whether the law should be changed to create standalone hate crime offences as recommended…. The commission will commence that review once resources are available.”
It is not clear what approach might be taken to “hate crime”. When an offence has been proven, the Judge is required to take into account the aggravating factor
“…that the offender committed the offence partly or wholly because of hostility towards a group of persons who have an enduring common characteristic such as race, colour, nationality, religion, gender identity, sexual orientation, age, or disability; and
(i) the hostility is because of the common characteristic; and
(ii) the offender believed that the victim has that characteristic” (Section 9(1)(h) Sentencing Act 2002)
Hate crime provisions have, controversially, come into force in Scotland. The Hate Crime and Public Order (Scotland) Act introduces new offences for threatening or abusive behaviour which is intended to stir up hatred based on prejudice towards characteristics including age, disability, religion, sexual orientation, transgender identity and variations in sex characteristics.
These new offences add to the long-standing offence of stirring up racial hatred, which has been in place UK-wide since 1986.
Protections for freedom of expression are built into the legislation, and the new offences carry a higher threshold for criminality than the racial hatred offence.
Of course, threatening or abusive behaviour intended to stir up hatred will include speech or comment, especially comment online. So despite the change in the name of what is being investigated, the issue of a possible clog on freedom of expression remains.
We already have a number of restrictions on freedom of expression. The censorship regime under the Films, Videos, and Publications Classification Act 1993 provides an example. Complaints can be made about broadcast material to the Broadcasting Standards Authority and about news media content to the New Zealand Media Council.
But the legislative framework surrounding content regulation is claimed to be no longer fit for purpose. The main pieces of legislation are over thirty years old. Their core features are still relevant – codes of practice, protecting children from age-inappropriate content and censoring the most abhorrent words and images – so one wonders whether, even if the legislation is old, any review is needed if it still works.
The answer is that the information ecosystem has changed since the 1980s and 1990s, when most of this legislation was enacted. It is claimed that the legislation doesn’t cover the wide range of harms people are experiencing across online services and media platforms, that the system can’t keep up with new technologies, and that we have been relying on slow, reactive interventions which only take effect after people have already been harmed.
In support of its process to consider a new approach to media regulation, the Department of Internal Affairs began a consultation process about Safe Online Services and Media Platforms. That consultation process closed on 31 July 2023. The Department considered that it was time to reset the system.
The proposals were directed to changing the way that online services and media platforms are regulated, with the major change being the way that social media platforms are regulated.
The plan was that later in 2023 a submission summary report would be made available and an analysis of submissions would feed into high-level policy proposals for the Government to consider. This would then lead to the development of more detailed proposals.
None of this happened on schedule, but the Safer Online Services analysis of submissions was eventually released on 30 April 2024. That document can be found here.
The former Minister of Broadcasting, Communications and Digital Media, Melissa Lee, was the subject of scrutiny from the news media during the difficulties faced by Newshub and TVNZ and the restructuring of television news. Although her attitude appeared to be evasive, it is to her credit that she did not succumb to the suggestions from some quarters that there should be a Government bailout of media organisations, and lo, as is right and proper, the MSM companies worked out their own solution. Also to Ms Lee’s credit, she resisted suggestions that she should fast-track another “bailout” solution – the Fair Digital News Bargaining Bill, which I have written about in detail here.
On 9 April Ms Lee noted that there was no easy solution to the range of challenges facing the media industry. She said:
“I am working towards a solution…I know that it is very slow. If only I was a magician, if I could actually just snap up a solution, that would be fantastic.
But I’m not a magician and I’m trying to find a solution to modernise the industry... there is a process happening.”
Ms Lee was referring to a long-awaited Cabinet paper that has no set timeframe for release, or for when decisions might be made.
One of the issues that Ms Lee has to consider is the regulatory landscape for media – especially social media – in the Digital Paradigm.
And inevitably that gives rise to the question – should the Government be further involved in media regulation? The Department of Internal Affairs would argue that it should. It argues that its objective is to design a framework for safer online and media “experiences” across all types of platforms. It is suggested that this will:
provide better consumer protection for all New Zealanders and their communities by setting safety-based outcomes and expectations for platforms
better protect children, young people, and other vulnerable New Zealanders
reduce risk and improve safety without detracting from essential rights like freedom of expression and freedom of the press; and
promote a safe and inclusive content environment while remaining consistent with the principles of a free, open, and secure internet.
I have written at some length about the Safer Online Services proposals here, here, here, here, here and here.
The Department of Internal Affairs proceeds on the basis that the current regulatory framework is no longer fit for purpose and that the Government should bring it up to date.
But that approach is premised upon the assumption that the Government has a role in regulating the online interactions of citizens.
This article considers whether or not that assumption is soundly based.
The Landscape
For many years the major newspapers, television networks, and radio stations were the principal gatekeepers and moderators of our national dialogue. They set the tone. They directed the debate. We got all the news that they deemed fit to print or broadcast. They determined what and how many contrary opinions or points of view might be published. Often, in the interests of balance, they got this right. Frequently both points of view might be put forward. But opportunities to reach a wide audience were rare and expensive.
A paradigm shift has taken place with the introduction of the Internet and the various platforms that it makes available. The opportunities to reach a wide audience are now available to all who have an Internet connection. The democratisation of information exchange has never been greater.
Today anyone can effectively be a self-publisher with broad access to the public through multiple social media platforms. But when anyone can publish, without having to satisfy an editor or curator that what they say is factual, newsworthy, or ethical, the public conversation is at risk of being overrun by chaos, appeals to the lowest common denominator, expressions of bigotry and hatred, and false speech—both unintentional “misinformation” and intentional “disinformation.”
And it is here that social media comes into the picture.
Social Media
Social media refers to a variety of technologies that facilitate the sharing of ideas and information among their users. From Facebook and Instagram to X (formerly Twitter) and YouTube, more than 4.7 billion people use social media, equal to roughly 60% of the world's population.
In early 2023, 94.8% of users accessed chat and messaging apps and websites, followed closely by social platforms, with 94.6% of users.
Social media started out as a way for people to interact with friends and family but soon expanded to serve many different purposes. In 2004, MySpace was the first network to reach 1 million monthly active users.
Social media participation exploded in the years that followed with the entry of Facebook and Twitter (now X). Businesses gravitated toward these platforms in order to reach an audience instantly on a global scale.
According to Global Web Index, 46% of internet users worldwide get their news through social media, compared with 40% who view news on news websites. Gen Z and Millennials were more likely than other generations to view news on social sites.
There is no doubt much to criticize about social media. But there is also an unfortunate tendency to see it as the start and finish of all evil.
Many of the criticisms leveled at social media are not unique to it alone. It is said to promote information bubbles, in which people are rarely exposed to views that challenge their presuppositions and biases.
But the same is true of TV outlets, much talk radio, and many print outlets.
Some of the issues and criticisms of social media are as follows:
Hyperpolarization: Social media platforms have been criticized for contributing to the polarization of society by creating information bubbles where people are rarely exposed to views that challenge their own beliefs and biases.
Spread of “misinformation” and “disinformation”: Social media platforms have been accused of amplifying false and misleading information, whether it is spread by Russian agents, domestic actors, or unwitting individuals. The lack of editorial gatekeeping and the sheer volume of content make it challenging to effectively combat misinformation.
Extremism and “hate speech”: Social media platforms have been criticized for allowing the spread of extremist ideologies and hate speech. The ease of publishing without editorial oversight has enabled the dissemination of bigoted and hateful content.
Mental health concerns: Social media has been associated with negative impacts on mental health, including increased rates of depression and anxiety. The constant exposure to curated and often idealized versions of others' lives can contribute to feelings of inadequacy and low self-esteem.
Influence on elections and democracy: Social media platforms have been implicated in cases of foreign interference in elections, such as Russian interference in the 2016 US presidential campaign. The ability to manipulate algorithms and target specific audiences raises concerns about the integrity of democratic processes.
It is important to note that these criticisms are not exclusive to social media platforms and can also be found in other forms of media. However, the unique characteristics of social media, such as its reach, speed, and lack of editorial oversight, have amplified these issues. Thus social media becomes the loudest voice in the room.
But does a perceived disproportionate emphasis that social media brings to the conversation justify it as a target for regulation?
Those in favour of regulation would point to both the above and the following characteristics of social media that might justify government intervention:
1. Information bubbles and hyperpolarization, where users are exposed primarily to content that aligns with their existing beliefs and biases, lead to echo chambers and a lack of exposure to diverse perspectives.
2. Social media algorithms are criticized for amplifying extreme views because they generate more online engagement and profits. This can contribute to the spread of divisive and polarizing content.
3. Social media platforms are seen as facilitating the spread of false and misleading information which may be intentional disinformation campaigns by foreign or domestic actors, as well as unintentional sharing of misinformation by users.
4. Unlike traditional media outlets, social media platforms have minimal editorial oversight, although, as will be discussed, there is a level of content moderation. This can lead to the dissemination of content that is inaccurate, offensive, or harmful.
Content Moderation
Content moderation is necessary to prevent platforms from being overrun by offensive, irrelevant, or harmful material. Platforms have the responsibility to strike a balance between allowing free access to speech and curating content to keep their sites useful.
The reality is that no social media platform is literally open to all messages; they all engage in some content moderation, prohibiting certain messages, favoring others, and deemphasizing still others.
If content were not moderated at all, the platforms would be useless and users’ “feeds” would be filled not with material that might interest them but with whatever was most recently or most frequently posted.
Spam, irrelevant garbage, pornography, and hate speech would become regular features of users’ favorite platforms.
There is no doubt that content moderation policies could be improved, but it is far from clear how to achieve that.
Empowering the State to impose the rules is a treatment likely worse than the disease.
Emily Bazelon is an American journalist. She is a staff writer for The New York Times Magazine, a senior research fellow at Yale Law School, and co-host of the Slate podcast Political Gabfest. She is a former senior editor of Slate. She comments as follows:
“When it comes to the regulation of speech, we are uncomfortable with government doing it; we are uncomfortable with social media or media titans doing it. But we are also uncomfortable with nobody doing it at all.”
Facebook moderates content through a combination of automated systems and human review. Here is an overview of their content moderation process:
1. Reporting: Users can report content they believe violates Facebook's Community Standards, which cover a wide range of issues such as hate speech, violence, nudity, and misinformation.
2. Automated Systems: Facebook employs artificial intelligence (AI) algorithms to detect and remove violating content. These systems use pattern recognition and machine learning to identify potentially problematic content based on predefined rules and guidelines.
3. Human Review: Certain types of content, especially those that are more nuanced or context-dependent, are reviewed by human content moderators. These moderators assess reported content and make decisions based on Facebook's policies and guidelines.
4. Community Standards Enforcement: Facebook's Community Operations team is responsible for enforcing the platform's Community Standards. They review reported content, take action on violating posts, and apply penalties such as removing content, issuing warnings, or disabling accounts.
5. Appeals Process: If a user disagrees with a content moderation decision, they can appeal to Facebook for a review. Appeals are typically reviewed by a different team of moderators to ensure impartiality.
6. Partnerships and External Input: Facebook collaborates with external organizations, fact-checkers, and experts to improve content moderation practices. They also encourage users to provide feedback and suggestions to enhance their policies and enforcement mechanisms.
It is important to note that content moderation on Facebook is an ongoing challenge due to the sheer volume of user-generated content. The platform continuously refines its algorithms and policies to address emerging issues and adapt to changing user behavior.
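To make the shape of such a pipeline concrete, here is a minimal sketch in Python of how reporting, automated classification, human review and appeals might fit together. It is illustrative only: the names (classify_content, human_review, appeal and so on) and the crude keyword rules are my own assumptions for the purpose of the sketch, not Facebook’s actual systems, policies or code.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical decision outcomes for a moderation pipeline.
class Decision(Enum):
    ALLOW = "allow"
    REMOVE = "remove"
    NEEDS_HUMAN_REVIEW = "needs_human_review"

@dataclass
class Post:
    post_id: int
    text: str
    reports: int = 0                    # number of user reports received
    decision: Decision = Decision.ALLOW

# Stand-ins for the "automated systems": a crude keyword classifier in place
# of the pattern-recognition and machine-learning models described above.
BANNED_TERMS = {"spam-link", "graphic-violence"}
BORDERLINE_TERMS = {"insult", "rumour"}

def classify_content(post: Post) -> Decision:
    words = set(post.text.lower().split())
    if words & BANNED_TERMS:
        return Decision.REMOVE
    # Nuanced, context-dependent or heavily reported content is escalated
    # to human moderators rather than decided automatically.
    if words & BORDERLINE_TERMS or post.reports >= 3:
        return Decision.NEEDS_HUMAN_REVIEW
    return Decision.ALLOW

def human_review(post: Post) -> Decision:
    # Placeholder for a moderator applying published community standards.
    print(f"Moderator reviewing post {post.post_id}: {post.text!r}")
    return Decision.ALLOW

def moderate(post: Post) -> Decision:
    decision = classify_content(post)
    if decision is Decision.NEEDS_HUMAN_REVIEW:
        decision = human_review(post)
    post.decision = decision
    return decision

def appeal(post: Post) -> Decision:
    # Appeals are re-reviewed, ideally by a different moderator.
    print(f"Appeal lodged for post {post.post_id}; re-reviewing.")
    post.decision = human_review(post)
    return post.decision

if __name__ == "__main__":
    posts = [
        Post(1, "Check out this spam-link now"),
        Post(2, "A mild insult about a public figure", reports=1),
        Post(3, "An ordinary holiday photo caption"),
    ]
    for p in posts:
        print(p.post_id, moderate(p).value)
    appeal(posts[1])
```

Even in a toy model like this, the hard questions sit inside human_review – the judgment calls that no amount of automation removes – which is exactly where the volume problem described above bites.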
We have seen that it really is not feasible to operate a social media platform without content moderation, but the decisions about such moderation should be in private hands rather than the suffocating paws of the State.
If there were to be a State-run system, there would have to be a form of viewpoint neutrality.
It would mean, for example, that a platform that allowed posts encouraging suicide awareness would also have to allow posts encouraging suicide.
It would mean that if a platform published antiracist posts or messages condemning antisemitism, it would also have to publish racist taunts and “Genocide to the Jews.”
It would bar platforms from taking down hate speech, for that is by definition a form of viewpoint discrimination.
And it’s far from clear how one could possibly implement viewpoint neutrality across billions of posts daily.
Government imposed content moderation is not the answer.
Rather there should be some principles – not imposed by the State – that can be agreed upon to guide the content moderation decisions of platforms, because we have agreed that some content moderation is necessary for platform viability.
Like newspapers and bookstores, social media platforms can refuse to publish or distribute content simply because they find it offensive, distasteful, false, or unworthy for virtually any reason.
The government, by contrast, cannot regulate speech on those grounds.
Private platforms need not be content- or viewpoint-neutral; indeed, they cannot function without constantly making such judgments. Nor are they bound to publish all speech that is protected by the right to freedom of expression. Platforms routinely bar nudity, pornography, hate speech, and support of terrorism and other violence.
Let us look at some of the issues surrounding Government or State imposed content moderation.
Government Imposed Content Moderation – Problems
1. Bill of Rights Act Protection: Section 14 of the New Zealand Bill of Rights Act 1990 protects freedom of expression and limits government censorship or regulation of speech. The protection is not as absolute as that available under the First Amendment to the US Constitution. But prima facie the protection is there.
Social media platforms, as private entities, are also protected by the NZBORA, allowing them to make editorial judgments and decisions about the content they host. Imposing government control over content moderation would potentially violate the platforms’ rights to free speech and a free press.
2. Practical challenges: Social media platforms host an enormous volume of content, with billions of posts being generated daily. Implementing government-mandated content moderation across such a vast scale would be logistically challenging and potentially result in over-censorship or under-censorship. It would be difficult to ensure consistent and fair application of moderation rules across all platforms and posts.
3. Viewpoint neutrality: As already noted, Government-imposed content moderation based on viewpoint neutrality would require platforms to allow all types of speech, including hate speech, misinformation, and harmful content. This would undermine efforts to combat harmful and offensive material and could lead to the amplification of harmful ideologies. It would also conflict with the platforms’ own content moderation policies and their responsibility to create safe and inclusive online environments.
4. Potential for abuse: Granting the government the power to regulate content moderation on social media platforms raises concerns about potential abuse of that power. It could lead to political interference, censorship, and suppression of dissenting voices. Government control over speech on social media platforms could undermine democratic principles and limit freedom of expression.
5. Private sector responsibility: While government regulation may not be the solution, there is a recognition that social media platforms have a responsibility to address the problems associated with their platforms. Platforms should take proactive steps to improve content moderation policies and promote responsible use of their platforms. This can include developing professional norms, implementing transparency measures, and fostering competition to address the concentration of power.
Conclusion
It is important to find a balance between protecting freedom of speech and addressing the negative impacts of social media. While government intervention may not be the answer, there is a need for ongoing dialogue and collaboration between platforms, users, civil society, and policymakers to find effective solutions to the problems associated with social media. It would be preferable to leave it up to the market and the platforms to develop a voluntary regulatory environment as has been the case with the Aotearoa New Zealand Code of Practice for Online Safety.
That seems to be a more than satisfactory solution.
And as for the Department of Internal Affairs Safer Online Services Project? The DIA website indicates that the content regulatory review will conclude in May 2024. After that - who knows.
As far as I am concerned the idea of trusting the government (as in the collection of district, regional or national politicians and public servants) to responsibly and honestly “govern” the internet is laughable. That bridge was weak before the COVID response placed a bomb under it and sent it sky high for a very significant proportion of the population.
The only way trust will be regained is by government following a significantly more hands-off (aka laissez-faire) path for a decade or so. That too has its shortcomings, but if it reverses society’s increasing distrust and disdain for those that govern us (by our choice and permission, don’t forget) it is probably wiser than the alternative – an ever more meddling, intrusive government leading to ever increasing discontent and, eventually, rebellion.