Introduction
A new report – “Content That Crosses the Line: Conversations with young people about extremely harmful content online” – released by the Classification Office shows that seeing extreme – and sometimes illegal or banned – content is part of the online experience for young people.
The Classification Office is a regulator with responsibility for classifying content that may need to be restricted or banned under the Films, Videos, and Publications Classification Act 1993.
The Office is also charged with conducting research and educating the public about the classification system.
The Chief Censor, Caroline Flora, said the purpose of the Report was to understand what extreme content young people are seeing online, where they’re seeing it, how it affects them, and what support they feel is or is not available.
The report reveals that extreme content, including content that has been classified as objectionable (banned) in New Zealand, can be difficult to avoid. In addition, curiosity is a key driver for engaging with it. Real-world graphic violence was the most common type of content that respondents mentioned.
Harmful content was easier to contain and control when the Classification Office was established 30 years ago.
The Report makes clear that when the Office restricted or banned content, it largely stayed out of public view. Today, content that once remained in the dark corners of the web is now surfacing in everyday online spaces – through messaging apps, social media algorithms, and even search engine results.
There can be little argument that the growing accessibility of illegal content online – including violent extremism and child sexual abuse material – demands prompt, responsive, and consistent classification services and support for the justice system.
This forensic classification work informs criminal investigations and requires in-depth analysis and expertise, and the Office continues to be actively involved in efforts to limit the availability of extremely harmful material, both in New Zealand and globally.
In recent years, parents, teachers, youth workers - and young people themselves – have consistently raised concerns about the impact of harmful content. The Report states that it affects safety and mental wellbeing and contributes to real-world harm.
Beyond emotional and psychological effects, this content can influence attitudes, beliefs, and behaviour, with serious impacts on society as a whole. There are also legal risks for those who create, share, or even possess material that crosses legal boundaries.
The Report states that the rise in harmful online content calls for a stronger focus on prevention and support. To respond to this challenge, the Censor’s Office takes a public-health-informed, multi-layered prevention approach.
The Purpose of the Report
The Report clearly states its purpose, which is to ensure young people, and the adults who support them, have the knowledge and skills to recognise and respond to extremely harmful online content.
The Report points out that this involves:
“• Prevention: we proactively engage with young people about the risks associated with harmful online content, including the dangers of radicalisation, criminalisation, and the creation or sharing of content.
• Education: we develop clear and accessible resources to help young people recognise, report, and manage exposure to extremely harmful content. This includes understanding legal boundaries and the broader harms it causes, such as desensitisation, trauma, and the promotion of harmful behaviours.
• Support: our approach involves equipping young people and the adults in their lives (educators and caregivers) with the information and resources they need to support young people who see or engage with extremely harmful content.”
The goal is to help New Zealanders better navigate issues such as understanding legal boundaries and recognising broader harms such as desensitisation, trauma, radicalisation, and the promotion of violence or harmful behaviours.
Extremely Harmful Content
The Report at page 6 defines the term and the target of its concerns.
“'Extremely harmful content' is a working definition referring to video, images, text and other material that people see online that could potentially be classified as objectionable (meaning banned or illegal) in New Zealand under the Films, Videos, and Publications Classification Act 1993.”
Thus, extremely harmful content is content that could potentially be classified as objectionable and therefore attract legal consequences.
But the Report takes a nuanced approach.
It observes that the consultations that were carried out (and which will be outlined in the next section) centred around young people’s own experiences of what they consider to be extremely harmful, which may not always align with what is technically illegal. Each person perceives harm differently, depending on context, life experience, and cultural background. Thus, a subjective component is present in the evaluation.
That said, some of the content discussed clearly fits within the definition of extremely harmful content, including graphic depictions of real-world violence, such as executions, mass shootings, suicide, and extreme cruelty towards animals. These types of content are the primary focus of the Report.
The use of the term “extremely harmful” is problematic in my view and I shall discuss this issue further in my observations on the Report below.
Report Methodology
The Report adopts an anecdotal rather than a statistical approach to the data. There was consultation with 10 groups of young people from across the country, and their voices form the basis of the Report.
The Report states at page 4:
“These conversations provided valuable insights into young people’s experiences and highlighted both their resilience and the serious challenges they face online.
• Young people talked about personally seeing extremely harmful content, including content that has been classified as objectionable (illegal) in New Zealand. This includes examples of graphic real-world violence, including mass shootings, livestreamed suicide, and extreme violence towards animals.
• Exposure to harmful content is often unintentional, appearing in social media feeds, chat groups, or shared directly by others. Even if not actively searching for harmful content, curiosity – or a desire to test their boundaries – can lead young people to engage with it when it unexpectedly appears in their feeds. Participants expressed a lack of confidence in platforms' ability to moderate content effectively.”
The consultations were designed to give the Report writers a deeper insight into the online experiences of the interviewees and to ensure that young people’s voices shape not just the resources created by the Office, but the broader approaches that the Office might take in responding to these challenges.
The consultation process is outlined at page 13 of the Report.
As stated, consultations involved ten groups of young people. While these groups are not representative of young New Zealanders generally, they included a diverse range of participants from communities across New Zealand.
The age of participants ranged from 12 to their early 20s and included young people who identified with a range of ethnic and cultural backgrounds. Participants from various urban and rural locations, socioeconomic backgrounds, and educational environments were included.
The Report describes how the consultation process was carried out:
“Consultations included facilitated group discussions and guided questionnaires. Sessions ranged from one to two hours, facilitated by interviewers familiar with the unique contexts of each group. The approach prioritised open, youth-led discussion rather than structured or adult-imposed definitions.
The consultations focused on young people’s personal experiences and perceptions of what they consider to be extremely harmful content. This meant discussions often included a broad range of content and behaviours, not limited to what is legally defined as objectionable. Some of this content comes under our definition of extremely harmful content, which is the focus of this report.
Views about harmful behaviour are also discussed in this report, with a focus on how this relates to extremely harmful content, including the way in which it is being shared, commented on and promoted.”
The consultation process was jointly led by an external facilitator and a team from the Classification Office. Following the sessions, the facilitator produced a detailed summary of key themes. The Classification Office research team then undertook a second stage of analysis, coding the full transcripts and refining the material into the Report. The final draft was peer-reviewed internally to ensure accuracy and consistency.
The Report is frank about possible shortcomings in this approach.
“The topics explored reflect what participants felt comfortable sharing in a group setting, which may not capture the full range of views, experiences, or levels of exposure. Group dynamics, topic sensitivity, and cultural context all shaped how the conversations unfolded.”
Findings and Revelations
The methodology of obtaining information from those directly affected by the content in question produced valuable insights that do not appear in many reports about harmful content.
One such insight is that encountering extremely harmful content is part of the online experience for some young people, and they are often dealing with this challenge without adequate support or guidance.
Participants in the process spoke about the need for supportive and understanding responses when seeking help with difficult content or online experiences.
They emphasised that assumptions or strong emotional reactions from adults can make it harder to speak up and sometimes lead them to avoid reaching out altogether.
Participants stressed the importance of being able to talk without fear of criticism or punishment. They felt that judgement or punitive actions – such as taking away devices – were unhelpful, and more likely to push them away from seeking support.
Young people wanted to feel confident managing these situations on their own terms, with the reassurance that trusted adult support is available if and when they need it.
The observation about taking away devices (or restricting access to devices or to social media) is an important one.
The Report contains a number of comments made by interviewees which have been thematically grouped. On the subject of banning or removing devices one interviewee said:
“Well, my parents said, "We’re taking your phone away for a week." And I’m like, ‘Why?’ … my phone is my social life when I’m not at school.”
Another observed:
“And if that person is a parent, they’ll be like, "Oh, maybe you should take a break from your phone if you’re seeing content like that." That’s not what I want. I just want to step away from it for a bit, think about it, and then come back.”
These comments make clear a reality of life for young people in the digital paradigm. Their social lives involve interaction with others through their devices.
Whether this is by means of social media platforms, messaging applications or the multitude of other methods offered by the communications medium that is the Internet, it is one of the ways in which they live their lives and communicate with their friends and peers.
Anecdotal evidence indicates, for example, that young people prefer to interact via their devices rather than watch “appointment TV”, and obtain their entertainment through those devices, watching streaming content and the like. This could well explain the migration of younger audiences away from mainstream media.
To suggest limiting access to devices, or a universal ban, would be to shut the stable door after the horse has bolted. It would be a discriminatory and counter-productive approach that would go the way of most prohibitions: it would be circumvented – dishonestly and surreptitiously.
The Classification Office is building on a multi-layered prevention strategy. Its objectives are stated as follows:
• Reduce young people's exposure to illegal and extremely harmful online content
• Help prevent young people from being drawn into harmful online spaces or behaviours that could put them or others at risk
• Minimise the creation and sharing of harmful content by young people
• Equip young people with the skills to identify and manage harmful online content responsibly
• Provide education and support for young people, parents, caregivers, and educators
• Foster a safer online environment by empowering young people and their communities to recognise and respond to extremely harmful content.
It is important to note that none of the proposals advanced by the Classification Office suggest the need for law changes, or actions that restrict the use of devices or Internet platforms by young people. No proposals are suggested to monitor or regulate Internet platforms in the manner suggested by the unfortunate Safer Online Services and Web Platforms proposals advanced by the Department of Internal Affairs and thankfully abandoned by the Government.
Rather the proposals are largely educative and proactive in developing ways and means that enable young people to responsibly use the wonders that modern communications technologies present.
Observations
The Report is helpful. It is measured and, unlike many papers in this area, does not rely on fear or hysteria. Importantly, it provides evidence from the voices of the users who are potentially most affected by extremely harmful content.
This leads me to two observations about the Report. The first concerns the use of the term “extremely harmful content”.
The reason that there is a problem with this term is that “harmful” has a particular meaning within the context of online harm. The Harmful Digital Communications Act 2015 (HDCA) defines harm as “serious emotional distress”. That is one of the elements that must be established if a civil enforcement order or an offence under the Act is to be proven.
The Report seems to look at harm from the point of view of the content of the material posted rather than its effect. In saying that, I recognise that the survey records a number of reactions to, and effects of, the extreme content that has been viewed.
But with a few exceptions – and it is recognised that these responses are subjective – distress that crosses the serious emotional distress threshold has not been established. There are psychological and emotional consequences. Some are short-term effects such as anxiety, shock, disgust, fear and unease. Others have a long-term effect – distress, overthinking, loss of sleep, and potential trauma. Some participants reported that disturbing content “stays with them” and resurfaces in their minds later.
Exposure can lead to depression, stress, or anxiety, and may trigger trauma responses, especially if the content relates to personal experiences or past events and it is at this level that the “serious emotional distress” aspect of harm becomes apparent.
These effects highlight the need for supportive, non-judgmental guidance to help young people navigate the challenges of encountering harmful content online.
But the problem with the definition still exists. Because the focus is on the type of content that prompts these responses, my view is that either the term “borderline objectionable content” or “potentially objectionable content” would remove any possible overlap with the legal definition of harm in the HDCA.
From time to time in the Report the word “harmful” is used without a modifier. This again causes confusion in that the focus of the report is on extremely harmful content rather than harmful content.
This underpins my suggestion that different terminology should be used to identify the type of content that the Report targets.
The second shortcoming in the Report is that no mention whatsoever is made of the Harmful Digital Communications Act 2015. This is a core piece of legislation in the online safety space, yet it merits not a mention in the Report. This seems to be a shortcoming in contemporary discussions about “online safety”. The Safer Online Services proposals contained a passing reference to the HDCA. Yet the Approved Agency under the HDCA – Netsafe – has been doing outstanding work in this space for many years. It is curious and questionable as to why the Act and the work done by Netsafe has been overlooked.
Different Approaches
The Report proposes certain remedial measures that can be adopted and offers a number of resources to assist. As I have already mentioned, the Report specifically avoids any form of legislative or State intervention.
In my article of 5 May, “Controlling the Narrative”, posted on Substack, I discussed the proposals of Rod Cope, who favours a legislative solution. I don’t intend to repeat here what I have written but want to highlight a couple of similar approaches which run up against the reasonable and measured response of the Chief Censor’s Report.
Cecilia Robinson, a well-known commentator, has suggested that New Zealand should ban the use of smartphones for under-16s. Already the use of smartphones in schools is forbidden, and that has a good rationale – they distract students from classroom engagement. But banning smartphones for under-16s removes a means of communication and socialisation that young people have enjoyed for some time.
And of course, as is the case with any ban, it can be circumvented. It can be breached and that in itself encourages devious and dishonest behaviour. A better solution is that proposed by the Report – education, discussion, parental involvement and availability.
Makes Sense is another organization that advocates a State solution. It has a campaign about limiting the availability of smartphones. The campaign page states as follows:
“#HOLD the phone is a NZ-wide campaign, aimed at challenging the culture of giving kids smartphones before high school.
Over the past few years, we have been advocating for systemic change to address children’s access to illegal sexual content online. Through our petition, lobbying, research, and conversations with parents, officials, schools, and tech companies, we’re even more convinced that keeping kids safe requires a society-wide response.
Primary prevention, alongside legislative change, is key—starting with keeping kids off smartphones.
By giving children independent access to the unregulated internet, social media, and gaming platforms, we’re exposing them to real risks. Sadly, the statistics are only getting worse in NZ.
We know some parents consider buying phones over summer in preparation for the new school year; this a crucial time to pause and reflect and to consider delaying a bit longer.
We have this same juggle that you have as parents with kids heading into intermediate and high school. We’re delaying smartphones because we’ve seen the data and we want better for them. In all our workshops, we’ve never heard a parent say, “I wish we’d given our child a phone earlier”.
Parenting with tech is hard, but together we can Hold the Phone and shift the statistics for Kiwi kids. Get on board, share on your socials, and have convos with your mates about delaying phones until high school.”
Although Makes Sense promotes responsible parental involvement, State involvement in parenting is an unnecessary intrusion and, as many of the respondents to the Report state, they want to retain their phones for social purposes. They live a significant proportion of their lives online.
Makes Sense compiled a comparative chart of online safety measures in different countries. It has columns for legislation in Australia, the EU, Canada and the UK. It has a column for New Zealand as well. But that column suggests that New Zealand has NO online safety legislation. And of course we do – the HDCA – but the Makes Sense chart seems to ignore it. Is that because it is inconvenient to their proposals that New Zealand DOES have such legislation? Or is it because of ignorance of the online ecosystem? I doubt that it is the latter.
Finally, Makes Sense has the following message on its website under the heading
“Systemic Approach to Child Safety Online”
Drawing on international best practice, we recommend:
A National Strategy for Online Child Safety
A whole-of-government plan to prevent harm, coordinate action, and promote child rights online.
A Safer Internet Agency and Online Children’s Commissioner
An independent agency empowered to lead, set standards, and drive action — with strong accountability mechanisms.
Clear Policy Mandates
Giving the agency authority to investigate, enforce safety standards, require transparency from tech platforms, and handle complaints.
A Future-Focused Vision
Moving beyond reactive measures towards a long-term roadmap for safer digital environments.
Narrow, Child-Specific Scope
Targeted regulation to protect children and tackle illegal content — without undermining free speech. Free speech groups have indicated support for focused protections on child safety, provided overreach is avoided.”
Quite clearly Makes Sense favours an invasive state-centred approach to the issues.
Conclusion
The Report is a useful and helpful addition to the literature on online harms and offers concrete and practical solutions that are available now and can be further developed and that do not involve intrusive State activity.
The Report proposes several solutions to address the challenges young people face with harmful online content:
Education and Awareness
Develop clear and accessible resources to help young people recognize, report, and manage exposure to harmful content.
Educate young people, parents, caregivers, and educators about legal boundaries and the broader harms of harmful content, such as desensitization, trauma, and the promotion of harmful behaviours.
Raise awareness about illegal material, its risks, and where to get help or report content through social media campaigns targeting both young people and their parents.
Support for Young People
Equip young people with the skills to identify and manage harmful online content responsibly.
Provide practical guidance that empowers young people to handle situations independently, with adult support available if necessary.
Foster open, non-judgmental communication between young people and adults to encourage seeking help without fear of criticism or punishment.
Resources for Parents, Caregivers, and Educators
Create practical tools and strategies for parents to keep their whānau safe online.
Offer tailored resources for educators and youth workers to better understand and respond to harmful content.
Provide "train the trainer" sessions for those working directly with young people to discuss these issues and offer meaningful support.
Improved Content Moderation and Reporting
Advocate for more effective and user-friendly reporting tools on social media platforms.
Encourage platforms to take stronger action in moderating harmful content and ensuring user safety.
Community and System-Wide Efforts
Collaborate with domestic law enforcement agencies and international networks to limit the availability of extremely harmful material.
Use hashing technology to share information about objectionable material without sharing the original content, improving efficiency and clarity in classification processes.
Support efforts to prevent young people from being drawn into harmful online spaces or behaviors that could put them or others at risk.
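The hashing approach mentioned above can be sketched in a few lines. In practice, agencies use specialised systems (often perceptual hashes designed to match visually similar images); the example below is a simplified illustration using an ordinary SHA-256 digest, with hypothetical function names and sample data, purely to show the principle: only digests are exchanged between organisations, never the material itself.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Return a SHA-256 hex digest that can be shared in place of the content."""
    return hashlib.sha256(data).hexdigest()

# A hypothetical shared list of digests of known objectionable files.
# Agencies exchange these short strings, not the files they identify.
known_digests = {file_digest(b"example banned material")}

def matches_known_material(data: bytes) -> bool:
    """Check incoming material against the shared digest list
    without anyone transmitting or viewing the original files."""
    return file_digest(data) in known_digests

print(matches_known_material(b"example banned material"))  # True
print(matches_known_material(b"harmless content"))         # False
```

A cryptographic digest like this only matches byte-identical copies; real classification workflows layer perceptual hashing on top so that re-encoded or resized copies are also caught, but the privacy property is the same.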
None of these solutions involve the intrusive or the heavy hand of the State. These solutions aim to reduce exposure to harmful content, provide better support for young people, and create safer online environments through education, prevention, and collaboration.
And if some of the content which is potentially objectionable crosses the legal threshold and is classified as objectionable, the Classification Office has powers under the Films, Videos, and Publications Classification Act 1993 to deal with it.