Introduction
This is an article about a Bill enacted by the Australian Parliament in late November 2024.
The Bill has been misrepresented by the news media. It is suggested that the Bill bans people under 16 years of age from accessing social media platforms. It does not.
The Bill requires some but not all social media platforms to have an age verification protocol in place by November 2025 to ensure that people under the age of 16 cannot access those platforms.
The article starts with a background overview of the Bill including a summary of the scope of the Bill and its requirements.
For a proper understanding of the Bill some context and background are necessary because the Bill, rather than being a stand-alone piece of legislation, is an amendment to existing legislation.
I then examine the Bill in detail and discuss some of the principal operative sections and how they will work.
I shall make some observations on the Bill and consider the concept of “social licence” under which platforms are said to operate. I discuss the various elements of this concept because the element of social licence underpins the rationale for the Bill.
I shall discuss what the Bill does not do because it is clear that the mainstream media either misunderstands the thrust of the Bill or, in seeking a dramatic headline, has misrepresented the approach adopted by the Bill.
I shall consider the challenges that the Bill provides for the platforms and conclude the substantive part of this article with a review of the way in which some of the New Zealand media have characterized the Bill and the advocacy of some for similar legislation in New Zealand. I explain why, given New Zealand's legislative structure, this is not possible at this time, even if it were considered desirable. In my opinion it is not.
I conclude the article with some observations on regulation in the Digital Paradigm and how easy it is to become confused about the targets of such regulation.
The Online Safety Amendment (Social Media Minimum Age) Bill 2024.
The Online Safety Amendment (Social Media Minimum Age) Bill (SMMA) was introduced on Thursday 21 November 2024.
It proposes amendments to the Online Safety Act 2021 to require social media platform providers to take measures to prevent children who have not reached a minimum age from having accounts on age-restricted platforms.
Thus it is not a stand-alone piece of legislation but fits within the scheme of and is an amendment to the earlier Online Safety Act 2021.
The Bill was passed within a week of its introduction. It had its second and third readings on Wednesday 27 November 2024 in the House of Representatives. On that date it was introduced and read for the first time in the Senate.
The Bill passed its third reading in the Senate on Thursday 28 November 2024, and on Friday 29 November 2024 it passed both Houses of Parliament after the House of Representatives agreed to the Senate's amendments to the Bill.
An Overview of the Bill
The Bill places the burden on social media platforms to take reasonable steps to prevent Australians under the age of 16 from having social media platform accounts. It is to be emphasized that the burden is on platform providers and not young people or their parents.
The age requirements will apply to those age-restricted social media platforms defined in the Bill, which include Snapchat, TikTok, Facebook, Instagram, X and others.
The Australian Government claims that the Bill is responsive to the ever-evolving nature of technology, while enabling continued access to messaging, online gaming, and services and apps that are primarily for the purposes of education and health support – like Headspace, Kids Helpline, Google Classroom and YouTube.
Importantly the Bill makes it clear that no Australian will be compelled to use government identification (including Digital ID) for age assurance on social media. Platforms must provide reasonable alternatives to users.
A period of 12 months has been prescribed before the Bill comes into effect. This will allow time for social media platforms to develop and implement required systems.
A Summary of the Scope of the Bill
The bill applies to age-restricted social media platforms which are defined as electronic services that meet the following conditions:
· the sole purpose, or a significant purpose, of the service is to enable online social interaction between two or more end-users;
· the service allows end-users to link to, or interact with, some or all of the other end-users;
· the service allows end-users to post material on the service; and
· such other conditions (if any) as are set out in the legislative rules.
The bill clarifies that an electronic service is not an age-restricted social media platform if none of the material on the service is accessible to, or delivered to, one or more end-users in Australia or if the service is specified in the legislative rules. Further, the bill defines an age-restricted user as an Australian child who has not reached the age of 16.
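To see how these cumulative conditions operate, the following sketch applies them to a description of a service. It is purely my illustration – the Bill prescribes no such code, and the attribute and function names are hypothetical – but it shows that a service must satisfy all of the subsection (1)(a) conditions and none of the subsection (6) exclusions before it is an age-restricted social media platform.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    """Hypothetical description of an electronic service (illustrative only)."""
    social_interaction_is_sole_or_significant_purpose: bool  # s 63C(1)(a)(i)
    allows_linking_or_interaction: bool                      # s 63C(1)(a)(ii)
    allows_posting_material: bool                            # s 63C(1)(a)(iii)
    accessible_to_australian_end_users: bool                 # s 63C(6)(a)
    excluded_by_legislative_rules: bool                      # s 63C(6)(b)

def is_age_restricted_platform(service: ServiceProfile) -> bool:
    """Illustrative application of the cumulative s 63C conditions."""
    # The subsection (6) exclusions take a service out of the definition entirely.
    if not service.accessible_to_australian_end_users:
        return False
    if service.excluded_by_legislative_rules:
        return False
    # Otherwise all of the subsection (1)(a) conditions must be satisfied
    # (any further conditions set by legislative rules are ignored in this sketch).
    return (service.social_interaction_is_sole_or_significant_purpose
            and service.allows_linking_or_interaction
            and service.allows_posting_material)

# Example: a service that meets all three conditions and no exclusion.
print(is_age_restricted_platform(ServiceProfile(True, True, True, True, False)))  # True
```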
A Summary of Requirements Under the Bill
Under the bill, a provider of an age-restricted social media platform must take reasonable steps to prevent age-restricted users from having accounts with the age-restricted social media platform.
Additionally, the bill provides privacy protections for information collected by social media platforms to determine whether or not an individual is an age-restricted user. Specifically, the bill requires social media platforms to destroy such information after using or disclosing it for the purposes for which it was collected.
The Context of the Bill
As noted the Bill is an amendment to the Online Safety Act 2021 (OSA). It adds to and enhances the existing provisions of that Act.
The OSA gave powers to an eSafety Commissioner to address harmful behaviour and toxic content online.
In addition the Act made online service providers more accountable for the online safety of those who use their services.
It sets out a clear set of expectations for online service providers that makes them accountable for the safety of people who use their services.
The Act also requires industry to develop new codes to regulate illegal and restricted content. This refers to the most seriously harmful material, such as videos showing sexual abuse of children or acts of terrorism, through to content that is inappropriate for children, such as high impact violence and nudity.
In summary the OSA:
· creates an Adult Cyber Abuse Scheme for Australians 18 years and older
· broadens the Cyberbullying Scheme for children to capture harms that occur on services other than social media
· updates the Image-Based Abuse Scheme that allows eSafety to seek the removal of intimate images or videos shared online without the consent of the person shown
· gives eSafety new powers to require internet service providers to block access to material showing abhorrent violent conduct such as terrorist acts
· gives the existing Online Content Scheme new powers to regulate illegal and restricted content no matter where it’s hosted
· brings app distribution services and search engines into the remit of the new Online Content Scheme
· introduces Basic Online Safety Expectations for online service providers
· halves the time that online service providers have to respond to an eSafety removal notice, though eSafety can extend the new 24-hour period.
The office of the eSafety Commissioner has significantly wider powers than that of the Approved Agency under the Harmful Digital Communications Act 2015 (HDCA). For example the Commissioner may require removal of content by issuing an eSafety Removal Notice. No such power lies with the Approved Agency under the HDCA; it rests with the Court, which must take into account a number of factors including rights under the New Zealand Bill of Rights Act 1990.
The OSA targets platform providers rather than individual creators of content. This is another difference between OSA and HDCA.
The OSA prescribes what the Government expects from online service providers.
These expectations are designed to help make sure online services are safer for all people to use. They also encourage the tech industry to be more transparent about their safety features, policies and practices.
The Basic Online Safety Expectations are a broad set of requirements that apply to an array of services and all online safety issues. They establish a new benchmark for online service providers to be proactive in how they protect people from abusive conduct and harmful content online.
eSafety expects online service providers to take reasonable steps to be safe for their users - to minimise bullying, abuse and other harmful activity and content and to have clear and easy-to-follow ways for people to lodge complaints about unacceptable use.
The Minister for Communications, Urban Infrastructure, Cities and the Arts can determine the expectations for certain online services. eSafety then has the power to require online service providers to report on how they are meeting any or all of the Basic Online Safety Expectations.
The Basic Online Safety Expectations are backed by civil penalties for online service providers that do not meet their reporting obligations.
eSafety will also have the ability to name online service providers that do not meet the Basic Online Safety Expectations, as well as publish statements of compliance for those that meet or exceed expectations.
The OSA requires industry to develop new codes which are mandatory and apply to various sections of the online industry, such as:
· social media platforms
· electronic messaging services
· search engines
· app distribution services
· internet service providers
· hosting service providers
· manufacturers and suppliers of equipment used to access online services, and people who install and maintain equipment.
The codes, when registered, can require online service providers and platforms to detect and remove illegal content like child sexual abuse or acts of terrorism. They can also put greater onus on industry to shield children from age-inappropriate content like pornography.
The Act allows eSafety to impose industry-wide standards if online service providers cannot reach agreement on the codes, or if they develop codes that do not contain appropriate safeguards.
In many respects these provisions are similar to those that were the subject of the Department of Internal Affairs Safer Online Services and Media Platforms proposals.
The Act provides a list of examples the industry codes may deal with. These include that:
· all segments of the industry promote awareness of safety issues and the procedures for dealing with harmful online content on their services
· online service providers tell parents and adults who are responsible for children how to supervise and control children’s access to material they provide on the internet
· online service providers tell users about their rights to make complaints
· online service providers follow procedures for dealing with complaints in line with their company policies.
Codes would be enforceable by civil penalties and injunctions to make sure online service providers comply.
The Provisions of the Bill
I have described the structure of the OSA to provide a context for the Amendments proposed by the SMMA Bill.
The focus of the SMMA Bill is upon creating age restrictions for certain social media platforms and holding platforms responsible for taking reasonable steps to prevent children who have not reached a minimum age from having accounts.
The SMMA Bill provides additions to Part 4 of the OSA by adding Part 4A which introduces provisions relating to social media minimum age.
The focus is upon Age-restricted social media platforms.
Age-restricted social media platforms defined.
An age-restricted social media platform is defined as follows (s. 63C):
63C Age-restricted social media platform
(1) For the purposes of this Act, age-restricted social media platform means:
(a) an electronic service that satisfies the following conditions:
(i) the sole purpose, or a significant purpose, of the service is to enable online social interaction between 2 or more end-users;
(ii) the service allows end-users to link to, or interact with, some or all of the other end-users;
(iii) the service allows end-users to post material on the service;
(iv) such other conditions (if any) as are set out in the legislative rules; or
(b) an electronic service specified in the legislative rules;
but does not include a service mentioned in subsection (6).
Note 1: Online social interaction does not include (for example) online business interaction.
Note 2: An age-restricted social media platform may be, but is not necessarily, a social media service under section 13.
Note 3: For specification by class, see subsection 13(3) of the Legislation Act 2003.
(2) For the purposes of subparagraph (1)(a)(i), online social interaction includes online interaction that enables end-users to share material for social purposes.
Note: Social purposes does not include (for example) business purposes.
(3) In determining whether the condition set out in subparagraph (1)(a)(i) is satisfied, disregard any of the following purposes:
(a) the provision of advertising material on the service;
(b) the generation of revenue from the provision of advertising material on the service.
(4) The Minister may only make legislative rules specifying an electronic service for the purposes of paragraph (1)(b) if the Minister is satisfied that it is reasonably necessary to do so in order to minimise harm to age-restricted users.
(5) Before making legislative rules specifying an electronic service for the purposes of paragraph (1)(b):
(a) the Minister must seek advice from the Commissioner, and must have regard to that advice; and
(b) the Minister may seek advice from any other authorities or agencies of the Commonwealth that the Minister considers relevant, and may have regard to any such advice.
Services that are not age-restricted social media platforms
(6) An electronic service is not an age-restricted social media platform if:
(a) none of the material on the service is accessible to, or delivered to, one or more end-users in Australia; or
(b) the service is specified in the legislative rules.
Note: For specification by class, see subsection 13(3) of the Legislation Act 2003.
(7) Before making legislative rules specifying an electronic service for the purposes of paragraph (6)(b):
(a) the Minister must seek advice from the Commissioner, and must have regard to that advice; and
(b) the Minister may seek advice from any other authorities or agencies of the Commonwealth that the Minister considers relevant, and may have regard to any such advice.
An age-restricted user is defined as an Australian child who has not reached the age of 16 years.
It will be noted that there are a number of special rule-making powers that are reserved to the Minister. In essence this provision, which is the real machinery of the amendment, sets out broad guidelines which will be distilled and crystallised in the future.
Age-restricted social media platforms is a new term introduced by the SMMA Bill. It draws on the existing meaning of ‘social media service’ in section 13 of the Online Safety Act, with a modification to expand the ‘sole or primary purpose’ test to a ‘significant purpose’ test when examining whether a service enables online social interactions between 2 or more users. This definition will not apply to other parts of the Online Safety Act, with the existing definition in section 13 remaining in effect.
While the definition of age-restricted social media platforms casts a wide net, flexibility to reduce the scope or further target the definition will be available through legislative rules made by the Minister for Communications. In exercising the rule-making power, the Minister will be required to seek and have regard to advice from the eSafety Commissioner, and may also seek advice from other relevant Commonwealth agencies. This will ensure that users under the minimum age retain access to platforms that predominately provide beneficial experiences, such as those that are grounded in connection, education, health and support. A rule-making power is also available to provide additional conditions that must be met in order to fall within the definition of age-restricted social media platform. At the same time, a rule-making power is available to respond to emerging technologies and services that are relevant to be considered or captured by the definition.
Achieving these outcomes through disallowable instruments, rather than primary legislation, allows the Government to be responsive to changes and evolutions in the social media ecosystem. Disallowable instruments are open to scrutiny by the Parliament, and therefore subject to Parliamentary oversight to ensure instruments are fit-for-purpose.
In the first instance, the explanatory note to the Bill states that the Government proposes to make legislative rules to exclude the following services from the definition of age-restricted social media platforms:
• Messaging apps
• Online gaming services
• Services with the primary purpose of supporting the health and education of end-users
Platform Obligations Defined
The obligation upon social media platforms to take reasonable steps to prevent age-restricted users from having accounts is supported by an addition to section 27(1) of the OSA, which sets out the functions of the eSafety Commissioner. New paragraphs (qa) and (qb), inserted after paragraph (q), confer on the Commissioner the following functions:
(qa) to formulate, in writing, guidelines for the taking of reasonable steps to prevent age-restricted users having accounts with age-restricted social media platforms; and
(qb) to promote guidelines formulated under paragraph (qa);
Age-restricted social media platforms must be able to demonstrate having taken reasonable steps to prevent age-restricted users from ‘having an account’. ‘Reasonable steps’ is a standard that has been imposed for the purpose of demonstrating compliance and features in national security legislation, privacy law, and elsewhere in the Online Safety Act.
Regulating the act of having an account will prevent age-restricted users from accessing the content and features that are available to signed-in account holders on social media platforms. This will help to mitigate the risks arising from harmful features that are largely associated with user accounts, or the ‘logged-in’ state, such as persistent notifications and alerts which have been found to have a negative impact on sleep, stress levels, and attention.
The obligation would not affect user access to ‘logged-out’ versions of a social media platform. As an example, the obligation would not affect the current practice of users viewing content on YouTube without first signing into an account. Similarly, Facebook offers users the ability to view some content, such as the landing page of a business or service that uses social media as their business host platform, without logging in.
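A minimal sketch of how a platform might give effect to this distinction is set out below. The Bill does not prescribe any particular mechanism, so everything here – the age-assurance step, the function names and the data passed around – is hypothetical and illustrative only: the point is simply that the check attaches to account creation, while logged-out browsing is left untouched.

```python
MINIMUM_AGE = 16  # the age below which a person is an "age-restricted user"

def assessed_age(prospective_user: dict) -> int | None:
    """Placeholder for whatever age assurance method a platform adopts.
    The Bill does not prescribe the method; this stub simply reads a value."""
    return prospective_user.get("assessed_age")

def handle_account_signup(prospective_user: dict) -> str:
    # The s 63D obligation attaches to *holding an account*, so the check
    # sits at the point of account creation (and, in practice, would also
    # need to be applied to existing accounts).
    age = assessed_age(prospective_user)
    if age is not None and age < MINIMUM_AGE:
        return "signup refused: age-restricted user"
    return "account created"

def handle_logged_out_request(content_id: str) -> str:
    # Logged-out browsing is untouched by the obligation, so no age check
    # is applied to content that is available without an account.
    return f"serving public content {content_id}"

print(handle_account_signup({"assessed_age": 14}))   # signup refused: age-restricted user
print(handle_logged_out_request("landing-page"))     # serving public content landing-page
```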
Penalties
Section 63D provides for a civil penalty for failing to take reasonable steps to prevent age-restricted users having accounts. It states:
A provider of an age-restricted social media platform must take reasonable steps to prevent age-restricted users having accounts with the age-restricted social media platform.
Civil penalty: 30,000 penalty units.
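The penalty is expressed in penalty units rather than dollars. As a rough illustration only – and assuming the Commonwealth penalty unit value of A$330 that applied from 7 November 2024 and the five-fold multiplier generally applicable to bodies corporate under the Regulatory Powers (Standard Provisions) Act, neither of which is stated in the Bill itself – the arithmetic behind the widely reported maximum of about A$49.5 million is as follows:

```python
PENALTY_UNITS = 30_000          # civil penalty specified in s 63D
PENALTY_UNIT_VALUE_AUD = 330    # assumed Commonwealth penalty unit value (from 7 Nov 2024)
BODY_CORPORATE_MULTIPLIER = 5   # assumed multiplier for bodies corporate

base_maximum = PENALTY_UNITS * PENALTY_UNIT_VALUE_AUD
corporate_maximum = base_maximum * BODY_CORPORATE_MULTIPLIER

print(f"Maximum penalty (base figure):    A${base_maximum:,}")       # A$9,900,000
print(f"Maximum penalty (body corporate): A${corporate_maximum:,}")  # A$49,500,000
```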
The obligation is framed as preventing age-restricted users from ‘having an account’. This places an obligation on platforms to stop Australian children under 16 from creating and holding an account in their own right, but not from accessing content on the platform, if the content can be accessed in a ‘logged out’ state (i.e. without logging into an account or profile). In homing in on account holding, the obligation seeks to mitigate the harms that arise from addictive features largely associated with the ‘logged in’ state of social media platforms, such as algorithms tailoring content, infinite scroll, persistent notifications and alerts, and ‘likes’ that activate positive feedback neural activity.
The obligation does not preclude a parent or carer from allowing their child to use an account held by that parent or carer.
Section 63D does not prescribe what ‘reasonable steps’ platforms must take. However, it is expected that at a minimum, the obligation will require platforms to implement some form of age assurance, as a means of identifying whether a prospective or existing account holder is an Australian child under the age of 16 years. ‘Age assurance’ encompasses a range of methods for estimating or verifying the age or age range of users. Whether an age assurance methodology meets the ‘reasonable steps’ test is to be determined objectively, having regard to the suite of methods available, their relative efficacy, costs associated with their implementation, and data and privacy implications on users, amongst other things. The outcomes of the Australian Government’s age assurance trial, funded to take place throughout 2024-25, are likely to be instructive for regulated entities, and will form the basis of regulatory guidance issued by the Commissioner, in the first instance.
In addition Section 63D would not preclude a platform from contracting with a third party to undertake age assurance on its behalf. Similarly, it would be open to a platform to enter into an agreement with app distribution services or device manufacturers, to allow for user information to be shared for age assurance purposes (subject to the consent of users and compliance with Australian privacy laws).
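The kind of flow this contemplates might look like the sketch below. It is illustrative only: the third-party provider, the API and the function names are all hypothetical, and the Bill does not mandate any of them. The point is simply that the age check can be delegated (with user consent) and that the information collected for the check is destroyed once it has served that purpose, consistent with the privacy protections noted earlier.

```python
def hypothetical_age_assurance_api(evidence: dict) -> int | None:
    """Stand-in for an external age-assurance provider; entirely hypothetical."""
    return evidence.get("estimated_age")

def age_assurance_check(user_consented: bool, evidence: dict) -> bool:
    """Returns True only if the user is assessed as 16 or over.
    The evidence collected for the check is destroyed once it has been used."""
    try:
        if not user_consented:
            return False  # no consent, no third-party check
        estimated_age = hypothetical_age_assurance_api(evidence)
        return estimated_age is not None and estimated_age >= 16
    finally:
        # Destroy the information collected for age assurance after it has
        # been used for that purpose (the privacy protection noted above).
        evidence.clear()

evidence = {"estimated_age": 17}
print(age_assurance_check(user_consented=True, evidence=evidence))  # True
print(evidence)                                                     # {} - destroyed after use
```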
Observations on the Bill
Unlike the Harmful Digital Communications Act 2015 (HDCA) whose principal targets are those who post harmful digital content, the SMMA focusses upon platforms.
This is consistent with the approach of the OSA where requiring platforms to address issues of online safety of users is the thrust of the legislation. This is the approach that has been adopted in the UK in the Online Safety Act 2023 which contains a range of measures intended to improve online safety in the UK, including duties on internet platforms about having systems and processes in place to manage harmful content on their sites, including illegal content.
The rationale for this approach is that the business model of social media platforms is to optimize user engagement and the time spent on the particular platform. The view is that while this has an impact on all users of social media it is particularly detrimental to children and young people who may be seen to be more vulnerable to harms associated with the platforms.
Although young people are viewed as “digital natives” who are developing within a digital ecosystem, the platforms are seen as providing insufficient protections for them.
The Bill places the obligation firmly on social media platforms to take reasonable steps to prevent users under the minimum age from holding an account.
The Social Licence
It is argued by proponents of the legislation that this reinforces the growing expectation that platforms operate under a social licence and have a social responsibility for the safety of their users, particularly children and young people.
The concept of a social licence is one that has been developing and is used to justify the regulation of online platforms. It is no more and no less than a construct that has been developed to justify State interference with new communications models.
The reason that I say that is this. Online platforms are no more and no less than a delivery system for content. The platforms themselves do not create the content. That is done by the users. The platforms therefore are content neutral in that they are transmitters of content in the same way that the Postal Service is a transmitter of mail and parcels. It carries no responsibility for the contents of the letters or the parcels.
This idea of the “common carrier” model of Internet Service Providers (ISP) and the developers of platforms which are “bolted on” to the Internet backbone has been reinforced by the development of safe harbours which protect ISPs and developers from liability for transmitting content over their service. A local example of a safe harbour can be seen in sections 23 – 25 of the Harmful Digital Communications Act 2015. Section 24 provides protection for an online content host in respect of any specific content of a digital communication posted by a person and hosted by the online content host if the host follows the process in that section.
The safe harbour and “common carrier” concepts developed because of the vast amount of data that flowed through the servers primarily of ISPs and the enormous difficulties of monitoring the flow of data. Social media platforms have been able to avail themselves of safe harbour protections (as may be seen by the provisions of section 24 HDCA) and the vast quantities of data flowing through their servers makes monitoring very difficult if not impossible to manage.
Many of the large platforms wish to be seen as “good digital citizens”. For this reason they have developed terms and conditions for the use of their services. Many of the large platforms have content moderation policies to ensure that certain content will be removed and where necessary the subscriber who posted that content will have his or her subscription cancelled.
There are problems with internal content moderation policies. One is that many of the policies will be engaged only upon a complaint being made. The moderation policy is not proactive. Another is that the guidelines for content moderation may be set using local or domestic rules for what sort of content may or may not be allowable. For example, in the United States a much wider range of content may be allowable arising from the application of First Amendment freedom of speech provisions than in a country which has a less liberal approach to the freedom of expression. A third problem may be that the content moderation policies are often not uniformly applied and may be dependent upon the moderator rather than a consistent approach to a particular type of content. Finally – and I emphasise that these examples are not exhaustive – some content moderation policies are enforced by algorithms rather than by the human eye and brain. This form of moderation does not require a complaint but is an automatic process that scans the servers of the platform provider. Although the algorithms in these days of AI are capable of machine learning, the fact of the matter is that they make mistakes. “Innocent” content may be the subject of moderation whereas “offending” content may be missed.
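To illustrate that last problem, the sketch below uses a crude keyword score as a stand-in for whatever classifier a platform might deploy (the thresholds and the scoring function are entirely made up by me). Automated triage of this kind inevitably produces both false positives and false negatives, which is why borderline cases are often routed to human moderators.

```python
REMOVE_THRESHOLD = 0.9   # confident enough to remove automatically (illustrative value)
REVIEW_THRESHOLD = 0.6   # uncertain band referred to a human moderator (illustrative value)

def score_content(text: str) -> float:
    """Stand-in for a trained classifier; here a crude keyword heuristic.
    Real systems are far more sophisticated, but they still make mistakes."""
    flagged_terms = {"bomb", "attack"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def moderate(text: str) -> str:
    score = score_content(text)
    if score >= REMOVE_THRESHOLD:
        return "removed automatically"   # may still be a false positive
    if score >= REVIEW_THRESHOLD:
        return "queued for human review"
    return "left up"                     # may be a false negative

# "Innocent" content can be caught, while offending content is missed:
print(moderate("Our fireworks display will bomb if it rains and the attack of hay fever continues"))
print(moderate("Veiled call to violence phrased without any flagged terms"))
```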
There are two further problems with moderation policies. The first is that unless the terms and conditions of service are carefully drafted, certain legal duties may arise by virtue of moderation alone. This was exemplified in the cases of Cubby v Compuserve 776 F Supp 135 (1991), where an absence of content moderation meant that Compuserve was an “innocent disseminator” of defamatory content, and Stratton Oakmont v Prodigy 23 Med LR 1794 (1995), where Prodigy, a family-oriented computer network, held itself out as exercising editorial control over content and could not claim an “innocent disseminator” defence.
The second problem is that by adopting content moderation policies providers move into the murky waters of legal, moral and social responsibility for what goes on in their space. A provider cannot remove an extremist call to bomb an embassy on the grounds of a threat to physical safety without moving into the area of collateral responsibility for other forms of content that may not be as egregiously harmful but may be harmful nevertheless.
Coupled with these difficulties are the business models that the platforms have developed. Primarily it is advertising revenue that the platforms seek, and this revenue depends on the engagement of the eyeball with the ad (referred to as “views”). It is primarily for this reason that optimization of engagement is the primary objective of the platform, thus keeping the eyeball on the advertising.
It is the commercial side of the provision of a social media platform that attracts the concept of the social licence and provides the strongest rationale for it. If the platforms were simply providing a communications service the situation may be different. But the provision of the communications service, plus the use by the platform of advertising which requires continuing customer engagement, plus the fact that the platform derives a significant amount of its revenue from advertising lends weight to an integrated social and commercial purpose for the platform. By becoming part not only of the communications ecosystem, but also of the commercial ecosystem the reach and influence of the platform goes beyond a suggestion that it is merely a common carrier or an innocent disseminator. It has a wider social and commercial aspect which must therefore attract certain social responsibilities, summed up in the term “social licence”.
The social licence concept arises from a blend of ethical, legal and societal obligations which can be summarized as follows:
Public Trust and Legitimacy: Internet platforms rely on users for their success. To maintain public trust, they must operate in ways that align with societal values and norms, including prioritizing user safety.
User Retention: A platform perceived as unsafe risks losing users, advertisers, and investors. Ensuring safety is essential for sustaining their business model.
Non-Regulatory Expectations: A social license isn’t granted by law but rather by public approval. If platforms fail to protect users, they risk losing this informal but essential endorsement, leading to reputational harm or user abandonment.
Accountability in a Digital Society: As key players in the digital economy, platforms influence discourse, commerce, and social interactions. Society expects them to act responsibly and foster safe environments, reflecting their critical role.
The social licence carries with it certain ethical responsibilities which can be summarized as follows:
Duty of Care: Platforms create spaces where billions interact. With this power comes an ethical obligation to minimize harm, such as cyberbullying, misinformation, harassment, and exploitation.
Benefiting from Users: Internet platforms monetize user interactions through advertising, subscriptions, or data analysis. As beneficiaries of user engagement, they bear a moral obligation to protect those contributing to their success.
Vulnerable Populations: Internet platforms are often the setting for harmful activities, including child exploitation, online radicalization, and scams. They have unique capabilities to prevent harm due to their access to data and control over their ecosystems.
Amplification of Influence: The algorithms and policies of these platforms can amplify content, including harmful or divisive materials. This influence heightens their responsibility to ensure that their platforms promote positive societal outcomes.
Setting the Age
The explanatory note to the Bill makes the following points:
The Bill sets a minimum age for users to hold a social media account to protect, not isolate, young Australians.
Young people’s use of social media is a complex issue and there is currently no clear and agreed age at which children can safely use social media. No two children’s experiences on social media are the same.
Social media services vary greatly in their primary purpose and design features, and therefore present a different level of risk to end-users. Children also vary substantially in how they use social media, including which platforms they access, the content and communities they engage in, and the digital features they are exposed to.
The current minimum age of access under the Terms of Service of all major social media services is 13 years. This stems from the 1998 decision by the United States (US) Congress in the Children’s Online Privacy Protection Act, which prohibits websites from collecting information on children younger than 13 years without consent.
This legislation predates the existence of social media and is not an indication that these services are safe to use at this age or based on any evidence that 13 years is an appropriate age at which adolescents have the capacity to engage safely on these services.
A United Kingdom (UK) study published in 2022, which examined longitudinal data from more than 17,400 participants, found that adolescent social media use is predictive of a subsequent decrease in life satisfaction for certain developmental stages including for girls aged 11 to 13 years old and boys 14 to 15 years old.
Further, advice from the US Surgeon General states that social media exposure during this period of brain development warrants additional scrutiny. A minimum age of 16 allows access to social media after young people are outside the most vulnerable adolescent stage.
During the development of the Bill, the Department of Infrastructure, Transport, Regional Development, Communications and the Arts (the Department) conducted extensive consultation with young people, parents, mental health professionals, legal professionals, community and civil society groups, state and territory first ministers and industry representatives. Preferences for the minimum age for social media typically ranged from 14 to 16 years old, with some support for 18 years old.
Parents and carers feel unsupported to make evidence-based choices about when their children should be on social media and many are overwhelmed by pressure from their children and other families.
This is supported by a recent survey conducted by the eSafety Commissioner, with 95 per cent of caregivers reporting that children’s online safety is the hardest parenting challenge they face. Setting a minimum age removes ambiguity about when the ‘right’ time is for their children to engage on social media and establishes a new social norm.
This last rationale is perhaps the least satisfactory. It suggests that as far as social media use is concerned parents are disempowered or unable to carry out their basic parental functions. There is a suggestion underlying this that parents are afraid that they cannot be friends with their children. This is hardly a sound parenting approach. It is the duty of a parent to mentor and guide a child. It is the duty of the State to provide the educational tools to equip a child for later life. It is not the duty of the State to assume parenting duties or responsibilities that rest with parents within the family context.
What the Bill does NOT do
The Bill introduces an obligation on providers of an age-restricted social media platform to take reasonable steps to prevent age restricted users from having an account with the platform.
The onus is on platforms to introduce systems and processes that can be demonstrated to ensure that people under the minimum age cannot create and hold a social media account.
As the onus is on platforms, there are no penalties for age-restricted users who may gain access to an age-restricted social media platform, or for their parents or carers.
Thus the Bill does NOT, contrary to much media commentary, ban young Australians from accessing social media.
The Challenge for the Platforms
The definition of an "age-restricted" platform covers services whose sole or significant purpose is to enable online social interaction between two or more end-users, that allow end-users to link to or interact with some or all other end-users, and that allow end-users to post material. Australia's government expects that TikTok, Facebook, Snapchat, Reddit, Instagram and X will meet that definition and therefore have an obligation to verify users' ages.
How they do it is up to them. The government will soon run an age verification trial that it thinks will be instructional, and has promised that citizens will be able to use social media without having to show ID – but the details of how the obligation will be implemented are not known.
Social media platforms have until late 2025 to implement their "reasonable steps" or face significant fines. Some have already expressed concerns about the bill – arguably reinforcing the government's arguments about its necessity.
However the government has already hinted that the law has another motive - a signal for social media services to offer better protections for users.
Prime Minister Anthony Albanese recently stated: "We know some kids will find workarounds, but we're sending a message to social media companies to clean up their act."
And in a fillip for the government, the law passed two days after TikTok announced it is trialling tech to detect users who are under 13, and restricting the use of some appearance effects for under 18s. So perhaps complying with Australia's law won't be so hard.
The New Zealand Response
It should be observed that New Zealand lacks a piece of framework legislation like the OSA to which age restrictions for social media may be bolted.
The proposals by the Department of Internal Affairs set out in the Safer Online Services and Media Platforms discussion paper were heading for a similar model with a single regulator, a framework that involved the development of Codes of Practice (agreed or imposed by the Regulator) and a structure similar to (but not identical with) that of the OSA or the UKOSA.
Thus the calls for a similar approach in New Zealand seem to be based on the principle that young people should not (for whatever reason) be able to access social media rather than a detailed understanding of the legislation and its statutory context.
When the proposals were announced there was some reaction.
Netsafe CEO Brent Carey highlighted the fact that more effort should be applied to the management of screentime and the best way to use the amazing information resource provided by the Internet.
Just because a child or young person does not have a phone at school or social media at home does not mean they are shielded from online risks. And that looks at the matter from a negative perspective, because there are positive online experiences - connecting with causes, getting help, demonstrating skills, learning, or finding communities they identify with online.
The debate should be about helping young people develop the skill set to self-regulate, manage distractions and make informed decisions. Digital literacy is a fundamental part of a young person’s education.
The Chief Censor, Caroline Flora, has also expressed a view observing that critical thinking and open conversations are important in preventing and addressing harm in this context.
Even the Australian e-Safety Commissioner recognized that a ban may drive young people underground or to locate other platforms with which to communicate.
Dr. Eric Crampton of the NZ Initiative sees problems in a universal state-based identification verification system, although that is not proposed in the Bill. Nevertheless it could be on the horizon.
Celia Robinson, on the other hand, in the Herald for 15 September criticizes Mr Carey’s approach. She lumps him in with what she disparagingly refers to as “a noisy few” (which I gather includes the Chief Censor) and suggests that Netsafe has a conflict of interest because it has funding from some of the social media platforms. Netsafe has always been transparent about its funding and does a fine job in its role as the Approved Agency under the Harmful Digital Communications Act, which does not rate a mention in Ms Robinson’s op-ed.
I gather that as well as being critical of Mr Carey’s approach, Ms Robinson favours the Australian proposals or some other form of heavy-handed paternalistic state intervention. Interestingly, journalist and broadcaster Heather du Plessis-Allan supports the Australian initiative.
Ms Robinson wrote about Gen Z’s mental health crisis in June of 2024 in the Herald. Her concerns were about social media and links to suicide and self-harm among young people.
She referred to research that had been carried out by Dr Samantha Marsh who published a study on children’s screen use in Aotearoa. More on Dr Marsh shortly.
Ms Robinson stated as follows:
“While some argue further education for parents is needed to reduce or prevent smartphone usage among children, the potential for harm makes this situation similar to the regulation of alcohol, tobacco, and driving.
Therefore, I believe smartphones should be banned for individuals under 16 years old. By implementing such a ban, we can protect the mental health and cognitive development of adolescents during these crucial years. Sixteen may sound arbitrary but between the ages of 10 and 16, children experience significant brain development.”
Such a ban is supported by Dr. Marsh.
It is important to note that Ms. Robinson is advocating that individuals under the age of 16 should be banned from having smartphones, although she does not make such a direct statement in her 15 September 2024 article.
It is as important to note that the SMMA does not go as far as Ms. Robinson would like to imagine. It does not ban smartphone use by under-16s. It does not ban under-16s from accessing social media. Rather, it requires platforms to put age verification methodologies in place to prevent under-16s setting up accounts.
In addition, age-restricted platforms will not include every platform available online. Under-16s will still be able to access messaging apps, gaming platforms and YouTube.
The announcement of the passage of the Bill on 27 November created a flurry of media activity.
But even before the passage of the Bill Dr Samantha Marsh was advocating a “no harm” model of regulation, essentially a strategy that eliminates risk, rather than a harm reduction model.
Dr Marsh is a senior research fellow in the Department of General Practice and Primary Care at Auckland University. Her research focuses on child and youth health and wellbeing.
In an article in the Herald for 10 November 2024 Dr Marsh described the harm reduction model as focusing on
“mitigating harm after it occurs. Parents, teachers and decision-makers should carefully consider these “harm reduction” approaches being proposed.”
Dr Marsh addressed the position broadly advocated by Netsafe that we should have open discussions with children and educate them about social media use. She says:
“Another argument is that we need to have open discussions with our kids and teach them to have healthy relationships with social media. Education is important – kids need to know why and how social media causes harm – but education alone is unlikely to change behaviour and, therefore, unlikely to prevent harm.
Teaching people about healthy eating doesn’t stop them from indulging in junk food. We shouldn’t expect anything different from social media education. Education can’t compete with a platform that exploits the vulnerabilities of the teenage brain.
We shouldn’t be giving kids a product that’s been shown to harm their wellbeing and then expect them to use it responsibly or hold themselves accountable for their usage. Just as we don’t expect kids to have healthy relationships with other addictive products, the same should apply to social media.”
Dr. Marsh’s argument therefore is that parental responsibility in this area should shift from the home to the State. She concludes:
“Our kids deserve more than harm reduction – they deserve no harm caused by social media. At the very least, shouldn’t this be what we strive for?”
Hot on the heels of the Australian legislation, on 1 December 2024 the Herald ran a story under this headline:
“Kiwis want social media banned for young children, poll finds”.
The article is by Katie Harris, described as a “Social Issue Reporter”, who clearly has not read the SMMA or, if she has, does not understand it.
The thrust of the article, however, is not the SMMA but a poll conducted by Horizon Research in collaboration with the University of Auckland.
The report states that 74% of 1511 adults polled say there should be an age limit for accessing social media, with age 16 the most popular threshold for respondents.
Survey respondents were also asked who should be responsible for ensuring online safety, and more than three-quarters said parents.
Just under 70% also said social media companies, and 56% said the Government.
Now the reporting of the survey results requires a bit of thought because what we are getting is the reporter’s interpretation of those results. I have been critical of the reporting of surveys in the past. Very rarely are we given the precise language of the question to which responses are made.
Let us look at the issue of responsibility for ensuring online safety. We don’t know the language of the question. We don’t know whether respondents were asked to rank their options or could select more than one. So without that information the results as stated are confusing.
More than three-quarters of the respondents (a precise figure is not given) said online safety rested with parents. If each respondent could give only one answer, that would leave fewer than a quarter of respondents to favour other avenues like social media companies or the Government.
But no. Just under 70% (once again a precise figure is not given) believe online safety should be the business of social media companies.
Unless respondents were able to select more than one option, that means more people responded than were polled. The issue becomes even more complicated when we find out that 56% of those polled favoured Government intervention.
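A little arithmetic shows why the question format matters. Converting the reported percentages back into headcounts (my own calculation from the approximate figures in the article) produces a total well in excess of the 1511 people polled, which is only possible if respondents could select more than one answer – precisely the information the report does not give us.

```python
respondents = 1511

# Percentages as reported in the article (approximate, rounded figures).
reported = {
    "parents": 0.77,                 # "more than three-quarters" (approximate)
    "social media companies": 0.70,  # "just under 70%" (approximate)
    "the Government": 0.56,
}

headcounts = {who: round(share * respondents) for who, share in reported.items()}
print(headcounts)                # {'parents': 1163, 'social media companies': 1058, 'the Government': 846}
print(sum(headcounts.values()))  # about 3067 responses from 1511 people
# A total above 1511 is impossible for a single-choice question but entirely
# ordinary for a multi-select one; without the question wording we cannot tell.
```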
These are surprising results but they
“ come as no surprise to University of Auckland senior research fellow Dr Samantha Marsh who said parents report knowing social media is harmful but often feel they have to let their children use it.”
A further set of confusing results appears in the article, which states:
“Just under three-quarters of survey respondents shared concerns about children being exposed to inappropriate content, 75% about cyber bullying or harassment, 66% about exposure to sextortion and 69% about mental health impacts.
Five percent had no concerns.”
Once again the question or questions are not stated, there is no raw data, “inappropriate content” is not defined, and the figures for the responses do not add up unless respondents could select more than one concern.
I should imagine, since the research was done with Dr Marsh’s University as a participant, that she knows the wording of the questions and is privy (unlike the rest of us) to the raw data.
The newspaper report states that Internal Affairs Minister Brooke van Velden said that creating a legal age limit for social media is not something she is considering implementing in New Zealand. She is aware of the potential harms that can be caused by social media, which is why the Government banned school students from using mobile phones while attending school, enabling students to concentrate upon their learning.
The article reports that therapist and online safety advocate Jo Robertson is concerned about the grooming of children, although she acknowledges there are positive aspects to social media. She said it would be naive to think that having an age restriction would take away the risk of children seeing harmful content online; however, they would be less likely to “stumble” across it, and she urged parents to consider delaying giving their children devices.
Dr Marsh suggested on Radio NZ (29 November 2024) that New Zealand was open to a social media ban but would want to see how it played out in Australia.
She referred to the fact that platforms had a year to implement their age verification protocols which suggests that she has some awareness of the legislation.
But once again the focus in the stories is on banning under 16’s from social media which is NOT what SMMA is about. There is no language in the Bill which bans an under-16 from accessing social media. The platforms are required to prevent under-16’s accessing their services. There is a big difference and one which mainstream media has yet to grasp.
The confusion continues in a story from Radio NZ dated 30 November headlined
“Ban on teen social media will remove connections for those in marginalised communities - queer activist”
Once again mainstream media misses the nuance of what is proposed and the article states:
“A ban on social media for under-16s will remove connections for those in marginalised communities and prevent them from learning about the world, a queer activist says.
It follows Australia introducing a ban on platforms such as TikTok, Facebook, Instagram and Snapchat this week.”
There is no ban on platforms. The platforms have to set up age verification systems so that under-16s cannot hold accounts with certain social media platforms. There is NOTHING in the SMMA that specifically bans under-16s from accessing social media.
The article cross references to an earlier RNZ article dated 29 November which reads:
“Children and teenagers will be banned from using social media from the end of next year after the Australian government's world-first legislation passed the parliament with bipartisan support.”
This paragraph is incorrect. Children and teenagers will not in fact be banned, as I have already argued.
The article then corrects itself by stating:
“That means anyone under the age of 16 will be blocked from using platforms including TikTok, Instagram, Snapchat and Facebook, a move the government and the Coalition argue is necessary to protect their mental health and wellbeing.”
But this only tells part of the story. Who is blocking kids? Is it the State? If you are still with me you will know the answer – no, it is not the State; the obligation is on the platforms to set up age-verification systems so that under-16s will be unable to hold accounts on their platforms.
Is New Zealand likely to implement a similar proposal? Prime Minister Christopher Luxon told the New Zealand Herald in September he was "up for looking at all of that" (referring to a social media minimum age), while Labour leader Chris Hipkins said he was open to looking at a similar approach in New Zealand, provided there was evidence it would make a difference.
As I have already suggested, the structure that the Australians have put in place is founded on the OSA which has been in place for some years. The SMMA becomes part of the structure of that legislation.
New Zealand does not have a similar legislative structure, nor is it likely to have one given the current Government’s abandonment of the Safer Online Services and Media Platforms work by the Department of Internal Affairs.
So Ms Celia Robinson, Dr Samantha Marsh and others who would suggest that parents are incapable of parenting their children and that the State should assume part of that role will no doubt continue to write their fearful op-eds pointing to the Decline of Civilisation and Childhood’s End if the “Guvmint” doesn’t “do something”.
Conclusion
Marshall McLuhan coined the aphorism “The Medium is the Message” – perhaps his most famous and yet most opaque statement – which emphasises the importance of understanding the way in which information is communicated.
According to McLuhan, we focus upon the message or the content that a medium delivers whilst ignoring the delivery system and its impact. In most cases our expectation of content delivery is shaped by earlier media. We tend to look at the new delivery systems through a rear view mirror and often will seek for analogies, metaphors or concepts of functional equivalence to explain the new medium that do not truly reflect how it operates and the underlying impact that it might have.
“We become what we behold. We shape our tools and thereafter our tools shape us” is his second aphorism that summarises the impact that new media may have. Having developed the delivery system, we find that our behaviours and activities change. Over time it may be that certain newly developed behaviours become acceptable and thus underlying values that validate those behaviours change. In the case of information delivery tools, our relationships with, expectations and use of information may change.
McLuhan’s first aphorism is that content alone does not cause these modifications. My suggestion is that it is the medium of delivery that governs new information expectations, uses and relationships. How does this happen? One has to properly understand the tool – or in the case of information communication, the medium – to understand the way in which it impacts upon informational behaviours, use and expectations.
It would be foolish to suggest that content was not front of mind for the Australian Government when it enacted the SMMA. But at the same time it concentrated on the delivery system – the social media platforms – looking to them to devise a solution for what was seen to be an unhealthy pre-occupation with social media on the part of the under-16’s.
In this respect it seems, almost by accident, that the Australians, with the SMMA and its parent legislation the OSA, have perhaps obtained an echo of an understanding of McLuhan’s “medium is the message” aphorism.
That said, examples of what perhaps has not been understood – and of McLuhan’s “rear view mirror” approach – are the views of Prime Minister Albanese, Celia Robinson and Dr Marsh, among others.
Digital technologies have introduced a paradigm shift, especially in the area of communications, our expectations and our use of information. Many of the assumptions the older generation may have about correct behaviour and values are neither valid nor reasonable for a generation of digital natives. No better example may be found than Albanese's statement: "Parents want their kids off their phones and on the footy field. So do I. We are taking this action because enough is enough."
This is clear evidence of a total lack of understanding about the impact digital technologies have had on values and behaviour. The fresh-air values of Albanese's generation may no longer have relevance to digital natives. What is seen as addictive use of devices by his cohort is seen as communication by digital natives. This ambition to force a new generation to adopt behaviours of an older one crumbles in the face of a paradigm change.
In my view the matter rests with parents and within families. Quot homines, tot sententiae as the Romans used to say – as many opinions as there are people – and that is very true with parenting.
Rather than talking about preventing people from maximizing the opportunities for communication – and the Internet and the Digital Paradigm have introduced a revolution in the means available to communicate - the debate should be about helping young people develop the skill set to self-regulate, manage distractions and make informed decisions.
Deny them access to communications platforms and you deny them the opportunity to develop necessary skill sets for the future.
POST SCRIPT
I hesitate to draw attention to one such as Paddy Gower - a shock jock if there ever was one - but an article he wrote for Stuff published 5 December epitomizes the lack of understanding of the SMMA.
In it he says:
In my view, New Zealand needs to make taking action against social media companies a priority. Get our best scientists and tech brains in on this together, create a social media regulator - yes, a government agency that can start enforcing rules - and get on with it.
Get alongside Australia with an Anzac ban on social media for our kids. Fight the social media giants together with our Aussie mates. (My emphasis)
I don’t know how many times I have to say this but kids are not banned from using social media. The problem seems to be that the media has made up its own message about the SMMA - and it is the wrong one. As for Gower - well, need I say more?