Legislating for Mis/Disinformation - Part 2
Stopping the Flow of Contrarianism - Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024.
This is Part 2 of a two-part series considering the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024.
In Part 1 I introduced the topic and described generally how the proposed regulator, the Australian Communications and Media Authority (ACMA), operates.
The Bill extends the powers of the ACMA and addresses the issues of misinformation and disinformation and how they should be dealt with in certain circumstances.
After a discussion of the Bill and its proposed operation, I critique the Bill and offer some conclusions about it.
The Bill
The Underlying Policy
In January 2023, the Albanese government committed to providing the ACMA with new powers to create transparency and accountability around the efforts of digital platforms to combat mis- and disinformation on their services, while balancing the freedom of expression that is so fundamental to a democratic society.
There were public concerns about information quality, especially misinformation and disinformation. Concern among those surveyed by the News and Media Research Centre at the University of Canberra had risen to 75 per cent. The Australian Media Literacy Alliance report on adult media literacy released in August 2024 highlighted that 80 per cent of Australians want the spread of misinformation in Australia to be addressed.
Up until the introduction of the Bill, the digital platform industry had taken an important first step to address the threats posed by the spread of harmful mis- and disinformation online through the development of the voluntary Australian Code of Practice on Disinformation and Misinformation.
But according to the Government this effort was not enough.
Work done by the ACMA highlighted the need for industry to improve the quality of its monitoring and reporting against the voluntary code's outcomes, noting that a robust performance measurement framework is critical to its success.
The ACMA found that the transparency reports made under the voluntary code lacked consistent, trended, Australia-specific data on the effectiveness of digital platforms' efforts to address mis- and disinformation on their services.
In its 2023 report to government, the ACMA called on the industry to take further steps to review the scope of the code and its ability to adapt quickly to technology and service changes.
The code has only nine signatories. Major digital platforms like X and Telegram are not among them, meaning there are wide gaps in coverage across the digital platform industry.
It was the view of the Government that digital platforms needed to step up to protect Australian users from the threat of seriously harmful mis- and disinformation online.
The bill therefore seeks to strengthen the voluntary code by providing a regulatory backstop.
An Overview
The Bill in Brief
The bill establishes a proportionate, graduated and flexible regulatory framework while, according to the Government, at the same time safeguarding freedom of expression.
The bill also ensures that it is digital platforms that remain responsible and accountable for the content they host and promote to Australian users.
The bill empowers the ACMA to review the effectiveness of digital platform systems and processes. This is intended to improve transparency about measures platforms have in place to protect citizens from mis- and disinformation on their services.
Digital platforms will be required to publish their current media literacy plan setting out the measures they will take to enable users to better identify mis- and disinformation.
This will empower Australian users to critically engage with the content they view on digital platforms, to identify and respond to mis- and disinformation, and to make more informed choices about how they engage with content.
Digital platforms will also be required to publish their current policy approach in relation to mis- and disinformation as well as the results of their risk assessments that identify and assess significant risks relating to mis- and disinformation on their services.
The bill will also allow the ACMA to create digital platform rules with additional transparency obligations, including in relation to media literacy plans, risk management plans and complaints and dispute handling processes.
Under the bill, the ACMA would have the power to approve codes and make standards to compel digital platform service providers to prevent and respond to mis- and disinformation.
A code or standard could include obligations to cover matters such as reporting tools, links to authoritative information, support for fact checking and demonetisation of disinformation. Approved codes and standards will be legislative instruments subject to parliamentary scrutiny and disallowance.
These powers could be used in the event that the ACMA determines that existing industry efforts to combat mis- and disinformation on digital platform services do not provide adequate protection for the Australian community.
In the event industry efforts to develop or implement an approved code have not been effective, or in urgent and exceptional circumstances, the ACMA would have the power to make an enforceable standard.
This is part of what is described as the proportionate and graduated nature of the bill's framework.
Importantly, the bill will enable the ACMA to require digital platforms to be tough on disinformation involving inauthentic behaviour such as bots or troll farms. This type of manipulative behaviour has been a major vector of foreign interference and is an ongoing threat to democracies across the world.
The bill will enable the ACMA to use a proportionate, graduated and risk-based approach to non-compliance and enforcement. This may include the ACMA issuing formal warnings, remedial directions, infringement notices and injunctions, as well as pursuing civil penalties, depending on the severity of the conduct.
Digital platforms may be subject to civil penalties of up to five per cent of global turnover for breaches of a standard and up to two per cent for codes. These penalties are high. However, they may be necessary in response to egregious and systematic breaches and failure to act.
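To give a sense of scale, the following is a brief illustrative calculation in Python. The turnover figure is hypothetical and the rates are simply the maxima described above; the actual penalty in any given case would be a matter for the Federal Court.

```python
# Illustrative only: theoretical maximum civil-penalty exposure, using the
# rates described above and a hypothetical global turnover figure.
MAX_RATES = {"standard": 0.05, "code": 0.02}  # 5% for standards, 2% for codes

def max_penalty(global_turnover: float, breach_of: str) -> float:
    """Return the theoretical maximum penalty for a body corporate."""
    return global_turnover * MAX_RATES[breach_of]

# A platform with a hypothetical A$10 billion global turnover:
print(max_penalty(10_000_000_000, "standard"))  # 500000000.0, i.e. up to A$500 million
print(max_penalty(10_000_000_000, "code"))      # 200000000.0, i.e. up to A$200 million
```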
To protect freedom of speech, the bill sets a high threshold for the type of mis- and disinformation that digital platforms must combat on their services—that is, it must be reasonably verifiable as false, misleading or deceptive and reasonably likely to cause or contribute to serious harm. The harm must have significant and far-reaching consequences for Australian society, or severe consequences for an individual in Australia.
The bill does not apply to professional news content or content that could be regarded as parody or satire. It also does not apply to the reasonable dissemination of content that is for academic, artistic, scientific or religious purposes.
Nothing in the bill enables the ACMA itself to take down individual pieces of content or user accounts.
The bill takes a system-level approach, and digital platforms will remain responsible for managing content on their services.
Aspects of the Bill
In this section I shall consider certain aspects of the Bill and deal with them in summary form. The Bill is extensive and, rather than undertake a section-by-section analysis, I consider that a summarised thematic approach will better assist in understanding it.
Purpose
The purpose of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 is to:
1. Empower the Australian Communications and Media Authority (ACMA) to require digital communications platform providers to take steps to manage the risk that misinformation and disinformation on digital communications platforms pose in Australia.
2. Increase transparency regarding how digital communications platform providers manage misinformation and disinformation.
3. Empower users of digital communications platforms to identify and respond to misinformation and disinformation on these platforms.
The Bill aims to set a high and targeted threshold for the definition of misinformation and disinformation, focusing on content that is verifiably false, misleading, or deceptive and causes or contributes to serious harm.
Misinformation and Disinformation Defined
The Bill defines misinformation and disinformation as follows:
Misinformation:
Dissemination of content using a digital service is considered misinformation if it meets all four of the following criteria:
1. Contains information that is reasonably verifiable as false, misleading, or deceptive.
2. Is provided on the digital service to one or more end-users in Australia.
3. Is reasonably likely to cause or contribute to serious harm (as defined in the Bill).
4. Is not excluded dissemination (such as parody, satire, professional news content, or reasonable dissemination for academic, artistic, scientific, or religious purposes).
Disinformation:
Dissemination of content using a digital service is considered disinformation if it meets the same four criteria as misinformation plus one additional criterion: there are grounds to suspect that the person disseminating the content intends to deceive another person, or the dissemination involves inauthentic behaviour (such as using automated systems to mislead end-users about the identity, purpose, or origin of the content).
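To make the conjunctive structure of these definitions easier to see, here is a minimal illustrative sketch in Python. The field and function names are my own paraphrases of the criteria above, not statutory language, and the sketch ignores the evidential question of how each criterion would actually be established.

```python
from dataclasses import dataclass

@dataclass
class Dissemination:
    """Paraphrased flags describing one act of disseminating content on a digital service."""
    verifiably_false_misleading_or_deceptive: bool
    provided_to_australian_end_users: bool
    likely_to_cause_or_contribute_to_serious_harm: bool
    excluded_dissemination: bool                      # parody, satire, professional news, etc.
    grounds_to_suspect_intent_to_deceive: bool = False
    involves_inauthentic_behaviour: bool = False      # e.g. bots or coordinated fake accounts

def is_misinformation(d: Dissemination) -> bool:
    # All four criteria must be satisfied; an excluded dissemination never qualifies.
    return (d.verifiably_false_misleading_or_deceptive
            and d.provided_to_australian_end_users
            and d.likely_to_cause_or_contribute_to_serious_harm
            and not d.excluded_dissemination)

def is_disinformation(d: Dissemination) -> bool:
    # Disinformation is misinformation plus intent to deceive or inauthentic behaviour.
    return is_misinformation(d) and (d.grounds_to_suspect_intent_to_deceive
                                     or d.involves_inauthentic_behaviour)
```

On this reading, parody, satire and professional news content never qualify, however false, and an ordinary user who shares false material without any intent to deceive or inauthentic behaviour falls on the misinformation side of the line rather than the disinformation side.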
Serious Harm
The issue of “serious harm” is critical to the involvement of ACMA.
The bill defines "serious harm" as harm of one of the following types, where the dissemination of content on a digital communications platform is reasonably likely to cause or contribute to that harm and the harm has significant and far-reaching consequences for the Australian community or a segment of it, or severe consequences for an individual in Australia:
1. Harm to the operation or integrity of an electoral or referendum process, thus undermining the right of Australians to participate in public affairs and vote in genuine elections.
2. Harm to public health in Australia, including harm to the efficacy of preventive health measures.
3. Vilification of a group in Australian society (for example, on the basis of race, religion, sex, sexual orientation, gender identity, intersex status, disability, nationality, or national or ethnic origin), or vilification of an individual because of a belief that the individual is a member of such a group.
4. Intentionally inflicting physical injury on an individual in Australia, for example, content that provokes public hatred or anger towards an individual, leading to intentional physical harm.
5. Imminent damage to critical infrastructure or disruption of emergency services, for example, content that could cause significant damage to essential systems and services.
6. Imminent harm to the Australian economy, including harm to public confidence in the banking system or financial markets.
The Bill emphasizes that the harm must be of a severity that has significant and far-reaching consequences for the community or severe consequences for an individual.
The Steps ACMA May Take
ACMA can take several steps against digital communications platforms that publish misinformation or disinformation. These steps include:
1. Formal Warnings:
ACMA can issue formal warnings to digital communications platform providers if they are found to be in contravention of the provisions related to misinformation and disinformation.
2. Remedial Directions:
ACMA can issue written directions requiring the platform provider to take specific actions to ensure compliance with the regulations and to prevent future contraventions.
3. Infringement Notices:
ACMA can issue infringement notices for contraventions of designated provisions. These notices serve as an alternative to court proceedings and specify a penalty amount.
4. Civil Penalties:
ACMA can apply to the Federal Court for civil penalty orders against platform providers that contravene civil penalty provisions. The penalties can be substantial, especially for bodies corporate.
5. Information Gathering:
ACMA can require platform providers to provide information and documents related to misinformation and disinformation on their platforms. This includes making and retaining records and preparing reports.
6. Publishing Information:
ACMA can publish information about the platform's non-compliance on its website, which can serve as a public deterrent and encourage compliance.
7. Approval and Registration of Codes:
ACMA can approve and register enforceable misinformation codes developed by industry bodies. If these codes are not adequate, ACMA can determine misinformation standards.
8. Determining Misinformation Standards:
In cases where misinformation codes are inadequate or not developed, ACMA can determine and enforce misinformation standards for the industry.
9. Handling Complaints and Disputes:
ACMA can make rules requiring platform providers to implement and maintain processes for handling complaints and resolving disputes about misinformation and disinformation.
These steps are designed to ensure that digital communications platform providers take proactive measures to manage and mitigate the risks associated with misinformation and disinformation, thereby protecting the Australian community from serious harm.
ACMA and Codes
ACMA also has a significant role in approving misinformation codes, which involves several key responsibilities and steps:
1. Assessment of Representation:
ACMA must first be satisfied that the body or association developing the code represents a particular section of the digital platform industry.
2. Applicability and Scope:
ACMA must ensure that the code applies to participants in the relevant section of the digital platform industry and deals with one or more matters relating to the operation of digital communications platforms by those participants.
3. Submission of Code:
The body or association must provide a copy of the code to ACMA.
4. Evaluation Criteria:
ACMA must be satisfied that the code:
Requires participants to implement measures to prevent or respond to misinformation or disinformation.
Enables assessment of compliance with these measures.
Is reasonably appropriate and adapted to achieving the purpose of providing adequate protection for the Australian community from serious harm caused or contributed to by misinformation or disinformation.
Goes no further than reasonably necessary to provide that protection.
5. Consultation:
ACMA must be satisfied that there has been consultation with participants in the relevant section of the digital platform industry and with the public. The consultation period must be at least 30 days.
ACMA must also ensure that at least one body or association representing the interests of consumers has been consulted about the development of the code.
6. Approval and Registration:
If satisfied with the above criteria, ACMA may approve the code or part of the code by written notice.
The approved code is then deemed to be a legislative instrument, subject to parliamentary scrutiny and disallowance, and must be registered on the Federal Register of Legislation.
ACMA is responsible for lodging the approved code and an explanatory statement, including a statement of compatibility with human rights, for registration.
7. Publication and Maintenance:
ACMA must also register the approved code on its own electronic Register of misinformation codes and standards.
8. Review and Variation:
ACMA can approve variations to the code following a similar process of assessment and consultation.
ACMA can revoke a misinformation code or a provision of a code if necessary.
By fulfilling these roles, ACMA ensures that misinformation codes are developed, approved, and enforced in a manner that effectively addresses the risks of misinformation and disinformation on digital communications platforms, while also ensuring transparency and accountability within the industry.
The Role of Platforms in Managing Misinformation
The role of digital platforms in managing misinformation involves several key functions and responsibilities, as outlined in the Bill:
1. Risk Management:
Digital platform rules can require digital communications platform providers to update their assessments of risks related to misinformation and disinformation at specified times or under certain circumstances.
Providers may also be required to have risk management plans that state the steps they are taking to address identified risks.
2. Media Literacy Plans:
Rules can mandate that providers update their media literacy plans, which outline measures to help end-users better identify misinformation and disinformation.
Providers may need to assess and report on the effectiveness of their media literacy tools.
3. Complaints and Dispute Handling:
Digital platform rules can set requirements for complaints and dispute handling processes for misinformation complaints.
Providers may be required to publish or provide information about their complaints processes and responses to misinformation complaints.
4. Record Keeping and Reporting:
Rules can require providers to make and retain records related to misinformation and disinformation on their platforms, as well as measures taken to prevent or respond to it.
Providers may need to prepare and submit reports to the ACMA based on these records.
5. Transparency Obligations:
Providers must publish their current policies or policy approaches to managing misinformation and disinformation, as well as reports on risk assessments and media literacy plans.
Additional types of information may be specified in the digital platform rules for public disclosure.
6. Exemptions and Specific Requirements:
The ACMA can use digital platform rules to exempt certain low-risk platforms from specific transparency obligations.
Rules can also specify additional conditions for different types of digital communications platforms.
7. Enforcement and Compliance:
Digital platform rules provide a framework for the ACMA to enforce compliance through formal warnings, remedial directions, infringement notices, and civil penalties.
The rules ensure that providers take necessary actions to manage misinformation and disinformation effectively.
By establishing these rules, the ACMA aims to create a structured and transparent approach to managing misinformation and disinformation on digital communications platforms, ensuring that providers are held accountable and that the Australian community is protected from serious harm.
In addition, the Bill provides that digital communications platform providers are required to publish several key pieces of information about misinformation, as follows:
1. Risk Assessment Report:
Providers must publish a report on the outcomes of an assessment of risks related to misinformation and disinformation on their platform. This report should include risks arising from both the design or functioning of the platform and the use of the platform by end-users.
2. Policy or Policy Approach:
Providers must publish their current policy or policy approach in relation to managing misinformation and disinformation on their platform. This can include a single comprehensive policy or multiple policies addressing different aspects of misinformation and disinformation.
3. Media Literacy Plan:
Providers must publish a current media literacy plan that outlines the measures they will take to enable end-users to better identify misinformation and disinformation on the platform. This plan should help users identify the source of content, especially content that purports to be authoritative or factual.
4. Additional Information Specified in Digital Platform Rules:
Providers may be required to publish other types of information specified in the digital platform rules, excluding source code. This could include details about the effectiveness of measures taken to combat misinformation and disinformation.
5. Updates to Published Information:
If providers update their risk assessments, policies, or media literacy plans, they must ensure that the most recent versions are made publicly accessible.
6. Exemptions:
Providers are not required to publish protected information (e.g., trade secrets or commercially valuable information), personal information, or information that might cause a significant security vulnerability or increase misinformation or disinformation.
By making this information publicly accessible, digital communications platform providers enhance transparency and accountability in their efforts to manage misinformation and disinformation, thereby helping to protect the Australian community from serious harm.
What Protections are there for Freedom of Expression?
The Bill contains several protections to ensure that measures to combat misinformation and disinformation do not unduly infringe on freedom of expression:
1. High Threshold for Serious Harm:
The Bill sets a high and targeted threshold for defining misinformation and disinformation, focusing only on content that is verifiably false, misleading, or deceptive and likely to cause or contribute to serious harm.
2. Exclusions for Certain Types of Content:
The Bill explicitly excludes professional news content, content that would reasonably be regarded as parody or satire, and the reasonable dissemination of content for academic, artistic, scientific, or religious purposes.
3. Focus on Systems and Processes:
The measures in the Bill focus on the systems and processes of digital communications platform providers rather than regulating individual pieces of content. This approach aims to manage the risk of misinformation and disinformation without directly censoring specific content.
4. No Requirement to Remove Content or Ban Accounts:
The Bill states that nothing in Part 2 of Schedule 9, or in any digital platform rules, approved misinformation codes, or misinformation standards, can require digital communications platform providers to remove content or ban an account, except in cases of disinformation involving inauthentic behaviour.
5. Privacy Protections:
The Bill includes provisions to protect the privacy of end-users, such as prohibiting the requirement to make or retain records of the content of private messages or VoIP communications.
6. Proportionality and Necessity:
The Bill requires that any measures taken must be reasonably appropriate and adapted to achieving the purpose of providing adequate protection from serious harm and must go no further than reasonably necessary to provide that protection.
7. Review and Oversight:
The Bill mandates regular reviews of the operation of Part 2 of Schedule 9, including an assessment of its impact on freedom of expression. These reviews will help ensure that the measures remain necessary and proportionate.
8. Human Rights Compatibility:
The Bill includes a statement of compatibility with human rights, ensuring that any limitations on freedom of expression are consistent with international human rights law, which allows for restrictions that are necessary and proportionate to achieve a legitimate objective, such as protecting public health, public order, or the rights of others.
These protections aim to balance the need to combat harmful misinformation and disinformation with the fundamental right to freedom of expression, ensuring that any restrictions are justified, necessary, and proportionate.
Critiquing the Bill
One has to wonder at the independence of the ACMA in making the recommendations that it did that led to the Bill. Could its criticisms of the platforms have been a step towards “mission creep”, meaning that it would gain extended powers if the Bill were enacted?
When I looked at the structure of the Bill, the model seemed to me to resemble in many respects the proposals that were put forward by the Department of Internal Affairs (NZ) in its Safer Online Services and Web Platforms consultation (SOLSWP).
A further common factor is that SOLSWP was instigated at the behest of a Labour Government. The Australian Bill has been introduced by a Labor Government, although there is opposition support for it; indeed, as reported in the Guardian, it was the Morrison Government that responded to the report from the ACMA and, just before the federal election, promised to boost the regulator's powers.
In the SOLSWP consultation the target of the regulatory activity proposed was the platforms.
Codes were to be devised by the various platforms that qualified.
A Regulator was proposed which would oversee Code development and subsequent compliance. The Regulator could substitute its own Code if development was unsatisfactory.
The SOLSWP proposal had some more draconian outcomes for non-compliance than proposed in the Australian Bill.
By contrast the focus of the Bill is directed to a precise target – misinformation or disinformation that may result in serious harm as defined in the Bill.
Once again platforms are the target. In both the Bill and SOLSWP the platforms, as disseminators of content, had a defined role which challenged the concepts of net neutrality and safe harbour that had developed in the early days of the Internet.
It may well be that the phenomenal size, growth and market power of the platforms have made them a target for governments and regulators, although, that said, attempts to regulate publishers and disseminators of content have been the approach of authorities since the inception of the printing press.
The way in which the relationship between mis/disinformation and serious harm is framed must give some cause for concern. The serious harms stated in the Bill are all related to aspects of State security. There can be no doubt that untrue or erroneous information that may cause harm to the electoral process is a legitimate State interest in a democracy.
Information which involves the vilification of groups or which might result in physical injury carries with it elements of “hate speech”. It would certainly be an example of linguistic mission creep if the terms misinformation or disinformation were to become synonyms for “hate speech”.
Similarly, the harm to public health could clearly be used to shut down the type of contrarian arguments that circulated during the pandemic.
Indeed, when one considers the themes that lie behind the serious harms as defined, it is quite clear that the protection of the State, the State’s interests and the way in which the State communicates its message lie behind the harms. In that sense, in an apparently indirect way, the State could use the Bill, if enacted, to control messaging. Rather than target the dissidents or contrarians, the State could shut down the dissemination of the message by platforms.
There is a word for this and it is censorship.
One of the consequences of censorship is that it hinders public debate by favouring orthodox views. One wonders whether the regulation of misinformation has a place in an environment where the consequences may be so draconian, especially given that the boundaries between opinion and misinformation are frequently very blurred. Misinformation would therefore be better excluded from the regulation so that the unintended sharing of false information is not captured.
The Bill provides exemptions for media and government, and it could be argued that these exemptions serve little purpose other than to insulate media and the State from the provisions of the Bill. Given the scepticism and declining trust in the media along with similar scepticism regarding the trustworthiness of the State, these exemptions could give carte blanche to media and the State to propound whatever information they liked. This means that a higher standard of communications conduct is expected of platforms than of the exempted bodies.
On the other hand there should be exemptions that allow individuals to engage in good faith debate and discussion and for those views to be published and promulgated. This would ensure an additional protection for the freedom of expression.
Importantly, actions rather than ideas should be regulated.
Australia already has a sophisticated system of laws that regulate harmful actions, such as damage to property, vilification, disruption of public order, harm to democratic processes, environmental harm, and economic harm. One example is damage to 5G telecommunications infrastructure: existing criminal laws already address this harm, which has been fuelled by misinformation, without needing additional regulation of ideas.
Regulating actions is more effective in preventing harm than regulating information or ideas. Regulating information and ideas will ultimately lead to censorship, which is detrimental both to democracy and society.
Focusing on regulating actions rather than ideas will prevent harm more effectively and avoid the negative consequences of censorship.
The COVID-19 pandemic provided ample examples of the danger of determining truth by orthodoxy (or what the official information was at the time). There were many instances of social media platforms removing or shadow banning information from prominent scientists for disagreeing with the dominant theory or narrative, but in many cases the theories that were removed from social media platforms as misinformation soon became the dominant theory or were vindicated as correct.
Removing information that contradicts the government or official narrative is most concerning, not least because it prevents course correction where the position taken by an authority is wrong. It undermines the quality of decision making and diminishes public trust.
The social media platforms have also been known to remove accounts and information based on the popular narrative within the company in question. Given that these companies are mostly headquartered in the US, this has led to an increased dominance of American ideas and discourse in Australia.
One unfortunate example of this, put forward by the Free Speech Union of Australia, is the Twitter ban placed on Australian Associate Professor Holly Lawford-Smith for challenging the idea that being a woman is an identity unrelated to biological sex.
This idea is the subject of intense and legitimate debate in much of the world, particularly among feminists who are concerned about the erasure of sex-based rights. The removal of feminists like Lawford-Smith was an intervention by the big tech platforms that unfairly favoured one side of a contentious issue that requires legitimate public debate.
By judging truth on the prevailing orthodoxy (which is inevitably how they will make such a determination), digital media platforms will be distorting important public debates by giving preference to “the status quo” over new information or competing ideas. This will likely have the effect of atrophying our knowledge and limiting access to certain cross-sections of the public, especially those less connected to existing debates (e.g. people without a University education).
One of the most egregious aspects of this Bill is the preferential treatment it offers to legacy institutions and existing media organisations by exempting them from these regulations, whilst simultaneously heavily regulating new entrants to the media landscape. It is unfair for regulation to treat new technologies and media business models differentially to existing media and platforms.
Consider the example of Substack, a new platform that has been an important source of income and independence for many writers and journalists. Substack will be caught twice by the Bill.
Firstly, because it will be regulated as a digital platform, it will have to ensure that any content that appears on Substack is not misinformation. Monitoring all of the posts, videos and podcasts released on Substack, and any user comments on any content, for misinformation and disinformation will be an incredibly costly exercise.
Secondly, anything shared from Substack to another digital platform (like Twitter, now called “X”) will also be caught by this regulation. This increases the risk that either platform will deem the information contained in a Substack post to be misinformation and censor it from the Australian audience - at the very least - if not remove it entirely and block the account that it was shared from.
This Bill places a considerable burden upon new entrants to the media landscape and given the size of the Australian market, many may simply choose not to operate in Australia, which will only serve to disadvantage Australians by depriving them of new media sources and preventing innovation in the digital platform space.
If citizens cannot access the information and platforms that they currently do on the open web, this will merely encourage the use of technology like VPNs. Some platforms may even reconfigure themselves to circumvent the laws so that they are classified as private messaging or email rather than a digital platform, which would not be a difficult adjustment for many.
The reality is that existing technology already allows for the easy circumvention of this proposed regulation by end users. If people want to access information that is classified as misinformation or disinformation there are ample opportunities for them to do so, irrespective of whether this Bill is passed. This Bill - as is inevitable from what it attempts to do - simply amounts to red tape for digital businesses, making it less economically viable for them to operate in Australia.
Conclusion
As has been suggested, the major unspoken premises behind the Bill are designed to ensure orthodoxy of messaging. If, as suggested, it acts as a discouragement to platforms to do business in Australia, it is a simple matter for citizens to access the information offshore. VPNs are one example. There are others. As Charles Clark said many years ago, “the answer to the machine is in the machine”. This has serious implications both for respect for the law and for the enforceability and practicality of the law.
The Bill is the result of a consultative process and it may well be that there will be a number of changes to it. Regrettably I do not think that there will be any shifts on misinformation, disinformation or the serious harm test. The State has too much invested in ensuring the integrity of orthodox messaging. The only difficulty is that such messaging may be the only messaging on a topic in which the State has an interest. And that would be a significant erosion of the democratic process and the free exchange of ideas.