Papers and Media - The Public Face of the Disinformation Project - Part 2A
Part 2A - The Papers
In this section of the study I discuss what I call the public face of the Disinformation Project. There are two elements of the “Public Face” – the papers presented by the Project and/or its members, and its relationships and dealings with the media.
This part – Part 2A – deals with the papers released by the Project. Part 2B, which is published separately, looks at the Project’s use of and relationship with mainstream media.
The material covered in this section of the study has been sourced from the Project itself via its webpage with two exceptions which are noted where appropriate.
I shall consider the papers in the order of their publication or presentation. I shall summarise the content of each paper together with a brief commentary.
A deeper analysis of the content from a thematic perspective will be dealt with in the third part of the study.
The Papers and Presentations
The papers and presentations to be considered are listed as follows:
Evaluating the infodemic: assessing the prevalence and nature of COVID19 unreliable and untrustworthy information in Aotearoa New Zealand’s social media, January-August 2020 - 6 September 2020[1]
When worlds collide: addressing harm, hateful and violent extremism, and disinformation in Aotearoa New Zealand – Kate Hannah – 16 June 2021[2]
Reimagining responses to extremism: The importance of context, culture and community – Sanjana Hattotuwa – 16 June 2021[3]
Mis- and Disinformation in Aotearoa New Zealand from 17 August to 5 November 2021, by Kate Hannah, Sanjana Hattotuwa, and Kayli Taylor – 9 November 2021[4]
The murmuration of information disorders: Aotearoa New Zealand’s mis- and disinformation ecologies and the Parliament Protest, by Kate Hannah, Sanjana Hattotuwa, and Kayli Taylor – 5 May 2022[5]
The Common Good or the Tragedy of the Commons? Social cohesion, trust, and the impact of misinformation, by Kate Hannah – 8 June 2022[6]
Hate speech in Aotearoa New Zealand: Reflecting and resisting – Kayli Taylor – 18 June 2022[7]
Eroded information ecologies: Social Cohesion, Trust and the Impact of Misinformation - Kate Hannah NZ International Science Festival - 14 July 2022[8]
Dangerous speech, misogyny, and democracy: A review of the impacts of dangerous speech since the end of the Parliament Protest – 22 August 2022[9]
Transgressive transitions: Transphobia, community building, bridging, and bonding within Aotearoa New Zealand’s disinformation ecologies March-April 2023 – 5 May 2023[10]
The Commentary
Evaluating the infodemic: assessing the prevalence and nature of COVID19 unreliable and untrustworthy information in Aotearoa New Zealand’s social media, January-August 2020
This paper has a number of authors, of whom Kate Hannah is one. It is in a document which carries the banner of Te Punaha Matatini. Significantly, a highlighted rider on the paper states that it has not yet undergone formal peer review. Nothing in the material available on the Disinformation Project website suggests that this has changed.
The paper’s abstract suggests that COVID-19 led to an “infodemic” characterized by mis- and disinformation as well as conspiracy theories. Although these are linked to international patterns, local material shows significant “differential themes and impacts.”
The paper evaluates the prevalence of the infodemic in social and mainstream media, the nature of COVID-19 narratives and changes in the discourses with which the narratives engage. The paper points to an increasing prevalence of conspiracy narratives and assesses the impact of these untrustworthy narratives and their sources, including narrators.
The paper has a six-page bibliography. There are no footnotes, but there is occasional in-line citation of resources in the social sciences style.
Appendix A to the paper discusses the nature of “conspiracy theories” and sets out a taxonomy of the theories advanced.
The paper helpfully defines its terms and discusses what it means by misinformation, disinformation and mal-information along with a discussion of the term “infodemic” – a term which, it appears, originated with the World Health Organisation.
An infodemic is:
“an over-abundance of information—some accurate and some not—that makes it hard for people to find trustworthy sources and reliable guidance when they need it”
The paper discusses the data collection methodology and categorises the information primarily thematically, but also in terms of narrative and meta-narrative, the latter described as overarching narratives or tropes.
The themes identified in the narrative include distrust in government. A lack of trust in the state is present both in situated experiences of the pandemic and within known conspiracy discourses. The paper notes that a convergence of these is to be expected. It should, however, be noted that distrust of government is not new.
The argument goes that, because of the extraordinary restrictions imposed by the Government, it was essential to maintain public acceptance of Government actions. It would be counterproductive to allow contrarian or anti-government narratives to cloud the message, so distrust in government – while a normal although minority part of life in New Zealand – had to be demonised.
Other common themes included those regarding the origin of the virus, denial of the existence of the virus, and health and wellbeing narratives grounded in rejection of mainstream advice.
The paper contains some interesting tables identifying the type of misinformation and associated themes along with an identification of the “spreaders”. One of those identified is the broadcaster Mike Hosking.
The paper identifies the types of misinformation or disinformation along with conspiracy theories, but it provides little explanation of why the material is misinformation or what the “truth” might be.
For example, the following assertion is made:
Many of these conspiracy theories centre upon claims that disinformation is being disseminated to public about the pandemic. Coupled with the prevalence of narratives that predict or anticipate the presence of unreliable and/or untrustworthy information such as misinformation, disinformation, mal-information and conspiracy theories in media, political, and civil society discourses, reliable and trustworthy discourses are tarnished.
However, little attempt is made to rebut the theories – rather they are described and left as conspiracies without actually saying why. The same applies to the material categorised as misinformation and disinformation. There is an assumption that the information is wrong without saying why it is wrong. All that is provided is a recitation of who has spoken on behalf of the scientific and public health information, without actually saying what was said that makes the contrarians’ claims disinformation, with all the pejorative elements that accompany that word.
The conclusion to the paper suggests that it is a preliminary study which establishes a computational methodology and process for on-going monitoring of the prevalence of mis- and dis-information, and conspiracy narratives, within New Zealand’s social and mainstream media ecosystems.
It then sets out what it suggests should be the next steps. These include building diurnal sentiment prevalence into the monitoring, as well as addressing a number of known limitations: the Anglophone focus of the data, access to more social media platforms, the impact of a ‘top-down’ computational estimation of prevalence (given that prevalence cannot reveal each individual’s exposure to the located narratives), and evidence that echo-chambers affect the ways in which we access and engage with information online – in essence, a more effective monitoring system.
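Neither the code nor the data behind this computational methodology is published, so its workings can only be inferred, but “prevalence monitoring” of this kind is, at its simplest, a count of theme-matched posts over time. The following is a minimal sketch of that idea in Python, with entirely hypothetical theme keywords and invented posts; it is an illustration of what such a pipeline involves, not a reconstruction of the Project’s method.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical theme lexicon; the Project's actual coding scheme is not published.
THEMES = {
    "distrust_in_government": {"cover-up", "they are lying", "hidden agenda"},
    "origin_theories": {"lab leak", "bioweapon", "5g"},
    "hidden_cure": {"suppressed cure", "miracle cure"},
}

def classify(text):
    """Return the set of themes whose keywords appear in the post text."""
    lowered = text.lower()
    return {theme for theme, keywords in THEMES.items()
            if any(keyword in lowered for keyword in keywords)}

def daily_prevalence(posts):
    """Count theme-matched posts per calendar day.

    Each post is assumed to be a dict: {'timestamp': ISO-8601 string, 'text': str}.
    """
    counts = defaultdict(Counter)
    for post in posts:
        day = datetime.fromisoformat(post["timestamp"]).date().isoformat()
        for theme in classify(post["text"]):
            counts[day][theme] += 1
    return dict(counts)

# Invented example posts:
sample = [
    {"timestamp": "2020-04-01T09:15:00", "text": "The lab leak story shows they are lying"},
    {"timestamp": "2020-04-01T21:40:00", "text": "Another suppressed cure the media ignores"},
]
print(daily_prevalence(sample))
```

Whatever its real sophistication, a pipeline of this general shape counts how often certain narratives appear; it does not, and cannot, establish that any matched post is false – which is the very gap in the published paper identified above.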
Some of the conspiracy theories are identified as follows:
Origin and release theories
5G Covid Conspiracy Theories
Endgame conspiracy theories
• forced vaccination regimes
• microchipping of the population
• changing social mores (such as encouraging mask wearing in order to normalise the introduction of Sharia law in future)
• instituting lockdowns in order to imprison the population.
Election Conspiracy Theories
Fake Pandemic Conspiracy Theory
Hidden-suppressed cure theory
This paper restricts its focus to misinformation and disinformation in the context of the COVID-19 pandemic. It suggests that “something must be done” and raises some suggestions for where the focus of further enquiries might lie.
When worlds collide: addressing harm, hateful and violent extremism, and disinformation in Aotearoa New Zealand by Kate Hannah.
Nine months later Kate Hannah and Sanjana Hattotuwa presented papers to He Whenua Taurikura - New Zealand’s Hui on Countering Terrorism and Violent Extremism.
The language and approach signal a change from the September paper discussed above. There is a subjectivity and passion to both papers that is not present in the earlier academic discussion and that casts them more as polemics than as solid, evidence-based academic discussion.
Ms Hannah’s paper begins with an assertion referring to the Doctrine of Discovery, which is claimed to be the foundation of nearly every institution and structure in Aotearoa New Zealand.
These underlying and often invisiblised structures – imperialism, colonisation, white supremacism, misogyny, Islamophobia, homophobia, antisemitism – also underpin the human and technical mechanisms of life online. From the founding conceits of social media platforms to objectify and humiliate women to the biases of engineers embedded in algorithms, the digital world reflects the structural and systemic violence towards ‘the other’ which forms the basis of the physical world we inhabit.
The Doctrine of Discovery is a fallacy that has been exposed by Professor Paul Moon in “The Historicity of the Doctrine of Discovery in New Zealand’s Colonisation” (2022) Te Kaharoa, 15(1)[11].
Hannah then goes on to place disinformation within the scope of hateful and violent extremism. It seems that she is suggesting that disinformation is an element that derives from the imperial project. This signals a significant amount of “mission creep” from the original objective of the Disinformation Project, which was to address COVID-19 related mis- and disinformation.
She identifies the targets of online harm, hateful or violent extremism and disinformation, and suggests that disinformation about these marginalized communities is presented to others – largely Pākehā or ‘white’ migrant communities – and that the marginalized communities are blamed for social and political turmoil.
Efforts to re-assess and revise understandings of the fundamental and ongoing impacts of colonisation, and begin to realise justice within the parameters of Te Tiriti o Waitangi, are being intentionally framed as sites of national controversy.
She then suggests that the Treaty of Waitangi, and the partnership relationships the realisation of Treaty justice enables, are the necessary starting point for any discussion or development of a strategy which seeks to address and make redress for the impacts of online harm, hateful and violent extremism, and disinformation for New Zealand.
She starts by accounting for the past, which demands an emotional connection.
Hannah then makes it clear that objectivity is not a factor in this approach:
As a researcher, my emotions form an aspect of my ability to ethically engage with the material I study, and to ethically engage with the communities who are most effected by the milieu of that study – the internet. This understanding of emotionality highlights “the ethical obligations of our role as witnesses and storytellers…implicated in the production of meaning through our witnessing, through our storytelling, through the political engagements of our research as it goes into the world”
In this way Hannah anticipates criticism of her approach based on a lack of objectivity. This emotional and subjective engagement with the sources will be discussed in greater detail in the third part of the study.
She discusses how the digital world enables these ideas, some ancient, some newly developed, to be shared easily, and then the technical mechanics of recommender algorithms, “parameterization”[12], content feeds, and ‘engagement’ increase the volume and reach of hateful and violent ideas, expression, image, language, and meme.
She refers to the rich data collection approach of the Disinformation Project which reveals patterns of movement of narratives and features of narratives from sub-platforms and closed groups through to mainstream social media platforms and then into media and socio-political discourse, and vice versa.
She provides an example, locating the patterns of disinformation surrounding He Puapua within the framework of systemic racism which translates into calls for violence against Māori as a group and against specific Māori individuals, and goes on to discuss the types of hateful speech, identifying racism and the targeting of trans women and LGBTQIA+ groups.
She concludes by observing:
This starts with a reckoning with the fundamental and ongoing impacts of the imperial project, some elements of which are already underway. In practise, this will look like mediation of the digital world, including its structures, particularly by those targeted and blamed communities; moderation by communities and via co-created platform guidelines for online spaces and the infrastructures that underpin them; regulation co-developed with communities and responsible to communities; and, finally, classification or censorship, within fundamental principles of Te Tiriti o Waitangi and the Universal Declaration of Human Rights. With these varied tools, communities, civil society, the media, academia, the public service, industry, and Government all have roles to play, in connection with, and in partnership with, each other.
In this final discussion Ms Hannah has made it very clear that whatever free speech is permissible must sit within the model proposed. The final suggestion of classification or censorship envisages a wider role for the Censor’s office, but there seems to be a complete avoidance of any discussion of freedom of expression, which is incorporated into the UDHR. Hannah seems content to cherry-pick the elements that support her argument.
What is curious is the intersection of restrictions on the digital world with the fundamental principles of the Treaty. It is difficult to see how those principles have anything to do with classification or censorship.
The themes of critical thinking, critical race theory and concerns about white supremacy permeate this paper. The thinking behind it is entirely consistent with Hannah’s 2017 credo. It is important to understand this, not necessarily to invalidate the argument, but to identify the foundation stones for it.
Reimagining responses to extremism: The importance of context, culture and community by Sanjana Hattotuwa
This is the second of the papers presented to the Hui on countering terrorism.
In many respects this is a personal paper, detailing some of the responses that the writer had to violent conflict in Sri Lanka before coming to New Zealand.
Dr Hattotuwa refers to his doctoral research looking especially at the role, reach and relevance of Facebook, Twitter and social media in simultaneously fuelling and quelling socio-political violence.
This research, which is unreferenced, includes how online content is inextricably entwined with and informed by offline developments including but not limited to communal riots, significant political unrest, high-casualty terrorism, and consequential electoral moments.
Thus he suggests a correlation between online activity and real world disorder.
Dr. Hattotuwa questions the utility of traditional responses to online extremism. These “traditional responses” see legislative instruments, laws, the codification of regulations and punitive measures as adequate, desirable or definitive responses for what he describes as disinformation’s Hydra-headed entrenchment, expanding at pace.
He then personalizes the argument suggesting that informed by his own lived experience, activism, and research, he studies online data in situ[13], seeing digital interactions as inextricably entwined with local cultures, histories, communities, media ecologies, political cultures, anxieties and aspirations.
He then takes a deeper dive into subjectivity suggesting that disinformation goes to the heart of who we are, what we believe in, love to do, and why. It is an existential inquiry and exercise, not (just) a digital study or phenomenon. By its very nature, disinformation is socio-technological, being offline in nature as much as it is increasingly online in nurture.
It follows that disinformation requires systems thinking or lateral thinking to grasp, beyond technocratic or bureaucratic frames. While appreciating their role, he argues that we must be sceptical of all legal or legislative responses to what are essentially, and will remain, socio-political problems present simultaneously in online and offline forms.
Dr Hattotuwa argues that official policies, laws and regulatory frameworks will never address what he describes as the heterogenous assemblage of actors and platforms intent on undermining democracy, for two reasons.
First, they have time on their side, and work towards intended outcomes years if not decades into the future using a combination of electoral, political, social and cultural means, over offline and online vectors.
Secondly, the essential naïveté of social media companies, which until recently allowed politicians to get away with inciting hate and violence, results in, amongst other things, outdated and outmoded oversight, placing at risk communities which are often already marginalised and have violence directed against them.
He adopts Hannah’s suggestion of the UDHR and looks at online extremism in a context of a generation of violence and hate through what he terms, but does not define, broader ecological perspectives.
He again personalizes the argument:
“This perspective, congruent with my own experience and research including representations of violence and prosocial responses on social media in Sri Lanka and Aotearoa, New Zealand, turns on its head current approaches to countering extremism, largely based on enhanced or increased regulation, legal and legislative means. Recalling the Christchurch Commission Report’s emphasis on social cohesion, we must imagine a more grounded, ecological and inter-disciplinary approach to research and response.”
Thus he emphasizes the suggestion that existing regulatory structures are inadequate to deal with the problem as he perceives and describes it.
Dr Hattotuwa concludes that enlightened socio-political and technological responses need to imagine stronger, more representative, endogenous and indigenous frameworks against threats to democracy in online and offline fora.
Although the apparent thrust is the preservation of democracy, what he and Hannah would prefer is the moulding of democracy into an image of their liking. This they do not say directly, but it arises by implication. They fail to recognize that democracy is by its nature noisy and chaotic, and it is these elements that they would like to eliminate.
Mis- and Disinformation in Aotearoa New Zealand from 17 August to 5 November 2021, by Kate Hannah, Sanjana Hattotuwa, and Kayli Taylor
This paper is accompanied by a summary document entitled Understanding mis- and disinformation in Aotearoa New Zealand (Summary)
This is the first paper authored by all the members of the Disinformation Project. It is dated 9 November 2021. It is issued under the logo of Te Punaha Matatini but has not been peer reviewed.
The paper tracks mis and disinformation following the Delta Outbreak and a shift to Covid Alert Level 4. It contains a bibliography of some 11 references.
This paper brings together two themes from earlier papers – mis- and disinformation and “extremist” content, together with the COVID-19 infodemic – and once again extends the remit or mission statement of the Disinformation Project, this time to view the spread of mis- and disinformation as located within a context of far-right political ideologies.
In addition to repeating definitions of misinformation, disinformation and mal-information, a further category is added – that of dangerous speech.
It is axiomatic in any discussion that the definition of terms will direct the way in which the argument flows. The paper therefore discusses the definition of “dangerous speech” and arrives at a final formulation that is significantly wider than the starting point. The way in which dangerous speech is defined is as follows:
Dangerous speech is used to classify material observed and analysed within the ecologies of conspiracy theories. The assertion adopted in the paper is that “dangerous speech is any form of expression (e.g. speech, text or images) that can increase the risk that its audience will condone or participate in violence against members of another group.”[14]
In another opaque paragraph towards the end of the paper the following comment appears:
So while we use the definition, the hallmarks, and the wider framework – the message itself, the audience, the historical and social context of the message, the speaker, and the medium used – we have, in response to the proliferation of ‘dangerous speech’ content present in Aotearoa New Zealand’s mis- and disinformation ecosystem, expanded the definition to consider violence as articulated against individuals as representative of groups, particularly in the case of clearly gendered or racialised ‘dangerous speech’.
One wonders how the remit that was the original basis for the Disinformation Project has managed to extend to include “dangerous speech” as the Project has defined it, other than to note that it is treated as an element of mis- or disinformation.
It is difficult to see the connection, other than that what the Project terms ‘dangerous speech’ within its definitional framework has turned up as part of the data gathering for mis- and disinformation. However, none of this data is available and once again the reader is faced with a series of assertions.
The Project’s extension of “dangerous speech” goes significantly further than the traditional definition, which requires some form of incitement creating an immediate risk of physical harm and which has been the subject of discussion in an earlier paper of mine[15].
The Project’s paper suggests that the development of the response to the COVID pandemic has resulted in highly visible, potent symbols used to push various far-right and conservative ideologies, such as those listed below:
- Gun control
- Rural land rights and 1080
- Māori sovereignty and water/land rights
- “Free speech”
- Faith (Christian evangelical or Pentecostal)
- Abortion
- Euthanasia
- Cannabis law reform
- Families and family structure
- LGBTQIA+ rights, including conversion therapy
- Immigration
- Race and gender
The paper identifies the various platforms.
In a discursive discussion a number of assertions are made which, regrettably, are not supported by evidence. Examples are:
Quantitative analysis around the mis- and disinformation volume (amount of content), vectors (platforms and apps content is produced and shared on) and velocity (speed at which content is produced) since mid-August, under the Delta Level 4 lockdowns, is unprecedented. We note that it is by order of magnitude more than the content seed and spread over 2020, and even in the first half of 2021.[16]
No supporting data is provided.
a circadian rhythm associated with mis- and disinformation content production across the public Facebook Pages, Groups, Instagram accounts and Telegram channels studied. Our research clearly flags the degree to which there is pattern to each day’s production of content, with peaks in the morning, afternoon and evening. These peaks drive engagement throughout the day, and for a longer period each day.
Apart from the interesting use of a somewhat obscure metaphor – a characteristic of Disinformation Project papers – once again there is no data other than an assertion that it exists. The circadian rhythm metaphor could possibly be substituted with “chronological sequence” which would achieve the same descriptive result but with greater clarity.
Reference is made to memes and to the use of a “word cloud” as a demonstration of a research tool. This process does not appear to have been validated in any way. While a “word cloud” produces an attractive graphic, it is no substitute for empirical and verifiable data.
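For context, a word cloud is generated from nothing more sophisticated than relative term frequencies. A minimal sketch, using invented post text rather than any of the Project’s data, shows how little information the graphic actually encodes:

```python
import re
from collections import Counter

# Common words excluded so that they do not dominate the cloud.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "for"}

def word_frequencies(posts):
    """Tokenise posts and count non-stopword terms; a word cloud simply
    scales each term's display size in proportion to this count."""
    counts = Counter()
    for post in posts:
        tokens = re.findall(r"[a-zāēīōū'-]+", post.lower())
        counts.update(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return counts

# Invented example posts:
posts = ["Freedom convoy heading to Wellington", "Wellington stands for freedom"]
print(word_frequencies(posts).most_common(5))
# e.g. [('freedom', 2), ('wellington', 2), ('convoy', 1), ...]
```

The relative frequencies are all the graphic encodes; the context, provenance and accuracy of the underlying posts are lost, which is why it is no substitute for verifiable data.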
The article concludes as follows:
The ecologies and spread of mis- and disinformation point to a broader threat: that Covid-19 and vaccination are being used as a kind of Trojan Horse for norm-setting and norm-entrenchment of far-right ideologies in Aotearoa New Zealand. Such ideologies include, but are not limited to, ideas about gun control, anti-Māori sentiment, anti-LGBTQIA+, conservative ideals around family and family structure, misogyny, antiimmigration. Mis- and disinformation and ‘dangerous speech’ pose significant threats to social cohesion, freedom of expression, inclusion, and safety.
Thus the problems of mis- and disinformation have moved from the context of COVID and have become part of a right-wing ideological plot. One is forced to infer, from the language that is used in this paragraph, that there is another conspiracy theory which the Project might find if it were to look in a mirror.
To conclude on this paper the final sentence is an assertion without any foundation, evidence or support.
The Murmuration of Information Disorders: Aotearoa New Zealand’s mis- and disinformation ecologies and the Parliament Protest, by Kate Hannah, Sanjana Hattotuwa, and Kayli Taylor
May 2022
This is the first of a number of papers that explores mis and disinformation and the Project’s interpretation of events within the context of the Parliamentary Protest of February-March 2022. In many respects it continues some of the themes explored in the August to November 2021 study.
It carries a reference to Te Punaha Matatini but unlike earlier papers lacks the logo. There is no suggestion that the paper has in any way been peer reviewed.
The paper refers to the Parliament Protest and asks how New Zealand ended up here – overlooking the fact that there have been earlier occasions of violent disorder arising from protest in New Zealand’s history; the 1951 Waterfront Strike and the 1981 Springbok Tour protests come to mind.
The paper expands on themes, tipping points, and tactics first described in the November 2021 paper and how they were reflected at the Parliament Protest. It sets out the methodology but is vague as to the detail of the process. It identifies the platforms studied. The paper concludes with implications for social cohesion, and outlines steps forward that may be taken.
Like the earlier papers, this one contains sweeping generalisations and conclusions which are unsupported in any way by evidence or by verifiable data. For example, it suggests that there is a level of organization in the spread of disinformation – strategic messaging and tactical messaging.
The Project sees both strategic and tactical disinformation messaging constantly at play in New Zealand, aimed at (anti-democratic) long-term socio-political change as well as more immediate results through heightened tactical content production.
There is no evidential foundation for this assertion, nor is there any data to support it. The Project claims that throughout the protest, from 6 February to 2 March, TDP studied tens of millions of posts and comments on Meta/Facebook product and platform surfaces alone, alongside hundreds of hours of live-streamed footage, tens of thousands of tweets, hundreds of YouTube videos, and tens of thousands of posts and comments on Telegram. The total volume of material studied was said to be much greater, embracing websites and multimedia material hosted on alternative platforms.
Beyond this assertion there is no verifiable information to support it.
The paper also continues the shift from Covid mis and disinformation to a critique of a far-right conspiracy that underpins the spread of disinformation. At one stage the Project uses some of the events of the Protest to retrospectively validate earlier assertions – the “I told you so” school of reasoning. The following is an example:
Simultaneously, however, the Convoy’s original focus on mandates was becoming increasingly less significant and giving way to far-right narratives of individuals and groups who used the protest as an opportunity to radicalise people, erode social cohesion, and push forward their own parochial agendas. Our warning in November 2021 that anti-vaccination and Covid-19 mis- and disinformation were being used as a Trojan Horse for the norm-setting of far-right ideals was fully realised during the Parliament protest.
However there is no evidence to support the assertion in the final sentence. The “Trojan Horse” metaphor appears in subsequent Project papers and seems to become something of a favourite.
To be fair it should be noted that on occasion the Project does provide what it claims is supportive data. To actually access the data requires stripping away the rhetoric surrounding it. For example:
The Parliament Protest was the single greatest offline accelerant to engagement around and growth of Facebook Page and Group based mis- and disinformation ecologies since TDP’s focus on information disorders at the onset of Alert Levels 4 and 3 lockdowns caused by outbreaks of the Delta variant of Covid19 in our communities. This pattern held with Instagram as well, which is a significant issue. In many countries and contexts, Instagram is used by a much younger demographic than Facebook Pages and Groups who may not understand the harms they are exposed to because of following, trusting and sharing updates from mis- and disinformation accounts. The duration of the protests saw 88,900 new followers to Instagram mis- and disinformation accounts studied by TDP. To put that into perspective, February alone saw more follower growth than September 2021 to January 2022 combined.
Two graphs provide visual representations of information classified as mis- or disinformation. In addition, TDP uses the method of word clouds – graphically attractive but of little verifiable use – to highlight commonly used phrases in posts to Instagram.
One element that is discussed is the “Misinformation Dozen”, who make their appearance in this paper.
The graph [shows] Facebook interactions around a cluster of mis- and disinformation Pages studied, twelve protest figureheads, and mainstream media from 6 February to 3 March. Aotearoa New Zealand’s ‘misinformation dozen’ on Facebook were responsible for a considerable proportion of posts and engagement during the Parliament Protest. On 2 March alone, 73% of interactions in the mis- and disinformation ecology were generated by just a dozen accounts.
Footnote 29 takes us to Toby Manhire’s article "Figureheads and Factions: The Key People at the Parliament Occupation"[17].
The graph is not very clear, but it shows a spike in C-19 misinformation on 2 March along with a spike in what it describes as material from protest figureheads. The graph is meant to support the assertion that the increased volume of misinformation originated with the unidentified protest figureheads.
There is nothing in this evidence to suggest that the information came from a dozen accounts, nor that it accounted for 73% of the interactions.
I shall discuss the implications of the particular issue in more detail in the third part of this study.
The focus then shifts to what is referred to as “international conspiratorialism”, together with pro-Putin commentary and association with QAnon, pro-Trump and pro-MAGA channels.
The following assertion is made – once again unsupported by any evidential foundation:
Language imagery, and framing features within narratives have become increasingly violent and anti-social, as we noted in August-November 2021. Language at the Parliament Protest once again reveals the entrenchment of violent expression, misogyny, and other hallmarks of dangerous speech as the norm. Increasing violence, conspiratorialism, and a divergence between the sentiments and attitudes offered by mis- and disinformation producers and mainstream media were notable.
The Project was concerned that the protest was becoming a hotbed of violent extremist thought and action.
The Project considered the implications of the factors it discussed. It suggested that mis- and disinformation in New Zealand continued to work to create shifts in New Zealand’s social and political norms.
There is no evidential support for this assertion.
Key mis- and disinformation producers affirm and promote an idea of New Zealand that pulls away from progressive values of social inclusion, justice, and equity that are increasing in social and political discourse. Instead, they long for systems that promote New Zealand European identity, traditional gender roles, and a patriarchal family structure.
This is an extraordinary observation that suggests there is only one acceptable set of values. In characterizing contrarian views as misinformation and disinformation there is the associated support of the progressive values of the Left.
This statement makes it clear that mis and disinformation are seen as a threat and as elements of a power struggle between contending sets of ideas. That position completely ignores the reality that there are many contending ideas that may be expressed and that to demonize them as misinformation is not an answer to the arguments put forward.
Indeed, in expressing the position in the way that they do, the Project is using mis- and disinformation as veto words, thus avoiding the uncomfortable reality of actually addressing the issue.
The vision of New Zealand held by the Project – and which is threatened by contrarian ideas – is expressed in the conclusion, in what has become the Project’s recognisably turgid rhetoric:
Renewing efforts for social cohesion, honouring Te Tiriti o Waitangi, and reflecting critically on our past, our shared present, and our ideas for the future must be the starting point to re-building trust in Aotearoa New Zealand in 2022 and beyond.
The antithesis of social cohesion or conformity is a diversity of ideas and values within society. It seems that the authors view social cohesion as a precondition for an ordered society, but it would be more a precondition for a controlled society. It seems to me that TDP would like to see contrarianism eliminated or sidelined.
The summary paper concludes:
The way we talk with, and about, one another matters, including how we perceive and negotiate differences. Everyone deserves to be talked about in ways that uphold their dignity. The Parliament Protest, and the language that emerged from it, challenge this. Disinformation highlights differences and divisions that can be used to target and scapegoat, normalise prejudices, harden us-versus-them mentalities, and justify violence. Disinformation and its focus on social division are at risk of cementing increasingly angry, anxious and antagonistic ways around how we interact with one another, eroding social cohesion and cooperation.
This has dangerous implications for our individual and collective safety.
This suggests a level of dialogue that TDP would like to see in the future and by implication contains serious threats to freedom of expression.
The Common Good or the Tragedy of the Commons? Social cohesion, trust, and the impact of misinformation, by Kate Hannah
8 June 2022
Speakers Science Forum 2022 – Trust, misinformation and social in(ex)clusion
The Royal Society page contains a summary of Kate Hannah’s speech. The text of another speech covering the same ground, with footnotes, appears on the Disinformation Project Resources page; that speech was delivered to the New Zealand International Science Festival on 14 July 2022.
Given that both papers cover the same ground, I intend to deal with them in one commentary.
The theme underlying the papers is the importance of social cohesion and the way that contrary views – characterized as mis or disinformation – threaten social cohesion.
The paper starts with what the Project does and states that its purpose is to explore what disinformation narratives mean, suggesting that they are having an effect on our social and political spaces and on our shared understanding of the state, democracy and citizenship.
Ms Hannah starts with the pandemic and transitions into the suggestion that disinformation is linked to online or physical harm, and that dissenting or fringe views are related to a number of conspiratorial narratives and hateful or violent expression.
She then goes on to discuss how these relate to narratives and tropes of white supremacy, racism and extreme misogyny, and questions what communities may do to prevent this.
Throughout the paper there is a heavy use of emotive and subjective language – a characteristic of much of the later material from the Disinformation Project – which tends to push the paper away from an objective academic analysis of a problem and towards a polemic.
One of the issues discussed in the paper is that of the importance of social cohesion.
She asks what social cohesion is and discusses various views, but fails to offer a clear definition. Rather, she describes social cohesion in terms of its effects:
The complex relationship-based construction of social cohesion is a concept called collective efficacy which describes a community’s ability to create change and exercise informal social control i.e. influence behaviour via social norms. Family, whanau, community, faith, and other organised or non-organised groupings are places where people access social networks, social capital, and social control.
It is clear that social cohesion is seen from a collective perspective. Furthermore, the paper cites with approval the Royal Commission Report into the Mosque Massacre, which emphasizes the need for a safe and inclusive New Zealand and states that government must lead the conversation. This suggests a top-down directive approach led by the Government rather than the development within the community of common goals and values. To put such power in the hands of the Government ignores the fact that under New Zealand’s liberal democracy the Government is the servant of the people and not the reverse. The paragraph cited states:
Public conversations about embracing diversity and encouraging social cohesion should be led by political leaders and the government. There should be transparent conversations where information is available to everyone. These conversations need to include all communities – across the length and breadth of the country, both rural and urban. Enduring change will take time and investment, so these conversations will need to be ongoing. (The emphasis is mine)
Once social cohesion has been discussed, Hannah moves into a discussion of colonisation, clearly basing her consideration upon post-colonial theory, of which she is an adherent and which I shall discuss in the third part of this study.
She then refers to a graphic which is present on the Royal Society site. This maps the information ecosystem on 2 March 2022.
The graphic is explained as follows:
What is revealed here is almost completely bifurcated information sources and expressed sentiments in relation to the police action to close down the protest and the violence and destruction which then ensued. The blue clusters, nodes, and links represent those watching livestreams shared on social media by participants in the protest, and the commentators who positively expressed sentiment towards the protesters over the course of that day. The orange and pink nodes clusters and links are those who watched mainstream media coverage and expressed sentiments of horror, dislike or other negative expression towards the protestors and their actions on that day. Over the course of the 23-day occupation we observed in real time the increasing correlation of consumption of alternative media/social media as news sources, and support for the occupation, culminating in this mapped informationscape, which reveals significant splintered realities….
What we see here then, are the very real effects of social contagion – the construction of social networks formed around disinformation and information voids.
The graphic is interesting and demonstrates an alternative to the “word cloud” earlier used by the Project to give a representation of data. However, the model is unsupported by any reference to data, nor is the methodology detailed. Its worth is no more and no less than that of a picture.
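By way of illustration only: a graphic of this kind is usually produced by building an interaction graph (accounts as nodes, shares or replies as edges) and letting a clustering or force-directed layout algorithm pull the groups apart. A minimal sketch using the networkx library, with invented account names and interactions, might look like the following; the Project’s actual data, tooling and layout parameters are not disclosed.

```python
import networkx as nx

# Invented interactions: (source account, target account), e.g. a share or reply.
interactions = [
    ("stream_viewer_1", "protest_livestream"),
    ("stream_viewer_2", "protest_livestream"),
    ("stream_viewer_2", "stream_viewer_1"),
    ("news_reader_1", "mainstream_outlet"),
    ("news_reader_2", "mainstream_outlet"),
]

G = nx.Graph()
G.add_edges_from(interactions)

# With toy data like this the two "camps" separate trivially because no account
# interacts across the divide; real analyses would apply community detection
# and a force-directed layout to far noisier data.
for i, component in enumerate(nx.connected_components(G)):
    print(f"Cluster {i}: {sorted(component)}")
```

What such a picture shows depends entirely on which interactions were collected, how accounts were classified, and how the layout was parameterised – none of which is disclosed here.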
The paper then goes on to cover much of the ground that was dealt with in the “Murmuration” paper discussed previously.
She then moves on to the subject of conspiracy theories. Once again the “Trojan Horse” metaphor is used to suggest the drift from COVID-19 disinformation to its use by far-right ideologies:
Covid-19 and vaccination have been used as a kind of Trojan Horse for norm-setting and norm-entrenchment of far-right ideologies in Aotearoa New Zealand. Such ideologies include, but are not limited to, ideas about gun control, anti-Māori sentiment, queerphobia, conservative ideals around family and family structure, misogyny, antiimmigration. Mis- and disinformation and ‘dangerous speech’ pose significant threats to social cohesion, freedom of expression, inclusion, and safety.
She then discusses the foundation of conspiracy theories, but does so with sweeping generalization and dismissive language:
Most significant conspiracies start from a grain of truth – think about the ongoing impact for women’s trust in the healthcare system, instrumentalised by anti-vaccination narratives, which stems from the so-called Unfortunate Experiment at National Women’s Hospital, for example.
Thus the basis for mistrust in the health system seems, on her account, to have had its origin in what she characterizes as the “so-called Unfortunate Experiment”.
This overlooks the fact that the “Unfortunate Experiment” was not only documented but was carefully examined by the Commission of Inquiry chaired by then Judge Silvia Cartwright. There was no “so-called” about it, and by using “so-called” to diminish the significance of what happened at National Women’s Hospital over many years, one wonders whether Ms Hannah has a real understanding of fact-based history.
Ms Hannah then connects this to anti-vax sentiment. I assume she is suggesting that distrust in the health system commenced with the “so-called” Unfortunate Experiment. But anti-vax sentiment has been present in one form or another for much longer. What highlighted anti-vax sentiment in the present situation or context was State compulsion backed by mandates and the marginalization of the un-vaxed.
Ms Hannah then goes on to discuss the historical basis for grievances, using the language of post-colonial theory, and in her discussion marginalizes any possible opposing point of view:
As societies, particularly settler societies like Aotearoa New Zealand –- grapple with our pasts, there is a very real need to talk about harms in the past – for example, the Royal Commission of Inquiry into Abuse in State and Church Care. Trust requires trustworthiness. There is a very real sense that our long-term success in restoring eroded landscapes, scarred by the effects of colonisation is currently being undermined by the short-term impacts of processes which enable reflection, truth and reconciliation.
The networks build on social contagion have provided places for people to locate their personal feelings of loss, of grievance, of uncertainty within a socio-political context that makes sense to them – but sadly works to undermine values of diversity, community, inclusion, and cohesion.
She then moves into possible solutions, idealizing an earlier model of community engagement which, with the onset of the Digital Paradigm, is no longer the only model. In many respects Hannah seems to misunderstand that new information paradigms reshape our understanding and use of information. She seems to wish to maintain the earlier collective consensus model, ignoring the fact that the Digital Paradigm allows everyone a voice rather than just the loudest voice in the Community Hall.
Her views on freedom of expression are chilling:
How to balance free speech, the right to protest and dissent, the right to hold power to account, within the very real harms that can be perpetuated by those motivated to use these rights to attack the rights of others. How might we understand the differing local and global rights-based frameworks from He Whakaputanga, Te Tiriti, Universal Declaration, BORA – when we know that misuse – from paper terrorism to the chilling effect – is already taking place?
Often the answer is held up as education, civics or media literacy – but this is not enough. There has been a tendency towards complacency in open social democracies – and now it looks like the chickens are coming home to roost. What does shifting towards a more deliberative, discursive democratic idea of the state look like for Aotearoa? What more just and more democratic futures can we imagine? Starting from extending the franchise to those aged 16-18 might be just the ticket, since it is their futures that must be more just and more equal, with more care for each other and our common home.
We know that eroded landscapes and over-consumption of resources lead to the tragedy of the commons, where in a shared resource is no longer able to support the community to whom it belongs. We know that social networks are how we access social capital, form relationships, institute social norms, and implement social control. The norm-shifting we have observed and described over less than a year is testament to the morally motivated network effects online social contagion enabled, wherein death threats, vile crudity, and dangerous speech are now socially normal.
The internet can be a tool that contributes to erosion, but it can also be a tool we turn to build connection, mediate shared values, and post rosters for community gardens. If a socially cohesive society with concern for the common good is likely to be a healthier society; I’ll extend that now to say a socially cohesive society with concern for the common good will also be a society that has transparent and inclusive access to reliable information and knowledges and inclusive access to spaces within which to verify trust in that information, and to negotiate difference and commonality with respect and safety. The effects of the social contagion I have described can only be mitigated through careful mahi – from grassroots to government – that operates to bring information to trusted places and people, and enables communities a pathway back into social cohesion based within te whare tapa wha.
What she is describing – and using a lot of words and convoluted language to do so – is a highly regulated and controlled information ecosystem that neither embraces nor allows a contrarian viewpoint.
Hate speech in Aotearoa New Zealand: Reflecting and resisting – Kayli Taylor
18 June 2022
This article carries a content warning and discusses violence against the LGBTQ+ community, including the recent incident involving Rainbow Youth in Tauranga. It also discusses violence of a sexual nature, including threats of rape.
This is not an academic study. It does not come out under the logo of Te Punaha Matatini. It is neither footnoted nor referenced. There is no bibliography. What referencing there is links to earlier TDP material. There is no evidence of peer review. There is no evidence that it has been submitted for publication. It is available on the TDP website.
At its best it may be described as a commentary on recent events. It may also be described as a polemic.
It demonstrates another example of the shift by the TDP from its original mission to a wider objective of dealing with contrarian discussion on social media platforms.
The paper opens with tracking the history of TDP studies, drawing on earlier discussions of hate speech and dangerous speech and the impact upon LGBTQI+ communities.
The paper does not define hate speech or dangerous speech. Presumably it adopts the definitions in earlier papers but does not explicitly reference those definitions.
Ms Taylor expresses the concerns of TDP as follows:
This is of concern to The Disinformation Project, who have observed the consequences of divisive, denigrating speech on the safety and public engagement of members of minority communities. Hate speech has repeatedly targeted researchers at The Disinformation Project. The rise of dangerous speech and associated frames caused individuals from minority communities, and those targeted, to recoil from public life and engagement. The resulting chilling effects stymie public participation, civic life, political culture, and inclusion – pillars of social cohesion.
By way of comment, the question must be asked why this should “be of concern” to the Disinformation Project. There is a clear subjective engagement with the problems under examination, and by using this sort of language the Project seems to have assumed a role as arbiter of social norms and “proper expression.”
In addition, there is a concern expressed that hate speech has targeted researchers at TDP. These researchers are not identified. The occasions are not stated. Neither the nature of the communications nor their number is detailed. This continues the style of “assertions of fact” adopted by TDP without any evidential underpinning. Nor is there any evidence to suggest that the communications have had the effect claimed.
The use of assertions unsupported by evidence continues in the next paragraph:
Violent, vulgar and vicious discourse against public figures, such as the Prime Minister, MPs who are women, academics, journalists, and other senior public officials continues to grow, with increasingly graphic and violent content. Threats of death, rape, and intense violence are now commonplace – and daily – within the domestic online ecologies studied by The Disinformation Project. Such discourses are intertwined with misogyny, queerphobia, racism, xenophobia, anti-Semitism, and ableism. These discourses focus on, seek to amplify, and entrench differences, rather than notions of cohesion and shared identity. Instead of focussing on what binds us together, mis- and disinformation producers are increasingly highlighting difference, and the antagonistic negotiation of it.
The paper then goes on to link these expressions with a physical, real-world attack on the Rainbow Youth office in Tauranga.
There is no evidential support for the claim that online discourse encouraged or led to the attack. There may well be a link, but the assertion that this is evidence that online speech inevitably leads to kinetic activity cannot be supported, nor is such an inference available.
The passage reads as follows:
Hate speech and violent discourses against LGBTQ+ people, which have been growing in the online worlds studied by The Disinformation Project, have already spilt over into kinetic, offline harms, threatening people within these communities. As dangerous and hate speech dominated discourses grow and escalate, The Disinformation Project remains concerned for those who are visible minorities in Aotearoa New Zealand.
Once again the Project casts itself as a player and indeed arbiter in the discussion. The question falls to be answered: has the Disinformation Project assumed some form of oversight role?
The paper closes with the following:
Social cohesion requires trust and cooperation between people with different values and identities. As we move through this period of significant, inter-related and growing challenges, the responsibility is on all of us to listen, seek inclusion, and create space for others. Part of this challenge is also to work to stamp out exclusion, affirm our support for minorities of all kinds, and work together to create an Aotearoa New Zealand we can all be proud, and part of.
The clear implication arising from this is the importance of the collective (social cohesion) and a call to stamp out the kind of communication that threatens it. This carries on the unstated desire by the Disinformation Project to control the content of communication.
Dangerous speech, misogyny, and democracy: A review of the impacts of dangerous speech since the end of the Parliament Protest
Kayli Taylor, Kate Hannah, Dr Sanjana Hattotuwa
22 August 2022
This is a formal paper on the home page rather than the resources section of the Disinformation Project website[18]. It is in pdf format. It does not carry the logo of Te Punaha Matatini.
It is footnoted primarily to online resources. It closes with six recommendations for action which in essence crystallize the developing message of TDP. It is not peer reviewed.
It contains a content warning - Explicit language, threats, misogyny, racism, violence.
The paper is lofty in tone. The style of the article suggests that TDP has identified a problem, that there can be no discussion about the problem, that the TDP analysis of the problem is the correct one, that any other approach is wrong, and that there are certain steps that must be taken to address the problem.
The paper starts with an enlarged definition of dangerous speech which is significantly wider than the earlier approach.
The Disinformation Project uses the category ‘dangerous speech’ to categorise material observed and analysed with these ecologies. Susan Benesch’s Dangerous Speech Project asserts that: “dangerous speech is any form of expression (e.g. speech, text or images) that can increase the risk that its audience will condone or participate in violence against members of another group.” The hallmarks of dangerous speech are useful analytical tools for our work also. These include: dehumanisation, accusation in a mirror, threat to group identity or purity, assertion of attack against women and girls, and questioning in-group loyalty. Dangerous speech takes place in a context, is expressed to different audiences in different ways, involves a speaker or narrator, can include the role of a second speaker, and is transmitted via a medium or genre, including the normative generic traits of a social media platform.
This very wide definition has a number of overlays. Some of the terms – “accusation in a mirror”, for example – are unexplained, which is typical of some of the opaque terminology used by the Project.
The paper then goes on to critique existing legal and regulatory remedies:
There are issues with regulatory frameworks and legal or civil remedies presently available, which show features of what many call Aotearoa New Zealand’s transparent, high trust model – and what we within the Disinformation Project refer to as our ‘hackable society’- combined with rhetoric and a real sense of grievance can and have led to personal security risks and impacts which asymmetrically effect those targeted who have less or no access to security support, advice, and technologies
The paper then asserts, in one paragraph, a familiar list of contrarian points of view expressed on a number of different platforms. The authors point to sovereign citizen and “common law assembly” groups who have called for trials of those who subjected the country to restrictions.
The concern is that, beyond public figures, there is intimidation of those who do not have recourse to security support.
The naming of people creates long-tail risks for a wide range of New Zealanders, many of whom are private citizens. In the context of identification of ‘basic attack’ vectors and risks in the November 2021 and February 2022 Combined Threat Assessment Group (CTAG) reports, we highlight the asymmetrical personal security risk profiles of individuals identified and doxed, who, unlike the Prime Minister, senior government officials, Members of Parliament, or high net worth individuals, do not have security measures to protect them from offline consequences of online harms, but continue to experience sustained and personalised harassment, stalking, and brigading.
The paper then expresses concern about upcoming local body and national elections and the identification of candidates. In addition there is a suggestion that the Companies Register is being used to ascertain addresses.
Prior to the Parliament Protest, we observed the repeated misuse of the Companies Register to target and doxx individuals from relatives of politicians to academics and public servants. Other potential vectors for this kind of misuse use proliferate across government or regulatory norms for publication of addresses, many of which the public are unaware that there are remedies for.
There is no evidence to substantiate this. In addition the articulation is opaque and the grammar is faulty. The last sentence should conclude "for which there are remedies." It is somewhat jarring that writers with the academic qualifications of members of the Disinformation Project are unfamiliar with basic sentence construction.
Concern is expressed at the apparent lack of remedies available, along with a suggestion that the rules are "hackable", whatever that may mean. Once again, the term is undefined.
Beyond the hackability of our current regulations and legislation, the legal remedies for those who find themselves targeted are limited – to civil proceedings such as the taking out of a restraining order or making an application to the District Court under the Harmful Digital Communications Act (HDCA). Both these approaches make a systemic and social problem personal – a ‘he said, she said, they said’ based on intimate relationship violence or harm.
There is concern at anti-government statements, which are described as conspiratorial, along with the following:
Two considerations stand out, explored in significant detail in our analysis conducted daily over 2022. Firstly, the growing harms against women in public life and public office, with associated chilling effects increasingly sharply, eroding social cohesion by disincentivising political diversity, participation, and public voice. Secondly, the algorithmic amplification of a toxic masculinity that underwrites our socio-political culture and discourse, which targets and harms all women and gender minorities, and especially those who present visibly ‘different’, deemed to be transgressive, or espouse progressive ideas.
There is no referencing for this assertion nor is there any evidential support for the two considerations under discussion.
The Project makes the following comments about Telegram as an example of an unregulated platform or a platform not amenable to regulation under NZ Law.
Telegram, studied as a platform, presents novel challenges. Prior to the pandemic, Telegram was not a key domestic vector of misogynist, mis- and disinformation, and gendered harms.
In April 2022, mis- and disinformation ecologies and constituent enclaves of misogyny, along with public choreographies of harms directed against woman, are key features in Aotearoa New Zealand’s information disorders, propelled by the unregulated nature of Telegram. What is happening now, and at increasing pace, is a migration from the frictionless cross-pollination across ecologies on Telegram, which isn’t based on algorithmic amplification, to the recommendation engines and algorithmic expansion of reach powering Meta product and platform surfaces, like Facebook Pages, Facebook Groups, and Instagram, as well as Twitter, TikTok and YouTube.
Once again there is a complete lack of evidence but a high level of assertion. The comment about an absence of algorithmic amplification seems to contradict the considerations of algorithmic amplification of “toxic masculinity” (undefined).
TDP studies Aotearoa New Zealand’s socio-political discourses as inextricably entwined with the nature and nurture of harms in global disinformation landscapes; requiring critical analysis and responses that locate today’s harms as those evident and for longer elsewhere, allowing us to determine deleterious path-dependencies if unchecked, resulting in more denigration of women and gender minorities, destruction of civic norms, and hollowing out of democratic institutions.
Once again this is a set of conclusions unsupported by evidence.
The next section is headed "Dehumanisation" and commences:
Given our unique data collection, analysis, and interpretation, grounded within a year of qualitative and quantitative data, The Disinformation Project now studies a rise in dehumanising discourse on Telegram that recalls the words used before the Rwandan genocide, by Radio Télévision Libre des Mille Collines (RTLM). Named ‘the soundtrack to genocide’, RTLM is known by genocide scholars and journalists for the repetitive description of Tutsis as ‘cockroaches’, drawing upon historic prejudices within the Hutu community.
Once again there is an absence of evidence for what is being said and no disclosure of the methodology of the Project’s “unique data collection”.
Starting late September 2021, The Disinformation Project now studies more than 165 public Telegram channels daily, searching for the usage of keywords and phrases in posts or commentary. This process cannot be automated, is time-consuming, and can be distressing, given the volume of and violence contained in the content studied, including but not limited to memetic, GIF, video and audio material framed by, or featuring these keywords. Data collation was conducted daily from October 2021- 2 March 2022; reverting to every weekday since, with offline context driven analysis conducted over weekends since the end of the parliamentary protest.
This gives an insight into TDP methodologies, but the discussion of the methodology is vague. It is described as unique, yet the observation that the process cannot be automated suggests the use of the Mark 1 Human Eyeball. It is nonsense to suggest that searching for keywords and phrases in posts and commentary cannot be automated. The suggestion that TDP researchers had to read the material themselves does, however, lead into the assertions of traumatization that Dr Hattotuwa and Kate Hannah made on the "Web of Deceit" programme.
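By way of illustration, here is a minimal sketch of my own (it is not TDP's method) showing that a keyword search of exported Telegram posts can be automated in a few lines of Python. It assumes a JSON export of the kind produced by Telegram Desktop's chat-export feature, with a top-level "messages" list whose items carry a "text" field; the file name and keyword list are placeholders only.

# A minimal sketch (not TDP's method) of automating a keyword search over
# an exported set of Telegram posts. Assumes a Telegram Desktop-style JSON
# export with a "messages" list; the file name and keywords are placeholders.
import json
from collections import Counter

KEYWORDS = ["cockroach"]  # illustrative search term; more can be added

def flatten_text(text):
    """Telegram exports store message text either as a string or a list of parts."""
    if isinstance(text, str):
        return text
    if isinstance(text, list):
        return " ".join(p if isinstance(p, str) else p.get("text", "") for p in text)
    return ""

def keyword_counts(path):
    with open(path, encoding="utf-8") as f:
        export = json.load(f)
    counts = Counter()
    for message in export.get("messages", []):
        body = flatten_text(message.get("text", "")).lower()
        for keyword in KEYWORDS:
            counts[keyword] += body.count(keyword)
    return counts

if __name__ == "__main__":
    print(keyword_counts("channel_export.json"))  # hypothetical export file

Whether such a script captures nuance or context is another matter, but the basic search step is plainly automatable.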
There is a non-definitive list of instances of the word ‘cockroach’ from November 2021 to April 2022 appearing in the domestic Telegram ecologies.
It is suggested, without analysis, that these instances amount to a dangerous speech discourse, and within the wider definition preferred by TDP that is probably correct. It does however ignore the fact that these comments are in the nature of a conversation and are expressions of opinion that fall within the scope of freedom of expression, a concept that TDP does not seem to understand or, if it does, recognize.
Comparison is made with the lead-up to the Rwanda genocide. Keywords for searches are mentioned, although no context is given, and the use of keywords and the references to searches suggest a level of automation which was earlier said to be unavailable. There seems to be an internal contradiction as to methodology.
Dehumanisation by misogyny is also discussed.
The dangerous speech ideation and normative vocabularies present within the ecosystems we study is markedly similar to content produced, signalled, and consumed prior to the enaction of offline violence internationally, such as the murder of MP Jo Cox, as well as other Far-Right and Incel-based violence enacted since, including the 2019 Christchurch Mosque terrorist attack. Material studied by The Disinformation Project daily contains all the hallmarks of dangerous speech, a growing and accepting audience, a context within which racism and misogyny are now normalised and entrenched, and a multi-platform ecosystem which, through the safe harbour of Telegram, provides refuge for hyper-toxic masculinities within which misogyny is mixed with anti-Māori racism, Islamophobia, and anti-Semitism.
Again there is an absence of clear evidence to support the statement, and no evidence linking the keywords with any definable harms. Furthermore, Telegram is described as a safe harbour.
The focus of the Project seems to have moved from Mis and Disinformation into the wider area of what it considers to be dangerous speech and clearly is straying into the “hate speech” debate. Certainly this is a significant widening of its original mission statement.
The concluding recommendations require brief comment. There are 6 of them and they read as follows:
1. Immediate review of the electoral legislation candidate disclosure requirements ahead of the 2023 General Election.
2. Expedited review of the regulations for the Companies Register to address discoverability of disclosure requirements and redaction criteria for individuals at risk.
3. Establishment of collective work programme across Privacy Commission and Human Rights Commission on related published registers such as vehicle registration and the Electoral Roll to ensure balance of access to voting rights and protection of privacy.
4. A full review of the existing legal and civil remedies, particularly the Harmful Digital Communications Act (HDCA), Netsafe, and the absence of advocacy as a core deliverable. This review should look at civil and criminal harassment within the context of online, stranger-led hate and harassment.
5. Systems-wide approach to the Content Regulatory Review, including the regulatory frameworks for Netsafe as lead agency for the HDCA and the Domain Name Commission’s regulatory framework for .nz CCLD.
6. The establishment of a transparent, outside government entity to provide research, analysis and advice for communities, civil society organisations, agencies and independent crown authorities on information disorders and their impacts in Aotearoa New Zealand.
Recommendation 6 is clearly a pitch by the Disinformation Project to be the research, analysis and advice organization, and in some respects this paper is a wider pitch for that objective. The best way to get a job is to suggest that there is a need for it.
The proposal in paragraph 6 would suggest the ultimate appointment of an Information Disorders Tsar which would have serious implications for the freedom of expression, especially if that “Tsar” were to be the Disinformation Project or its members.
There is no discussion in the paper about Netsafe, the HDCA or the Domain Name Commission. Thus the suggestions in clauses 4 and 5 are difficult to fathom, and the type of matters that would be the subject of the review is unclear, other than civil and criminal harassment, which are already addressed by existing law.
As to item 5, it is unclear what is envisaged by a systems-wide approach to the Content Regulatory Review, which seems to have stalled and is unlikely to be revived before the October election. There seems to be no basis to question Netsafe's performance as the Approved Agency (wrongly described as "lead agency") under the HDCA, nor are there any suggestions for the type of change envisaged for the DNC.
Transgressive transitions; Transphobia, community building, and community bridging within Aotearoa New Zealand’s disinformation ecologies March-April 2023 Kate Hannah, Sanjana Hattotuwa and Kayli Taylor
5 May 2023
After some eight months the Disinformation Project published what it describes as its working paper entitled Transgressive transitions; Transphobia, community building, and community bridging within Aotearoa New Zealand’s disinformation ecologies March-April 2023. As the title suggests and in line with previous papers by the Project it takes a snapshot of developing information over a short period of time surrounding a particular topic.
The paper covers some 42 pages. It is quite detailed and lengthy. To conduct a full analysis would justify a similar paper of equal length. Many of the unsatisfactory approaches of the Project are repeated in this paper. In this commentary I shall highlight what I consider to be some of the more unsatisfactory aspects of the paper.
The paper is described as a working paper. It is not peer reviewed. Unlike some earlier papers it is replete with footnotes, although some of these are references to the Project's own earlier work. Some of the citations are to Wikipedia which, rather like using ChatGPT to provide a rough outline, is a good starting point but should lead on to more detailed research, especially of the literature.
Many of the resources that are cited seem to be from what could be termed the popular media and are clearly slanted. Footnotes 56 – 58 provide examples.[19]
One of the footnotes (footnote 48) cites "Unpublished situation analysis 22 February 2022". No reference is made to the author or to the source. I acknowledge that, as an unpublished paper, there can be no journal reference, but unpublished papers often find their way to online resources. This citation is most unhelpful, lends no authority to the remarks, and is shoddy writing and presentation of sources.
Unlike earlier papers, Transgressive Transitions has a number of graphs and data references. However, a careful look at this information reveals that it is superficial in its approach and is provided to give the paper a veneer of academic respectability which it does not in fact possess.
The style of the paper is that which we have come to expect from the Project. Lofty and superior in tone, laden with jargon, an often opaque sentence structure that requires more than normal parsing, assertions and unsupported allegations and frequently an absence of evidence beyond “we have studied x y z platforms and have found…”
The study adopts the "snapshot" view of the subject which characterizes the Project's approach to research. It will be noted that this paper deals with a phenomenon that developed over a short period of time. The preceding papers likewise deal with a very limited time frame within which to gather data.
This may have three consequences:
First, it makes it difficult to determine whether or not the behaviours described are part of a long term trend. Perhaps, if a larger and longer data set gathering period were adopted the matters complained of would be a blip on the radar rather than the serious threat attributed to it by the Project.
Secondly, using data gathered over a short period of time concentrates attention on what the Project perceives to be a problem. As suggested in the preceding paragraph, a longer-term study may reveal a diminishing problem. It therefore suits the Project's purpose to gather data over a short period and to be selective in the timing of data gathering, because this allows it to focus on the data it wants and thus fit the findings within the narrative it wishes to project.
Thirdly, by referring only to the platforms as sources of data, and by failing to provide any means by which the data may be examined, verified or interpreted, the Project holds itself out as the sole arbiter of the volume of the content and the sole judge of its quality. This means that its findings can be neither debated nor contradicted on the evidence.
The Project seems to want to find data which supports a pre-determined and foregone conclusion, and to make the data fit. Apart from the issue of intellectual rigour, the limited time frame for gathering data must give one pause before unreservedly accepting the Project's conclusions.
The Genocide Issue
The focus of this paper is the rise of what it describes as "transphobia", and in some ways it amplifies the hyperbolic remarks of Dr Sanjana Hattotuwa following the "Posie Parker" incident, where he suggested that the language and tone of some of the posts implied that trans people should not be allowed to exist and that this was therefore akin to genocide. That was a silly thing to say, but surprisingly in Transgressive Transitions the Project doubles down on Dr Hattotuwa's emotional and exaggerated comment.
The Project goes so far as to say that denying a group’s existence and their right to exist are both signs of the language of genocide. It supports this assertion by citing an international authority – the Lemkin Institute for Genocide Prevention. The Project states:
The Lemkin Institute for Genocide Prevention has described the international ‘gender critical movement’ as genocidal: “ the gender critical movement simultaneously denies that transgender identity is real and seeks to eradicate it completely from society.” In this international context what took place over 26-31 March in Aotearoa New Zealand’s disinformation communities is what we now describe as multipolar community bridging, in which a fringe group with extant ideology was one of several narrators or groups who opportunistically responded to the spike in content about and engagement with anti-transgender content.
It cites as its authority The Lemkin Institute statement on the Genocidal Nature of the Gender Critical Movement’s Ideology and Practice, November 29, 2022. Accessed May 1 2023 https://www.lemkininstitute.com/statements-new-page/statement-on-the-genocidal-nature-ofthe-gender-critical-movement%E2%80%99s-ideology-and-practice Accessed 18 April 2023. However, the URL provided is dead.
What is of concern is that the Project uses this one citation to justify its assertion of genocide or genocidal language but does nothing to further develop the argument. It does not cite any other sources which might lend weight or authority to the Lemkin Institute’s assertion.
At page 36 the Project ominously claims:
With resultant impact on social cohesion, multipolar community bridging dynamics, inextricably entwined with collective narcissism, are now resulting in the normalisation of narrative frames which deny transgender identity and existence. These are genocidal frames: “there is no shutting the floodgates once states and societies acquiesce to the eradication of a specific people from the earth”
Two things arise from this.
The first is the suggestion that multipolar community dynamics is driving the denial of transgender identity. I will discuss the community bridging theory shortly, but it seems that the whole approach of the Project in its paper relies upon this element. The citation once again is to a Lemkin Institute paper https://www.lemkininstitute.com/statements-new-page/statement-on-the-genocidal-nature-of-the-gender-critical-movement%E2%80%99s-ideology-and-practice and the link on this occasion is not dead. The earlier link was dead because it was incorrectly cited. A hyphen was missed out. I would have thought that proper proofing would have picked this up.
The second thing arising is once again the use of the emotive term “genocide.” This reliance upon one citation to justify a claim as serious as an allegation of genocide must be of concern and would suggest a level of emotiveness and hyperbole that I would not expect from an objective academic analysis.
What all this means in effect is that the Project is claiming that if one disagrees with gender ideology or advocates for women's rights, one holds genocidal views and is attempting to eradicate gender dysphoric people from the face of the earth. This merely adds to the polarization of opinion that the Project seeks to reduce.
Continuing on the genocide theme the Project’s paper has a section entitled “Telegram and the networked normative discourse of genocide”. Unusually for the project this section actually contains some examples of the content studied but sadly does not provide any means by which the data may be verified.
For example:
From 26 – 31 March, on both Twitter, and Telegram, the white nationalist group Action Zealandia published the greatest number of updates we have studied in any comparable period. For example, over February 2022, at the height of the Parliament Protest, Action Zealandia published 52 posts – the most number since the start of their Telegram channel. In comparison, by 29 March, Action Zealandia had posted 44 times to its Telegram channel. 42 of those 44 posts were from 19 March onwards.[20]
This is followed by 2 graphs which provide a visual representation of Action Zealandia’s posts. No other data is provided.
What is interesting is that in the discussion on genocide the Project seems to lose the thread of their argument and move to discuss community bridging. The term is used to describe the situation where proponents of one set of ideas are “bridged” into a community that does not hold those ideas.
This is achieved through stories which connect the new idea to shared ideas within the community being bridged into. In the context of the anti-trans messaging, topics relating to children were the bridging mechanisms, which then created a bonding effect around the idea of threats to women and children.
Community Bridging
Community bridging is a new term in the Disinformation Project’s lexicon. It appears to this commentator that the term was developed to justify the examination of material that went beyond the COVID 19 mis and disinformation that was the original raison d’etre for the Project.
The Project argues that the immediate effect of community bridging was norm-setting, whereby the type of content normative on Telegram rapidly migrated to more mainstream platforms, including Twitter, and that the longer-term effect is multipolar community bridging arising through engagement with others on this theme. That said, it should be remembered that the Project expressly excluded data from Twitter in its study.
The term is used
“to describe the pivot from anti-mandate/anti-vaccination narratives to transphobic narratives as organisational and motivational network meta structures within disinformation communities, and the ways by which this pivot is providing a core sense of narrative belonging, just as anti-mandate narratives did. This pivot was signalled in our earlier work where we described a series of ‘Trojan Horse’ narratives.[21]
It should be noted that community bridging is a term that has a theoretical meaning. The Project observes
community bridging and bonding - features of decentralised, social media networks, as theorised by American political scientist Robert Putnam. Based on research establishing Putnam’s theorisation in contemporary social media networks, we understand bonding social capital not only acting as “a social glue, building trust, and norms within groups but also potentially increasing intolerance and distrust of out-group members.” The same research established that, in social media networks, “bridging social capital exists in the ties that link otherwise separate, often heterogeneous, groups—so, for example, individuals with ties to other groups, messengers, or more generically the notion of brokers. Bridging social capital allows different groups to share and exchange information, resources, and help coordinate action across diverse interests.”
This theoretical frame describes the large, and interconnected network of communities and individuals who form The Disinformation Project’s location of research.[22] (My emphasis)
The first observation that should be made is that Putnam’s book was entitled Bowling Alone: The Collapse and Revival of American Community[23]. It was published in 2000 which was some time before social media was developed.
It demonstrated that American civic engagement had deteriorated but that a civic reinvention was not beyond the bounds of possibility. Putnam used the term "social capital" to describe the fabric of our connections with one another, which he argued had deteriorated. It is something of a leap to suggest that Putnam was theorizing about decentralized social media networks.
Putnam’s theory was extended to social networks in a 2014 study.[24] That study provided the first evidence that online networks are able to produce the structural features of social capital. In the case of bonding social capital, online ties were more effective in forming close networks than theory predicts. There were caveats in that bridging social capital may require the presence of organisations or professional brokers.
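To make the structural notion of bonding and bridging ties concrete, here is a minimal sketch of my own (it is not drawn from the Project's work or from the cited study) using the Python networkx library. Two densely connected groups stand in for tightly bonded communities, and a single hypothetical "broker" node supplies the only bridging ties between them.

# A minimal illustration of bonding versus bridging ties as network
# structure. Two cliques stand in for bonded groups; the broker node
# provides the only bridging ties between them. Node names are invented.
import networkx as nx

G = nx.Graph()
group_a = ["a1", "a2", "a3", "a4"]
group_b = ["b1", "b2", "b3", "b4"]

# Bonding ties: every member of a group is connected to every other member.
G.add_edges_from(
    (u, v) for g in (group_a, group_b) for i, u in enumerate(g) for v in g[i + 1:]
)

# Bridging ties: the broker links one member of each group.
G.add_edges_from([("broker", "a1"), ("broker", "b1")])

# All paths between the two groups pass through the broker, so its
# betweenness centrality is far higher than that of any group member.
centrality = nx.betweenness_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])

Running the sketch prints the broker first, with a betweenness score well above that of any in-group member; in this structural sense a bridging tie is simply a link between otherwise separate groups.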
What is important in this theory – for theory it is – is that it relies on the concept of social capital. The Project uses the term “bridging” – unsupported in the literature – to extend to the development of community sentiments rather than the social capital envisaged by Putnam.
We must therefore remember that community bridging, as an explanation of the migration of communicators from topic to topic in online discussions, is a theory only, yet the Project seems to use it to dress up its rationale for shifting its focus away from mis- and disinformation and towards hateful and dangerous speech.
The Project suggests that the same actors who spread what it describes as mis- and disinformation transitioned into the arena of hateful and dangerous speech. I say "suggests" because there is no proof that this was in fact so. With the exception of Action Zealandia, which provides a useful example in support of a wider right-wing or neo-Nazi hypothesis, the Project is unable to say precisely which of the mis- and disinformation actors actually participated in this shift. If it were in fact able to offer some hard evidence then it should do so. Lack of evidential support assists neither credibility nor reliability.
Emotional Engagement
Once again the Project finds itself emotionally engaging with its subject matter:
“Our critical appreciation of this content - as that which strategically induces disgust, and revulsion – is analysed in the context of in light of research”[25]
18 March- mid April memes, GIFs and stickers on domestic Telegram channel and account constellations were hyper-fixated on the transgender community and identity.
Many were too violent, and feature harms too graphic, to feature in this working paper.[26]
The Project, like Byron Clark in his book Fear, has written itself into the story and demonstrates a lack of objectivity. It has come to a conclusion unsupported by evidence, and in that conclusion makes the following statement, which not only betrays the level of emotional engagement with the topic but also justifies the Project's shift from dealing with COVID-19 mis- and disinformation into the wider realm of what it regards as unacceptable speech and opinion. Magisterially, the Project states:
We now bear witness in near real time to two varieties of social media network content diffusion– content-based, and reference-based. The Disinformation Project’s analysis of explicit violent extremist content, imported into domestic Telegram ecologies in particular, with diffusion linked to bridge accounts, is hitherto unprecedented. Similarly, we now study a hybridisation of conspiratorial narratives, where once anti-vaccine, anti-mandate, anti-government, and anti-authority spaces defined by content produced in large part domestically, are now bridged with content imported from violent extremism safe-harbours on Telegram present outside the country and diffused at pace once in domestic ecologies – to the extent that is observable through open-source analysis. This open-source qualification is critical since what we can observe is at best a conservative capture of what is likely taking place in other physical, digital, and personal spaces and modes of communication.
The Neo-Nazi Fear and Foreign Intervention
Neo-Nazi and far-right content and narrators rapidly emerged as the dominant narrative signature across domestic Telegram discourse – with content imported from foreign neo-Nazi channels at a pace never before studied, which we can state, with a high level of confidence, did not previously exist to this extent within our location of study.[27]
None of these narrators are identified and as such this can only be treated as an assertion or at best an inference.
The theory of community bridging is then deployed to explain how Far Right and Neo-Nazi ideologies have intruded into this space. The Project asserts:
Far Right and Neo-Nazi accounts and ideologies have been community bridged into channels and groups in domestic Telegram ecologies which were through nomenclature and original intent (i.e., anti-mandate, anti-government) previously further distanced from neo-Nazi content and accounts on Telegram. The domestic disinformation landscape is rapidly becoming increasingly complex, with emergent signals of further foreign influence campaigns and, importantly, emergent signals of wholesale bridging into and from ideologically motivated violent extremist ideologies. This is not a surprising development: it follows path dependencies flagged in our analysis going back to early 2022, soon after the Parliament Protest.
The Parker tour of Australia and New Zealand accelerated these trends, with the highly compelling transphobic narrative turn providing a new organisational and motivational network meta structure that supports a core sense of narrative belonging. This was amplified across all social media ecologies studied by a ‘collective narcissism’. Understood as “an exaggerated view of the importance, or ‘greatness,’ of the group to which the individual belongs” – and in the context of this working paper, defined as a cisgender or cissexual identity by accounts studied – collective narcissism “defines itself through its relationship to others—needing external validation and/or an external enemy”
There is no citation to justify the term collective narcissism – which itself seems to be a contradiction in terms – other than the Project’s own definition which is unsupported and carries on the practice of fact by assertion that is by now a familiar aspect of the Project’s approach.
Online Tools
Once again the Project relies on the questionable graphic of "word clouds" as a way of identifying posts with which it is concerned. It also comments on the use of memes, GIFs and stickers to encode and propagate messages. Stickers are freely available on Telegram and examples appear in the appendix to the paper. The problem is that these are hard to track. The Project observes:
Memes and GIFs serve as structural tools which helps producers and audiences to maintain plausible deniability of any responsibility around offline, kinetic consequences of online radicalisation, and instigation of hate. As tactics of malign creativity, sharing and re-sharing GIFs, stickers, and memes formulate a norm-setting exercise, expanding “the so-called Overton window of acceptable political discourse.”[28]
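Returning to the word-cloud point above: a word cloud is a trivially produced graphic. The following minimal sketch (my own illustration, using the third-party Python wordcloud library; the sample text and output file name are placeholders, not TDP data) shows that word frequency alone drives the image, with no weighting for context, provenance or reach.

# Minimal sketch of generating a word cloud; the sample text and output
# file are placeholders and this is not TDP's code or data.
from wordcloud import WordCloud

sample_text = (
    "mandate mandate freedom freedom freedom sovereignty "
    "protest protest protest telegram narrative narrative"
)

# The size of each word in the image is determined purely by how often
# it appears in the input text.
cloud = WordCloud(width=800, height=400, background_color="white").generate(sample_text)
cloud.to_file("word_cloud.png")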
Methodology
The Disinformation Project studies, at present, 126 Facebook Pages associated with disinformation narrative production and promotion. For comparison purposes, we study 84 Pages from mainstream media, covering print, broadcast, and web platforms. These account ecologies on Meta reflect our location of study as one which covers both mainstream and sub-mainstream open-source social media, originating from accounts, pages, groups, or channels which over 2020-2021 were focused on promoting Covid-19 minimisation or denialism. They now represent a wide-ranging set of interrelated conspiratorial ideologies[29]
None of these sources are identified.
The methodology described by the Project follows that set out in earlier papers. Their definitions provide
“framing tools through which we code and analyse material, provenance, propagation, engagement, and potential offline impacts. In this paper and our other work, we refer to these interrelated, socio-technological, and inextricably entwined phenomena as mis- and disinformation, with the resulting impact on socio-political landscapes as information disorders. Mis- and disinformation are transmitted within and across platforms to far-reaching audiences. Producers of mis- and disinformation are often closely connected, or act in concert, cross-promoting material, and content from common sources to reach wider audiences. We describe these complex phenomena as ‘ecologies’ – systems and networks that mirror and migrate content, discourses, language, beliefs, perceptions, and values across different platforms to audiences.[30]
They assert that they gather, collate, and analyse data from Telegram, public Facebook Pages and Groups, public posts on Facebook, Instagram, Twitter, YouTube, and any sign-posted content on .nz ccTLD sites, other websites, and content on platforms like Rumble, Odysee, Gab, and Gettr, but beyond this they are opaque as to the identity or location of their sources, thereby making it impossible to validate their research and therefore their findings.
Specifically, Twitter is excluded from consideration. The Project observes that Twitter amplifies much of the material seen on Telegram, but conveniently claims that its ability to comprehensively map and study Twitter trends at the time of Posie Parker's visit was hampered by the revision of Twitter's public and academic API access. With a casual wave the Project then rejects Twitter, saying that the deterioration of trust and safety on the platform is well documented, resulting in a dramatic increase of harms against, amongst others, transgender communities.
The Project at page 7 actually identifies one of the actors with which it is concerned, pointing out that Action Zealandia had returned to Twitter.
They conclude their discussion (and dismissal) of Twitter by observing
“our observations on Twitter’s deleterious role, and relevance in Aotearoa New Zealand’s disinformation ecologies after the platform’s acquisition by Elon Musk supports research which found that “hateful users have become more hateful”, and that “hate has increased overall”[31]
This after a statement that Twitter was excluded from consideration.
Pages 26-34 contain a considerable amount of detail, including graphs which trace social media activity over the period of the study. The paper tracks in particular what it characterizes as New Zealand Disinformation and material from protest figureheads. In some of the graphs mainstream media activity is also tracked. This data is used to justify what the paper calls "The Parker Effect."
It claims:
The Parker Effect – again, as was the case for production and consumption during the Parliament Protest – is linked to opportunistic gains in audience capture and retention by disinformation producers on Facebook and Instagram, and across other platforms, including YouTube, Twitter, and Telegram.[32]
One thing that concerns me in the analysis of the data is that none of the actors in the distribution of this material are named with the exception of Action Zealandia and a mysterious figure named Thomas Sewell who, I understand, is an Australian and founder of a now defunct Far Right White Nationalist group.
An oblique reference is made to those who were involved in the Parliament protest. The Project notes:
Within the 126 Facebook Pages and 77 Facebook Groups which form part of our location of study, we continue to collate date from, and analyse 15 Facebook pages linked to or administered by the Parliament Protest’s key figureheads, and factions, as described by Toby Manhire, Editor-at-large at The Spinoff. Analysis of these narrators’ contributions to content over the period of review reveals interesting emergent patterns, including a clear spike in content production, engagement, and sharing on 25 March, the day of the rally and counterprotest, that we term the ‘Parker Effect’.[33]
None of these sources are named. Some of those identified by Toby Manhire are individuals; some are organisations. They have been referred to as "The Disinformation Dozen", and this will be discussed in more detail in Part 3 of this study. Suffice it to say that the reader is completely in the dark as to which actor was responsible for what content.
One thing that puzzles me about the Project's approach to data is that we are given no indication of the proportion of total traffic represented by the content of which it complains. Given the amount of data and messaging that takes place online, it may turn out that the percentage of content actually falling within the ambit of mis- or disinformation is very low indeed and represents a fringe of opinion unknown to most online users. As it stands, the image put across by the Project is that the online ecology is flooded with mis- and disinformation and is largely controlled by Far Right and Neo-Nazi ideologies. This cannot be supported, and certainly not on the paucity of evidence provided.
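The calculation the Project never provides is elementary. The figures below are entirely hypothetical (they come from no source at all) and are used only to illustrate how a seemingly large count of flagged posts can still be a tiny fraction of total traffic.

# Entirely hypothetical figures, used only to illustrate the base-rate point;
# neither number comes from TDP or any other source.
flagged_posts = 12_000        # posts a monitor classifies as mis- or disinformation
total_posts = 8_500_000       # all posts observed on the same platforms in the period

share = flagged_posts / total_posts
print(f"Flagged share of total traffic: {share:.4%}")  # roughly 0.14%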
I emphasise that the data for the Project's assertions was acquired over a narrow timeframe and can hardly be used as an indicator of a trend.
Concluding Remarks
In its August 2022 paper the Project made 6 recommendations. I considered one of those recommendations to be a pitch for further funding of the Project to continue their studies in this space.
They make a similar comment in the conclusion to their paper:
Our analysis in this working paper supports the need to continue to focus on communities and contexts in the study of conspiratorialism, and disinformation in landscapes featuring algorithmic amplification, as well as Telegram’s non-algorithmic, trans-national, rapid, and sustained cross-pollination of false and/or harmful content: “false news spreads more than the truth because humans, not robots, are more likely to spread it.”[34]
The Project has adopted an extreme position in developing its argument that community bridging explains the shift from COVID Disinformation to transphobia, misogyny, rage and the language of genocide.
It claims:
The narrators of the Parliament Protest, having established themselves as trusted sources regarding anti-vaccine, anti-mandate, and anti-state narratives which were the main motivating and organizing issue in 2021 and 2022, can now successfully bridge their audiences into another almost universally accepted motivating narrative, and in doing so, further their impact, reach, and status.
Narrative frames which surround the Parker Effect within disinformation ecologies show how transphobia, like anti-vaccine sentiment and Covid-19 denial or minimisation, is now a key theme around which identity formation, community, and normative values are constructed and signalled.
Building from pre-existing sentiments which focus on vaccination as a tool of genocide, the transphobic narrative turn does not need to establish grounds for hate, incivility, vulgarity, racism, misogyny, rage, and the language of genocidality; instead, this turn can build on that which has strong foundations, including the indicators of widespread use of the language and visual imagery of genocide.[35]
The Transgressive Transitions paper, like others put out by the Project, is about the demonization of dissent. It elevates any form of dissent into dangerous speech representative of ideologies that the Project considers dangerous to social cohesion. It uses emotive words such as transphobia, misogyny and racism, as well as incivility and vulgarity. The latter two may describe impolite behaviour, but a threat to the fabric of social order? I think not.
The Project relies on the critical theory approach which Ms Hannah has endorsed in her academic credo: it identifies a disempowered group (the trans community) and an empowered group (the "cis" community) and pits them against one another. It generalizes any adverse commentary about the trans community as a threat to such a level that it is genocidal. The Disinformation Project is really calling out those who dare voice opinions that disagree with the prevailing narrative of those who protested Posie Parker's presence at Albert Park.
Suppose, for example, that someone believes that physical characteristics are what makes them a 'man' or a 'woman'. They are now demonised as a far-right, Nazi-abetting conspiracy theorist. And God forbid they also share concerns about structural changes to our democracy, traditional views about family, or have any questions about the COVID response.
These views must be silenced, according to the Project. Those who express such views are a threat to society and to social cohesion. The Project wishes to see beliefs with which it disagrees and which it demonises, characterising that disagreement as Far Right or Neo-Nazi, eliminated with the force of the State.
In such scaremongering the Disinformation Project does a disservice to democracy and to the identification of genuine lies and disinformation. Stereotyping dissenters as disinformation-deceived conspiracy fascists provides a false legitimacy for the Project. A careful examination of their pronouncements and materials suggests that anything that they advance should be viewed critically and carefully before it is accepted.
I shall continue the consideration of the “Public Face” of the project in section 2b – The Disinformation Project and the Mainstream Media.
[1] https://thedisinfoproject.org/wp-content/uploads/2022/05/tdp-2020-paper.pdf (Last accessed 4 May 2023)
[2] https://www.dpmc.govt.nz/sites/default/files/2021-10/Panel%204%20-%20Kate%20Hannah.pdf (Last accessed 4 May 2023)
[3] https://www.dpmc.govt.nz/sites/default/files/2021-10/Panel%204%20-%20Sanjana%20Hattotuwa.pdf (Last accessed 4 May 2023)
[4] https://thedisinfoproject.org/wp-content/uploads/2022/04/2021-11-09-FINAL-working-paper-disinformation..pdf (Last accessed 4 May 2023) A summary document can be found here https://thedisinfoproject.org/wp-content/uploads/2022/04/2021-11-Understanding-mis-and-disinformation.pdf (Last accessed 4 May 2023)
[5] https://thedisinfoproject.org/wp-content/uploads/2022/05/The-murmuration-of-information-disorders-May-2022-Report-FULL-VERSION.pdf (Last accessed 4 May 2023) An overview document can be found here https://thedisinfoproject.org/wp-content/uploads/2022/05/The-murmuration-of-information-disorders-May-2022-Report-SHORT-VERSION.pdf (Last accessed 4 May 2023)
[6] https://www.royalsociety.org.nz/what-we-do/our-expert-advice/speakers-science-forum/speakers-science-forum-2022/speakers-science-forum-misinformation/ (Last accessed 4 May 2023)
[7] https://thedisinfoproject.org/2022/06/18/hate-speech-in-aotearoa-new-zealand-reflecting-and-resisting/#more-391 (Last accessed 4 May 2023)
[8] https://thedisinfoproject.org/2022/07/20/eroded-information-ecologies-social-cohesion-trust-and-the-impact-of-misinformation/ (Last accessed 4 May 2023)
[9] https://thedisinfoproject.org/wp-content/uploads/2022/11/Dangerous-speech-misogyny-and-democracy.pdf (Last accessed 4 May 2023)
[10] https://thedisinfoproject.org/wp-content/uploads/2023/05/Transgressive-Transitions.pdf (Last accessed 6 May 2023)
[11] https://doi.org/10.24135/tekaharoa.v15i1.399
[12] This is an example of the opaque language or technobabble that appears in Project papers – as I understand the term, it is a mathematical process consisting of the expression of the state of a system, process or model as a function of some independent quantities which are called parameters.
[13] It is difficult to see how online data could be studied other than in situ. This form of articulation is typical of the opaque and often convoluted style of exposition adopted by the Disinformation Project.
[14] See The Dangerous Speech Project, Dangerous Speech: A Practical Guide: 19 April 2021 https://dangerousspeech.org/guide/
[15] David Harvey “Dangerous Speech – Some Legislative Proposals” https://theitcountreyjustice.wordpress.com/2020/11/29/dangerous-speech-some-legislative-proposals/ (Last accessed 24 April 2023)
[16] At page 3
[17] Toby Manhire “Figureheads and Factions: the key people at the parliament occupation” The Spinoff 18 February 2022 https://thespinoff.co.nz/politics/18-02-2022/figureheads-and-factions-the-key-people-at-parliament-occupation (Last accessed 4 May 2023)
[18] The Transgressive Transitions paper – to be discussed next – is not available on the Resources page of the Project's website but on its homepage.
[19] 56 Can The ‘Alt-Right’ Distance Itself From Neo-Nazis? https://forward.com/news/348366/can-the-alt-right-distance-itself-fromneo-nazis/ Accessed 18 April 2023.
57 The Dangers of White Nationalism: The Rise and Normalization of Online Racism in the United States,
https://georgetownsecuritystudiesreview.org/2016/12/03/the-dangers-of-white-nationalism-the-rise-and-normalization-of-onlineracism-in-the-united-states/ Accessed 18 April 2023.
58 How the Christchurch shooter used memes to spread hate, https://www.vox.com/culture/2019/3/16/18266930/christchurchshooter-manifesto-memes-subscribe-to-pewdiepie Accessed 18/04/2023.
[20] Transgressive Transitions, page 9
[21] Transgressive Transitions, page 4. The "Trojan Horse" term is footnoted with a reference to the Project's own earlier paper Mis- and Disinformation in Aotearoa New Zealand from 17 August to 5 November 2021, Kate Hannah, Sanjana Hattotuwa, and Kayli Taylor, https://thedisinfoproject.org/wp-content/uploads/2022/04/2021-11-Understanding-mis-and-disinformation.pdf
[22] Transgressive Transitions, page 3
[23] New York, Simon & Schuster, 2000
[24] Appel L., Dadlani P., Dwyer M., Hampton K., Kitzie V., Matni Z. A., . . . Teodoro R. (2014). Testing the validity of social capital measures in the study of information and communication technologies. Information, Communication & Society, 17, 398-416.
[25] Transgressive Transitions, page 17
[26] Transgressive Transitions, page 17
[27] Transgressive Transitions, page 11
[28] Transgressive Transitions, page 17
[29] Transgressive Transitions, page 19
[30] Transgressive Transitions, page 6
[31] Transgressive Transitions, page 8. The source cited is https://scienceblog.com/537496/new-twitter-now-with-more-hate/ (Last accessed 5 May 2023)
[32] Transgressive Transitions, page 30
[33] Transgressive Transitions, page 27
[34] Transgressive Transitions, page 36
[35] Transgressive Transitions, page 30