The Disinformation Project and the Evidence - Methodologies, Analysis and Conclusions
Part 3 of my examination of the Disinformation Project
Introduction
In this third and final article about the Disinformation Project there are two sections. The first section considers the methodologies used by the Project to develop the data that underpins the matters contained in their papers. I have relied entirely upon the Project for this detail which I have collated from the various papers available on their website.
The second section contains an analysis of the various issues and themes that arise from the papers and from some of the public pronouncements made by Ms Hannah and Dr Hattotuwa. I start with a discussion of the analytical style that has been employed by the Project and then move on to examine a number of the problems with their approach.
As a result of these issues the question falls to be answered: can we rely on the Disinformation Project, and what threats, if any, does it pose to the quality of public discourse?
I conclude with some observations in which I place the Project in context and offer some suggestions on strategies for assessing and evaluating its output.
The Methodologies
The papers that are available from the Disinformation Project contain an outline of the methods that they use to gather the data that underpins the matters and commentary in their papers. On the whole the methodologies are summarized and are not very detailed. There is no suggestion by the Project that any of their data gathering systems are proprietary, the disclosure of which may be “commercially sensitive” or may compromise any non-disclosure agreements that they may have signed.
A consideration of the methodology is important because it gives the reader a view of the way in which the data has been gathered, allows the reader to assess whether or not the methodology is reliable and can be duplicated and whether the data derived is sufficiently representative to allow some valid theories to be developed and conclusions to be drawn.
In the first paper in the series - Evaluating the infodemic: assessing the prevalence and nature of COVID19 unreliable and untrustworthy information in Aotearoa New Zealand’s social media, January-August 2020 - the methodology is stated as follows:
We assembled a team of computational and data scientists, public understanding of science and technology scholars, and conspiracy theory scholars with diverse disciplinary backgrounds in August 2020 to collate and analyse Aotearoa New Zealand’s publicly available social media, mainstream media, and public discourse from mid-January 2020 to the present. A core team has been monitoring these media sources since the beginning of the infodemic in Aotearoa New Zealand. This collation of 122009 tweets is mostly limited at present to English-language sources; future studies will endeavour to collate more widely in other key languages in use in Aotearoa New Zealand, including Te Reo Māori, a number of Pacific languages, and Mandarin…..
We collected publicly available tweets using the R package rtweet and the Python module Twint. We integrated the dataset with the aggregate information made available daily by FBK CoMuNe lab. The FBK analysis combines computational analysis of Twitter content for sentiment, reliability of sources, and prevalence of accounts classified as bots in the twittersphere. We searched and aggregated tweets using two different methods: we queried for a list of terms (“covid”, “coronavirus”, “virus”, “lockdown”) and their variations, either as hashtags or as words contained in the tweets, and we queried for mention of specific user accounts that have a critical role in the pandemic response (including an extensive list of NZ MPs, health authorities, and science communicators). We collected data geotagged as originating from Aotearoa New Zealand. Where publicly available, narratives were reviewed via other social media platforms including Facebook, Instagram, YouTube and independent websites/blogs. Using FBK CoMuNe lab’s categorisations as a starting point, we merged publicly available lists of news sources, both social and mainstream, identified as unreliable, and we localized that list considering social and mainstream news outlets relevant to Aotearoa New Zealand. We monitored the prevalence of the word “conspiracy” (and “misinformation”, “disinformation”, “debunking”) querying the Global Data on Events, Location and Tone database (GDELT). GDELT is an open-source global database of society, supported by Google’s Jigsaw tool
This is quite a detailed and helpful description of the methodology and a reading of the references (which I have omitted) helps to flesh out some of the detail. It may well be that the methodology described is assisted by the diversity of contributors to the paper.
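The querying described above — a term list with variants, matched as hashtags or plain words, restricted to content geotagged as originating from New Zealand — can be sketched in a few lines. To be clear, the helper names and record shape below are my own illustrative assumptions, not the Project's code, which relied on rtweet and Twint against the live Twitter API:

```python
import re

# Search terms listed in the Evaluating the Infodemic paper; the variant
# matching and the record fields below are illustrative assumptions.
TERMS = ["covid", "coronavirus", "virus", "lockdown"]

def matches_terms(text: str, terms=TERMS) -> bool:
    """True if the text mentions any term, as a hashtag or a plain word
    (including simple derivatives such as 'covid19' or '#lockdowns')."""
    lowered = text.lower()
    return any(
        re.search(rf"#?{re.escape(t)}\w*", lowered) is not None
        for t in terms
    )

def collect(tweets):
    """Keep tweets that match the term list and are geotagged to NZ."""
    return [
        t for t in tweets
        if t.get("geo") == "NZ" and matches_terms(t["text"])
    ]

sample = [
    {"text": "Day 3 of #lockdown in Wellington", "geo": "NZ"},
    {"text": "Nice weather today", "geo": "NZ"},
    {"text": "#covid19 numbers rising", "geo": "AU"},
]
print(collect(sample))  # only the first record survives both filters
```

A sketch like this is easily duplicated and audited, which is precisely the virtue of the methodology statement in this first paper.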
In Ms Kate Hannah’s paper “When worlds collide: addressing harm, hateful and violent extremism, and disinformation in Aotearoa New Zealand” no data-gathering methodology is described that could provide a foundation for assertions such as:
“Since the beginning of May 2021, we have observed a significant increase in anti-Māori racism, particularly within video-based and text-based data sources. Themes range from the quotidian conspiracy theory of pre-Māori Celtic settlement of Aotearoa, which attempts to displace Māori as indigenous people, to a number of versions of the ‘Great Replacement’ white supremacist narrative, which was prevalent in the language and imagery of the Christchurch attacker. In the specific anti-Māori version of this discourse, we observe frames of Māori (sometimes herein framed as ‘iwi’) domination, ‘separatism’, and ‘apartheid’. These artefacts have included a highly objectionable video which called for a ‘genocide’ against Māori, targeted specific Māori individuals including Members of Parliament, and described access to firearms. Similar patterns are present in our observation of digital and physical discourses surrounding a range of narratives related to trans human rights; the current post-Select Committee consideration of proposed updates to the Births, Deaths, Marriages and Relationship Registration Bill; and discourses surrounding, particularly, transwomen and sport, predominate, and result in hateful and violent targeting of transwomen.”
Dr Hattotuwa’s paper “Re-imagining responses to extremism: The importance of context, culture and community” suffers from a similar lack. He states:
Data can help show us what’s going on, but not unlike Rorschach blots, resulting visualisations only make sense when read in specific contexts. Words like online extremism and digital world tend to project violence as predominantly determined by digital content. The telos of this gaze - which has served democracies well but is no longer fit for purpose - is to see legislative instruments, laws, the codification of regulations and punitive measures as adequate, desirable or definitive responses for disinformation’s Hydra-headed entrenchment, expanding at pace. Informed by lived experience, activism, and research, I study online data in situ, seeing digital interactions as inextricably entwined with local cultures, histories, communities, media ecologies, political cultures, anxieties and aspirations.
He does not describe how it is that he studies online data in situ, the methods employed, the sites examined, or the number of posts studied, nor does he identify the various themes arising within the dataset acquired. His paper, whilst full of innuendo and suggestion along with some highly subjective commentary, tells us nothing about the validity of the processes he employed to reach his conclusions.
The first full paper written by members of the Disinformation Project is entitled
“Mis- and disinformation in Aotearoa New Zealand from 17 August to 5 November 2021”.
In the introduction it is stated:
“Since February 2020 a small interdisciplinary team, The Disinformation Project, has been observing and analysing open source publicly available data related to Covid-19 mis- and disinformation on social media, mainstream media, and in physical and other digital forms of information and knowledge dissemination. Our project is part of the Aotearoa New Zealand National Centre of Research Excellence for Complexity, Te Pūnaha Matatini. In our work, The Disinformation Project has developed a novel mixed methods approach which combines a range of standard open-source quantitative reporting from social media, media platforms or sources with a rich text and artefact-based narrative analysis of longform qualitative data. From August 2020, our work included the study of mis- and disinformation ecosystems in Aotearoa, including the seed and spread of ‘dangerous speech’, hateful expression, and criminal behaviour. We focus on effects and causes here, but study the global trends, themes, narratives, and actors who influence online harms in Aotearoa.”
The “novel mixed method” is described as follows:
“Our novel approach embraces quantitative measures based on the volume, vectors and velocity of inaccurate content, amplification of mis- and disinformation by groups and individuals, tracking of narratives across online and offline contexts and key distribution signatures. Furthermore, and significantly, harmful content is also qualitatively analysed through gendered, country-specific, and other contextual frames. The reporting is presented in ways which are immediately usable for decision-makers, alongside media commentary on the harms that mis- and disinformation and ‘dangerous speech’ present to social cohesion, freedom of expression, inclusion, and safety.”
Beyond this rather opaque and jargon-filled description there is no detail of the precise methodology employed, the volume of posts, or their content – indeed there is little to suggest the themes of the content or the dates on which different types of dialogue appeared. This absence of detail makes any sort of analysis difficult and, as may be expected, refutation difficult.
The paper goes on to make some very general comments about what it calls the disinformation ecology and once again sets out a highly generalized description of the data without any detail:
The Disinformation Project observes a large number of publicly available groups, pages, and accounts within Aotearoa’s disinformation ecology. The platforms we observe include, but are not limited to: Telegram, Facebook Pages, Facebook Groups, Facebook accounts, Instagram, Twitter and any sign-posted, off-platform content harbours, like the .nz top-level domain, other websites and platforms like Rumble, Odysee, Gab, and Gettr. Based on the grounded, daily analysis of very large volumes of data, The Disinformation Project reports on emergent trends, themes and signals within a disinformation landscape that is sophisticated, motivated, adaptive, resilient, increasingly violent and significantly volatile…..
The volume of content studied by The Disinformation Project since 17 August, across all platforms, is significant and cumulatively in the hundreds of millions of data points. Quantitative analysis around the mis- and disinformation volume (amount of content), vectors (platforms and apps content is produced and shared on) and velocity (speed at which content is produced) since mid-August, under the Delta Level 4 lockdowns, is unprecedented. We note that it is by order of magnitude more than the content seed and spread over 2020, and even in the first half of 2021.
There is no detail as to the way in which this data is gathered. The most favourable interpretation is that the methodology employed in the Evaluating the Infodemic paper is identical but no clue is given to support that hypothesis.
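For context, the “volume, vectors and velocity” measures the Project names are straightforward to compute once a dataset exists, which is what makes the absence of collection detail conspicuous. A minimal sketch, with all record fields invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical post records; the field names are illustrative assumptions,
# not the Project's actual data schema.
posts = [
    {"platform": "Telegram", "time": "2021-08-18T09:00"},
    {"platform": "Telegram", "time": "2021-08-18T09:20"},
    {"platform": "Facebook", "time": "2021-08-18T11:00"},
]

volume = len(posts)                              # amount of content observed
vectors = Counter(p["platform"] for p in posts)  # platforms carrying it

times = sorted(datetime.fromisoformat(p["time"]) for p in posts)
hours = (times[-1] - times[0]).total_seconds() / 3600
velocity = volume / hours if hours else float("inf")  # posts per hour

print(volume, dict(vectors), round(velocity, 2))
```

The point is not that these measures are hard; it is that without knowing how `posts` was assembled, no reader can check the claim of “hundreds of millions of data points” or reproduce the comparison with 2020.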
The summary paper which accompanies Mis- and Disinformation is vague as to methodology. In answer to the self-posed question “How do you study this?” a very generalized – dare I say glib – answer is given:
“We look at a number of public facing pages and groups across a number of platforms on the internet — including Facebook, Instagram, Twitter, and Telegram. Every day we look at posts and comments. This helps us to understand what is being said and the language used.”
The paper entitled The Murmuration of Information Disorders takes us no further down the road to determining or understanding the methodologies employed. Once again the statement is made that the information is gathered from publicly available websites and social media platforms. The definitions settled by the Project provide what they call the framing tools through which they code and analyse material – provenance, propagation, engagement, and potential offline impacts – but no further detail is given.
Ms Kate Hannah’s paper The Common Good or the Tragedy of the Commons? Social cohesion, trust, and the impact of misinformation suffers from a lack of any real description of data gathering sources and analysis methodologies. This is the paper that contains the interesting “colour burst” graphic but without any detail at all about the way in which data was gathered and analysed. Reference is made to the somewhat opaque metaphor of “circadian rhythms” of content development but there is no expansion of this. I would have thought that a more rigorous approach would have been in order for a Royal Society of New Zealand presentation.
In Ms Hannah’s speech Eroded information ecologies: Social cohesion, trust, and the impact of misinformation there is again a lack of detail about data-gathering methodologies. I acknowledge that it is unlikely that such material would comprise part of a delivered speech, but I would have thought that in a published piece some reference to methodologies, or to the building and analysis of data sets, could have been incorporated.
The penultimate paper available – Dangerous speech, misogyny, and democracy: A review of the impacts of dangerous speech since the end of the Parliament Protest – has no detail of research methodologies but dives straight into a discussion and analysis of content. There is a brief reference to data gathering which states as follows:
Starting late September 2021, The Disinformation Project now studies more than 165 public Telegram channels daily, searching for the usage of keywords and phrases in posts or commentary. This process cannot be automated, is time-consuming, and can be distressing, given the volume of and violence contained in the content studied, including but not limited to memetic, GIF, video and audio material framed by, or featuring these keywords. Data collation was conducted daily from October 2021- 2 March 2022; reverting to every week day since, with offline context driven analysis conducted over weekends since the end of the parliamentary protest.
As I have noted in Part 2 of this commentary, data is examined using the Mark 1 Human Eyeball, given that the analytical process cannot be automated. As I suggested in Part 2, the use of filtering by keywords and the like could automate the process. This is highlighted later in the paper when there is a description and identification of keywords. Keyword searching, however, is discounted on the ground that disinformation dialogues use derivatives, proximate terms, synonyms, other slang, and more common expletives, so that keyword analysis significantly and consistently under-represents the actual presence and propagation of dangerous speech, even within domestic anti-vaccine Telegram ecologies alone. Beyond these assertions, however, this seems to be an explanation for Disinformation researchers having to subject themselves to the trauma of reading each and every post.
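The under-representation claim is, in fairness, easy to demonstrate: an exact keyword match misses derivative forms and slang that an expanded pattern (or a human reader) would catch. A minimal sketch, with the posts and word lists invented for illustration:

```python
import re

# Invented example posts; the word lists are illustrative assumptions only.
posts = [
    "they are cockroaches",        # derivative form of the keyword
    "cockroach behaviour again",   # exact keyword
    "typical roach move",          # slang shortening
]

exact = re.compile(r"\bcockroach\b")
expanded = re.compile(r"\b(?:cockroach(?:es)?|roach)\b")

exact_hits = sum(bool(exact.search(p)) for p in posts)
expanded_hits = sum(bool(expanded.search(p)) for p in posts)

print(exact_hits, expanded_hits)  # 1 vs 3: exact matching under-counts
```

But the same sketch also shows that the gap can be narrowed by expanding the pattern list, which is precisely the kind of partial automation the Project says is impossible.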
One example of analysis undertaken was to detail a non-definitive list of instances of the word ‘cockroach’ from November 2021 to April 2022 appearing in the domestic Telegram ecologies studied. This hearkens back to the use of “cockroach” in the lead-up to the Rwanda genocide, but limiting the term to that context overlooks the fact that “cockroach” is a worldwide term of dislike or abuse: Spanish speakers, for example, use the term cucaracha to describe a person or a member of a group regarded as undesirable and rapidly procreating.
The Dangerous Speech paper differs from its predecessors in that it – for the first time in the papers put out by the Project – identifies examples of the dialogue and statements under examination. However, the focus of the paper is not on disinformation as such but, as the title suggests, on what the Project terms “dangerous speech”, a category in which it includes “dehumanizing” speech together with a lengthy discursive discussion of “misogynistic” speech.
Towards the end of the paper it becomes clear that the Project has seen its remit expand considerably from COVID-19 mis- and disinformation to a much wider consideration of what it terms “information disorders within ecosystems”. The Project notes that “outside the analytical lenses of national security, a focus on human security and social cohesion reveals the threat this normalised misogyny and misogynoir represent.”
The paper closes, as I observed in Part 2, with a list of recommendations, but apart from a description of what the Project deems to be objectionable speech there is no demonstrable need or basis for these recommendations. Once again there is an absence of demonstrable methodology in developing the argument which leads to these conclusions.
To close this section: the evidence establishes that initially, when the Project operated within a wider framework, a reasonably clear methodology was in place. In later papers the research and data-gathering methodologies become less clear, more opaque, or are absent altogether. The difficulty this poses is that it is difficult if not impossible to trace the research path back to its beginning, check the methodology, and submit the data to a separate form of analysis. Perhaps this explains why none of the Project’s papers have been peer reviewed.
The other difficulty is that in the absence of a clear methodology the Disinformation Project is asking readers and possibly policy makers to “trust us – we know what we are doing.” Evaluating the validity of assertions necessarily means that the evaluator approaches the paper with a healthy degree of skepticism. Nothing should be taken for granted. And if the skeptic cannot be satisfied, the validity of the assertions must necessarily remain debatable.
But methodology is not the only difficulty that a commentator on the Disinformation Project faces. A reading of the papers and the pronouncements of members of the Project reveals problems with their approach to the subject matter which suggest that assertions by the Project should not be unreservedly accepted.
Issues and Themes
The Analytical Style
There is a particular style of presentation and analysis that runs through all the Disinformation Project papers, although it is not so pronounced in the first one which may be because of the diversity of authors. The style becomes clear once we examine those papers that are written by individual members of the Project or by the three principal members of the Project, writing as a group.
A clue to understanding the analytical style comes from what I have described as Kate Hannah’s “academic credo” to which I referred in Part 1 of this study. There is a particular paragraph which, when examined, gives us a penetrating insight into the Project’s analytical style.
Ms Hannah said
When l use theoretical approaches to understand the world, it is my graduate class in theory, taught across the faculty, in which we explored the critical theoretical advances of the twentieth and early twenty-first century; where I first really understood Marxism, feminism, postcolonial theory. These are tools I use everyday in my work, and in the construction of my identity.
Ms Hannah is saying that she uses a critical theory approach in her work. Critical theory is a philosophical and sociological approach to understanding power and inequality in society. It examines, evaluates and critiques binary power dynamics in society and takes a Marxist perspective.
The theory aims to identify, challenge and change what it deems to be oppressive power structures in society. It holds that social hierarchies do not occur naturally but are created and maintained through oppression and domination. The theory relies on identifying a conflict and imbalance between the holders of power – the oppressors – and the victims of power – the oppressed. Marx identified the imbalance as the oppressive aristocratic and bourgeois oppressors and an oppressed proletariat.
Critical theory’s core focus of inquiry is power and how it produces social inequality. It holds that power is unfairly distributed and is wielded by the powerful to maintain their power while oppressing those who are marginalised. Feminism has gone through a number of stages and in the 1980’s critical theory and its methods were used to examine the systematic oppression of women in society.
Critical theory questions dominant cultural narratives by promoting marginalized voices and highlighting their oppression. Ms Hannah’s reference to post-colonial theory exemplifies this. Post-colonialists examine the lasting influences that colonialism has had on societies that were once colonized by another country, identifying the colonists as oppressors and the colonized as the oppressed. Finally, the critical theorist believes that power structures need to be upended for justice and equality to be achieved.
A graphic example of Ms Hannah’s use of critical theory and the jargon associated with it may be seen in the June 2021 paper When worlds collide: addressing harm, hateful and violent extremism, and disinformation in Aotearoa New Zealand delivered to the hui on Countering Terrorism and Violent Extremism. Despite her disavowal of the use of jargon, the expressions used throughout the paper point to a critical theory approach.
She starts by referring to the “imperial projects” and places reliance upon the Doctrine of Discovery – a highly contestable and indeed invalid conspiracy theory that nevertheless is relied upon by those arguing from a post-colonial or cultural imperialism perspective. She then goes on to suggest that the structures underlying the imperial project are an aspect of life online.
These underlying and often invisiblised structures – imperialism, colonisation, white supremacism, misogyny, Islamophobia, homophobia, antisemitism – also underpin the human and technical mechanisms of life online. From the founding conceits of social media platforms to objectify and humiliate women to the biases of engineers embedded in algorithms, the digital world reflects the structural and systemic violence towards ‘the other’ which forms the basis of the physical world we inhabit.
The online environment is an extension of the conflicts arising from imperialism and embodies many of its evils. Thus the conflict becomes immediately clear. The online platforms and those who use them for perpetrating what Ms Hannah describes as “the invisiblised structures” are the oppressors. The targets of those structures are the oppressed. In that one paragraph we see elements of critical race theory and critical feminist theory.
From this foundation Ms Hannah develops an argument in the paper that morphs these products of the imperial project into online harm, hateful or violent extremism and disinformation which, she claims, are issues shaped by imperialism. She then identifies the oppressed – the victims of online harm -
Māori, Pasifika diaspora communities, the Muslim community, Chinese diaspora communities, refugee and migrant communities, LGBTQIA+ communities – in particular, trans communities – and peoples living with the experience of disabilities.
The paper then goes on to discuss in some detail some of the concerns that the Disinformation Project has with discourse online.
Much of the material that is contained in Ms Hannah’s paper is repeated in one form or another in other papers from the Disinformation Project and I do not intend to analyse the critical theory that appears in each of them. Effectively Ms Hannah’s paper is helpful for it is an egregious example of the critical theory approach that underlies her work with the Disinformation Project.
I want to make it clear that Ms Hannah is entitled to her beliefs and is entitled to express her point of view from a critical theory perspective. What her audience needs to understand is what lies beneath the words that she is articulating and what they really mean. In this way they can properly critique what it is that she and the Disinformation Project are publishing and attribute such weight to it as they think fit. But it is clear that Ms Hannah’s approach cannot be viewed uncritically.
Critical theory has a number of problems. It sees power purely in binary terms: an oppressor and an oppressed. It has a very narrow focus and lacks nuance and context. In addition it has a clear political goal – obvious both from the Worlds Collide paper and from the conclusion to the final paper available on the Disinformation Project website, Dangerous speech, misogyny, and democracy: A review of the impacts of dangerous speech since the end of the Parliament Protest – and thus may be susceptible to self-serving bias. It concentrates on finding and uprooting power structures, and thus enters into its analysis in a highly politicised way rather than taking an objective approach.
This leads into another difficulty that lies behind much of what the Project has to say: the issue of subjectivity.
Subjectivity
In the Worlds Collide paper Ms Hannah acknowledges that her approach is other than objective. She says:
As a researcher, my emotions form an aspect of my ability to ethically engage with the material I study, and to ethically engage with the communities who are most effected by the milieu of that study – the internet. This understanding of emotionality highlights “the ethical obligations of our role as witnesses and storytellers…implicated in the production of meaning through our witnessing, through our storytelling, through the political engagements of our research as it goes into the world”
Thus she sees herself not as an objective observer but as one who, because of her emotional engagement with the material, becomes part of the story. As a result any hope for dispassionate analysis and objective assessment is lost.
Another example of subjectivity may be seen in Dr Hattotuwa’s paper, also presented at the Hui on Countering Terrorism and Violent Extremism – Re-imagining responses to extremism: The importance of context, culture and community.
The title itself indicates that the responses to extremism are inadequate and need to change – the political outcome of a critical theory approach to a problem.
But the title itself also indicates a high level of subjectivity: imagination is a highly subjective process, and an internal one at that. Although the word may have a dramatic impact, it nevertheless carries with it implications of internalising a problem, rather than objectively identifying the problem, explaining why current structures are inadequate and how they should be changed, while balancing the various interests that any competent policy analysis requires.
Like Ms Hannah, Dr Hattotuwa writes himself into the narrative. No dispassionate observer, he.
He says
The perspectives in this policy brief are informed by two inter-related drivers - one, the lived experience of negotiating violent conflict in Sri Lanka since 2002, including responding to online manifestations of offline violence for over a decade…..
Coming from, and calling home a country that is, in every imaginable way and every day, profoundly more violent than Aotearoa, New Zealand in most touch points for citizens, and especially those from minority communities, I viscerally appreciate the symbolic invocations and implications of statements by political entrepreneurs or their proxies….
This perspective, congruent with my own experience and research including representations of violence and prosocial responses on social media in Sri Lanka and Aotearoa, New Zealand, turns on its head current approaches to countering extremism, largely based on enhanced or increased regulation, legal and legislative means.
As I earlier observed, imagining or re-imagining is an internal and subjective process. Thus the title of the paper betrays the fact that its approach is not objective but highly subjective. The structures that he wishes to reimagine are existing regulatory structures that recognize a balancing of a number of competing interests and result in what is largely a compromise solution. By focusing on a problem and advancing the narrow solution proposed by himself and Ms Hannah, Dr Hattotuwa overlooks the intersection of societal interests represented by existing regulatory structures.
If we consider the papers that have been made available by the Disinformation Project, the emotional engagement with the subject matter and the data becomes clear; as the papers progress they become less and less objective – if indeed they ever were – and begin to read as polemics.
The emotional engagement, however, is not limited to the papers. Ms Hannah and Dr Hattotuwa made their positions clear when interviewed for the Fire and Fury and Web of Chaos documentaries.
It is clear from the papers and from the methodology discussed that Dr Hattotuwa and Ms Hannah immerse themselves in the vast amount of what comprises misinformation, disinformation and radical extremism online.
Dr Hattotuwa subscribes to 130 Telegram channels and groups. He concedes he does not read everything that comes across his screen. Because of the way he organizes the information, he claims that he gets an insight into the mindset of the people who frequent the channels.
Dr Hattotuwa discussed what he calls toxic information and commentary, including material directed at the Prime Minister. What was extraordinary was the suggestion that this toxic informational landscape was being used by 350,000 New Zealanders – all grooming and harvesting. Dr Hattotuwa emphasizes: “It is here. It is amongst you” (“Web of Chaos” at 29.30). No evidence is offered to support either the numbers or the assertion.
Ms Hannah expressed concerns about death threats that she has received and records the ritualistic washing of hands she undertakes before she examines archival material – a form of symbolic disengagement from reading unpleasant material. She does the same when investigating information on the computer.
Dr Hattotuwa describes how he takes two showers a day to symbolically wash away the detritus of the online material he has been viewing. These actions on the part of two individuals who are meant to be carrying out dispassionate and objective research are interesting, if only for the level of subjectivity they introduce.
A further example from Dr Hattotuwa may be found in the comments that he made following the visit of Kellie-Jay Keen-Minshull (Posie Parker) in early April 2023. In commenting on the content he noted that its extremity was more characteristic of far-right, neo-fascist and neo-Nazi groups, and that the fact it was now being taken up by groups that flourished because of Covid measures was “really worrying”.
He said the vitriol directed at the trans community could be described as "genocidal".
This is extreme hyperbole that I am sure Dr Hattotuwa in retrospect regrets, but it provides an example of the way in which emotional engagement with a topic may lead to exaggerated statements.
Generalisation, Unsupported Assertions, Emotive Language and the importation of feeling
Ms Hannah and Dr Hattotuwa expressed their views in the “Fire and Fury” documentary as well as the “Web of Chaos” documentary. They are entitled to express their views. My suggestion is that those views should be approached with caution – indeed this is a conclusion that I have reached while preparing and researching this study.
Although they may be able to point to evidence of what they describe as mis/disinformation, the way in which they interpret that evidence gives me some cause for concern.
Certainly they are neither dispassionate nor objective about their topic. This is evidenced by the reactions that they have to the content of the material that they view. They clearly are responding subjectively to it. They make value judgements rather than empirical or descriptive ones.
For example, in her discussion about the connection between white nationalism and the slide towards extremism Ms Hannah said that an identifier of the groups of which she was critical involved the “advocacy of rights to things like free speech.”
I trust Ms Hannah does not stand by that generalization. The implication is clear. If one is an advocate of rights such as free speech, one is a right-wing extremist, supporting white nationalism or white supremacy.
That conclusion cannot be supported by the facts. Those who advocate liberty are not extremists. Those who advocate freedom of expression are not far right. For example, an examination of the Council of the Free Speech Union reveals some commentators who occupy a position on the Left of the political spectrum.
Ms Hannah’s sweeping generalisation does neither her argument nor her credibility any good. Dr Hattotuwa’s unsupported assertion that 350,000 subscribe to the toxic informational network does little for dispassionate analysis or objectivity.
Indeed, examples such as this cause one to examine with a more critical lens the assertions and the validity of material that emanates from the Disinformation Project.
In Part 2 of this study I discussed each of the papers put out by the Disinformation Project. By way of comment on some of them I identified examples of generalisation, unsupported assertion and emotive commentary. I do not intend to repeat them here. I do observe, firstly, that such elements of style have little place in academic commentary and, secondly, that the repeated incidence of this style and approach must once again be placed in the balance in determining what weight should be accorded to material from the Disinformation Project.
Poor research
In my commentary in Part 2 on some of the papers and in the discussion on methodology in this part I have pointed out what I consider to be deficiencies in the research carried out by the Disinformation Project.
I have no doubt that they have immersed themselves in a large quantity of online material. In some cases platforms such as Telegram have been identified. Beyond that, however, there is no information that would allow an independent person to check the data.
The first paper contained a satisfactory description of the approach that was adopted to gathering and analysing the data, but that level of detail is sadly absent from every subsequent paper.
From time to time there are references to literature but apart from the first paper no bibliography is provided nor does there appear to be a proper analysis of contending papers or other literature on the topics covered. In many cases unsupported assertions seem to be the order of the day.
Another difficulty – and I shall address this in the next section – is the lack of identification of actors. In Fear – New Zealand’s Hostile Underworld of Extremists by Byron C Clark and Histories of Hate edited by Matthew Cunningham, Marinus La Rooij and Paul Spoonley the “bad actors” are identified. Not so with the Disinformation Project.
And this leads to another problem – there is no discussion of why certain assertions amount to misinformation or disinformation. Rather, the comments seem to be categorized or distributed among headings such as conspiracy theories, misogyny, Islamophobia and the like. Apart from the “cockroach” discussion (which takes us nowhere) there are no concrete examples of a statement and of why it is misinformation or disinformation – unless, in the latter case, the Project is nervous about calling a person a liar.
Thus we are faced with an unidentified problem. We do not in fact know for certain what statements are classified by the Project as misinformation, what amounts to disinformation and what constitutes mal-information. And this goes to the heart of their research and methodology.
Propagation of inaccurate data
Since it was set up the Disinformation Project has been the “go to” organization for sound bites or commentary on aspects of nefarious on-line activity. I have given examples of their engagement with two television documentaries above.
But there is another problem: mainstream news media often uncritically accept what the Disinformation Project tells them and report it accordingly, without apparently checking or vetting what they have been told. In one case assertions were made which revealed not only problems in reporting information sourced from the Project but, even more concerning, issues with the Project’s own methodologies.
David Fisher, in his article on Chantelle Baker (NZ Herald 9 April 2023), makes the following observation:
Chantelle Baker was one of the “Disinformation Dozen” identified by The Disinformation Project, an academic research group that tracks false and misleading claims.
Those dozen alt-news outlets produced 73 per cent of false or misleading New Zealand-based content found on Facebook during the 23-day occupation. On some days, alt-news views equalled or exceeded the views on reality-based media.
Fisher is therefore reporting that the Disinformation Project identified a “Disinformation Dozen” of which Chantelle Baker was one, and that these twelve people were responsible for 73% of allegedly false or misleading content found on Facebook.
Tracking this source down has proven difficult. However, the Disinformation Project has published a paper entitled “The murmuration of information disorders: Aotearoa New Zealand’s mis- and disinformation ecologies and the Parliament Protest.”
In that paper the authors record Facebook activity over the period of the protest and developed a graph of Facebook interactions. This graph tracked “around a cluster of mis- and disinformation Pages studied, twelve protest figureheads, and mainstream media from 6 February to 3 March. Aotearoa New Zealand’s ‘misinformation dozen’ on Facebook were responsible for a considerable proportion of posts and engagement during the Parliament Protest.”
The paper does not identify these twelve individuals. Furthermore they are referred to not as the “Disinformation Dozen” but rather as a “misinformation dozen”. Thus Fisher is in error in citing the Disinformation Project as a source for the “Disinformation Dozen” – in fact the Disinformation Project refers only to twelve shadowy figures who were involved in the propagation of misinformation via Facebook.
The error was perpetuated by Newshub in a headline “NZ’s ‘disinformation dozen’ drove three-quarters of fake news chatter on final day of Parliament protest” 18 May 2022 https://www.newshub.co.nz/home/new-zealand/2022/05/nz-s-disinformation-dozen-drove-three-quarters-of-fake-news-chatter-on-final-day-of-parliament-protest.html (Accessed 12 April 2023)
A footnote reference for the “twelve protest figureheads” takes the reader to an article by Toby Manhire entitled “Figureheads and Factions: The Key People at the Parliament Occupation” published in The Spinoff. The link to Manhire’s article provided in the Disinformation Project paper – https://thespinoff.co.nz/politics/18-02-2022/figureheads-and-factions-the-key-people-at-parliamentoccupation – is dead as a result of the absence of a hyphen.
The correct online citation is https://thespinoff.co.nz/politics/18-02-2022/figureheads-and-factions-the-key-people-at-parliament-occupation
In that article Manhire identifies the following actors:
1. Kelvyn Alp and Hannah Spierer of Counterspin
2. Brian Tamaki and the Freedom and Rights Coalition and Destiny Church
3. Claire Deeks of Voices for Freedom
4. Chantelle Baker
5. Leighton Baker
6. Sue Grey of Outdoors and Freedom Movement
7. Billy Te Kahika of Freedom Alliance NZ
8. New Zealand Doctors Speaking Out with Science
9. Carlene Hereora and the Sovereign Hikoi of Truth
10. Damian de Ment – a QAnon Conspiracy Theorist
11. Karen Brewer described as a volatile Northland-based Australian
12. Brett Powers – a failed local body politician
13. Liz Gunn – a former broadcaster
14. Matt King – former MP for Northland
15. Action Zealandia – a far-right white supremacist group
16. Philip Arps – a convicted white supremacist
Thus we can see that there are 16 people and associated groups and it is difficult to determine who of the 16 comprise the “misinformation dozen” referred to by the Disinformation Project. Their mere citation of Manhire’s article is insufficient.
Nowhere does Manhire identify a “misinformation dozen” and I suggest that The Disinformation Project could well have difficulty in ascribing to these actors 73% of the Facebook posts of which they complain.
A more rigorous approach would have been to identify the twelve protest figureheads but, apart from the reference to Manhire’s article, there is no information available which would allow the reader to verify the claims by the authors.
The problem becomes further confounded when David Fisher misattributes the term “Disinformation Dozen” to the twelve figureheads referred to in the “Murmuration” article. He should be well aware of the difference between misinformation and disinformation, especially given that his misattribution is misinformation itself.
Or perhaps Fisher is conflating the twelve protest figureheads with the Disinformation Dozen identified by the Center for Countering Digital Hate in a paper entitled “The Disinformation Dozen”, published 24 March 2021. The “Disinformation Dozen” identified in that paper are:
1. Joseph Mercola
2. Robert F. Kennedy, Jr.
3. Ty and Charlene Bollinger
4. Sherri Tenpenny
5. Rizza Islam
6. Rashid Buttar
7. Erin Elizabeth
8. Sayer Ji
9. Kelly Brogan
10. Christiane Northrup
11. Ben Tapper
12. Kevin Jenkins
The paper states that an analysis of over 812,000 posts extracted from Facebook and Twitter between 1 February and 16 March 2021 shows that 65% of anti-vaccine content is attributable to the Disinformation Dozen. Over a two-month period, analysis of anti-vaccine content posted to Facebook over 689,000 times shows that up to 73% of that content originated with members of the Disinformation Dozen of leading online anti-vaxxers.
In what must be a most remarkable coincidence, at page 9 of the “Murmuration” paper the following statistic appears:
“On 2 March alone, 73% of interactions in the mis- and disinformation ecology were generated by just a dozen accounts.”
It is astonishing that exactly the same percentage of interactions in the mis- and disinformation space was generated by 12 “spreaders” over a two-month period in 2021 and then again on 2 March 2022.
Furthermore, the Center for Countering Digital Hate “The Disinformation Dozen” site does not appear in the “Murmuration” paper references or end notes.
Once again the coincidence of 12 spreaders of mis- and disinformation both in the USA and in New Zealand is quite remarkable.
What all of this establishes is that there are coincidences that the Disinformation Project needs to explain – is “Murmuration” the product of shoddy research and analysis which should have been picked up on peer review, or is it truly a most amazing accident of fate – a truly extraordinary coincidence?
And furthermore, a journalist doing an in-depth piece on an alleged conspiracy theorist like Chantelle Baker needs to be absolutely clear in his terms and sources. The Disinformation Project has not directly identified Chantelle Baker as a member of the Disinformation Dozen, nor has it referred to a group by that name in its paper. Reference to Ms Baker has been indirect only through the medium of Toby Manhire’s article.
Furthermore, given that the good journalistic practice of seeking comment from the subject of the article does not appear to have been followed, one must wonder whether there are other errors that may have crept into the piece. I would have thought that a journalist of Fisher’s experience and standing at least would have sought an interview with Ms Baker. The article is silent on whether he did or did not. If he did, and she turned him down, I am sure he would have said so. There is no evidence that he did seek an interview in which she participated. I can only conclude, by a process of elimination, that he did not.
This account demonstrates the need for a critical analysis of material that may come from the Disinformation Project.
Can we rely on the Disinformation Project?
The short answer to the question is a qualified one – not entirely.
The reason I say that is that the Project views its work through the rather binary lens of critical theory and in so doing overlooks the fact that there are other perceptions and interpretations of the information that is available online. It does no good to raise the fear of the destruction of social cohesion without recognizing that in a diverse society there is a diversity of opinion.
Often that diversity of opinion will challenge established thought or principles. We only have to look at history to see the challenges posed by Protestantism to the Catholic Church, by the Puritans to the Church of England and by the Dissenters to the Establishment in the England of the 1790s. There are other and more recent examples.
Contrary opinions and alternative interpretations of events or facts are part of the stuff of the chaos that is a democratic society and should be recognized as such. As Pilate said in “Jesus Christ Superstar” “And what is 'truth'? Is truth unchanging law? We both have truths. Are mine the same as yours?”
Thus when we consider the pronouncements of the Disinformation Project we must remember that they approach their topic from a particular viewpoint which may not be acceptable to all; that their assertions should be carefully examined for evidential support; that their methodology should be examined, if indeed it is described; that their data sets cannot be accepted without qualification in the absence of evidential support; that an effort must be made to separate the emotion from the argument and seek an objective path – which may be difficult; and that often their commentaries are insufficiently referenced.
Threats to the Quality of Public Discourse
There is no doubt that the Disinformation Project sees a greater role for the State in the control of information. This becomes clear from their early papers and is a theme that is carried through into the final paper on their website.
In his paper Reimagining responses to extremism: The importance of context culture and community Dr Hattotuwa states:
Official policies, laws and regulatory frameworks will never address the heterogenous assemblage of actors and platforms intent on undermining democracy, for two reasons. One, they have time on their side, and work towards intended outcomes years if not decades into the future using a combination of electoral, political, social and cultural means, over offline and online vectors. Two, the essential naïveté of social media companies, allowing till recently politicians to get away with inciting hate and violence results in, amongst other things, outdated and outmoded oversight, placing at risk communities who are often already marginalised, and have violence directed against them.
Thus he characterizes mis- and disinformation as a threat to democracy, but it is difficult to ascertain how he comes to this conclusion.
Ms Hannah is more explicit in her suggestions for dealing with what she and the Project perceive as the problem. In her When Worlds Collide paper she states:
The Report of the Royal Commission of Inquiry into the terrorist attack on Christchurch masjidain) requires us, the peoples and organisations and businesses and government of Aotearoa to ka mua, ka muri – walk backwards into the future. This starts with a reckoning with the fundamental and ongoing impacts of the imperial project, some elements of which are already underway. In practise, this will look like mediation of the digital world, including its structures, particularly by those targeted and blamed communities; moderation by communities and via co-created platform guidelines for online spaces and the infrastructures that underpin them; regulation co-developed with communities and responsible to communities; and, finally, classification or censorship, within fundamental principles of Te Tiriti o Waitangi and the Universal Declaration of Human Rights. With these varied tools, communities, civil society, the media, academia, the public service, industry, and Government all have roles to play, in connection with, and in partnership with, each other.
Perhaps the final two solutions are the ones that are consistent with the critical theory approach. She seeks to have content regulated, classified or censored. It is a mystery to me how this can be achieved within the fundamental principles of the Treaty.
I would have thought that a more fundamental methodology would be through the law. As to the UDHR, she may or may not be aware that those principles are incorporated into the Human Rights Act 1993, and that the New Zealand Bill of Rights Act 1990 affirms New Zealand’s commitment to the International Covenant on Civil and Political Rights.
The Project’s penultimate paper - Dangerous speech, misogyny, and democracy: A review of the impacts of dangerous speech since the end of the Parliament Protest – sets out six recommendations which I pointed out in Part 2 of this study.
I shall repeat them with comments
1. Immediate review of the electoral legislation candidate disclosure requirements ahead of the 2023 General Election.
2. Expedited review of the regulations for the Companies Register to address discoverability of disclosure requirements and redaction criteria for individuals at risk.
3. Establishment of collective work programme across Privacy Commission and Human Rights Commission on related published registers such as vehicle registration and the Electoral Roll to ensure balance of access to voting rights and protection of privacy.
The matters contained in Items 1–3 arise from the Project’s concerns about the involvement of contrarians in civil society and, more particularly, in local body and national elections. It is axiomatic that any citizen who fulfils the requirements for election to public office is entitled to stand. Their affiliation with a party or an ideology may accompany that.
The matters raised in Items 2 and 3 seem to relate to access to public records. As matters stand, access to vehicle registration information is difficult for anyone other than a registered owner. It is unclear whether the Project is seeking disclosure of this information. Likewise there is a public interest element in disclosing the names of those involved in companies registered with the Companies Office.
4. A full review of the existing legal and civil remedies, particularly the Harmful Digital Communications Act, Netsafe, and the absence of advocacy as a core deliverable. This review should look at civil and criminal harassment within the context of online, stranger-led hate and harassment.
The subtext to this proposal involves significant interference with the freedom of expression. A full review of existing legal and civil remedies (I think the Project means criminal and civil remedies, because civil remedies are in fact legal remedies; as written the phrase contains redundancies and reveals a lack of understanding of law and legal structures) must necessarily have wide implications for freedom of expression.
5. Systems-wide approach to the Content Regulatory Review, including the regulatory frameworks for Netsafe as lead agency for the HDCA and the Domain Name Commission’s regulatory framework for .nz CCLD.
A Content Regulatory Review dealing with media and online content regulation was last updated on the Department of Internal Affairs website on 27 June 2022. Options were to be put forward for public consultation on a proposed new framework. It appears that this project has stalled.
The Project suggests that the Content Regulatory Review include Netsafe as the Approved Agency (not the lead agency) under the Harmful Digital Communications Act. There is no justification for the sweeping inclusion of an NGO in a wider regulatory model if indeed that is what is proposed.
The Domain Name Commission, likewise, is a private organization associated with InternetNZ and responsible for the registration of Domain Names. How this would or could be included in a Content Regulatory Review is difficult to determine.
6. The establishment of a transparent, outside government entity to provide research, analysis and advice for communities, civil society organisations, agencies and independent crown authorities on information disorders and their impacts in Aotearoa New Zealand.
As I observed in Part 2, this recommendation seems to be a pitch for future involvement of the Disinformation Project at a higher level than before. The proposal in paragraph 6 would suggest the ultimate appointment of an Information Disorders Tsar, which would have serious implications for the freedom of expression, especially if that “Tsar” were to be the Disinformation Project or its members.
What does all this tell us? It certainly makes it clear that, as part of its development of a critical theory approach, political and legal change is a part of the Project’s advocacy. What is clear as well is that the Project wishes to control to a greater or lesser degree the flow and quality of information available to members of the public.
This has significant implications for freedom of expression. I raise this in the realisation that, in the mind of Ms Hannah, it makes me a right-wing extremist; but she is entitled to her opinion as I am to mine.
Freedom of Expression and Information Flows
The reasons that I have concerns for the approach of the Disinformation Project to freedom of expression and information flows are as follows:
1. Although they have defined misinformation, disinformation and malinformation, with the exception of the discussion of “cockroaches” in the Dangerous Speech paper no examples of statements falling within those definitions have been given.
2. The definitions, especially that of misinformation, are wide enough to include any contrarian opinion.
3. It would seem therefore that the Disinformation Project is adopting the approach of Humpty Dumpty in “Through the Looking Glass”
“When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’
’The question is,’ said Alice, ‘whether you can make words mean so many different things.’
’The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.”
That approach is uncertain, unclear and arbitrary.
4. The Disinformation Project, in all of its papers, has completely overlooked the provisions of section 14 of the New Zealand Bill of Rights Act 1990.
5. Section 14 guarantees the freedom to express information – and information is a very wide term indeed and of course can include opinion. I call this the “outward flow” of information.
6. Section 14 also guarantees the receipt of information – what I call the “inward flow” of information.
7. Section 14 thus contains two significant guarantees, and to interfere with one will interfere with the other. Balancing the restriction of these rights, and considering whether a restriction of speech would be a justifiable limitation in a free and democratic society, seems to have been overlooked by the Project.
8. The material so far advanced by the Disinformation Project – with all of its shortcomings – does not cross the threshold of a justifiable limitation on speech that would require its regulation or censorship.
Conclusion
Over the pandemic emergency, greater use developed of two terms – misinformation and disinformation. These became predominantly news media shorthand for any statements that departed from the received wisdom of the government.
Misinformation meant information that misled. Disinformation was false information that the disseminator intended to mislead – in other words, lies. The problem was, and still is, that those words lack certainty. It seems that they mean what the people using them want them to mean, and consequently they have taken on a pejorative aspect.
In June 2021 the Classification Office, headed by the then Chief Censor Mr David Shanks, released a paper entitled “The Edge of the Infodemic: Challenging Misinformation in Aotearoa”. It argued that misinformation/disinformation (neither term defined in the paper) was a problem, that it came primarily from Internet-based sources, and that when people rely on misinformation to make important decisions it can have a harmful impact on the health and safety of communities and can also affect us on a personal level, contributing to anxiety, anger, and mistrust.
It argued that we should be looking at solutions that work to increase access to good information; lower the volume of misinformation; improve resilience to misinformation; and build levels of trust and social cohesion that can serve as a counter to the more harmful effects.
That this document emerged from the Classification Office is something of a concern. The Classification Office is involved in the administration of the Films, Videos and Publications Classification Act 1993. That Act allows for censorship of films, videos, publications, and online content in certain limited and restricted circumstances.
The paper seemed to be part of a concerted effort on the part of the Classification Office to expand the scope of the censorship and information control it currently enjoys – another example of “mission creep”.
One of the issues that features in the paper is the importance of social cohesion. At first glance this concept is unremarkable. It suggests societal togetherness in the pursuit of common goals.
The problem is in what lies beneath the term. I would suggest that what it really suggests is conformity not so much in behaviour but in thought. The term implies collective agreement or acceptance of a particular narrative – in this case the sole truth that flows from the State.
Thus any expression of disagreement or dissent is seen not only as an affront to the “truth” propagated by the State but as an assault on, or an attempt to erode, the monolithic structure of “social cohesiveness” or the complacent conformity that the State requires.
Lest it be thought that I am focusing on a single example – “The Edge of the Infodemic” paper – at an Otago University conference about “Social Media and Democracy” in March 2021, Mr Shanks told the conference that the way we regulate media is not fit for the future.
“We can be better than this. I think there’s some very obvious moves that we can do here to make the current regulatory system and framework more coherent for a digital environment.” [1]
Before that, in October 2019 Mr Shanks claimed that an entirely new media regulator may be required.[2]
At the Otago University Conference were two representatives of the Disinformation Project – the Director, Ms Kate Hannah, and Dr Sanjana Hattotuwa.
The Disinformation Project at that time had been observing and analysing open source, publicly available data related to Covid-19 mis- and disinformation on social media, mainstream media, and in physical and other digital forms of information and knowledge dissemination.
From August 2020, the Project expanded its brief (yet more “mission creep”) beyond Covid-19 to consider mis- and disinformation ecosystems in New Zealand, including the seed and spread of ‘dangerous speech’, hateful expression, and criminal behaviour. The scope of the study involved looking at global trends, themes, narratives, and actors who influence online harms in New Zealand.
Although the Disinformation Project is not a State actor, its commentary and thrust are directed towards material that is considered harmful because it is contrary to the received wisdom that is a part of the Government message. In this way, perhaps unintentionally, the Disinformation Project becomes complicit in the Government’s “sole source of truth” narrative.
To further emphasise the role of the Disinformation Project, the focus seems to have shifted from mis/disinformation about Covid-19 issues into the wider political scene. Dr Hattotuwa of the Disinformation Project, in commenting on the role of Voices for Freedom, observed that the group skilfully avoids attempts to regulate mis- and disinformation and suggests that Voices for Freedom represents a threat to democracy.[3]
Clearly from this comment the Disinformation Project is suggesting that there should be some form of regulation of mis- or disinformation. In the meantime Stephen Judd of Fighting Against Conspiracy Theories Aotearoa (FACT), commenting upon contrarian candidates for local body elections, suggests:
“People who hold a set of beliefs about the legitimacy of our institutions, and who are conspiracy theorists and who hide that because they think it would harm their chances of being elected, aren’t operating in good faith.
“So, one of the best things we can do is provide more publicity and exposure to candidates because that ultimately is what leads the public to have a fair view of what they are about.”
Thus we have developing a number of strands that seem to be directed towards suppressing or marginalizing dissent or disagreement. Although the Disinformation Project casts a sinister shadow over the terms, and although the Classification Office may see misinformation and disinformation as having potential objectionable qualities, the reality is that every expression of disagreement or dissent, every expression of a contrary view or opinion, every expression of a challenge to the State message is a part of the normal discourse of society. Disagreement is a fundamental aspect of being human. We all have differing points of view, beliefs, values and standards. And it is part of the democratic tradition that we should be able to express those views.
Of course, associated with that is the fact that those who disagree with us must have the right to express that disagreement. And so the cacophony of debate and the exchange of points of view takes place.
It may be that some points of view are strongly contrarian. Some points of view may be wrong-headed or fly in the face of reason. But they have a right to be expressed and the speakers have a right to be heard in the same way that those to whom they are speaking have a right not to listen.
The problem is that from the State’s point of view, disagreement and dissent are being treated as inimical to the interests of the State. No longer can dissent be tolerated. It is seen as a weapon of opposition – which it frequently is – but so much so that such opposition is characterized as a war with the State.
In this study I have pointed out what I consider to be deficiencies and concerns with the approach of the Disinformation Project. I have no doubt that they would disagree with much that I have said. But unlike the Project, I do not seek to silence views with which I disagree. The Project is entitled to and should exercise its freedom of expression. But in doing so it would be preferable that it not try to restrict others from doing the same.