Introduction
Does the Harmful Digital Communications Act 2015 (HDCA) apply to “Deepfakes”? That is the question this article sets out to answer. It has been prompted by ACT MP Laura McClure’s proposed amendments to the Crimes Act and the HDCA: she has introduced the Deepfake Digital Harm and Exploitation Bill. The question is whether such proposed legislation is necessary.
I shall commence the inquiry by considering the relevant definitions in the HDCA along with its structure.
I shall then consider what a “Deepfake” is, whether a Deepfake can fit within the definitional scope of the Act, and whether the remedies provided in the HDCA would be available.
I shall then go on to consider the proposed provisions of the Deepfake Digital Harm and Exploitation Bill and assess whether, in light of my examination, those provisions are in fact necessary.
The Purpose of and Definitions in the HDCA
The Purpose of the HDCA
The purposes of the Act are set out in section 3. There are two. First, the Act is to deter, prevent, and mitigate harm caused to individuals by digital communications. Secondly, the Act is to provide victims of harmful digital communications with a quick and efficient means of redress.
The Definitions in the HDCA
The definitions in the HDCA are important in assessing its scope.
Let’s start with the definition of a digital communication contained in section 4 of the HDCA. That term means
“(a) any form of electronic communication; and
(b) includes any text message, writing, photograph, picture, recording, or other matter that is communicated electronically”
The essence of a digital communication is that it is a means of communication, and it must be by electronic means. Although the word “digital” might have been preferable to “electronic” in the definition, the distinction matters little: digital information can only be communicated by electronic means.
Electronic is defined in the Oxford English Dictionary as
“Of or pertaining to electronics; esp. of something operated by the methods, principles …”
and electronics is defined as
“That branch of physics and technology which is concerned with the study and application of phenomena associated with the movement of electrons in a vacuum, a gas, a semi-conductor, etc., as in thermionic valves, X-ray tubes …”
For the purposes of the HDCA there must be a communication – that is, the transfer of information, which may be in the form of text, writing, photograph, picture, or recording – using electronic principles.
Digital information is communicated electronically, as the definition states. Unless digital information is reduced to its basic representation of ones and zeroes and is printed out, the only way that it can be communicated is by electronic means. The reason for this is that digital information is – at its most fundamental level – positive and negative electrical impulses stored upon an appropriate medium.
Within the online context there is only one way of communicating digital information, and that is by electronic means – that is, by a device that is connected to the Internet or some other digital communications system such as a 4G or 5G mobile phone network.
Paragraph (a) allows for any form of electronic communication. The scope of that definition is not modified by paragraph (b), which gives exemplars of the type of content that may be communicated electronically. Although examples are given, the final words of paragraph (b) are very wide – “other matter that is communicated electronically”.
What may not be understood is that voice communications could be caught by the definition. In the days of analogue telephony they might have been excluded. However, with the end of the “copper wire” network and the use of digital technologies and Voice over IP (VoIP), verbal communications by phone or mobile device would fall within the definition, along with, of course, text messages and other electronically communicated content.
Any information that is communicated on the Internet or any platform – like Facebook – that is “bolted on” to the Internet backbone constitutes a digital communication and that, in my view, is sufficiently well-established to be the subject of judicial notice.
The focus at this stage need not be so much on the content as upon the means of communication. But lawyers have ever been more focussed upon the content layer than upon the transport layer.
In my opinion the scope of the definition of electronic communication along with the fact that it is a means of communication is much misunderstood.
How does the Act address the act of engaging in an electronic communication? That is dealt with by the definition of “post” in relation to a digital communication, which is defined as:
“(a) means to transfer, send, publish, disseminate, or otherwise communicate by means of a digital communication—
(i) any information, whether truthful or untruthful, about the victim; or
(ii) an intimate visual recording of an individual; and
(b) includes an attempt to do anything referred to in paragraph (a)”
This definition describes the way in which a digital communication may be transmitted or sent.
It also deals with the content that may be sent. The content may be any information. It matters not that the information may be truthful or untruthful.
The issue of truth occurs in other sections of the Act. It is clear that truth is not a defence to posting a harmful digital communication. The Act envisages that there may be occasions where a truthful digital communication may cause harm.
This conceptual approach creates an anomaly with defamation law if a person uses the HDCA to address reputational harm. Furthermore, truth, if expressed in a form other than a digital communication, will amount to a defence to an allegation of reputational harm. This highlights a distinction drawn by the Act between online speech and “real world” speech.
“Post” involves the communication of any information about another person who, as a result of the provisions of the Act, may become a victim.
“Victim” is defined in section 4, but only in relation to section 22 (which creates the offence of posting a digital communication that causes harm) or section 22A (which relates to the offence of posting an intimate visual recording).
The term “affected individual” is referred to in the civil enforcement regime.
A further important definition to be considered for this discussion is that of an “intimate visual recording”. This definition is identical to that which appears in the Crimes Act with one important exception.
The Crimes Act definition applies to non-consensual visual images – those that are recorded without the knowledge or consent of the person who is the subject of the recording. The definition in the Harmful Digital Communications Act is applicable to those recordings made with or without the consent of the subject.
The wording of the definition is as follows:
An intimate visual recording
(a) means a visual recording (for example, a photograph, videotape, or digital image) that is made in any medium using any device with or without the knowledge or consent of the individual who is the subject of the recording, and that is of—
(i) an individual who is in a place which, in the circumstances, would reasonably be expected to provide privacy, and the individual is—
(A) naked or has his or her genitals, pubic area, buttocks, or female breasts exposed, partially exposed, or clad solely in undergarments; or
(B) engaged in an intimate sexual activity; or
(C) engaged in showering, toileting, or other personal bodily activity that involves dressing or undressing; or
(ii) an individual’s naked or undergarment-clad genitals, pubic area, buttocks, or female breasts which is made—
(A) from beneath or under an individual’s clothing; or
(B) through an individual’s outer clothing in circumstances where it is unreasonable to do so; and
(b) includes an intimate visual recording that is made and transmitted in real time without retention or storage in—
(i) a physical form; or
(ii) an electronic form from which the recording is capable of being reproduced with or without the aid of any device or thing
Immediately there is a problem. A visual recording involves a real-world subject, because the recording of an event or a person in certain circumstances is envisaged. The recording must be visual, but the use of the word “recording”, which involves the creation of a record of a real-world event, is essential to the definition.
That conclusion is reinforced by the exemplars that are provided – a photograph, videotape, or digital image – coupled with the fact that the recording must be made of an individual who is the subject of the recording. Thus, an actual person in actual circumstances, rather than an AI-created image, is envisaged by the definition.
The final definition to be discussed is that of “harm”. For the purposes of the Act a digital communication must cause harm.
Harm is defined as serious emotional distress. Mere emotional distress is insufficient; the emotional distress must be serious. Furthermore, the definition is directed towards how a person “feels” about a digital communication.
The matters to be considered in determining whether or not a digital communication may cause harm are set out in section 22(5) in respect of the offence of causing harm by a digital communication and section 19(5) in respect of the type of order that may be made in the Civil Enforcement Regime.
The Structure of the HDCA
Communications Principles
The first important part of the Act is section 6, which contains ten communication principles. The principles provide a litmus test, defining what a digital communication should not be, and they are central to the Civil Enforcement Regime.
The Civil Enforcement Regime
The term “Civil Enforcement Regime” does not appear in the Act but was a term used by the Law Commission to describe a process whereby an individual affected by a harmful digital communication may avail himself or herself of civil remedies in the District Court.
Before the Court may consider an application, the matter must be referred to or have been considered by an Approved Agency. The Agency may resolve or attempt to resolve the matter. If it does not do so, then and only then may the Court go on to consider whether or not to make orders under section 19.
The Offence Sections
As has already been noted the Act creates a criminal offence of causing harm by a digital communication.
The Act also makes it an offence to fail to comply with an interim order of the Court made pursuant to section 18 or a final order made pursuant to section 19.
Having set out the purposes and the scope of the HDCA, I shall now turn to consider the nature of a “DeepFake”.
What is a “DeepFake”?
A deepfake is a form of synthetic media—typically video, audio, or images—created or manipulated using machine learning and artificial intelligence (AI), specifically deep learning algorithms, to convincingly mimic a person’s likeness, voice, or actions in a way that is difficult to distinguish from authentic content.
The term "deepfake" is a portmanteau word combining "deep learning" (a subset of AI involving neural networks with many layers) and "fake," reflecting the use of advanced machine learning to generate or alter media content.
Deepfakes can involve both the manipulation of existing media (e.g., replacing one person’s face with another in a video) and the creation of entirely synthetic content that never existed in reality.
How DeepFakes Work
Deepfakes are typically created using advanced deep learning frameworks, most notably Generative Adversarial Networks (GANs). What follows is a simplified breakdown of the process:
1. Data Collection: A significant dataset of images, videos, or audio related to the target subject is collected. The more diverse and comprehensive this data, the more realistic the final deepfake will be.
2. Training: Deep learning algorithms are trained on this collected data. This involves analyzing facial features, expressions, movements, and speech patterns to understand how the subject looks and behaves in various contexts.
3. Generative Adversarial Network (GAN): A GAN consists of two neural networks working in opposition:
· Generator: This network creates the fake content (e.g., an image of a person's face).
· Discriminator: This network attempts to distinguish between real content and the fake content produced by the generator.
These two networks engage in an iterative feedback loop. The generator continuously refines its output to trick the discriminator, while the discriminator gets better at detecting the fakes. This adversarial process drives the generator to produce increasingly realistic and difficult-to-detect synthetic media. A minimal code sketch of this adversarial loop appears after this list.
4. Refinement: The initial output from the GAN is often imperfect. An iterative improvement process follows, which might involve further training, manual adjustments, or using additional AI tools to enhance the realism, such as maintaining consistent lighting, articulating facial movements, and adapting to the target's expressions.
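To make the adversarial loop concrete, the following is a minimal, illustrative sketch in Python using the PyTorch library (the choice of framework, and every name and value in the sketch, is my own assumption for illustration; the process described above does not depend on any particular toolkit). Rather than faces, it trains a toy generator and discriminator on a simple one-dimensional data distribution; production Deepfake systems apply the same loop to high-resolution imagery and audio at vastly larger scale.

# Minimal GAN sketch: a generator learns to produce samples the
# discriminator cannot tell apart from "real" data.
import torch
import torch.nn as nn

torch.manual_seed(0)

LATENT_DIM = 8   # size of the random noise vector fed to the generator
BATCH = 64

# Generator: maps random noise to a fake "sample" (here, a single number).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_samples(n):
    # The "real" data: numbers drawn from a Gaussian centred on 4.0.
    return torch.randn(n, 1) * 0.5 + 4.0

for step in range(2000):
    # Step 1: train the discriminator to tell real samples from fakes.
    real = real_samples(BATCH)
    fake = generator(torch.randn(BATCH, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Step 2: train the generator to fool the discriminator.
    fake = generator(torch.randn(BATCH, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, the generator's output should cluster near 4.0 even
# though it never sees the real data directly - only the discriminator's
# verdicts on its attempts.
print(generator(torch.randn(5, LATENT_DIM)).detach().flatten())

The point that matters for the legal analysis which follows is visible in the sketch: the generator never copies its training data. It learns to produce new content that the discriminator cannot distinguish from the real thing, which is precisely why a Deepfake is a created image rather than a recording of a real-world event.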
Types of DeepFakes
Some examples of DeepFakes are as follows:
· Face-Swapped Videos: The most iconic form, where a subject's face is overlaid onto another person's body in motion.
· Lip-Syncing & Audio Overlays: Manipulating mouth movements to match synthetic or manipulated audio.
· Voice Deepfakes: Generating new audio that sounds like a specific person’s voice.
· Synthetic Identities: Creating entirely new, realistic-looking images of people who don’t exist.
Ethical and Legal Implications
The development of DeepFakes presents ethical and legal challenges. Some examples are:
· Deception and Misinformation: Deepfakes can spread false narratives, impersonate public figures, and defame individuals, eroding trust in media and democratic processes.
· Privacy Violations: Creating or altering content using an individual’s likeness without their consent raises serious privacy concerns and can lead to emotional distress and reputational damage, especially in cases of non-consensual explicit content.
· Corporate Fraud and Financial Crimes: Deepfakes can be used to impersonate executives for fraudulent financial transactions or to bypass biometric security measures.
· Intellectual Property Rights: The use of existing images, videos, or audio for training deepfake models can infringe on intellectual property rights.
· Legal Challenges: Existing legal frameworks, such as defamation, copyright, and privacy laws, often struggle to address the unique challenges posed by deepfakes. This has led to a push for new, specific AI legislation globally.
· Erosion of Trust: The widespread availability and increasing sophistication of deepfakes make it harder for individuals to distinguish between real and fabricated content, undermining societal trust in digital information and institutions.
Can DeepFakes Fall Within the Scope of the HDCA?
As an example, let us take the case of Laura McClure MP, who has proposed the Deepfake Digital Harm and Exploitation Bill. She reported in the House that she had been depicted in an AI-generated image portraying her as naked. The photo was blurred.
The image was created by Ms McClure herself. As reported in Hansard for 14 May 2025, she stated:
“This image is a naked image of me, but it is not real. This image is what we call a "deepfake". It took me less than five minutes to make a series of deepfakes of myself. Scaringly, it was a quick Google search for the technology of what's available. When you type in "deepfake nudify" into the Google search with your filter off, hundreds of sites appear.”
She went on to say:
“I didn't need to enter an email address, I just had to tick a box to say I was 18 and that it was my image and I could use it—which is pretty scary, to be honest. It wasn't on an app that I downloaded, because actually the App Store has a pretty good filter to actually deal with quite a few of these things. It was just on our internet, not the deep dark web.”
Although initially it seemed that Ms McClure was concerned about the technology and its ease of access, she then shifted to the real message of her speech to the House.
“Creating and posting deepfake porn without consent is a form of image-based abuse. Vaughan Couillault, the president of the Secondary Principals' Association of New Zealand, said, "It's not as low level and as simple as the perpetrator might think it is. It's not cheap entertainment, it's life damaging work." Catherine Abel-Pattinson of Netsafe is quoted as saying, "Every day we answer the phone and we have suicidal people on the other end because stuff like this has been sent and it's not them."
While researching for this bill, I heard stories—mostly from youth and nearly always female. All talked about the lack of consequence or support for victims in schools for this type of bullying and abuse. In a news report last year, it talked about two schools reeling from the spread of social media of deepfake pornographic images of their students. Fifteen of those students were at a school in North Canterbury quite close to me. Around 50 at another location have asked to not be named.”
The concern, therefore, was for the harmful effect that Deepfakes may cause. Ms McClure then stated the purpose of her Bill:
“Current legislation does not keep up with this tech and the world we live in. There is a grey area, and we haven't defined the use of AI for synthetic, sexually explicit content, images, or content providing harmful intent without this clarification, and it needs changing.”
An observation that needs to be made is that the image Ms McClure presented to the House, although created using digital tools and rendered into digital content, does not give her a remedy under the HDCA, primarily because she created it and made it available to the House herself. However, it provided a backdrop of indignant outrage and a context for her further remarks.
Let us leave Ms. McClure’s image and consider a hypothetical.
Using the online tools that Ms McClure used, a person creates a Deepfake image of me that is untrue, defamatory, and embarrassing. The file containing that image is in digital format. It may be printed out, which would render it into physical form.
However, for the purposes of this example let us assume that the same person who “created” the image releases it to an online site like Facebook.
Let us then assume that I am told of this and after seeing the image I suffer serious emotional distress and require medical treatment and counselling for trauma.
The first thing to note is that there has been an electronic communication of the image. It matters not that it is not a “real” image, nor that it does not fall within the scope of an intimate visual recording.
The electronic communication has taken place by virtue of the fact that the image has been posted to Facebook. It may not be a “text message, writing, photograph, picture, recording”, but it is “other matter that is communicated electronically”, and it also falls within the definition of any form of electronic communication.
The element of posting is complete: by placing the image on Facebook, the person has transferred, sent, published, disseminated, or otherwise communicated it by means of a digital communication.
So the critical element is not the nature of the content but rather the fact of the communication of the content.
The question of whether the remedies available under the HDCA are applicable is directed not so much at the content as at the consequences of the communication of the content.
If a civil enforcement remedy were sought against the person posting the content, or against Facebook as the online content host, then the matters to be considered under section 19 would have to be taken into account, along with whether or not there was a breach of the communication principles set out in section 6 of the HDCA.
If there were to be a criminal prosecution, the elements of an offence under section 22 HDCA would have to be established. The issues there would be:
a. Did the person post a digital communication?
b. In posting the communication, did the person do so with the intention of causing harm?
c. Would the posting of the communication cause harm to an ordinary reasonable person in the position of the victim?
d. Did the posting of the communication cause harm to the victim?
We have seen from the interrelationship of the definition of an electronic communication and that of posting one that the nature of the communication may be extraordinarily wide. It is both the intention of the person posting and the consequence suffered by the victim that will involve a consideration of the quality of the material.
But it is clear to me that a Deepfake image would qualify for consideration as an electronic communication if it were posted. And in that respect the HDCA is engaged.
What Is Contained in the Deepfake Digital Harm and Exploitation Bill
The Deepfake Digital Harm and Exploitation Bill claims that Deepfakes may misappropriate a person’s image for exploitative purposes, causing reputational, psychological, and often material harm.
It proposes amendments to the Crimes Act and the HDCA to expand the definition of an "intimate visual recording" to explicitly include images created, synthesised, or altered to show a person’s likeness produced without consent.
This would remove the problem in the current definitions whereby the recorded image must be of a live person and must fulfil a number of other characteristics that make it intimate.
The focus of the amendments is upon the applicability of the criminal law to Deepfakes.
The amendment proposed to the Crimes Act states:
4 Section 216G amended (Intimate visual recording defined)
(1) After section 216G(1), insert:
(1A) In sections 216H to 216N, intimate visual recording includes a visual recording that has been created, synthesised, or altered without the knowledge or consent of the person who is the subject of the recording, and appears to show the person—
(a) naked or with their genitals, pubic area, buttocks, or female breasts exposed, partially exposed, or clad solely in undergarments; or
(b) engaged in an intimate sexual activity; or
(c) engaged in showering, toileting, or other personal bodily activity that involves dressing or undressing.
(2) After section 216G(2), insert:
(4) In this section and section 216N, subject, in relation to an intimate visual recording, means an individual who is, or appears to be, featured or depicted in the recording.
The amendment proposed to the HDCA reads similarly:
6 Section 4 amended (Interpretation)
(1) In section 4, definition of intimate visual recording, after paragraph (a) insert:
(ab) includes a visual recording that has been created, synthesised, or altered without the knowledge or consent of the person who is the subject of the recording, and appears to show the person—
(i) naked or with their genitals, pubic area, buttocks, or female breasts exposed, partially exposed, or clad solely in undergarments; or
(ii) engaged in an intimate sexual activity; or
(iii) engaged in showering, toileting, or other personal bodily activity that involves dressing or undressing.
(2) In section 4, insert in its appropriate alphabetical order: subject, in relation to an intimate visual recording, means an individual who is, or appears to be, featured or depicted in the recording
Thus the critical feature of the proposed amendments relates to the way in which the image is created. An ordinary photograph would fall within the definition of a created image. A Deepfake would also qualify as a “created image”. The applicability to Deepfakes lies in the words “synthesised or altered”, which encompass digital manipulation, whether by means of Photoshop, AI, or a Deepfake engine. Therefore, the scope of “recording” has been expanded to encompass images that have been created, synthesised, or altered.
Two issues arise from this.
The first is that if the extended definition is applicable, the provisions of section 22A HDCA are available, whereas under the current law they are not, and the only criminal remedy is available under section 22.
Prior to the enactment of section 22A, concerns were expressed that proof of all the elements that constitute an offence under section 22 – especially that of the intention to cause harm (serious emotional distress) – created a barrier to bringing those who post such images to account.
The elements of section 22A specifically exclude the requirement on the part of the person posting the communication to intend to cause harm.
Section 22A HDCA provides that an offence is committed if a person
1. Without reasonable excuse, posts a digital communication
2. That is an intimate visual recording of a victim
3. Knowing that the victim has not consented to the posting, or
4. Being reckless as to whether the victim consented to the posting.
If the amendment were passed, an intimate Deepfake of an identifiable individual victim would amount to an offence for which it would not be necessary to prove an intention to cause harm.
There is one problem. It relates to the proposed changes to the HDCA.
Currently the definition of an intimate visual recording under the HDCA covers recordings that are consensual as well as non-consensual.
The new definition, which would include Deepfakes, would remove the consensual element.
My suggestion is that the definition of an intimate visual recording in the HDCA should read as follows:
includes a visual recording that has been created, synthesised, or altered with or without the knowledge or consent of the person who is the subject of the recording, and appears to show the person
The added words “with or” maintain the applicability to consensual intimate Deepfakes as well as any other created image.
Why are those words necessary?
At the moment, an intimate visual photographic image made with the consent of the subject, which is later distributed electronically on the internet, can be the subject of a prosecution under section 22A.
If the wording as proposed by Ms McClure were to remain, then where a person distributed a consensually made image (photo or Deepfake) the prosecution would have to be under section 22, with the added element of intention to cause harm having to be proven. This would be a backwards step in my view.
The wording that I propose means that a consensually made image (photo or Deepfake) which is later (and without the consent of the subject) posted online could be the subject of a prosecution under section 22A, without the need to prove an intention to cause harm.
Conclusion - Is There a Need for the Changes Proposed?
As with the Social Media Age Verification Bill (which provides for age-based restriction to social media access and not an all-encompassing ban) there has been some public and media misunderstanding about the Deepfake Bill.
The Bill applies only to Deepfakes that involve the creation of intimate images using a victim’s appearance. It does not apply to Deepfakes generally.
Indeed, as I have demonstrated, Deepfakes can fall within the scope of the HDCA, and even if they do not meet the intimate image requirements they could still form the basis for Civil Enforcement orders or a prosecution under section 22.
So are the changes necessary? The answer must be yes.
It would be an anomalous position that the provisions of section 22A might apply to a photograph of an identifiable person but not to an artificially created representation of that person that contained the elements of an intimate image.
It must also be understood that the changes apply only to the provisions of the Crimes Act relating to intimate visual recordings and the HDCA offence under section 22A. Thus the proposed amendments are of limited scope and do not address the wider issues of Deepfakes as vectors for scams, fraud or breaches of IP rights.
But in the final analysis the confusion over the applicability of the HDCA to Deepfakes arises as a result of the conflation of communication with content. A Deepfake that is disseminated online involves aspects of communication. The remedies that are available then involve an enquiry into the nature and quality of the content.