A Halfling's View

The Self-Informing Juror - Part 2

Reflections on Exley v R and the right to a fair trial in the age of AI

A Halfling’s View
Aug 28, 2025
[Image: a painting of a group of people in a courtroom]

In the first part of this two-part series I discussed the case of Exley v NZME and the way in which the Supreme Court set out the circumstances in which an accused’s right to a fair trial may be protected by the “take-down” of prejudicial material available online. I suggested that the decision was somewhat limited, and in this part I explain why.

I discuss the nature of Artificial Intelligence, focusing in particular on Generative AI and Large Language Models (LLMs). I suggest that, as much as (and possibly even more than) a Google search about an accused, Generative AI may reveal prejudicial information that cannot be the subject of a take-down order and that may be available to a juror who uses a Generative AI platform during the course of a trial to carry out his or her own investigations.

Although the Exley case may provide a means by which prejudice may be diluted by restricting jurors’ access to information, thereby addressing the problem of the Googling Juror, the challenges posed by AI are exponentially more difficult to deal with.

The AI Problem

The Supreme Court in Exley was very careful to ensure that takedown orders had to relate to identifiable material containing prejudicial content located at a particular URL (Uniform Resource Locator).

That identification of material would generally have commenced with a search engine query that located the relevant prejudicial material.

The limitation of the test to prejudicial material available at an identifiable address or URL immediately narrows the effect of takedown orders. In effect, the overall test in the decision is confined to such material. Prejudicial material that cannot be identified by an address or URL, however difficult it may be to retrieve, falls outside the Exley rubric.

However, with the onset of Artificial Intelligence and the development of Generative AI and large language models (LLMs), an additional layer of complexity is introduced in identifying and taking down prejudicial material. Prejudicial material may be accessed via a generative AI platform. That material may be the result of the processes that lie behind AI platforms, which effectively gather and aggregate information and make that information available in narrative form as the result of a prompt.
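To see how little stands between a curious juror and such material, consider how a single prompt is submitted to a generative AI platform. The sketch below is a minimal illustration only, assuming the OpenAI Python SDK; the model name, the fictitious accused “John Doe” and the wording of the prompt are invented for illustration and are not drawn from any real case.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai).
# The model name, the prompt and "John Doe" are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "What can you tell me about the criminal history of John Doe?",
    }],
)

# The reply is freshly generated narrative, aggregated from the model's
# training data (and, on some platforms, live web results). There is no
# single document at a fixed URL to which a takedown order could attach.
print(response.choices[0].message.content)
```

The point of the sketch is the design problem, not the code: the prejudicial narrative is assembled at the moment of the query, so there is no pre-existing page for an Exley-style order to remove.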

I shall now discuss Artificial Intelligence, with a focus on Generative AI and LLMs, so that the problems posed become clear and so that it can be seen why the takedown solutions that have been developed may no longer be effective in addressing the prejudice arising from AI queries.

Artificial Intelligence Discussed

The subject of Artificial Intelligence is a vast and complex one. The idea of “machine thinking” has been around for some time. Alan Turing devised the “Turing Test” (originally called the “imitation game”) in 1950. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. For those who have used ChatGPT this description may sound familiar.

Richard Susskind, a law and technology expert who has written widely in the field, did his doctorate on aspects of Artificial Intelligence in the 1980s.

What Turing and Susskind theorized about is now with us.

What is Artificial Intelligence (AI)?[1]

“Artificial intelligence” is used broadly to describe the use of computing to replicate tasks done by humans. However, the technology has moved beyond this automation of tasks to what is known as intelligence augmentation, which reflects a symbiotic relationship between humans and technology.

In a speech to the British Institute of International and Comparative Law[2] Sir Geoffrey Vos MR pointed out how common AI has become in our daily lives. He observed that lawyers tend to be very cautious about its use in the legal context. He noted that there is a view among some legal professionals that artificial intelligence is dangerous, prone to bias, and should not be used to facilitate court proceedings or legal advice, even though Google searches and LexisNexis queries utilise AI.

AI Is Already With Us
