
UK Supreme Court Questions AI Legal Tools After Major Case Errors

AI legal tools are facing serious scrutiny after lawyers in Victoria who used artificial intelligence encountered severe professional consequences. In the past year, one solicitor had his practising conditions altered, and a law firm was ordered to pay costs, after using AI to produce court documents that contained errors. The Victorian solicitor, given the pseudonym Mr Dayal by the court, submitted a list of legal authorities that did not exist and admitted he had not checked them for accuracy.

As a result of these incidents, the role of artificial intelligence in legal research is being intensively examined. The Victorian Legal Services Board has been forced to address the misuse of generative AI tools for preparing court documents.

Consequently, Mr Dayal’s practising certificate was varied in August 2025, prohibiting him from practising as a principal lawyer or handling trust money for a period of two years. Justice Murphy specifically noted that the false citations likely came from AI tools with the capacity to “fabricate” or “hallucinate” information. Cases involving AI legal assistants, and questions about the legality of AI outputs, are increasingly the focus of regulators across the legal profession. This growing concern raises significant questions about the reliability of AI for legal research and underlines the importance of human oversight.

Why did UK Courts Question AI Legal Tools?


The High Court of England and Wales has raised serious alarms about artificial intelligence tools after discovering fabricated legal citations in multiple high-profile cases. This unprecedented scrutiny arises amid growing evidence that AI legal research tools are producing fictitious case law, which lawyers are submitting to courts without proper verification.

Recent Case Errors

In June 2025, Dame Victoria Sharp, President of the King’s Bench Division of the High Court, delivered a landmark ruling warning lawyers about the dangers of relying on AI-generated research. The court examined two particularly troubling cases in which AI tools were suspected of generating false legal authorities. In the Ayinde case, a junior barrister cited five entirely non-existent cases as authorities. Similarly, in the Al-Haroun case against Qatar National Bank, an £89m damages claim included 45 case-law citations, of which 18 were entirely fictitious. Many of the remaining citations contained bogus passages that appeared authentic but had no basis in actual law.

These incidents have prompted the judiciary to investigate how AI hallucinations—convincingly plausible but completely fabricated information—are undermining the integrity of court proceedings. The High Court found that, on the balance of probabilities, it would have been negligent for barristers to use AI without verifying its outputs.

AI Legality and Professional Standards

The court emphasised that publicly available AI tools like ChatGPT are “not capable of conducting reliable legal research”. Although these systems produce seemingly coherent responses, they frequently make confident assertions that are untrue and cite non-existent sources. This phenomenon presents “serious implications for the administration of justice and public confidence in the justice system”.

In response, the court called on the Bar Council and Law Society to address this problem “as a matter of urgency”. The ruling clearly established that responsibility for ensuring accuracy extends beyond individual researchers to supervising lawyers and even to heads of chambers and managing partners of law firms.

Legal professionals now face potentially severe consequences for AI misuse, including:

  • Public admonishment
  • Wasted costs orders (£2,000 was ordered in one case)
  • Referral to regulatory bodies
  • Contempt of court proceedings
  • Possible criminal charges

Ian Jeffery, chief executive of the Law Society of England and Wales, noted that the ruling “lays bare the dangers of using AI in legal work”. Despite the risks, AI tools continue to gain popularity, with one survey finding that 63% of lawyers have used AI in their work.


How are AI Legal Assistants Disrupting Court Processes?


Courts across Australia are encountering an unprecedented challenge as AI legal assistants increasingly disrupt judicial processes. Federal Court documents reveal that multiple cases have been compromised by AI tools generating fictitious legal authorities, creating significant obstacles to justice.

AI-Generated Citations Found to be Fabricated

Legal practitioners submitting AI-generated content to courts have unwittingly presented non-existent cases and authorities. In the Federal Court case of Luck v Secretary, Services Australia, a litigant cited a completely fabricated case to support an application for judicial recusal. Likewise, in Valu v Minister for Immigration, a legal representative admitted using ChatGPT to identify Australian cases, subsequently discovering the tool had provided non-existent case law. Essentially, these AI systems are not conducting research but using probability to predict word sequences that appear plausible yet have no basis in reality.
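
To make that failure mode concrete, below is a deliberately tiny sketch of next-token sampling, the mechanism underlying these tools. It is a toy Markov chain, not a model from any real product, and every case name, year and probability in it is invented; the point is that each output looks like a citation even though nothing in the process ever consults a law report.

```python
import random

# Toy next-token model: for each context token, a weighted set of plausible
# continuations. Real LLMs operate at vastly larger scale, but the principle
# is the same: tokens are chosen for statistical plausibility, never checked
# against a database of decided cases. All entries below are invented.
NEXT_TOKEN = {
    "<start>": [("Smith", 0.5), ("Jones", 0.3), ("Chen", 0.2)],
    "Smith": [("v", 1.0)],
    "Jones": [("v", 1.0)],
    "Chen": [("v", 1.0)],
    "v": [("Minister", 0.5), ("Commonwealth", 0.5)],
    "Minister": [("[2019]", 0.6), ("[2021]", 0.4)],
    "Commonwealth": [("[2018]", 0.5), ("[2020]", 0.5)],
    "[2018]": [("FCA 101", 1.0)],
    "[2019]": [("FCAFC 33", 1.0)],
    "[2020]": [("HCA 7", 1.0)],
    "[2021]": [("FCA 88", 1.0)],
}

def sample_citation() -> str:
    """Build one plausible-looking 'citation' one weighted token at a time."""
    token, parts = "<start>", []
    while token in NEXT_TOKEN:
        options, weights = zip(*NEXT_TOKEN[token])
        token = random.choices(options, weights=weights)[0]
        parts.append(token)
    return " ".join(parts)

if __name__ == "__main__":
    for _ in range(3):
        print(sample_citation())  # e.g. "Smith v Minister [2019] FCAFC 33"
```

Run the script twice and it will also happily produce different “authorities” from the same starting point, mirroring the inconsistency in AI outputs that courts have complained about.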

Google Scholar and Generative AI Tools

Junior legal practitioners are particularly vulnerable to AI pitfalls. In a notable native title claim, an inexperienced junior solicitor used Google Scholar’s search tool to generate citations, only to discover that the tool returned different results when the search was replicated. Moreover, commercial products such as ChatGPT, Microsoft Copilot and Google Gemini frequently fabricate case authorities, citations and quotes. These platforms may also reference legislation or legal texts that simply do not exist.

AI for Legal Research

Research indicates that even specialised legal AI products demonstrate alarming hallucination rates between 17 and 33 per cent. The Victorian Supreme Court has explicitly stated that generative AI is not a suitable tool for legal research. AI outputs often contain serious flaws that undermine court processes:

  • Out-of-date information trained on limited datasets
  • Incomplete arguments missing critical legal points
  • Citations from irrelevant jurisdictions
  • Inherent biases from training data

Additionally, AI tools frequently lack consistency in their responses, providing contradictory answers to identical queries. This inconsistency critically undermines the reliability required for legal proceedings and has prompted courts to issue explicit warnings about verification requirements for any AI-generated content.

What are the Risks of Relying on AI in Legal Research?

Legal practitioners face significant risks when relying on artificial intelligence tools for research, with recent evidence revealing fundamental flaws in the application of this technology to legal work.

Hallucinations and False Precedents in Legal Documents

AI hallucinations occur in approximately one in six legal research queries, creating deceptively convincing but entirely fabricated case law. In the Mata v Avianca case, a lawyer submitted six non-existent cases generated by ChatGPT, resulting in fines exceeding AUD 764,500. These “hallucinations” occur because AI generates outputs based on statistical patterns rather than by verifying facts within datasets. Indeed, even specialised legal AI tools cannot eliminate this risk entirely.

Ethical and Confidentiality Concerns

When lawyers input client information into public AI platforms, they effectively place confidential data in the public domain. This practice risks breaching legal professional privilege, as many public generative AI services store the information entered and may use it to train future models. Lawyers therefore cannot safely enter sensitive or privileged client information into public AI chatbots without risking serious confidentiality violations.
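
One practical guard rail, sketched here under the assumption that a firm screens text before it leaves its systems, is a redaction pass over anything destined for a public chatbot. The patterns and placeholders below are purely illustrative, and a real workflow would need far more robust handling (named-entity recognition, allow-lists, human review):

```python
import re

# Minimal pre-submission redaction pass: replace obviously identifying
# strings before a prompt leaves the firm's systems. The patterns here are
# illustrative only and would miss most real-world identifying detail.
REDACTION_PATTERNS = [
    (re.compile(r"\b[A-Z][a-z]+ v [A-Z][a-z]+\b"), "[CASE NAME]"),
    (re.compile(r"\b\d{2,4}[-\s]\d{3,4}[-\s]\d{3,4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text: str) -> str:
    """Apply each redaction pattern in turn and return the sanitised text."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    prompt = "Summarise Smith v Jones; contact client at jane@example.com."
    print(redact(prompt))
    # -> "Summarise [CASE NAME]; contact client at [EMAIL]."
```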

AI Cannot Replace Human Judgment in Law

AI fundamentally lacks the capacity to make ethical judgments or determine normative standards that form the backbone of legal reasoning. Rather than understanding legal principles, AI merely identifies patterns. Legal decisions demand empathy, experience, and nuanced judgment that comes only from years of practice. Notably, AI cannot assess risk/reward dynamics that are often subjective rather than objective.

How are Legal Bodies Responding to AI Misuse?


Australia’s regulatory agencies have taken firm action in response to mounting concerns about the improper use of AI in court. These actions follow multiple incidents in which AI tools produced fictitious case citations in court documents.

New Guidelines From Courts and Legal Boards

In January 2025, the Supreme Court of NSW issued Practice Note SC GEN 23 on generative AI use, while the Federal Court of Australia released a similar notice in April. Throughout 2025, various courts established verification protocols that required lawyers to confirm they had independently verified any AI-generated research. Queensland Courts released comprehensive guidelines for judicial officers, stating that AI tools should not be used for decision-making or preparing judicial reasons. According to these guidelines, judges may request confirmation that lawyers have verified the accuracy of AI-generated citations.
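
Reduced to its essentials, the verification protocols impose a simple rule: no AI-suggested authority goes into a filing until it has been confirmed in an authoritative source. Here is a minimal sketch of that check, with the hypothetical `VERIFIED_CITATIONS` standing in for a real law-report index or a manual lookup on an official database; all entries are placeholders.

```python
# Minimal citation-verification step: split AI-suggested authorities into
# those confirmed against a trusted source and those that must be treated
# as potential hallucinations until a human verifies them.
# VERIFIED_CITATIONS is a placeholder for a real law-report index.
VERIFIED_CITATIONS = {
    "Example v Example [2000] EX 1",
    "Sample v Sample [2001] EX 2",
}

def verify_citations(ai_suggested: list[str]) -> tuple[list[str], list[str]]:
    """Return (confirmed, unverified) partitions of the suggested citations."""
    confirmed = [c for c in ai_suggested if c in VERIFIED_CITATIONS]
    unverified = [c for c in ai_suggested if c not in VERIFIED_CITATIONS]
    return confirmed, unverified

if __name__ == "__main__":
    draft = [
        "Example v Example [2000] EX 1",
        "Smith v Minister [2019] FCAFC 33",  # plausible-looking, unchecked
    ]
    confirmed, unverified = verify_citations(draft)
    print("confirmed:", confirmed)
    print("do not file until verified by a human:", unverified)
```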

Statements from Victoria, NSW, and WA on AI Use

The Law Society of NSW, the Legal Practice Board of Western Australia, and the Victorian Legal Services Board jointly issued a landmark statement emphasising four key obligations for lawyers using AI:

  • Maintaining client confidentiality and not entering privileged information into public AI tools
  • Providing genuinely independent advice
  • Delivering services competently while verifying AI outputs
  • Ensuring fair and reasonable billing practices

Meanwhile, the Victorian Legal Services Board identified improper AI use as a “key risk” in its latest risk outlook, noting that “unlike a professionally trained lawyer, AI can’t exercise superior judgement”.

Potential Disciplinary Actions and Education Mandates

Following documented misuse, authorities have established clear penalties. In August 2025, a Victorian lawyer known as “Mr Dayal” had his practising certificate varied after submitting AI-generated false citations. Accordingly, he lost the right to practise as a principal lawyer, handle trust money, or operate his own practice for two years. Beyond individual sanctions, more than 20 lawyers across Australia have been referred to regulatory bodies for similar infractions. The Victorian Board has said it would consider bringing cases before the Victorian Civil and Administrative Tribunal where misconduct is proven.

Conclusion – AI Legality

The rising concerns surrounding AI legal tools highlight a critical juncture for the legal profession across the UK and Australia. Undoubtedly, the cases of fabricated citations, non-existent precedents, and hallucinated legal authorities demonstrate the significant limitations of current AI technology. These shortcomings have led to serious professional consequences, including practising restrictions and financial penalties for lawyers who failed to verify AI-generated content.

Legal regulatory bodies have responded accordingly, establishing clear guidelines that emphasise verification requirements and outline potential disciplinary actions. The Victorian Legal Services Board, alongside authorities in NSW and WA, has taken a firm stance against unchecked AI use, particularly highlighting the technology’s inability to exercise proper legal judgment. Although AI promises increased efficiency, these cases reveal a technology that is not yet ready for unsupervised use in high-stakes legal contexts.

Looking ahead, the legal community must develop more sophisticated approaches toward AI integration. This will require enhanced education, stricter verification protocols, and perhaps specialised AI tools designed specifically for legal research with reduced hallucination risks. Until then, the cases highlighted throughout this article serve as a stark warning about the dangers of over-reliance on artificial intelligence. After all, while technology continues to evolve rapidly, the foundational principles of legal practice—accuracy, integrity, and human judgement—remain irreplaceable.


What prompted UK courts to question AI legal tools?

Senior UK judges raised concerns about AI legal tools after fabricated legal citations were discovered in multiple high-profile cases. The scrutiny was triggered by incidents in which AI-generated research produced fictitious case law that lawyers submitted to courts without proper verification.

How are AI legal assistants affecting court processes?

AI legal assistants are disrupting court processes by generating fabricated citations and non-existent legal authorities. This has led to compromised cases and significant obstacles to justice, with multiple instances of lawyers unknowingly presenting false information in court documents.

How are legal regulatory bodies responding to AI misuse?

Legal bodies are implementing strict measures, including new guidelines for AI use, verification protocols, and potential disciplinary actions. They are also issuing joint statements emphasising lawyers’ obligations when using AI and considering prosecutions for proven misconduct.

Can AI completely replace human lawyers in legal research?

No, AI cannot completely replace human lawyers in legal research. While AI can assist in research processes, it lacks the capacity for ethical judgment, empathy, and nuanced reasoning that comes from years of legal practice. Human oversight remains crucial in legal work.