LONDON — Lawyers who cite non-existent legal cases produced by artificial intelligence (AI) tools may face contempt of court or even criminal prosecution, the High Court in London has warned, highlighting growing concerns about the misuse of generative AI in legal practice.
In a strongly worded ruling delivered on Friday, Judge Victoria Sharp, President of the King’s Bench Division, criticised legal representatives in two recent cases for including references to fictitious case law—allegedly generated by AI-powered tools like ChatGPT.
The judge emphasised that such conduct threatens both the integrity of judicial proceedings and public confidence in the justice system.
“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” Judge Sharp said in her ruling.

She stressed that lawyers citing false legal precedents breach their professional duty not to mislead the court, a violation that could constitute contempt. In particularly severe cases, this may escalate to the criminal offence of perverting the course of justice, she warned.
The judgment comes amid a spate of international incidents involving legal professionals submitting AI-generated documents filled with fabricated authorities—cases that have sparked disciplinary action and public scrutiny.
Judge Sharp called on legal regulators and law firm leaders to take “practical and effective measures” to prevent such incidents. While existing guidance on AI use by lawyers exists, she said, “guidance on its own is insufficient to address the misuse of artificial intelligence.”
Legal experts note that while generative AI can assist with drafting and research, its outputs must be rigorously verified before submission to ensure they do not compromise ethical and professional standards.
The ruling signals a growing urgency within the UK legal system to regulate AI use more strictly and protect the administration of justice from technological misuse.
