AI Enters the Witness Box
A judge in a London court has dismissed the testimony of a witness after discovering that, while on the stand, he was receiving real-time coaching through a pair of smartglasses connected to a ChatGPT-like AI system. The incident—which the witness attributed to arrangements made by his legal team without his knowledge—raises serious questions about AI-assisted deception in legal proceedings and the adequacy of existing courtroom protocols to detect it.
What Happened in the Courtroom
The witness took the stand wearing what appeared to be ordinary eyeglasses. During his testimony, opposing counsel noticed behavioral cues suggesting he was reading from or responding to information being fed to him visually rather than answering from memory and personal knowledge. The judge ordered the glasses examined, and they were found to be smartglasses with a small display capable of showing text—connected wirelessly to an AI system that was being fed transcripts of the proceedings and prompted to generate suggested answers in real time.
When confronted, the witness claimed he was unaware of the capability of the glasses he was wearing and that his legal team had provided them with instructions he had not fully read. The judge found the explanation implausible given the circumstances and struck the witness's entire testimony from the record. The case raises questions about potential contempt proceedings and professional responsibility for the legal team involved.
How the Technology Works
The setup described in this case is technically straightforward to assemble from commercially available components. Modern smartglasses can display text in a corner of the visual field that is visible to the wearer but not obviously apparent to observers. A wireless audio input—a small microphone—can capture courtroom audio and transmit it to a connected device. A language model API can process the audio transcript and generate suggested responses nearly instantaneously. The entire system can fit in ordinary-looking accessories and a smartphone.
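To make concrete how little engineering the described pipeline requires, here is a minimal sketch in Python. Everything is stubbed: the names (`TranscriptBuffer`, `suggest_answer`) are illustrative, and the actual language-model call is replaced with a stub that returns the assembled prompt, since the real system's API and prompting are not known from the reporting.

```python
from collections import deque

class TranscriptBuffer:
    """Keeps a rolling window of recent transcript lines as context."""
    def __init__(self, max_lines=20):
        self.lines = deque(maxlen=max_lines)

    def add(self, line):
        self.lines.append(line)

    def context(self):
        return "\n".join(self.lines)

def suggest_answer(transcript_context, question, model=None):
    """Assemble a prompt from recent context and the current question.

    A real system would send this prompt to a language-model API and
    push the completion to the glasses' display; here the call is
    stubbed and the prompt itself is returned instead.
    """
    prompt = (
        "Transcript so far:\n"
        f"{transcript_context}\n\n"
        f"Current question to the witness: {question}\n"
        "Suggested answer:"
    )
    if model is None:
        return prompt  # stub: no API call
    return model(prompt)

buf = TranscriptBuffer()
buf.add("COUNSEL: Where were you that evening?")
prompt = suggest_answer(buf.context(), "Where were you that evening?")
```

The point of the sketch is the article's own: the hard parts (transcription, text generation, head-worn display) are all off-the-shelf components, and the glue between them is a few dozen lines of code.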
The accessibility of this technology is what makes the incident so concerning from the legal system's perspective. Before consumer-grade AI systems and smartglasses became available, real-time courtroom coaching required earpieces and human operatives monitoring proceedings and feeding information—a more complex and more detectable operation. AI reduces the complexity to the point where a sophisticated individual or legal team could conceivably deploy it without specialized equipment suppliers or co-conspirators.
The Broader Problem of AI-Assisted Deception
The courtroom incident is an extreme case of a phenomenon that is becoming increasingly common across institutional settings: the use of AI to augment human performance in contexts where unaided performance is required or expected. Examination boards are grappling with AI-assisted cheating. Job interview panels are encountering candidates who appear suspiciously well-prepared through AI coaching. Medical licensing examinations are reviewing AI-cheating incidents.
Courts are particularly sensitive settings because the entire judicial system depends on witnesses providing honest, unassisted recollections. AI coaching introduces the possibility not just of better-presented testimony but of systematically guided testimony—where an AI advises a witness not just on clarity of expression but on what to say, what to omit, and how to respond to specific questions in legally advantageous ways. The line between coaching and suborning perjury becomes blurry when AI can optimize testimony in real time.
Institutional Responses
Courts in multiple jurisdictions are beginning to develop protocols for detecting AI assistance during testimony. These include requiring witnesses to remove electronic devices and accessories before entering the witness box, using radio frequency detectors to identify active wireless transmissions during proceedings, and flagging anomalous response patterns—such as uncharacteristically precise or legally sophisticated answers from witnesses who are not themselves lawyers—for additional scrutiny.
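The last category—flagging anomalous response patterns—can be illustrated with a simple heuristic. This is a sketch only, not any court's actual protocol: the thresholds, the jargon list, and the idea of comparing response delay against lexical sophistication are all assumptions made for the example.

```python
# Illustrative heuristic: flag answers that combine an unusually long
# response delay with a high density of legal jargon, a pattern one
# might expect when a layperson is reading AI-generated text.
# Term list and thresholds are invented for this sketch.

LEGAL_TERMS = {"estoppel", "tortious", "pursuant", "prima", "facie", "heretofore"}

def jargon_density(answer):
    """Fraction of words in the answer that are legal jargon."""
    words = answer.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,;") in LEGAL_TERMS for w in words) / len(words)

def flag_anomalous(answers, delay_threshold=4.0, jargon_threshold=0.05):
    """answers: list of (delay_seconds, text) pairs.

    Returns indices of answers flagged for additional scrutiny.
    """
    flagged = []
    for i, (delay, text) in enumerate(answers):
        if delay > delay_threshold and jargon_density(text) > jargon_threshold:
            flagged.append(i)
    return flagged

answers = [
    (1.2, "I was at home with my family."),
    (5.8, "Any such claim would be tortious and barred by estoppel, prima facie."),
]
print(flag_anomalous(answers))  # → [1]
```

In practice such signals could only justify closer inspection, not a finding of deception on their own; the London case was ultimately resolved by physically examining the glasses.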
The UK case is likely to accelerate the development of more formal standards and detection protocols. Legal professional bodies may also need to address the ethical responsibilities of legal teams who consider deploying AI assistance in ways that compromise the integrity of witness testimony. The technology exists and the incentives to use it are significant—the legal system's adaptation will determine whether it becomes a recognized problem or an occasional scandal.
This article is based on reporting by 404 Media.