Judges flag AI errors in rulings, sparking calls for safeguards

Two federal judges have raised alarms about the use of artificial intelligence (AI) tools in their chambers, attributing errors in legal rulings to faulty AI-generated output and prompting scrutiny of their review processes. U.S. District Judges Julien Neals of New Jersey and Henry Wingate of Mississippi disclosed the concerns in letters to the Administrative Office of the U.S. Courts sent on October 20 and 21, following an inquiry by Senate Judiciary Committee Chairman Chuck Grassley.

The implications of these mishaps extend beyond the individual cases involved, as experts voice concern about the judiciary's reliance on technology without sufficient verification. Bruce Green, a law professor at Fordham University, said the judges' admissions raise questions about how diligently they review documents issued under their names, and whether draft opinions are frequently filed without adequate scrutiny.

Judge Neals indicated that a law school intern used ChatGPT for legal research, leading to an order issued on June 30 that included fabricated case citations. He noted that the intern did not have access to any confidential information during this process. Similarly, Judge Wingate reported that his law clerk used the AI tool Perplexity as a drafting assistant, producing a July 20 temporary restraining order that referenced unconnected parties and allegations. Wingate acknowledged that the draft should never have been filed, citing a breakdown in his chambers' final review protocol.

Whatever the tool involved, experts such as Stephen Gillers of New York University said judges must ensure the accuracy of their citations, whether the information comes from AI or from traditional research. He pointed out that judges have a fundamental obligation to read and verify the cases they cite.

In response to the incidents, Judge Wingate announced that all drafts in his chambers will now be reviewed by a second law clerk before filing, and he mandated that printed copies of cited cases accompany each decision. Judge Neals, meanwhile, admitted that the use of ChatGPT contravened his chambers' policy against generative AI in legal research and drafting; he has since taken steps to formalize that policy in writing.

While some experts say an outright ban on AI may be excessive given its value as a research tool, they advocate improved training in its responsible use. "Judges should learn how to use AI effectively and cautiously," Gillers said.

In light of these revelations, Senator Grassley has opened an investigation, emphasizing the need for clear policies governing AI use within the judicial system. The judicial branch, he said, must develop comprehensive policies to maintain integrity and factual accuracy, because reliance on technology should not compromise the rights of litigants.

To aid in this effort, the Administrative Office of the U.S. Courts is reviewing the situation, calling for independent verification of AI-generated content and cautioning judges against delegating core judicial functions to AI tools. The episode underscores the diligence the judiciary must maintain to preserve its commitment to integrity while adapting to technological advances.
