
Judges Admit The Obvious, Concede AI Used For Hallucinated Opinions

Thanks to the investigative zeal of Senator Chuck Grassley, we now know… exactly what we knew all along: Judge Julien Neals of New Jersey and Judge Henry Wingate of Mississippi put out opinions with fake cites because of artificial intelligence hallucinations.

It’s not fair to write off the whole project as a grandstanding waste of time. The judges had previously branded their erroneous, since-withdrawn opinions as the product of clerical errors. That lack of transparency undermined the judges’ credibility, but both seem to have used the “clerical” excuse in a good-faith effort to avoid throwing interns under the bus. According to Judge Neals, a law school intern performed legal research with ChatGPT, while Judge Wingate writes that a law clerk used Perplexity. In both cases, the judges say the opinion was still a draft pending further review when it went out the proverbial door.

The judges explain that they have procedures to avoid this in the future, including Judge Wingate’s needlessly wasteful practice of having cases physically printed out to rule out error. This feels a lot like promising to keep using the Shepardizing books after the advent of online research, but Grassley was alive when Bonnie and Clyde were still around, so overkill is probably a prudent way of keeping him satisfied.

As for the Senator’s remaining questions, the answers were exactly what we expected. Did this involve confidential information going into the AI? No, no confidential information was involved in either situation! How did the cite-checking process miss this? Because it wasn’t followed! Why did the judges pull the opinions? Because it’s stupid to leave fake cites on the docket!

“I did not want parties, including pro se litigants, to believe this draft order should be cited in future cases,” Judge Wingate writes, underselling the problem. If we’re having a serious discussion about the risks of AI, it supercharges the need for data hygiene. That docket needs to be purged of anything a future AI could scrape and turn into another mistake — one that could defeat newer guardrails by virtue of actually appearing in print in an opinion.

Unfortunately, the judges’ responses didn’t give us the one thing we might have actually found useful: an explanation of what AI products judges might be using intentionally. These errors came from staff going rogue with consumer products, but are there products the judges are using by design, and can we all learn from that experience? Both admitted that their cite-checking program involves AI technology, but that’s all we got. Maybe that’s all they’re using, but if not, it would have been interesting to learn whether they’re using CoCounsel to find those cases they’re printing out or BriefCatch to aid with drafting.

I guess we’ll have to wait for the next judge AI fiasco to find out.

Judges Admit to Using AI After Made-Up Rulings Called Out [Bloomberg Law News]

Earlier: Senator Wants To Know How All These Fake Cites Ended Up In These Judicial Opinions



Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.




