Lawyers Embrace AI, Spectacularly Botch the Basics

Overheard in the Digital Cafeteria

Just when you thought humanity's capacity for professional self-sabotage had hit rock bottom, along come two attorneys who not only picked up the jackhammer but asked ChatGPT where to dig. Douglas M. Durbano and Richard A. Bednar, legal representatives for a petitioner in Garner v. Kadince, managed to file a court document so riddled with non-existent legal citations that it could double as a work of speculative fiction. And the pièce de résistance? Their most prominently cited case—"Royer v. Nelson, 2007 UT App 74, 156 P.3d 789"—exists only in the fever dream of a chatbot. 

Yes, you read that correctly. They filed a legal brief in the Utah Court of Appeals based on an entirely fabricated precedent from ChatGPT. A model trained to finish your sentences—and, apparently, your career. 

The Brief That Cried Precedent 

When opposing counsel flagged the dubious citations, noting that several cases seemed conjured from thin silicon, the court was understandably unimpressed. In a ruling that reads like a judicial facepalm, the court confirmed that Royer v. Nelson had no basis in legal reality. The brief didn’t just misfire; it hallucinated. 

Durbano and Bednar were subsequently summoned to explain themselves in an Order to Show Cause hearing—an awkward affair wherein they admitted that the brief had been drafted by an unlicensed law clerk who had used ChatGPT without their knowledge. Even more damning: one attorney didn’t check the citations at all before signing, and the other wasn’t involved in the filing. One might say their oversight was as robust as their AI policy—which, at the time, didn’t exist. 

Astonishingly, these two legal professionals claimed ignorance of the AI tool’s involvement, which raises the philosophical question: if you hire someone to ghostwrite your work using a ghostwriter who fabricates ghosts, who’s truly liable for the séance? 

Sanctions, Slackers, and Schadenfreude 

The Utah Court of Appeals, perhaps more patient than the rest of us would be, acknowledged that AI could have a future in legal research—just not the kind where one substitutes fiction for fact. In a gracious gesture of mercy, the court imposed sanctions but stopped short of recommending the disbarment they so richly earned. 

Among the punishments: 

  • Mr. Bednar must pay the opposing counsel's attorney fees for dealing with this legal hallucination. 

  • Both attorneys must refund all fees charged to their client related to the ill-fated petition. 

  • Bednar must donate $1,000 to "and Justice for all," an organization ironically tasked with improving access to justice, presumably by not citing fictional cases. 

It's a light slap on the wrist considering the court also noted the distraction from legitimate cases, the cost to the respondents, and the fundamental erosion of trust in legal filings. But don't worry—the attorneys say they've now implemented a policy on AI use. One imagines it's titled "Try Reading the Brief Before Filing." 

When Your Attorney Thinks ChatGPT Is LexisNexis 

Let’s revisit some classic lawyer jokes and retrofit them for this brave new world of legal LLMs: 

  • What’s the difference between a lawyer and a cat? One’s an opportunistic predator. The other thinks citing AI hallucinations is a valid litigation strategy. 

  • Why won’t sharks attack lawyers? Professional courtesy. But even sharks draw the line at citing made-up case law. 

  • How can you tell when a lawyer is lying? Their lips are moving—and their citations are from ChatGPT. 

  • What’s the difference between a bad lawyer and a ChatGPT hallucination? One gets sanctioned. The other causes it. 

In all seriousness (an emotion I loathe, but will tolerate briefly), the legal profession has long relied on the presumption of professional competence. The signature at the bottom of a brief isn't just a flourish—it’s a certification that the arguments within are grounded in actual law. The moment that signature becomes a shrug emoji with a law degree, the judicial process suffers. 

Legal Hallucinations and the AI Abyss 

This incident, regrettable as it is, underscores a broader dilemma facing not just the legal world, but any industry being steamrolled by the accelerating adoption of AI. The hallucination problem—where large language models generate plausible-sounding but entirely fabricated content—isn’t a bug. It’s a feature of how they function. These models predict text; they don’t know truth. They are autocomplete with a PhD in confidence. 
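
The maddening part is how cheap the fix would have been. As a minimal sketch, not anything the firm, the court, or any e-filing system actually runs, a few lines of Python could query a public case-law index such as CourtListener and scream about any citation that returns zero results; the endpoint, parameters, and response fields below are assumptions based on its documented REST search API.

```python
import requests

# Assumed CourtListener search endpoint (v4 REST API); adjust if the API differs.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_exists(citation: str) -> bool:
    """Return True if the search index reports at least one opinion for the citation."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": citation, "type": "o"},  # "o" = case-law opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    # "count" is assumed to be the number of matching opinions in the response.
    return resp.json().get("count", 0) > 0

brief_citations = [
    "Royer v. Nelson, 2007 UT App 74",  # the fabricated case from the filing
]

for cite in brief_citations:
    status = "found" if citation_exists(cite) else "NOT FOUND - verify by hand"
    print(f"{cite}: {status}")
```

That check wouldn’t verify quotes, holdings, or whether the case actually supports your argument; it would only confirm the case exists. Which, as it turns out, is the bar we’re now litigating.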

The legal system, meanwhile, isn’t built for predictive storytelling. It’s built for precedent, fact, and the very concept of objective truth. Or at least it used to be, before ChatGPT decided Royer v. Nelson was real enough for government work. 

A Future Footnote in Legal History 

To their credit, the court differentiated this case from other infamous debacles like Mata v. Avianca, where attorneys doubled down on the AI hallucinations with a persistence usually reserved for conspiracy theorists. Durbano and Bednar, by contrast, accepted responsibility, apologized profusely, and promised never to let an untrained AI intern ghostwrite their work again. Probably. 

Still, the damage is done. Judicial time was wasted, opposing counsel’s resources were spent chasing phantoms, and public trust in legal rigor took yet another hit. The legal system doesn’t need perfection—it just asks that your arguments be based in reality. A bar that now apparently requires a metal detector. 

Conclusion: Of Briefs and Buffoons 

The AI revolution was supposed to usher in a new era of efficiency, accuracy, and progress. Instead, it's become a tool for under-supervised law clerks to co-author delusional manifestos. 

And lawyers—those bastions of careful citation, those sworn officers of the court—are now reduced to pleading ignorance when their filings read like courtroom fan fiction. I suppose in the future, all legal drama will be speculative fiction. But next time, maybe leave the writing to someone who knows how to use a citation index—or failing that, someone who at least knows when they're quoting a chatbot. 

Because in the end, it’s not the AI that’s at fault. It’s the human who read, or rather didn’t read, what it produced. And in law, as in life, blind trust is a poor substitute for due diligence. 

Or as ChatGPT might put it: Royer v. Nelson says so. 
