Hello Fellow Legal Eagles, Counselors of Chaos, Defenders of Deadlines,
Your weekly dose of legal absurdity, courtroom chaos, and mandatory fun, now with extra billable hours.
Let's get into it! ⚖️😂
Buckle up, kiddos, because we need to talk about the absolute catastrophe that is lawyers using ChatGPT to write their briefs and then acting shocked (SHOCKED) when the AI makes up fake cases.
404 Media just analyzed dozens of court cases where lawyers got caught submitting AI-generated garbage, and the excuses are spectacular. But first, the headline numbers:
A California appeals court just hit a lawyer with a record $10,000 fine for submitting a brief where "nearly all of the legal quotations" were fabricated by ChatGPT, Claude, Gemini, and Grok. The court published the opinion "as a warning."
There are now 410+ documented cases worldwide (269 in the U.S.) of lawyers being sanctioned for AI hallucinations. And a LexisNexis exec predicts it's "only a matter of time" before attorneys start losing their licenses.
Welcome to the future of legal practice, where associates would rather trust ChatGPT than do actual research, and partners blame everyone except themselves when it goes sideways.
Let's appreciate the creativity lawyers deployed when caught submitting fake cases:
Indiana lawyer: Blamed the three-day deadline and his "busy schedule," so he asked his paralegal (who "once was, but is not currently, a licensed attorney") to draft it. Didn't have time to review.
Florida lawyer: "Handling this appeal pro bono" and "lacked experience in appellate law," so he hired "an independent contractor paralegal" at his own expense. "Did not review the authority cited within the draft answer brief prior to filing."
Arizona lawyer: "Neither I nor the supervising staff attorney knowingly submitted false or non-existent citations. The brief writer in question was experienced and credentialed, and we relied on her professionalism."
Pattern detected: When AI fucks up your brief, blame the person who makes the least money in your office.
New York lawyer: Had "a serious health challenge since the beginning of this year which has proven very persistent which most of the time leaves me internally cold, and unable to maintain a steady body temperature which causes me to be dizzy and experience bouts of vertigo and confusion."
Then (AND THIS IS MY FAVORITE PART) after finding the citation errors, he "conducted a review of his office computer system and found out that his system was 'affected by malware and unauthorized remote access.'"
He compared his April 9 draft to the April 21 filing and "was shocked that the cases I cited were substantially different."
Translation: The malware apparently changed my citations between drafts. The malware specifically targeted my legal citations. It's the malware's fault.
This is the legal equivalent of "someone must have hacked my account" when you get caught tweeting something stupid.
New York lawyer: "The Opposition was drafted by a clerk. The clerk reports that she used Google for research... I reviewed the draft Opposition but did not check the citations. I take full responsibility for failing to check the citations... I believe the main reason for my failure is due to the recent death of my spouse."
This is genuinely tragic, and losing a spouse absolutely affects your work. But you still filed a brief with fake citations. "I was grieving" explains the oversight but doesn't excuse it.
Also notable: even while citing grief, still managed to blame the clerk first.
California lawyer filed an AI-generated petition for Tim Cook's resignation three times and called it "a legal experiment."
His explanation is magnificent:
"No human ever authored the Petition for Tim Cook's resignation, nor did any human spend more than about fifteen minutes on it. I am quite weary of Artificial Intelligence, as I am weary of Big Tech... We asked the most powerful commercially available AI, ChatGPT o3 Pro 'Deep Research' mode, a simple question: 'Did Judge Gonzales Rogers' rebuke of Tim Cook's Epic conduct create a legally grounded impetus for his termination as CEO, and if so, write a petition explaining such basis'... Ten minutes later, the Petition was created by AI."
He then "made several minor corrections" and filed it "to promote conversation on the complex implications herein."
Translation: "I let ChatGPT write court filings to see what would happen and called it research when I got caught."
This is the legal equivalent of "it was just a prank, bro."
Michigan lawyers: Assembling 1,500 pages of exhibits due at midnight, their "computer system experienced a sudden and unexplainable loss of internet connection and loss of connection with the ECF system."
"In the midst of experiencing these technical issues, we erred in our standard verification process and missed identifying incorrect text AI put in parentheticals."
Sure. The internet died at 11:45 PM on your filing deadline, and that's why you didn't catch the fake citations. Totally believable.
DC lawyers: Used multiple tools including "Pages with Grammarly and ProWritingAid" and at some point used ProWritingAid "to replace all the square brackets with parentheses."
"Through inadvertence or oversight, I was unaware quotes had been added or that I had included a case that did not actually exist."
Grammarly added fake case citations. That's the defense. A grammar-checking tool invented legal cases.
Also spent "all day with IT trying to figure out what went wrong."
Texas lawyer: This one has EVERYTHING.
He then instituted a "strict policy prohibiting his staff from using artificial intelligence without exception."
AND submitted a $480 invoice from "Mainframe Computers" for printer repairs and "fixes with email and monitors and default fonts."
This reads like a parody of legal excuses. Everything went wrong simultaneously, none of it was his fault, and here's a receipt to prove his computer had problems.
Smarter drafting, faster docs, and fewer late nights — powered by MyCase AI
From AI-assisted writing to intelligent document automation, MyCase helps lawyers cut through the noise and focus on clients, not paperwork. Greatness doesn't come from billable hours wasted on formatting; it starts with MyCase IQ.
START FREE TRIAL

Damien Charlotin's database tracks 410+ cases worldwide, including 269 in the U.S.
In the last week alone, he documented 11 new cases.
404 Media reports: "While working on this article, it became nearly impossible to keep up with new cases of lawyers being sanctioned for using AI."
This isn't a few isolated incidents. This is an epidemic of lawyers using AI they don't understand to do work they should be doing themselves, then acting shocked when it generates fake citations.
The excuses are bullshit, but they reveal real problems:
Lawyers are under "great pressure to use AI" to be "more productive and take on more casework." BigLaw demands efficiency. AI promises shortcuts.
Associates see ChatGPT as a way to crank out first drafts faster. Partners don't ask questions as long as billable hours stay high.
One expert noted lawyers "delegate tasks to teams, oftentimes don't read all of the material collected by coworkers, and copy and paste strings of citations without proper fact-checking."
AI isn't creating this problem. It's exposing a problem that already existed. Lawyers have been copy-pasting citations and not verifying them for decades. AI just made it obvious.
As one lawyer told 404 Media: "Nearly every lawyer is using AI to some degree; it's just a problem if they get caught."
Firms are rolling out AI tools without training. Associates are using ChatGPT because everyone else is. Nobody fully understands how LLMs work or their limitations.
The Texas lawyer who "still uses a dictation phone" and has "limited technological capabilities" is somehow supervising staff using AI tools.
Westlaw and LexisNexis have AI features. ChatGPT, Claude, Gemini, Grok - all easily accessible. Countless startups selling "AI legal research" tools.
The barrier to entry is zero. The understanding required is high. That's a disaster waiting to happen.
LexisNexis CEO Sean Fitzpatrick told Fortune: "I think it's only a matter of time before we do see attorneys losing their licenses over this."
His pitch? Stop using "open-source" tools like ChatGPT. Use LexisNexis's proprietary AI instead, which pulls from a "walled garden of content."
Fitzpatrick isn't wrong about the risks. General-purpose LLMs trained on the open internet absolutely make shit up. And lawyers using them for "real legal work" are playing Russian roulette with their licenses.
Tired: Clicking refresh 47 times on a hostile party's website
Wired: Letting Browse AI track, scrape, and timestamp it for you
Browse AI is what happens when legal tech stops pretending and starts performing.
Let the robots handle the refresh button.
TRY BOT NOW

Here's a nightmare scenario experts raised:
When you put client information into ChatGPT or other general-purpose LLMs, you're potentially handing privileged material to a third party, and possibly to the model's future training data.
Frank Emmert (legal AI expert): "You're not gonna find the full contract, but you're going to find enough information out there if they have been uploading these contracts... Potentially you could find client names... or at least information that makes the client identifiable."
If those contracts were uploaded without the client's permission, that privileged material is now effectively public information.
How many of the lawyers making excuses about using AI bothered to get client consent before feeding privileged information into ChatGPT?
My guess: approximately zero.
One sanctioned lawyer told a journalist:
"Nearly every lawyer is using AI to some degree; it's just a problem if they get caught. The judges here have seen it extensively. I know for a fact other attorneys have been sanctioned. It's public, but unless you know what to search for, you're not going to find it anywhere. It's just that for some stupid reason, my matter caught the attention of a news outlet. It doesn't help with business."
This is the actual truth:
The system catches the lawyers unlucky or sloppy enough to get noticed, not necessarily the ones using AI most irresponsibly.
If the legal profession were serious about AI, it would mandate training and citation verification. Instead, we're getting sanctions, excuses, and finger-pointing.
Here's the beautiful irony: lawyers are using AI to save time and increase productivity.
But when they get sanctioned, they spend far more time than they ever saved.
The Texas lawyer submitted a $480 IT invoice to explain his email problems. How many billable hours did he spend on that response? How much did getting sanctioned cost compared to just doing the research properly?
Using AI to save 2 hours on research, then spending 50 hours explaining why you submitted fake citations, is not actually efficiency.
Nearly every lawyer is apparently using AI to some degree. The ones getting sanctioned are just unlucky enough to get caught or sloppy enough to submit obviously fake citations.
And somewhere, a paralegal is reading these articles thinking "I knew I'd get blamed for this" while their supervising attorney drafts another ChatGPT brief without checking the citations.
Welcome to legal practice in 2025, where "my paralegal used AI without telling me" is the new "dog ate my homework," and federal judges are publishing warnings that maybe, just maybe, you should verify that cases actually exist before citing them.
The profession that built its reputation on precision, accuracy, and attention to detail has decided to outsource brief-writing to chatbots and then act surprised when those chatbots make shit up.
This is fine. Everything is fine. The legal system is definitely not being destabilized by lawyers who don't understand the difference between ChatGPT and Westlaw.
Think you'd never get sanctioned for AI hallucinations? Think again.
If you've read this far, you already know how not to use ChatGPT in legal practice. But here's the fun part: we've turned all this AI chaos into a CLE course that's actually... entertaining.
Watch our 20-minute demo: "Lawyer, Interrupted: Ethics, AI, and the Future of Billable Oopsies"
It's short, snarky, and teaches you how NOT to end up on a judge's published opinion as a warning to others.
The full version will be a 60-minute ethics CLE — real Model Rule compliance + our original comic duo Oscar & Bruno reenacting what not to do.
Want to turn CLE from a dreadful chore into infotainment?
Tell your CLE provider to add us to their catalog. Forward them this link.
Or just threaten to send them a ChatGPT-generated motion to compel.
Objection? Hit reply and argue your case!
Your inbox is full of legal briefs and client rants. Let Legal LOLz be the newsletter you actually look forward to reading.
P.S. This newsletter is 100% billable if you read it on the clock. Just saying.
Walter, Editor-in-Law
(Still not disbarred. Yet.)
© 2025 All rights reserved. Sharing is cool. Stealing? That's a tort, not a tribute.