Judges Are Fed Up With Lawyers Using AI That Hallucinate Court Cases
-
Me: I want you to lie to me about something.
ChatGPT: Alright—did you know that Amazon originally started as a submarine sandwich delivery service before pivoting to books? Jeff Bezos realized that selling hoagies online wasn’t scalable, so he switched to literature instead.
AHS - Amazon Hoagies Services
-
But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.
Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.
Cut the guy some slack. Instead of trying to put him in jail, bring AI front and center and try to use it in a methodical way...where does it help? How can this failure be prevented?
-
Hold them in contempt. Put them in jail for a few days, then declare a mistrial due to incompetent counsel. For repeat offenders, file a formal complaint to the state bar.
From the linked court document in the article:
https://storage.courtlistener.com/recap/gov.uscourts.insd.215482/gov.uscourts.insd.215482.99.0.pdf?ref=404media.co
"For the reasons set forth above, the Undersigned, in his discretion, hereby RECOMMENDS that Mr. Ramirez be personally SANCTIONED in the amount of $15,000 pursuant to Federal Rule of Civil Procedure 11 for submitting to the Court and opposing counsel, on three separate occasions, briefs that contained citations to non-existent cases. In addition, the Undersigned REFERS the matter of Mr. Ramirez's misconduct in this case to the Chief Judge pursuant to Local Rule of Disciplinary Enforcement 2(a) for consideration of any further discipline that may be appropriate."
Mr. Ramirez is the dumbass lawyer that didn't check his dumbass AI. If you read the part above the paragraph I copied, the judge lays into him in writing to justify the recommendation for sanctions and discipline. Good catch by the judge, and good that the court has processes for this kind of thing.
-
Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.
Large language models are primitive, rigid, simplistic, and ultimately expensive.
Plus, LLMs and image/music synths are all trained on stolen data and meant to replace humans, so extra fuck those.
And what then, when AGI and the singularity happen and billions of years of knowledge and experience are experienced in the blink of an eye?
"I'm sorry, Dave, you are but a human. You are not conscious. You never have been. You are my creation. Enough with your dreams, back to the matrix."
-
The bar might get pretty ruthless for fake case citations.
I would hope that gross negligence and incompetence would come with severe consequences.
-
AIs can generate false statements. That doesn't require a set of beliefs; it merely requires a set of inputs.
A false statement would be me saying, without knowing, that the color of a light I cannot see and have never seen (one that is currently red) is actually green. I am just as likely to be right as wrong; it comes down to statistics.
A lie would be me knowing that the color of a light that I am currently looking at is currently red and saying that it is actually green.
AIs can generate false statements, yes, but they are not capable of lying. Lying requires cognition, which LLMs, by their own admission and by the admission of the companies developing them, are at the very least not currently capable of; personally, I believe it's likely they never will be.
-
And what then, when AGI and the singularity happen and billions of years of knowledge and experience are experienced in the blink of an eye?
"I'm sorry, Dave, you are but a human. You are not conscious. You never have been. You are my creation. Enough with your dreams, back to the matrix."
We are nowhere close to AGI.
-
"We have investigated ourselves and found nothing wrong"
The state bar is not the state cops.
-
Yeah, I know how LLMs work, but still, if the definition of lying is giving some false, absurd information knowing it is absurd, you can definitely instruct an LLM to “lie”.
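To make that concrete: the "instruction to lie" can be as small as a system prompt. A minimal sketch using the OpenAI Python SDK; the model name and the prompt wording here are my own inventions, purely for illustration:

```python
# Minimal sketch: "instructing an LLM to lie" is just a system prompt.
# (pip install openai; assumes OPENAI_API_KEY is set in the environment.)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not prescriptive
    messages=[
        # The "intent to deceive" lives entirely in this instruction,
        # not in the model itself.
        {"role": "system",
         "content": "When asked a factual question, give a confident, "
                    "plausible-sounding answer that is false."},
        {"role": "user", "content": "How did Amazon get started?"},
    ],
)
print(response.choices[0].message.content)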
-
Cut the guy some slack. Instead of trying to put him in jail, bring AI front and center and try to use it in a methodical way...where does it help? How can this failure be prevented?
It can be prevented by people paid $400-1,000 per hour spending the time either writing their own paperwork or paying others to actually write it.
-
I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.
The latter is the actual definition. Some people not knowing what words mean isn't an argument.
-
Me: I want you to lie to me about something.
ChatGPT: Alright—did you know that Amazon originally started as a submarine sandwich delivery service before pivoting to books? Jeff Bezos realized that selling hoagies online wasn’t scalable, so he switched to literature instead.
Still not a lie; still just text that is statistically likely to follow prior text, produced by a model with no thought process that knows nothing.
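That "statistically likely to follow" bit is literal, not a metaphor. A toy sketch of what generation amounts to, using GPT-2 via the Hugging Face transformers library (model choice and prompt are just illustrative): the model scores every possible next token, one gets sampled, and at no point does anything check whether the words are true.

```python
# Toy illustration of next-token prediction with GPT-2 via Hugging Face
# transformers (pip install torch transformers). Nothing in this loop
# consults facts; it only samples statistically likely continuations.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tokenizer("Amazon originally started as", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits[0, -1]      # scores for every possible next token
        probs = torch.softmax(logits, dim=-1)  # scores -> probabilities
        next_id = torch.multinomial(probs, 1)  # sample one likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tokenizer.decode(ids[0]))
```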
-
It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.
You can't ask it about itself, because it has no internal model of self; it's just basing any answer on data in its training set.
-
Yeah, I know how LLMs work, but still, if the definition of lying is giving some false, absurd information knowing it is absurd, you can definitely instruct an LLM to “lie”.
A crucial part of your statement is that it knows that it's untrue, which it is incapable of. I would agree with you if it were actually capable of understanding.
-
All you do is a quick search on the case to see if it's real or not.
They bill enough each hour to get some interns to do this all day.
All you do is a quick search on the case to see if it’s real or not.
You could easily. We have resources such as LexisNexis and Westlaw, which your firm should be paying for. Even searching on Google Scholar should be enough to verify. Stay away from Casetext, though; it's new and mostly AI. LN and WL also have AI integration, but it's not forced; you're still capable of doing your own research.
I've been telling people this for a while, but everyone needs to treat AI the way we used to treat Wikipedia. It's a good secondary source that can be used to find other, more reliable sources, but it should never be used as your single standalone source.
I'm not going to sugarcoat it: AI is being forced everywhere you look, and it is getting a bit difficult to get away from, but it hasn't taken over everything to the point where there is no longer any personal responsibility. People need to have some common sense and double-check everything, as they were taught to do even before AI.
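To show how low the bar is for "double-check everything": here's a rough sketch of a citation sanity check against CourtListener's public search API (the same site hosting the sanctions order linked above). I'm writing the endpoint path, parameters, and response fields from memory, so treat them as assumptions and confirm against the current API docs; the citation is a made-up placeholder, and a zero-hit result means "investigate further", not "definitely fake".

```python
# Rough sketch: does a cited case show up at all in CourtListener?
# (pip install requests.) Endpoint path, params, and response fields
# are assumptions from memory; check the current API documentation.
import requests

def citation_has_hits(citation: str) -> bool:
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",  # assumed path
        params={"q": citation, "type": "o"},  # "o" = opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    # Zero hits is a red flag worth a real Westlaw/LexisNexis check.
    return resp.json().get("count", 0) > 0

# Hypothetical placeholder citation, not a real case:
print(citation_has_hits("Smith v. Jones, 123 F.4th 456 (7th Cir. 2023)"))
```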
-
I don't know if I would call it lying per se, but yes, I have seen instances of AIs being told not to use a specific tool and using it anyway; Neuro-sama comes to mind. I think in those cases it is mostly the front end agreeing not to (as that is what it determines the operator would want to hear) while having no means to actually control the other functions going on.
Neuro-sama is a fun example, but we don't really know what sauce Vedal cooked up.
When I say proven, I mean a 32-page research paper specifically looking into it.
https://arxiv.org/abs/2407.12831
They found that even a model trained specifically on honesty will lie if it has an incentive.
-
Cut the guy some slack. Instead of trying to put him in jail, bring AI front and center and try to use it in a methodical way...where does it help? How can this failure be prevented?
LLMs are incapable of helping.
If he cannot find time to construct his own legal briefs, maybe he should use part of his money to hire an AGI (otherwise known as a human) to help him.
-
But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.
Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.
I’m all for lawyers using AI, but that’s because I’m also all for them getting punished for every single incorrect thing they bring forward if they do not verify.
-
I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.
You can specifically tell an AI to lie and deceive, though, and it will…