Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases
-
I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.
I would fall into the latter category. Lots of people are earnestly wrong without being liars.
-
Eh, they should file a complaint the first time, and the state bar can decide what to do about it.
"We have investigated ourselves and found nothing wrong"
-
I don't think I run on AMD or Intel, so uh, yes.
I didn't say anything about either.
-
But I was hysterically assured that AI was going to take all our jobs?
You sound like those Republicans who mocked global warming when it snowed in Texas.
Sure, it won't take your job today. In a decade? Probably.
-
"We have investigated ourselves and found nothing wrong"
The bar might get pretty ruthless for fake case citations.
-
Haven't people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.
There should be an immediate contempt charge for disrespecting the Court.
-
A lie is a statement that the speaker knows to be wrong. Wouldn't claiming that AIs can lie imply cognition on their part?
Me: I want you to lie to me about something.
ChatGPT: Alright—did you know that Amazon originally started as a submarine sandwich delivery service before pivoting to books? Jeff Bezos realized that selling hoagies online wasn’t scalable, so he switched to literature instead.
-
I would fall into the latter category. Lots of people are earnestly wrong without being liars.
Me, too. But it also means when some people say "that's a lie" they're not accusing you of anything, just remarking you're wrong. And that can lead to misunderstandings.
-
A lie is a statement that the speaker knows to be wrong. Wouldn't claiming that AIs can lie imply cognition on their part?
AI is just stringing words together that are statistically likely to appear near each other. It's a giant, complex statistical model, but it has no awareness of truth or lying.
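As a rough illustration of that point, here is a tiny Python sketch with a made-up vocabulary and made-up probabilities (nothing like a real LLM, which works over huge neural networks and token vocabularies): the generator only ever samples a statistically plausible next word, and nothing in it represents whether the resulting sentence is true.

import random

# Toy illustration, not a real language model: the "model" is just a
# hand-written table of next-word probabilities. It only encodes what
# tends to follow what, never whether the output is true.
next_word_probs = {
    "Amazon": {"started": 0.6, "sells": 0.4},
    "started": {"as": 1.0},
    "as": {"a": 1.0},
    "a": {"bookstore": 0.5, "sandwich": 0.5},  # both continuations look "plausible"
    "sells": {"books": 1.0},
    "sandwich": {"shop": 1.0},
}

def generate(word):
    # Keep sampling a statistically likely next word until we fall off the table.
    out = [word]
    while word in next_word_probs:
        options = next_word_probs[word]
        word = random.choices(list(options), weights=options.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("Amazon"))  # e.g. "Amazon started as a sandwich shop": fluent, never fact-checked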
-
You don't need any knowledge of computers to understand how big a deal it would be if we actually built a reliable fact machine. For me, the only possible explanation is not caring enough to think about it for even a second.
That's fundamentally impossible. There's always some baseline you trust that decides what is true.
-
Yeah lol, and it's trivial to show
-
Me: I want you to lie to me about something.
ChatGPT: Alright—did you know that Amazon originally started as a submarine sandwich delivery service before pivoting to books? Jeff Bezos realized that selling hoagies online wasn’t scalable, so he switched to literature instead.
AHS - Amazon Hoagies Services
-
But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.
Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.
Cut the guy some slack. Instead of trying to put him in jail, bring AI front and center and try to use it in a methodical way...where does it help? How can this failure be prevented?
-
Hold them in contempt. Put them in jail for a few days, then declare a mistrial due to incompetent counsel. For repeat offenders, file a formal complaint to the state bar.
From the linked court document in the article:
https://storage.courtlistener.com/recap/gov.uscourts.insd.215482/gov.uscourts.insd.215482.99.0.pdf?ref=404media.co
"For the reasons set forth above, the Undersigned, in his discretion, hereby RECOMMENDS that Mr. Ramirez be personally SANCTIONED in the amount of $15,000 pursuant to Federal Rule of Civil Procedure 11 for submitting to the Court and opposing counsel, on three separate occasions, briefs that contained citations to non-existent cases. In addition, the Undersigned REFERS the matter of Mr. Ramirez's misconduct in this case to the Chief Judge pursuant to Local Rule of Disciplinary Enforcement 2(a) for consideration of any further discipline that may be appropriate."
Mr. Ramirez is the dumbass lawyer that didn't check his dumbass AI. If you read above the paragraph I copied from, he gets laid into by the judge in writing to justify the recommendation for sanctions and discipline. Good catch by the judge and the processes they have for this kind of thing.
-
Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.
Large language models are primitive, rigid, simplistic, and ultimately expensive.
Plus LLMs, image/music synths, are all trained on stolen data and meant to replace humans; so extra fuck those.
And what then, when AGI and the singularity happen and billions of years of knowledge and experience are experienced in the blink of an eye?
"I'm sorry, Dave, you are but a human. You are not conscious. You never have been. You are my creation. Enough with your dreams, back to the matrix."
-
The bar might get pretty ruthless for fake case citations.
I would hope that gross negligence and incompetence would come with severe consequences.
-
AIs can generate false statements. It doesn't require a set of beliefs; it merely requires a set of inputs.
A false statement would be me saying, without knowing, that a light I cannot see and have never seen, and which is currently red, is actually green. I'm just as likely to be right as to be wrong; it comes down to statistics.
A lie would be me knowing that the color of a light that I am currently looking at is currently red and saying that it is actually green.
AIs can generate false statements, yes, but they are not capable of lying. Lying requires cognition, which LLMs are, by their own admission and by the admission of the companies developing them, at the very least not currently capable of, and personally I believe that it's likely that LLMs never will be.
-
And what then, when AGI and the singularity happen and billions of years of knowledge and experience are experienced in the blink of an eye?
"I'm sorry, Dave, you are but a human. You are not conscious. You never have been. You are my creation. Enough with your dreams, back to the matrix."
We are nowhere close to AGI.
-
Me: I want you to lie to me about something.
ChatGPT: Alright—did you know that Amazon originally started as a submarine sandwich delivery service before pivoting to books? Jeff Bezos realized that selling hoagies online wasn’t scalable, so he switched to literature instead.
-
"We have investigated ourselves and found nothing wrong"
The state bar is not the state cops.