Judges Are Fed Up With Lawyers Using AI That Hallucinates Court Cases
-
Cut the guy some slack. Instead of trying to put him in jail, bring AI front and center and try to use it in a methodical way...where does it help? How can this failure be prevented?
LLMs are incapable of helping.
If he cannot find time to construct his own legal briefs, maybe he should use part of his money to hire an AGI (otherwise known as a human) to help him.
-
But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.
Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.
I’m all for lawyers using AI, but that’s because I’m also all for them getting punished for every single incorrect thing they bring forward if they do not verify.
-
I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.
You can specifically tell an AI to lie and deceive, though, and it will…
-
Still not a lie, just text that is statistically likely to follow prior text, produced by a model with no thought process that knows nothing.
A lie is a falsehood, an untrue statement. Intent is important in a human, but not so much in a computer, which, if we are saying it cannot lie, also cannot tell the truth.
-
No probably about it, it definitely can't lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.
So it can not tell the truth either
-
So it can not tell the truth either
Not really, no.
They are statistical models that use heuristics to output what is most likely to follow the input you give it. They are in essence mimicking their training data.
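That idea can be sketched with a toy bigram model (the training text and names here are invented for illustration, and real LLMs are vastly more sophisticated): it just counts which word follows which in its training data, then "completes" a prompt by emitting the most frequent next word. There is no notion of true or false anywhere in it.

```python
from collections import Counter, defaultdict

# Toy training corpus; the model can only ever mimic this.
corpus = "the court ruled for the plaintiff and the court awarded damages".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt_word, n=3):
    """Extend the prompt by always picking the most frequent next word."""
    out = [prompt_word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break  # never seen this word followed by anything
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # -> "the court ruled for"
```

The output is fluent-looking text assembled purely from frequency statistics, which is the point: plausibility, not truth, is the only thing being optimized.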
-
Not really, no.
They are statistical models that use heuristics to output what is most likely to follow the input you give it. They are in essence mimicking their training data.
So I think this whole thing about whether it can lie or not is just semantics then no?
-
So I think this whole thing about whether it can lie or not is just semantics then no?
Everything is semantics.
Lying is telling a falsehood intentionally.
LLMs clearly lack the prerequisite intentionality.
-
Everything is semantics.
Lying is telling a falsehood intentionally.
LLMs clearly lack the prerequisite intentionality.
They can’t have intent, no?
-
Me, too. But it also means when some people say "that's a lie" they're not accusing you of anything, just remarking you're wrong. And that can lead to misunderstandings.
Yep. Those people are obviously "liars," since they are using an uncommon colloquial definition.
-
Haven't people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.
I heard turning in AI Slop worked out pretty well for Arcane Season 2 writers.
-
you sound like those republicans that mocked global warming when it snowed in Texas.
sure, won't take your job today. in a decade? probably.
Going off the math and charts that OpenAI and DeepMind both published before the AI boom, which correctly predicted performance-to-cost ratios: we've reached the peak of current models. AI is a bust, mate. In particular, DeepMind concluded that even with infinite resources the models in use would never reach accurate human language capabilities.
You can say stuff like "they'll just make new models, then!" but it doesn't really work like that. The current models aren't even new in the slightest; it's just the first time we've gotten people together to feed them power and data like logs into a woodchipper.
-
They can’t have intent, no?
Precisely, which is why they cannot lie; they just respond with no real grasp of whether what they output is truth or falsehood.
-
I hate that people can even try to blame the AI.
If I typo a couple extra zeroes because my laptop sucks, that doesn't mean I didn't fuck up. I fucked up because of a tool I was using, but I was still the human using that tool.
This is no different.
If a lawyer submits something to court that is fraudulent I don't give a shit if he wrote it on a notepad or told the AI on his phone browser to do it.
He submitted it.
Start yanking law licenses and these lawyers will start re-evaluating whether AI means they can fire all their human assistants and take on even more cases.
Stop acting like this shit is an autonomous tool that strips responsibility from decisions; that's literally how Elmo is about to dismantle our federal government.
And they're 100% gonna blame the AI too.
I'm honestly surprised they haven't claimed DOGE is run by AI yet
In this case he got caught because of a smart judge who wasn't using AI. In a few years the new generation of judges will also rely on AI, so basically AI will rule on the cases and own the judicial system.
-
LLMs are incapable of helping.
If he cannot find time to construct his own legal briefs, maybe he should use part of his money to hire an AGI (otherwise known as a human) to help him.
Sure. Look, LLMs should be able to help, but only if there's a human to bring meaning. LLMs are basically... what's that word... it's on the tip of my tongue... word completion engines. So you think something up and it tells you what might come next. It's not how brains work, but it's what a calculator is to numbers... a tool. Just learn how to use it for a purpose rather than let it barf out an answer.
-
AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models and work out, based on the prompt you feed them, what a reasonable-sounding response would be.
This is why they’re uncreative and they “hallucinate”. It’s not thinking about your question and answering it, it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
Don't need something the size of AWS these days. I ran one on my PC last week. But yeah, you're right otherwise.
-
We actually did. The trouble is that you need experts to feed and update the thing, which works when you're watching dams (knowledge that doesn't need updating) but fails in e.g. medicine. But during the brief time when those systems were up to date they did some astonishing stuff: they were plugged into the diagnosis loop and would suggest additional tests to doctors, countering organisational blindness. Law is an even more complex matter, though, because applying it requires an unbounded amount of real-world and not just expert knowledge, so forget it.
I think it's not the same thing; this requires time, money, and even some expertise to order a study on a specific question.
-
I’m all for lawyers using AI, but that’s because I’m also all for them getting punished for every single incorrect thing they bring forward if they do not verify.
That is the problem with AI: if I have to check that the output is valid, then what's the damn point?
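Checking is still far cheaper than drafting, though, and the checking can itself be mechanical. A minimal sketch (the case index and citations here are entirely made up for illustration; in practice you'd query Westlaw, LexisNexis, or a court docket): every citation the model emits gets looked up against a trusted index, and anything unconfirmed goes back to the human.

```python
# Hypothetical trusted index of real citations (a stand-in for a
# real legal database lookup).
KNOWN_CASES = {
    "Smith v. Jones, 123 F.3d 456",
    "Doe v. Roe, 789 U.S. 12",
}

def vet_citations(draft_citations):
    """Split model-supplied citations into verified and suspect lists."""
    verified = [c for c in draft_citations if c in KNOWN_CASES]
    suspect = [c for c in draft_citations if c not in KNOWN_CASES]
    return verified, suspect

draft = [
    "Smith v. Jones, 123 F.3d 456",
    "Totally Real v. Definitely Exists, 999 F.4th 1",  # hallucinated
]
ok, bad = vet_citations(draft)
print("needs human review:", bad)
```

The point of the sketch: verification is a lookup, while drafting is the slow part, so "check everything the model says" is not the same cost as "write it all yourself."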
-
AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models and work out, based on the prompt you feed them, what a reasonable-sounding response would be.
This is why they’re uncreative and they “hallucinate”. It’s not thinking about your question and answering it, it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
It's like when you're having a conversation on autopilot.
"Mum, can I play with my frisbee?" Sure, honey. "Mum, can I have an ice cream from the fridge?" Sure can. "Mum, can I invade Poland?" Absolutely, whatever you want.
-
You can specifically tell an AI to lie and deceive, though, and it will…
Every time an AI does anything newsworthy, it's just because it's obeying its prompt.
It's like the people who claim AI can replicate itself: yeah, if you tell it to. If you don't give an AI any instructions, it'll sit there and do nothing.