Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases
-
everything is semantics.
Lying is telling a falsehood intentionally
LLMs clearly lack the requisite intentionality
They can’t have intent, no?
-
Me, too. But it also means when some people say "that's a lie" they're not accusing you of anything, just remarking you're wrong. And that can lead to misunderstandings.
Yep. Those people are obviously "liars," since they are using an uncommon colloquial definition.
-
Haven't people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.
I heard turning in AI Slop worked out pretty well for Arcane Season 2 writers.
-
You sound like those Republicans who mocked global warming when it snowed in Texas.
Sure, it won't take your job today. In a decade? Probably.
Going off the math and charts that OpenAI and DeepMind both published before the AI boom, which correctly predicted performance-to-cost ratios: we've reached the peak of current models. AI is a bust, mate. In particular, DeepMind concluded that even with infinite resources the models in use would never reach accurate human language capability.
You can say stuff like "they'll just make new models, then!" but it doesn't really work like that. The current models aren't even new in the slightest; it's just the first time we've gotten people together to feed them power and data like logs into a woodchipper.
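To put some shape on that claim, here's roughly what those published curves look like as code. The constants approximate DeepMind's Chinchilla fit (Hoffmann et al. 2022), but treat the whole thing as an illustrative sketch, not the actual published analysis:

    # Toy Chinchilla-style scaling law: loss(N, D) = E + A/N^alpha + B/D^beta.
    # E is the irreducible loss: even with infinite parameters N and training
    # tokens D, predicted loss never drops below it. Constants roughly follow
    # Hoffmann et al. 2022, but take them as illustrative, not authoritative.
    def loss(n_params: float, d_tokens: float,
             e=1.69, a=406.4, b=410.7, alpha=0.34, beta=0.28) -> float:
        return e + a / n_params**alpha + b / d_tokens**beta

    for n in (1e9, 1e10, 1e11, 1e12):
        # Chinchilla rule of thumb: ~20 training tokens per parameter.
        print(f"{n:.0e} params -> predicted loss {loss(n, 20 * n):.3f}")
    # Each 10x of scale buys less and less, and the floor E never moves.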
-
They can’t have intent, no?
Precisely, which is why they cannot lie; they just respond with no real grasp of whether what they output is truth or falsehood.
-
I hate that people can even try to blame AI.
If I typo a couple extra zeroes because my laptop sucks, that doesn't mean I didn't fuck up. I fucked up because of a tool I was using, but I was still the human using that tool.
This is no different.
If a lawyer submits something to court that is fraudulent I don't give a shit if he wrote it on a notepad or told the AI on his phone browser to do it.
He submitted it.
Start yanking law licenses and these lawyers will start re-evaluating whether AI means they can fire all their human assistants and take on even more cases.
Stop acting like this shit is a set of autonomous tools that strips responsibility from decisions; that's literally how Elmo is about to dismantle our federal government.
And they're 100% gonna blame the AI too.
I'm honestly surprised they haven't claimed DOGE is run by AI yet
In this case he got caught because the judge was smart and wasn't using AI. In a few years the new generation of judges will also rely on AI, so basically AI will rule on cases and own the judicial system.
-
LLMs are incapable of helping.
If he cannot find time to construct his own legal briefs, maybe he should use part of his money to hire an AGI (otherwise known as a human) to help him.
Sure. Look, LLMs should be able to help, but only if there's a human to bring meaning. LLMs are basically... what's that word... it's on the tip of my tongue... word-completion engines. So you think something up and it tells you what might come next. It's not how brains work, but it's to language what a calculator is to numbers: a tool. Just learn how to use it for a purpose rather than let it barf out an answer.
-
AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models and work out, based on the prompt you feed them, what a reasonable-sounding response would be.
This is why they’re uncreative and why they “hallucinate”. It’s not thinking about your question and answering it; it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
Don't need something the size of AWS these days. I ran one on my PC last week. But yeah, you're right otherwise.
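You can even watch the "reasonable-sounding response" machinery directly. A minimal sketch using GPT-2 via the Hugging Face transformers library (the model choice is just an example of something small enough for an ordinary PC):

    # Minimal next-token demo: the model assigns a probability to every token
    # in its vocabulary, and generation is just repeatedly sampling from that.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")        # small enough for a PC
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The court held that", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]         # scores for the next token
    probs = torch.softmax(logits, dim=-1)

    values, indices = torch.topk(probs, 5)
    for p, i in zip(values, indices):
        print(f"{tok.decode(int(i))!r}: {p:.3f}")
    # Prints the five most probable continuations. Nothing in here checks
    # truth; it's purely "what plausibly comes next".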
-
We actually did. Trouble being you need experts to feed and update the thing, which works when you're watching dams (where the knowledge doesn't need updating) but fails in e.g. medicine. But during the brief time when those systems were up to date they did some astonishing stuff: they were plugged into the diagnosis loop and would suggest additional tests to doctors, countering organisational blindness. Law is an even more complex matter though, because applying it requires an unbounded amount of real-world and not just expert knowledge, so forget it.
I think it's not the same thing; this requires time, money, and even some expertise to order a study on a specific question.
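(For anyone who hasn't met one: the systems described above are classic rule-based expert systems, built on forward chaining over expert-written rules. A toy sketch, with rules and facts invented purely for illustration:)

    # Toy forward-chaining rule engine, the core of 1980s-style expert systems.
    # Each rule fires when all of its conditions are established facts.
    rules = [
        ({"fever", "cough"}, "suspect infection"),
        ({"suspect infection", "low oxygen"}, "suggest chest x-ray"),
    ]

    def forward_chain(facts: set) -> set:
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # fire the rule
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "low oxygen"}))
    # The second rule fires off the first one's conclusion: that's the
    # "suggest additional tests" behaviour, exactly as good as the rules
    # the experts keep up to date, and no better.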
-
I’m all for lawyers using AI, but that’s because I’m also all for them getting punished for every single incorrect thing they bring forward if they do not verify it.
That is the problem with AI: if I have to check that the output is valid, then what's the damn point?
-
AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models and work out, based on the prompt you feed them, what a reasonable-sounding response would be.
This is why they’re uncreative and why they “hallucinate”. It’s not thinking about your question and answering it; it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
It's like when you're having a conversation on autopilot.
"Mum, can I play with my frisbee?" Sure, honey. "Mum, can I have an ice cream from the fridge?" Sure can. "Mum, can I invade Poland?" Absolutely, whatever you want.
-
You can specifically tell an AI to lie and deceive, though, and it will…
Every time an AI does anything newsworthy, it's just because it's obeying its prompt.
It's like the people who claim AI can replicate itself: yeah, if you tell it to. If you don't give an AI any instructions, it'll sit there and do nothing.
-
That is the problem with AI: if I have to check that the output is valid, then what's the damn point?
"Why don't we build another AI to fix the mistakes?"
I require $100 million funding for this though
-
Going off the math and charts that OpenAI and DeepMind both published before the AI boom, which correctly predicted performance-to-cost ratios: we've reached the peak of current models. AI is a bust, mate. In particular, DeepMind concluded that even with infinite resources the models in use would never reach accurate human language capability.
You can say stuff like "they'll just make new models, then!" but it doesn't really work like that. The current models aren't even new in the slightest; it's just the first time we've gotten people together to feed them power and data like logs into a woodchipper.
All I'm saying is don't be so dismissive about AI taking jobs away from people. Technology improves daily, and all it takes is one smart asshole to make things worse for everyone else.
-
All I'm saying is don't be so dismissive about AI taking jobs away from people. Technology improves daily, and all it takes is one smart asshole to make things worse for everyone else.
I think it's more likely for a stupid asshole to make things worse for everyone else, which is exactly what somebody would be if they replaced human staff with defective chatbots.
-
The judge wrote that he “does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden,” and noted that he’s a vocal advocate for the use of technology in the legal profession. “Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution,” he wrote. “It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.”
I won't even go that far. I can very much believe that you can build an AI capable of doing perfectly-reasonable legal arguments. Might be using technology that looks a lot different from what we have today, but whatever.
The problem is that the lawyer just started using a new technology to produce material he didn't even validate, without determining whether or not it actually worked for what he wanted to do in its current state, and when there was clearly available material showing that it was not in that state.
It's as if a shipbuilder started using a random new substance in its ship hulls without actually conducting serious tests on it, or even looking at the consensus in the shipbuilding industry as to whether the material could fill that role. Just slapped it in the hull and sold it to the customer.
It’s as if a shipbuilder started using a random new substance in its ship hulls without actually conducting serious tests on it, or even looking at the consensus in the shipbuilding industry as to whether the material could fill that role. Meanwhile, the substance is slowly dissolving in water. Just slapped it in the hull and sold it to the customer.
-
AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models and work out, based on the prompt you feed them, what a reasonable-sounding response would be.
This is why they’re uncreative and why they “hallucinate”. It’s not thinking about your question and answering it; it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
I'm G P T and I cannot lie.
You other brothers use 'AI'
But when you file a case
To the judge's face
And say, "made mistakes? Not I!"
He'll be mad!
-
So long as your own lawyer isn't doing the same, of course
I represent myself in all my cases
-
No probably about it, it definitely can't lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.
I'm G P T and I cannot lie.
You other brothers use 'AI'
But when you file a case
To the judge's face
And say, "made mistakes? Not I!"
He'll be mad!