Judges Are Fed Up With Lawyers Using AI That Hallucinates Court Cases
-
“Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.
Jesus Christ, y'all. It's like Boomers trying to figure out the internet all over again. Just because AI (probably) can't lie doesn't mean it can't be earnestly wrong. It's not some magical fact machine; it's fancy predictive text.
It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it's important to check people's sources yourself, robot or not.
It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.
-
No probably about it, it definitely can't lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.
A bit out of context, but you remind me of some thinking I heard recently about lying vs. bullshitting.
Lying, as you said, requires quite a lot of energy: you need an idea of what the truth is, and you commit yourself to a long-term struggle to maintain your lie and keep it coherent as the world goes on.
Bullshit, on the other hand, is much more accessible: you just say things and never look back on them. It's very easy to pile up a ton of it, and it's much harder to attack you on any of it because each piece carries so little weight.
So in that view, a bullshitter doesn't give any shit about the truth, while a liar is a bit more "noble".
-
A bit out of context, but you remind me of some thinking I heard recently about lying vs. bullshitting.
Lying, as you said, requires quite a lot of energy: you need an idea of what the truth is, and you commit yourself to a long-term struggle to maintain your lie and keep it coherent as the world goes on.
Bullshit, on the other hand, is much more accessible: you just say things and never look back on them. It's very easy to pile up a ton of it, and it's much harder to attack you on any of it because each piece carries so little weight.
So in that view, a bullshitter doesn't give any shit about the truth, while a liar is a bit more "noble".
I think the important point is that LLMs as we understand them do not have intent. They are fantastic at producing output that appears to meet the requirements set by the input text, and when they actually do meet those requirements, instead of just seeming to, they can provide genuinely helpful info. But it's very easy not to immediately know the difference between output that looks correct, which satisfies the purpose of the LLM, and output that actually is correct, which satisfies the purpose of the user.
-
All it takes is a quick search on the case to see if it's real or not.
They bill enough each hour to get some interns to do this all day.
I'm pretty sure that just doing "quick searches" is exactly how he ended up with AI answers to begin with.
-
I'm pretty sure that just doing "quick searches" is exactly how he ended up with AI answers to begin with.
I don't think PACER or the state equivalents use AI summary tools yet.
-
But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.
Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.
Great news for defendants though. I hope at my next trial I look over at the prosecutor's screen and they're reading off ChatGPT lmao
-
“Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.
Jesus Christ, y'all. It's like Boomers trying to figure out the internet all over again. Just because AI (probably) can't lie doesn't mean it can't be earnestly wrong. It's not some magical fact machine; it's fancy predictive text.
It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it's important to check people's sources yourself, robot or not.
AI can absolutely lie
-
It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.
Lying requires intent. Currently popular LLMs build responses one token at a time—when it starts writing a sentence, it doesn't know how it will end, and therefore can't have an opinion about the truth value of it. (I'd go further and claim it can't really "have an opinion" about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after it has been generated, when generating the next token.
"Admitting" that it's lying only proves that it has been exposed to "admission" as a pattern in its training data.
-
Lying requires intent. Currently popular LLMs build responses one token at a time—when it starts writing a sentence, it doesn't know how it will end, and therefore can't have an opinion about the truth value of it. (I'd go further and claim it can't really "have an opinion" about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after it has been generated, when generating the next token.
"Admitting" that it's lying only proves that it has been exposed to "admission" as a pattern in its training data.
It knows the answer it's giving you is wrong, and it will even say as much. I'd consider that intent.
-
Lying requires intent. Currently popular LLMs build responses one token at a time—when it starts writing a sentence, it doesn't know how it will end, and therefore can't have an opinion about the truth value of it. (I'd go further and claim it can't really "have an opinion" about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after it has been generated, when generating the next token.
"Admitting" that it's lying only proves that it has been exposed to "admission" as a pattern in its training data.
I strongly worry that humans really weren't ready for this "good enough" product to be their first "real" interaction with something that can easily pass as an AGI, at least to anyone without near-philosophical knowledge of the difference between an AGI and an LLM.
It's obscenely hard to keep the fact that it is a very good pattern-matching auto-correct in mind when you're several comments deep into a genuinely actually no lie completely pointless debate against spooky math.
-
But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.
Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.
Hold them in contempt. Put them in jail for a few days, then declare a mistrial due to incompetent counsel. For repeat offenders, file a formal complaint to the state bar.
-
It knows the answer it's giving you is wrong, and it will even say as much. I'd consider that intent.
It is incapable of knowledge; it is math
-
It is incapable of knowledge; it is math
...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?
-
You don't need any knowledge of computers to understand how big of a deal it would be if we actually built a reliable fact machine. For me, the only possible explanation is that people don't care enough to stop and think about it for a second.
We did, a long time ago. It's called an encyclopedia.
If humans can't be trusted to only provide facts, how can we be trusted to make a machine that only provides facts? How do we deal with disputed truths? Grey areas?
-
...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?
What do you believe it is actively doing?
Again, it is very cool and incredibly good math that provides the next word in the chain that most likely matches what came before it. They do not think. Even models that deliberate are essentially just self-reinforcing the internal math with what is basically a second LLM to keep the first on-task, because that appears to help distribute the probabilities better.
I will not answer the brain question until LLMs have brains also.
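For what it's worth, the "models that deliberate" setup can be sketched roughly like this (`model` here is a stand-in for any text-in/text-out LLM call, and the prompts are hypothetical). Both calls are still next-token prediction, so nothing in the loop consults the real world:

```python
# Hypothetical critique-and-revise loop: a second pass over the first
# model's draft, folded back into the prompt. It nudges probability
# mass toward more self-consistent answers; it does not add a fact
# checker.
def deliberate(question, model, rounds=2):
    answer = model(question)
    for _ in range(rounds):
        critique = model(f"Critique this answer for errors: {answer}")
        answer = model(
            f"Question: {question}\nCritique: {critique}\nRevised answer:"
        )
    return answer
```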
-
violently agreeing
Typo? Do you mean vehemently or are you intending to cause harm over this opinion
They're synonyms in this case, so either works here
-
Hold them in contempt. Put them in jail for a few days, then declare a mistrial due to incompetent counsel. For repeat offenders, file a formal complaint to the state bar.
Eh, they should file a complaint the first time, and the state bar can decide what to do about it.
-
It knows the answer it's giving you is wrong, and it will even say as much. I'd consider that intent.
Technically it's not, because the LLM doesn't decide to do anything; it just generates an answer based on a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to say that it is lying if it can convince the user, through the text it generates, that it is lying.
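The "plus some randomness" part is literal, by the way. Here's a minimal sketch of temperature sampling (made-up scores, plain Python): the model's raw scores get rescaled, then one token is drawn at random, which is partly why the same question can get a true answer one day and a fabricated case citation the next.

```python
import math
import random

def sample(logits, temperature=0.8):
    # Lower temperature = more deterministic; higher = more random.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Invented scores for the next word after "the capital of France is":
print(sample({"Paris": 5.0, "Lyon": 2.0, "Berlin": 1.0}))
```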
-
I hate that people can even try to blame the AI.
If I typo a couple extra zeroes because my laptop sucks, that doesn't mean I didn't fuck up. I fucked up because of a tool I was using, but I was still the human using that tool.
This is no different.
If a lawyer submits something to court that is fraudulent I don't give a shit if he wrote it on a notepad or told the AI on his phone browser to do it.
He submitted it.
Start yanking law licenses and these lawyers will start re-evaluating whether AI means they can fire all their human assistants and take on even more cases.
Stop acting like this shit is an autonomous tool that strips responsibility from decisions; that's literally how Elmo is about to dismantle our federal government.
And they're 100% gonna blame the AI too.
I'm honestly surprised they haven't claimed DOGE is run by AI yet
Exactly. If you want to use AI for something, cool, but you own the results. You can try suing the AI company for bad output, but you can't use the AI as an excuse to get out of negative consequences for something you are expected to do.
-
It is incapable of knowledge; it is math
Please take a strand of my hair and split it with pointless philosophical semantics.
Our brains are chemical and electric, which is physics, which is math.
/think
Therefore,
I am a product (being) of my environment (locale), experience (input), and nurturing (programming). /think
What's the difference?