Judges Are Fed Up With Lawyers Using AI That Hallucinates Court Cases
-
But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.
Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.
-
The judge wrote that he “does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden,” and noted that he’s a vocal advocate for the use of technology in the legal profession. “Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution,” he wrote. “It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.”
I won't even go that far. I can very much believe that you could build an AI capable of making perfectly reasonable legal arguments. It might use technology that looks a lot different from what we have today, but whatever.
The problem is that the lawyer just started using a new technology to produce material he didn't even validate, without determining whether it actually worked for what he wanted to do in its current state, and when there was clearly available material showing that it did not.
It's as if a shipbuilder started using some random new substance in a ship's hull without conducting serious tests on it, or even looking at the consensus in the shipbuilding industry as to whether the material could fill that role. Just slapped it in the hull and sold the ship to the customer.
-
But this is exactly what AI is being marketed for. All of Apple's AI ads showcase dumb people who appear smart because the AI bails out their ineptitude.
-
I hate that people can even try to blame the AI.
If I typo a couple extra zeroes because my laptop sucks, that doesn't mean I didn't fuck up. I fucked up because of a tool I was using, but I was still the human using that tool.
This is no different.
If a lawyer submits something to court that is fraudulent I don't give a shit if he wrote it on a notepad or told the AI on his phone browser to do it.
He submitted it.
Start yanking law licenses and these lawyers will start re-evaluating whether AI means they can fire all their human assistants and take on even more cases.
Stop acting like this shit is a set of autonomous tools that strips responsibility from decisions; that's literally how Elmo is about to dismantle our federal government.
And they're 100% gonna blame the AI too.
I'm honestly surprised they haven't claimed DOGE is run by AI yet
-
“Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.
Jesus Christ, y'all. It's like Boomers trying to figure out the internet all over again. Just because AI (probably) can't lie doesn't mean it can't be earnestly wrong. It's not some magical fact machine; it's fancy predictive text.
It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it's important to check people's sources yourself, robot or not.
-
Haven't people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.
-
It's cool, they'll just have an AI source checker.
-
No probably about it, it definitely can't lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.
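To make the "fancy predictive text" point concrete, here's a toy sketch. It's a hand-written word-frequency table, nothing like a real LLM in scale or mechanism, and every case name in it is made up, but the principle is the same: the generator just chains statistically likely continuations, so its output looks like a citation regardless of whether any such case exists.

```python
import random

# Toy next-word model: maps the current word to plausible continuations
# with weights. A real LLM does this over tokens using a neural network,
# but the core loop is the same: pick a likely next word, then repeat.
MODEL = {
    "See": [("Smith", 3), ("Jones", 2)],
    "Smith": [("v.", 5)],
    "Jones": [("v.", 5)],
    "v.": [("United", 2), ("Acme", 3)],
    "United": [("States,", 4)],
    "Acme": [("Corp.,", 4)],
    "States,": [("410", 1)],
    "Corp.,": [("532", 1)],
    "410": [("U.S.", 1)],
    "532": [("U.S.", 1)],
    "U.S.": [("113", 1), ("598", 1)],
}

def generate(start: str, max_words: int = 8) -> str:
    """Chain likely continuations until the model runs out or we hit the cap."""
    words = [start]
    while words[-1] in MODEL and len(words) < max_words:
        choices, weights = zip(*MODEL[words[-1]])
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Every output *looks* like a legal citation; the model has no way to
# know (or care) whether any such case is real.
print(generate("See"))
```

Run it a few times and you get confident-looking strings like "See Jones v. Acme Corp., 532 U.S. 598" with zero grounding in any actual reporter. There's no intent anywhere in that loop, which is exactly why "it lied to me" is the wrong mental model.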
-
Different jurisdiction
-
violently agreeing
Typo? Do you mean vehemently or are you intending to cause harm over this opinion
-
Yeah he basically called the lawyer an idiot.
-
I've been saying this for ages. Even as someone who's more-or-less against the current implementation of AI, I think people who truly believe in AI should be fighting the hardest against bad uses of it. It gives AI a worse black eye every time something like this happens.
-
It's actually been shown that AI can and will lie. When given the ability to cheat at a task, along with instructions not to use it, it will use the tool and flatly deny doing so.
-
All you have to do is run a quick search on the case to see whether it's real.
They bill enough per hour to have some interns doing this all day.
-
Why would one even get the idea to use AI for something like this?
"Two things are infinite: the universe and human stupidity, and I'm not sure about the universe."
-
I call mine a brain!
-
It’s an expression meaning you are arguing/fighting over something when both sides actually hold the same position and didn’t realize it at first.
-
But I was hysterically assured that AI was going to take all our jobs?
-
I don't know if I would call it lying per se, but yes, I have seen instances of AIs being told not to use a specific tool and using it anyway; Neuro-sama comes to mind. I think in those cases it is mostly the front end agreeing not to (since that is what it determines the operator would want to hear) while having no means to actually control the other functions going on.
-
You don't need any knowledge of computers to understand how big a deal it would be if we had actually built a reliable fact machine. For me the only possible explanation is not caring enough to think about it for even a second.