Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases
-
...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?
@Ulrich @ggppjj does it help to compare an image generator to an LLM? With AI art you can tell a computer produced it without "knowing" anything more than what other art of that type looks like. But if you look closer you can also see that it doesn't "know" a lot: extra fingers, hair made of cheese, whatever. LLMs do the same with words. They just calculate what words might realistically sit next to each other given the context of the prompt. It's plausible babble.
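To make the "plausible babble" point concrete, here is a minimal Python sketch of next-word prediction from raw co-occurrence counts. It's a toy stand-in for a real LLM (which uses a neural network over far more context), but the failure mode is the same: the model only knows which words tend to follow which, not whether the resulting sentence is true.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in some training text.
training_text = (
    "the court ruled that the case was dismissed . "
    "the court found that the citation was fabricated . "
    "the lawyer said that the case was real ."
).split()

next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

def babble(start_word, length=8):
    """Generate text by repeatedly picking a statistically plausible next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # plausible, not necessarily true
        output.append(word)
    return " ".join(output)

print(babble("the"))  # e.g. "the court found that the case was real ."
                      # a fluent recombination of the training text, asserted with
                      # no notion of whether it is true
```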
-
Please take a strand of my hair and split it with pointless philosophical semantics.
Our brains are chemical and electric, which is physics, which is math.
/think
Therefore,
I am a product (being) of my environment (locale), experience (input), and nurturing (programming)./think.
What's the difference?
Ask chatgpt, I'm done arguing effective consciousness vs actual consciousness.
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
-
You don't need any knowledge of computers to understand how big a deal it would be if we actually built a reliable fact machine. For me, the only possible explanation is not caring enough to think about it for even a second.
We actually did. The trouble is that you need experts to feed and update the thing, which works when you're monitoring dams (which don't need frequent updates) but fails in, e.g., medicine. Still, during the brief window when those systems were up to date they did some astonishing stuff: plugged into the diagnosis loop, they would suggest additional tests to doctors, countering organisational blindness. Law is an even more complex matter, though, because applying it requires an unbounded amount of real-world knowledge, not just expert knowledge, so forget it.
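For anyone who hasn't run into them, what's being described sounds like the classic rule-based expert systems: a curated pile of if-then rules written and maintained by domain experts, which is exactly why they go stale without constant upkeep. A purely illustrative sketch of the idea (the rules and findings below are made up, not real medical guidance):

```python
# Minimal sketch of a rule-based expert system (illustrative only, not real medicine).
# Each rule is hand-written by an expert; the machine just applies them and can
# explain exactly which rule fired -- and it stops being useful once the rules age.

RULES = [
    # (required findings, suggested follow-up test)
    ({"fever", "stiff neck"}, "suggest lumbar puncture"),
    ({"chest pain", "shortness of breath"}, "suggest ECG and troponin test"),
    ({"fatigue", "pale skin"}, "suggest full blood count"),
]

def suggest_tests(findings):
    """Return every follow-up test whose rule is fully matched by the findings."""
    findings = set(findings)
    return [action for required, action in RULES if required <= findings]

print(suggest_tests({"fever", "stiff neck", "headache"}))
# ['suggest lumbar puncture']
```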
-
“Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.
Jesus Christ, y'all. It's like Boomers trying to figure out the internet all over again. Just because AI (probably) can't lie doesn't mean it can't be earnestly wrong. It's not some magical fact machine; it's fancy predictive text.
It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it's important to check people's sources yourself, robot or not.
AI, specifically Large Language Models, do not "lie" or tell "the truth". They are statistical models that work out, based on the prompt you feed them, what a reasonable-sounding response would be.
This is why they're uncreative and why they "hallucinate". It's not thinking about your question and answering it; it's calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
-
Technically it's not, because the LLM doesn't decide to do anything, it just generates an answer based on a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to say that it is lying if, through the text it generates, it can convince the user of something false.
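As a rough sketch of where that "some randomness" enters: by the last step, the input and the trained weights have been boiled down to scores over candidate next outputs, and the answer is sampled from them. The scores and candidates below are invented for illustration, not taken from any real model.

```python
import math
import random

# Made-up scores ("logits") over candidate continuations, standing in for what a
# trained network would produce from the prompt plus its weights.
logits = {"Smith v. Jones": 2.1, "a real precedent": 1.9, "a made-up case": 1.7}

def sample_next(logits, temperature=1.0):
    """Softmax over the scores, then draw one candidate at random."""
    scaled = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(scaled.values())
    r, cumulative = random.random() * total, 0.0
    for word, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # fallback for floating-point edge cases

print(sample_next(logits, temperature=1.0))   # any of the three, weighted by score
print(sample_next(logits, temperature=0.01))  # almost always the top-scoring option
```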
it just generates an answer based on a mixture of the input and the training data, plus some randomness.
And is that different from the way you make decisions, fundamentally?
-
Please take a strand of my hair and split it with pointless philosophical semantics.
Our brains are chemical and electric, which is physics, which is math.
/think
Therefore,
I am a product (being) of my environment (locale), experience (input), and nurturing (programming)./think.
What's the difference?
Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.
Large language models are primitive, rigid, simplistic, and ultimately expensive.
Plus, LLMs and image/music synths are all trained on stolen data and meant to replace humans, so extra fuck those.
-
it just generates an answer based on a mixture of the input and the training data, plus some randomness.
And is that different from the way you make decisions, fundamentally?
Idk, that's still an area of active research. I certainly think it's very different, since my understanding is that human thought is based on concepts instead of denoising noise or whatever it is LLMs do.
-
...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?
The most amazing feat AI has performed so far is convincing laymen that they’re actually intelligent
-
AI can absolutely lie
a lie is a statement that the speaker knows to be wrong. wouldn't claiming that AIs can lie imply cognition on their part?
-
a lie is a statement that the speaker knows to be wrong. wouldn't claiming that AIs can lie imply cognition on their part?
AIs can generate false statements. It doesn't require a set of beliefs; it merely requires a set of inputs.
-
it just generates an answer based on a mixture of the input and the training data, plus some randomness.
And is that different from the way you make decisions, fundamentally?
I don't think I run on AMD or Intel, so uh, yes.
-
a lie is a statement that the speaker knows to be wrong. wouldn't claiming that AIs can lie imply cognition on their part?
I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.
-
Great news for defendants though. I hope at my next trial I look over at the prosecutor's screen and they're reading off ChatGPT lmao
So long as your own lawyer isn't doing the same, of course
-
I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.
I would fall into the latter category. Lots of people are earnestly wrong without being liars.
-
Eh, they should file a complaint the first time, and the state bar can decide what to do about it.
"We have investigated ourselves and found nothing wrong"
-
I don't think I run on AMD or Intel, so uh, yes.
I didn't say anything about either.
-
But I was hysterically assured that AI was going to take all our jobs?
you sound like those republicans that mocked global warming when it snowed in Texas.
sure, won't take your job today. in a decade? probably.
-
"We have investigated ourselves and found nothing wrong"
The bar might get pretty ruthless for fake case citations.
-
Haven't people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.
There should immediately be a contempt charge for disrespecting the Court.
-
a lie is a statement that the speaker knows to be wrong. wouldn't claiming that AIs can lie imply cognition on their part?
Me: I want you to lie to me about something.
ChatGPT: Alright—did you know that Amazon originally started as a submarine sandwich delivery service before pivoting to books? Jeff Bezos realized that selling hoagies online wasn’t scalable, so he switched to literature instead.