agnos.is Forums


Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases

Technology · 130 Posts · 76 Posters · 426 Views
  • W [email protected]

    Cut the guy some slack. Instead of trying to put him in jail, bring AI front and center and try to use it in a methodical way...where does it help? How can this failure be prevented?

    [email protected] replied (#82):

    LLMs are incapable of helping.
    If he cannot find time to construct his own legal briefs, maybe he should use part of his money to hire an AGI (otherwise known as a human) to help him.

    • S [email protected]

      But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

      Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.

      [email protected] replied (#83):

      I’m all for lawyers using AI, but that’s because I’m also all for them getting punished for every single incorrect thing they bring forward if they do not verify.

      • R [email protected]

        I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.

        [email protected] replied (#84):

        You can specifically tell an AI to lie and deceive, though, and it will…

        • M [email protected]

          Still not a lie; still text that is statistically likely to follow prior text, produced by a model with no thought process that knows nothing.

          [email protected] replied (#85):

          A lie is a falsehood, an untrue statement. Intent is important for a human, but not so much for a computer, which, if we are saying it cannot lie, also cannot tell the truth.

          • C [email protected]

            No probably about it, it definitely can't lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.

            [email protected] replied (#86):

            So it can not tell the truth either

            • D [email protected]

              So it can not tell the truth either

              [email protected] replied (#87):

              Not really, no.
              They are statistical models that use heuristics to output what is most likely to follow the input you give them.

              They are, in essence, mimicking their training data.
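              As a toy sketch of that idea, here is a hypothetical word-level bigram model in Python. It is nothing like a real LLM in scale or architecture (the names `training_text` and `next_word` are made up for illustration), but it shows the same "emit the statistically likely next word" principle:

              ```python
              import random
              from collections import defaultdict

              # Hypothetical toy "language model": count which word follows which in some
              # training text, then keep emitting a statistically likely next word.
              # It has no knowledge or intent -- it only mimics its training data.
              training_text = "the court finds the motion is denied the court finds the claim is valid"

              follow_counts = defaultdict(lambda: defaultdict(int))
              words = training_text.split()
              for prev, nxt in zip(words, words[1:]):
                  follow_counts[prev][nxt] += 1

              def next_word(prev):
                  options = follow_counts.get(prev)
                  if not options:
                      return None
                  candidates, weights = zip(*options.items())
                  return random.choices(candidates, weights=weights)[0]

              output = ["the"]
              for _ in range(8):
                  nxt = next_word(output[-1])
                  if nxt is None:
                      break
                  output.append(nxt)

              # Prints something plausible-sounding assembled from the training data,
              # with no grasp of whether any of it is true.
              print(" ".join(output))
              ```

              Scale that idea up by many orders of magnitude and you get something that sounds fluent, but the mechanism never involves knowing or intending anything.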

              • F [email protected]

                Not really, no.
                They are statistical models that use heuristics to output what is most likely to follow the input you give them.

                They are, in essence, mimicking their training data.

                [email protected] replied (#88):

                So I think this whole thing about whether it can lie or not is just semantics then no?

                • D [email protected]

                  So I think this whole thing about whether it can lie or not is just semantics then no?

                  [email protected] replied (#89):

                  Everything is semantics.

                  Lying is telling a falsehood intentionally.

                  LLMs clearly lack the prerequisite intentionality.

                  • F [email protected]

                    Everything is semantics.

                    Lying is telling a falsehood intentionally.

                    LLMs clearly lack the prerequisite intentionality.

                    [email protected] replied (#90):

                    They can’t have intent, no?

                    • R [email protected]

                      Me, too. But it also means when some people say "that's a lie" they're not accusing you of anything, just remarking you're wrong. And that can lead to misunderstandings.

                      [email protected] replied (#91):

                      Yep. Those people are obviously "liars," since they are using an uncommon colloquial definition. 😉

  • [email protected] wrote:

                        Haven't people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.

                        [email protected] replied (#92):

                        I heard turning in AI Slop worked out pretty well for Arcane Season 2 writers.

                        • G [email protected]

                          you sound like those republicans that mocked global warming when it snowed in Texas.

                          sure, won't take your job today. in a decade? probably.

                          [email protected] replied (#93):

                          Going off the math and charts that OpenAI and DeepMind both published before the AI boom, which correctly predicted performance-to-cost ratios: we've reached the peak of current models. AI is a bust, mate. In particular, DeepMind concluded that even with infinite resources the models in use would never reach accurate human language capabilities.

                          You can say stuff like "they'll just make new models, then!" but it doesn't really work like that. The current models aren't even new in the slightest; it's just the first time we've gotten people together to feed them power and data like logs into a woodchipper.

                          • D [email protected]

                            They can’t have intent, no?

                            [email protected] replied (#94):

                            Precisely, which is why they cannot lie; they just respond with no real grasp of whether what they output is truth or falsehood.

                            • G [email protected]

                              I hate that people can even try to blame AI.

                              If I typo a couple extra zeroes because my laptop sucks, that doesn't mean I didn't fuck up. I fucked up because of a tool I was using, but I was still the human using that tool.

                              This is no different.

                              If a lawyer submits something to court that is fraudulent I don't give a shit if he wrote it on a notepad or told the AI on his phone browser to do it.

                              He submitted it.

                              Start yanking law licenses and these lawyers will start re-evaluating whether AI means they can fire all their human assistants and take on even more cases.

                              Stop acting like these are autonomous tools that strip responsibility from decisions; that's literally how Elmo is about to dismantle our federal government.

                              And they're 100% gonna blame the AI too.

                              I'm honestly surprised they haven't claimed DOGE is run by AI yet

                              [email protected] replied (#95):

                              In this case he got caught because a smart judge wasn't using AI. In a few years the new generation of judges will also rely on AI, so basically AI will rule on the cases and own the judicial system.

                              • A [email protected]

                                LLMs are incapable of helping.
                                If he cannot find time to construct his own legal briefs, maybe he should use part of his money to hire an AGI (otherwise known as a human) to help him.

                                [email protected] replied (#96):

                                Sure. Look, LLMs should be able to help, but only if there's a human to bring meaning. LLMs are basically... what's that word... it's on the tip of my tongue... word completion engines. You think something up and it tells you what might come next. It's not how brains work, but it's like what a calculator is to numbers: a tool. Just learn how to use it for a purpose rather than let it barf out an answer.

                                • 4 [email protected]

                                  AI, specifically Large Language Models, do not "lie" or tell "the truth". They are statistical models and work out, based on the prompt you feed them, what a reasonable-sounding response would be.

                                  This is why they're uncreative and why they "hallucinate". It's not thinking about your question and answering it; it's calculating what words will placate you, using a calculation that runs on a computer the size of AWS.

                                  [email protected] replied (#97):

                                  Don't need something the size of AWS these days. I ran one on my PC last week. But yeah, you're right otherwise.
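                                  For what it's worth, here's a minimal sketch of the kind of thing that runs on an ordinary PC. It assumes the Hugging Face transformers library and the small gpt2 checkpoint as examples (both are assumptions; any similarly small model would do):

                                  ```python
                                  # Minimal sketch: run a small language model locally with Hugging Face transformers.
                                  # Assumes `pip install transformers torch`; gpt2 is small enough for an ordinary PC.
                                  from transformers import pipeline

                                  generator = pipeline("text-generation", model="gpt2")

                                  prompt = "The judge reviewed the brief and"
                                  result = generator(prompt, max_new_tokens=40, do_sample=True)

                                  # The continuation is just statistically plausible text -- verify anything factual.
                                  print(result[0]["generated_text"])
                                  ```

                                  Of course, a small local model hallucinates at least as readily as a big hosted one; the point is only that the hardware requirement isn't what it used to be.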

                                  • B [email protected]

                                    We actually did. Trouble being, you need experts to feed and update the thing, which works when you're watching dams (which don't need constant updates) but fails in e.g. medicine. But during the brief time those systems were up to date they did some astonishing stuff: they were plugged into the diagnosis loop and would suggest additional tests to doctors, countering organisational blindness. Law is an even more complex matter, though, because applying it requires an unbounded amount of real-world and not just expert knowledge, so forget it.

                                    [email protected] replied (#98):

                                    I think it's not the same thing; this requires time, money, and even some expertise to order a study on a specific question.

                                    • C [email protected]

                                      I’m all for lawyers using AI, but that’s because I’m also all for them getting punished for every single incorrect thing they bring forward if they do not verify.

                                      [email protected] replied (#99):

                                      That is the problem with AI: if I have to check that the output is valid, then what's the damn point?

                                      • 4 [email protected]

                                        AI, specifically Large Language Models, do not "lie" or tell "the truth". They are statistical models and work out, based on the prompt you feed them, what a reasonable-sounding response would be.

                                        This is why they're uncreative and why they "hallucinate". It's not thinking about your question and answering it; it's calculating what words will placate you, using a calculation that runs on a computer the size of AWS.

                                        [email protected] replied (#100):

                                        It's like when you're having a conversation on autopilot.

                                        "Mum, can I play with my frisbee?" Sure, honey. "Mum, can I have an ice cream from the fridge?" Sure can. "Mum, can I invade Poland?" Absolutely, whatever you want.

                                        • D [email protected]

                                          You can specifically tell an AI to lie and deceive, though, and it will…

                                          [email protected] replied (#101):

                                          Every time an AI does anything newsworthy, it's just because it's obeying its prompt.

                                          It's like the people who claim AI can replicate itself: yeah, if you tell it to. If you don't give an AI any instructions, it'll sit there and do nothing.
