agnos.is Forums

Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases

Technology
130 Posts 76 Posters 426 Views
  • D [email protected]

So it cannot tell the truth either

[email protected]
    #87

Not really, no.
They are statistical models that use heuristics to output what is most likely to follow the input you give them.

They are, in essence, mimicking their training data.
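That statistical picture can be illustrated with a toy sketch (a bigram model over a made-up corpus; real LLMs are vastly more sophisticated, but the principle of "output what most often followed this input in the training data" is the same):

```python
from collections import Counter, defaultdict

# Toy "training data": the model can only echo patterns it has seen.
corpus = "the court ruled that the case was dismissed and the court adjourned".split()

# Count which word follows which (a bigram model, the simplest statistical LM).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely continuation; no notion of truth involved."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "court" -- it followed "the" most often
```

Note that the model answers confidently even though it has no idea whether "court" is true, relevant, or made up; it is just the most frequent continuation.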

    • F [email protected]

Not really, no.
They are statistical models that use heuristics to output what is most likely to follow the input you give them.

They are, in essence, mimicking their training data.

[email protected]
      #88

      So I think this whole thing about whether it can lie or not is just semantics then no?

      • D [email protected]

        So I think this whole thing about whether it can lie or not is just semantics then no?

[email protected]
        #89

Everything is semantics.

Lying is telling a falsehood intentionally.

LLMs clearly lack the prerequisite intentionality.

        • F [email protected]

Everything is semantics.

Lying is telling a falsehood intentionally.

LLMs clearly lack the prerequisite intentionality.

[email protected]
          #90

          They can’t have intent, no?

          • R [email protected]

            Me, too. But it also means when some people say "that's a lie" they're not accusing you of anything, just remarking you're wrong. And that can lead to misunderstandings.

[email protected]
            #91

            Yep. Those people are obviously "liars," since they are using an uncommon colloquial definition. 😉

• [email protected]

              Haven't people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.

[email protected]
              #92

I heard turning in AI slop worked out pretty well for the Arcane Season 2 writers.

              • G [email protected]

You sound like those Republicans who mocked global warming when it snowed in Texas.

Sure, it won't take your job today. In a decade? Probably.

[email protected]
                #93

Going off the math and charts that OpenAI and DeepMind both published before the AI boom, which correctly predicted performance-to-cost ratios: we've reached the peak of current models. AI is a bust, mate. In particular, DeepMind concluded that even with infinite resources the models in use would never reach accurate human language capabilities.

You can say stuff like "they'll just make new models, then!" but it doesn't really work like that. The current models aren't even new in the slightest; it's just the first time we've gotten people together to feed them power and data like logs into a woodchipper.

                • D [email protected]

                  They can’t have intent, no?

[email protected]
                  #94

Precisely, which is why they cannot lie; they just respond with no real grasp of whether what they output is truth or falsehood.

                  • G [email protected]

I hate that people can even try to blame AI.

                    If I typo a couple extra zeroes because my laptop sucks, that doesn't mean I didn't fuck up. I fucked up because of a tool I was using, but I was still the human using that tool.

                    This is no different.

                    If a lawyer submits something to court that is fraudulent I don't give a shit if he wrote it on a notepad or told the AI on his phone browser to do it.

                    He submitted it.

Start yanking law licenses and these lawyers will start re-evaluating whether AI means they can fire all their human assistants and take on even more cases.

Stop acting like this shit is an autonomous tool that strips responsibility from decisions; that's literally how Elmo is about to dismantle our federal government.

                    And they're 100% gonna blame the AI too.

                    I'm honestly surprised they haven't claimed DOGE is run by AI yet

[email protected]
                    #95

In this case he got caught because a smart judge wasn't using AI. In a few years the new generation of judges will also rely on AI, so basically AI will rule on cases and own the judicial system.

                    • A [email protected]

                      LLMs are incapable of helping.
                      If he cannot find time to construct his own legal briefs, maybe he should use part of his money to hire an AGI (otherwise known as a human) to help him.

[email protected]
                      #96

Sure. Look, LLMs should be able to help, but only if there's a human to bring meaning. LLMs are basically... what's that word... it's at the tip of my tongue... word-completion engines. So you think something up and it tells you what might come next. It's not how brains work, but it's like what a calculator is to numbers: a tool. Just learn how to use it for a purpose rather than letting it barf out an answer.

                      • 4 [email protected]

AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models that work out, based on the prompt you feed them, what a reasonable-sounding response would be.

                        This is why they’re uncreative and they “hallucinate”. It’s not thinking about your question and answering it, it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.

[email protected]
                        #97

                        Don't need something the size of AWS these days. I ran one on my PC last week. But yeah, you're right otherwise.

                        • B [email protected]

We actually did. The trouble being you need experts to feed and update the thing, which works when you're watching dams (which don't need updating) but fails in e.g. medicine. But during the brief time when those systems were up to date they did some astonishing stuff: they were plugged into the diagnosis loop and would suggest additional tests to doctors, countering organisational blindness. Law is an even more complex matter, though, because applying it requires an unbounded amount of real-world and not just expert knowledge, so forget it.

[email protected]
                          #98

I think it's not the same thing; this requires time, money, and even some expertise to order a study on a specific question.

                          • C [email protected]

                            I’m all for lawyers using AI, but that’s because I’m also all for them getting punished for every single incorrect thing they bring forward if they do not verify.

[email protected]
                            #99

That is the problem with AI: if I have to check that the output is valid, then what's the damn point?

                            • 4 [email protected]

AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models that work out, based on the prompt you feed them, what a reasonable-sounding response would be.

                              This is why they’re uncreative and they “hallucinate”. It’s not thinking about your question and answering it, it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.

[email protected]
                              #100

                              It's like when you're having a conversation on autopilot.

                              "Mum, can I play with my frisbee?" Sure, honey. "Mum, can I have an ice cream from the fridge?" Sure can. "Mum, can I invade Poland?" Absolutely, whatever you want.

                              • D [email protected]

                                You can specifically tell an ai to lie and deceive though, and it will…

[email protected]
                                #101

Every time an AI does anything "newsworthy", it's just because it's obeying its prompt.

It's like the people who claim AI can replicate itself: yeah, if you tell it to. If you don't give an AI any instructions, it'll sit there and do nothing.

                                • E [email protected]

                                  That is the problem with AI, if I have to check the output is valid then what's the damn point?

[email protected]
                                  #102

                                  "Why don't we build another AI to fix the mistakes?"

                                  I require $100 million funding for this though

                                  • F [email protected]

Going off the math and charts that OpenAI and DeepMind both published before the AI boom, which correctly predicted performance-to-cost ratios: we've reached the peak of current models. AI is a bust, mate. In particular, DeepMind concluded that even with infinite resources the models in use would never reach accurate human language capabilities.

You can say stuff like "they'll just make new models, then!" but it doesn't really work like that. The current models aren't even new in the slightest; it's just the first time we've gotten people together to feed them power and data like logs into a woodchipper.

[email protected]
                                    #103

All I'm saying is don't be so dismissive about AI taking jobs away from people. Technology improves daily, and all it takes is one smart asshole to make things worse for everyone else.

                                    • E [email protected]

                                      That is the problem with AI, if I have to check the output is valid then what's the damn point?

Guest
                                      #104

You can get ideas, different approaches, and concepts. Sort of a rubber-ducky thing in my case. It won't solve the problem for me, but it might point me in the right direction.

                                      • G [email protected]

                                        all I'm saying is don't be so dismissive about AI taking jobs away from people. technology is improved daily, and all it takes is one smart asshole to make things worse for everyone else.

[email protected]
                                        #105

                                        I think it's more likely for a stupid asshole to make things worse for everyone else, which is exactly what somebody would be if they replaced human staff with defective chatbots.

• [email protected]

                                          The judge wrote that he “does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden,” and noted that he’s a vocal advocate for the use of technology in the legal profession. “Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution,” he wrote. “It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.”

                                          I won't even go that far. I can very much believe that you can build an AI capable of doing perfectly-reasonable legal arguments. Might be using technology that looks a lot different from what we have today, but whatever.

The problem is that the lawyer just started using a new technology to produce material that he didn't even validate, without first determining whether it actually works for that purpose in its current state, when there was clearly available material showing that it does not.

It's as if a shipbuilder started using a random new substance in its ship hulls without actually conducting serious tests on it, or even looking at consensus in the shipbuilding industry as to whether the material could fill that role. Just slapped it in the hull and sold it to the customer.

[email protected]
                                          #106

It's as if a shipbuilder started using a random new substance in its ship hulls without actually conducting serious tests on it, or even looking at consensus in the shipbuilding industry as to whether the material could fill that role. Meanwhile, the substance is slowly dissolving in water. Just slapped it in the hull and sold it to the customer.
