agnos.is Forums

Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases

Technology · 130 Posts · 76 Posters · 426 Views
  • In reply to [email protected]:

    ...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?

    [email protected] wrote (#35):

    What do you believe that it is actively doing?

    Again, it is very cool and incredibly good math that provides the next word in the chain that most likely matches what came before it. They do not think. Even models that deliberate are essentially just self-reinforcing the internal math with what is basically a second LLM to keep the first on-task, because that appears to help distribute the probabilities better.

    I will not answer the brain question until LLMs have brains also.
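    A toy illustration of the "most likely next word" idea (a bigram frequency model over an invented corpus — real LLMs use neural networks, but the principle of picking a probable continuation is the same):

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for training data.
corpus = "the court ruled the case was dismissed and the court adjourned".split()

# Count which word follows which; at its simplest, the "math" is just
# conditional frequency over everything seen in training.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the word that most often followed `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "court" follows "the" twice, "case" only once
```

    No thinking involved anywhere: the output only reflects what came before it in the data.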

    • A [email protected]

      violently agreeing

      Typo? Do you mean vehemently or are you intending to cause harm over this opinion 😂

      [email protected] wrote (#36):

      They're synonyms in this case, so either works here

      • D [email protected]

        Hold them in contempt. Put them in jail for a few days, then declare a mistrial due to incompetent counsel. For repeat offenders, file a formal complaint to the state bar.

        [email protected] wrote (#37):

        Eh, they should file a complaint the first time, and the state bar can decide what to do about it.

        • In reply to [email protected]:

          It knows the answer it's giving you is wrong, and it will even say as much. I'd consider that intent.

          [email protected] wrote (#38):

          Technically it's not, because the LLM doesn't decide to do anything, it just generates an answer based on a mixture of the input and the training data, plus some randomness.

          That said, I think it makes sense to say that it is lying if it can convince the user that it is lying through the text it generates.

          • G [email protected]

            I hate that people can even try to blame AI.

            If I typo a couple extra zeroes because my laptop sucks, that doesn't mean I didn't fuck up. I fucked up because of a tool I was using, but I was still the human using that tool.

            This is no different.

            If a lawyer submits something to court that is fraudulent I don't give a shit if he wrote it on a notepad or told the AI on his phone browser to do it.

            He submitted it.

            Start yanking law licenses and these lawyers will start re-evaluating whether AI means they can fire all their human assistants and take on even more cases.

            Stop acting like this shit is an autonomous tool that strips responsibility from decisions; that's literally how Elmo is about to dismantle our federal government.

            And they're 100% gonna blame the AI too.

            I'm honestly surprised they haven't claimed DOGE is run by AI yet

            [email protected] wrote (#39):

            Exactly. If you want to use AI for something, cool, but you own the results. You can try suing the AI company for bad output, but you can't use the AI as an excuse to get out of negative consequences for something you are expected to do.

            • G [email protected]

              It is incapable of knowledge, it is math

              [email protected] wrote (#40):

              Please take a strand of my hair and split it with pointless philosophical semantics.

              Our brains are chemical and electric, which is physics, which is math.

              /think

              Therefore,
              I am a product (being) of my environment (locale), experience (input), and nurturing (programming).

              /think.

              What's the difference?

              • In reply to [email protected]:

                ...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?

                [email protected] wrote (#41):

                @Ulrich @ggppjj does it help to compare an image generator to an LLM? With AI art you can tell a computer produced it without "knowing" anything more than what other art of that type looks like. But if you look closer you can also see that it doesn't "know" a lot: extra fingers, hair made of cheese, whatever. LLMs do the same with words. They just calculate what words might realistically sit next to each other given the context of the prompt. It's plausible babble.

                • M [email protected]

                  Please take a strand of my hair and split it with pointless philosophical semantics.

                  Our brains are chemical and electric, which is physics, which is math.

                  /think

                  Therefore,
                  I am a product (being) of my environment (locale), experience (input), and nurturing (programming).

                  /think.

                  What's the difference?

                  [email protected] wrote (#42):

                  Ask chatgpt, I'm done arguing effective consciousness vs actual consciousness.

                  https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40

                  • In reply to [email protected]:

                    You don't need any knowledge of computers to understand how big of a deal it would be if we actually built a reliable fact machine. For me the only possible explanation is to not care enough to try and think about it for a second.

                    [email protected] wrote (#43):

                    We actually did. Trouble is, you need experts to feed and update the thing, which works when you're watching dams (which don't need updating) but fails in e.g. medicine. But during the brief time those systems were up to date they did some astonishing stuff: they were plugged into the diagnosis loop and would suggest additional tests to doctors, countering organisational blindness. Law is an even more complex matter, though, because applying it requires an unbounded amount of real-world knowledge, not just expert knowledge, so forget it.
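                    The expert-system idea being described is simple enough to sketch (the rules below are placeholders invented for illustration, not medical advice):

```python
# Each rule pairs a set of required findings with a suggested follow-up test.
# These rules are invented placeholders, not a real medical knowledge base.
RULES = [
    ({"fever", "stiff_neck"}, "lumbar puncture"),
    ({"chest_pain", "shortness_of_breath"}, "ECG"),
]

def suggest_tests(findings):
    # Fire every rule whose required findings are all present.
    return [test for required, test in RULES if required <= findings]

print(suggest_tests({"fever", "stiff_neck", "cough"}))  # ['lumbar puncture']
```

                    The catch described above is visible right in the sketch: every rule has to be written, and kept current, by a human expert.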

                    • T [email protected]

                      “Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.

                      Jesus Christ, y'all. It's like Boomers trying to figure out the internet all over again. Just because AI (probably) can't lie doesn't mean it can't be earnestly wrong. It's not some magical fact machine; it's fancy predictive text.

                      It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it's important to check people's sources yourself, robot or not.

                      [email protected] wrote (#44):

                      AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models that work out, based on the prompt you feed them, what a reasonable-sounding response would be.

                      This is why they’re uncreative and they “hallucinate”. It’s not thinking about your question and answering it, it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.

                      • S [email protected]

                        Technically it's not, because the LLM doesn't decide to do anything, it just generates an answer based on a mixture of the input and the training data, plus some randomness.

                        That said, I think it makes sense to say that it is lying if it can convince the user that it is lying through the text it generates.

                        [email protected] wrote (#45):

                        it just generates an answer based on a mixture of the input and the training data, plus some randomness.

                        And is that different from the way you make decisions, fundamentally?

                        • M [email protected]

                          Please take a strand of my hair and split it with pointless philosophical semantics.

                          Our brains are chemical and electric, which is physics, which is math.

                          /think

                          Therefore,
                          I am a product (being) of my environment (locale), experience (input), and nurturing (programming).

                          /think.

                          What's the difference?

                          [email protected] wrote (#46):

                          Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.

                          Large language models are primitive, rigid, simplistic, and ultimately expensive.

                          Plus LLMs, image/music synths, are all trained on stolen data and meant to replace humans; so extra fuck those.

                          • In reply to [email protected]:

                            it just generates an answer based on a mixture of the input and the training data, plus some randomness.

                            And is that different from the way you make decisions, fundamentally?

                            [email protected] wrote (#47):

                            Idk, that's still an area of active research. I certainly think it's very different, since my understanding is that human thought is based on concepts rather than denoising noise or whatever it is LLMs do.

                            • In reply to [email protected]:

                              ...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?

                              [email protected] wrote (#48):

                              The most amazing feat AI has performed so far is convincing laymen that they’re actually intelligent

                              • In reply to [email protected]:

                                AI can absolutely lie

                                [email protected] wrote (#49):

                                A lie is a statement that the speaker knows to be wrong. Wouldn't claiming that AIs can lie imply cognition on their part?

                                • S [email protected]

                                  A lie is a statement that the speaker knows to be wrong. Wouldn't claiming that AIs can lie imply cognition on their part?

                                  [email protected] wrote (#50):

                                  AIs can generate false statements. It doesn't require a set of beliefs; it merely requires a set of inputs.

                                  • In reply to [email protected]:

                                    it just generates an answer based on a mixture of the input and the training data, plus some randomness.

                                    And is that different from the way you make decisions, fundamentally?

                                    [email protected] wrote (#51):

                                    I don't think I run on AMD or Intel, so uh, yes.

                                    • S [email protected]

                                      A lie is a statement that the speaker knows to be wrong. Wouldn't claiming that AIs can lie imply cognition on their part?

                                      [email protected] wrote (#52):

                                      I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.

                                      • In reply to [email protected]:

                                        Great news for defendants though. I hope at my next trial I look over at the prosecutor's screen and they're reading off ChatGPT lmao

                                        [email protected] wrote (#53):

                                        So long as your own lawyer isn't doing the same, of course 🙂

                                        • R [email protected]

                                          I've had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.

                                          [email protected] wrote (#54):

                                          I would fall into the latter category. Lots of people are earnestly wrong without being liars.
