agnos.is Forums

Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases

Technology · 130 Posts · 76 Posters · 426 Views
• [email protected] wrote:

    Haven't people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.

    [email protected] (#9) replied:

    Different jurisdiction

• [email protected] wrote:

      The judge wrote that he “does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden,” and noted that he’s a vocal advocate for the use of technology in the legal profession. “Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution,” he wrote. “It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.”

      I won't even go that far. I can very much believe that you can build an AI capable of doing perfectly-reasonable legal arguments. Might be using technology that looks a lot different from what we have today, but whatever.

      The problem is that the lawyer just started using a new technology to produce material he didn't even validate, without determining whether it actually worked for what he wanted to do in its current state, and when there was clearly available material showing that it was not in that state.

      It's as if a shipbuilder started using a random new substance in a ship's hull without conducting serious tests on it, or even looking at consensus in the shipbuilding industry as to whether the material could fill that role. Just slapped it in the hull and sold the ship to the customer.

      [email protected] (#10) replied:

      violently agreeing

      Typo? Do you mean vehemently or are you intending to cause harm over this opinion 😂

• In reply to [email protected] (quoted above):

        [email protected] (#11) replied:

        Yeah he basically called the lawyer an idiot. 😆

• In reply to [email protected] (quoted above):

          [email protected] (#12) replied:

          I've been saying this for ages. Even as someone who's more-or-less against the current implementation of AI, I think people who truly believe in AI should be fighting the hardest against bad uses of it. It gives AI a worse black eye every time something like this happens.

• [email protected] wrote:

            “Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.

            Jesus Christ, y'all. It's like Boomers trying to figure out the internet all over again. Just because AI (probably) can't lie doesn't mean it can't be earnestly wrong. It's not some magical fact machine; it's fancy predictive text.

            It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it's important to check people's sources yourself, robot or not.

            [email protected] (#13) replied:

            It's actually been proven that AI can and will lie. When given the ability to cheat at a task and instructions not to use it, it will use the tool and flatly deny doing so.

• [email protected] wrote:

              But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

              Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.

              [email protected] (#14) replied:

              All you have to do is run a quick search on the case to see if it's real or not.

              They bill enough each hour to get some interns to do this all day.

• In reply to [email protected] (quoted above):

                [email protected] (#15) replied:

                Why would one even get the idea to use AI for something like this?

                "Two things are infinite: the universe and human stupidity, and I'm not sure about the universe."

• [email protected] wrote:

                  It's cool, they'll just have an AI source checker. 🙂

                  [email protected] (#16) replied:

                  I call mine a brain! 😉

• In reply to [email protected] (#10 above):

                    [email protected] (#17) replied:

                    It’s an expression meaning you are arguing/fighting over something when both sides actually hold the same position and didn’t realize it at first.

• In reply to [email protected] (quoted above):

                      [email protected] (#18) replied:

                      But I was hysterically assured that AI was going to take all our jobs?

• In reply to [email protected] (#13 above):

                        [email protected] (#19) replied:

                        I don't know if I would call it lying per se, but yes, I have seen instances of AIs being told not to use a specific tool and using it anyway; Neuro-sama comes to mind. I think in those cases it is mostly the front end agreeing not to (as that is what it determines the operator would want to hear) while having no means to actually control the other functions going on.

• In reply to [email protected] (quoted above):

                          [email protected] (#20) replied:

                          You don't need any knowledge of computers to understand how big of a deal it would be if we had actually built a reliable fact machine. For me, the only possible explanation is that people don't care enough to stop and think about it for even a second.

• In reply to [email protected] (quoted above):

                            [email protected] (#21) replied:

                            It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.

• [email protected] wrote:

                              No probably about it, it definitely can't lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.

                              [email protected] (#22) replied:

                              A bit out of context, but you remind me of some thinking I heard recently about lying vs. bullshitting.

                              Lying, as you said, requires quite a lot of energy: you need an idea of what the truth is, and you commit yourself to a long-term struggle to maintain your lie and keep it coherent as the world goes on.

                              Bullshit, on the other hand, is much more accessible: you just say things and never look back on them. It's very easy to pile up a ton of it, and it's much harder to attack you on any one claim because each carries so little weight.

                              So in that view, a bullshitter doesn't give any shit about the truth, while a liar is a bit more "noble".

• In reply to [email protected] (#22 above):

                                [email protected] (#23) replied:

                                I think the important point is that LLMs as we understand them do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when they actually do meet those requirements, instead of just seeming to, they can provide genuinely helpful info. But it's very easy not to immediately know the difference between output that merely looks correct (satisfying the purpose of the LLM) and output that actually is correct (satisfying the purpose of the user).

• In reply to [email protected] (#14 above):

                                  [email protected] (#24) replied:

                                  I'm pretty sure that just doing "quick searches" is exactly how he ended up with AI answers to begin with.

• In reply to [email protected] (#24 above):

                                    [email protected] (#25) replied:

                                    I don't think PACER or the state equivalents use AI summary tools yet.

• In reply to [email protected] (quoted above):

                                      [email protected] (#26) replied:

                                      Great news for defendants though. I hope at my next trial I look over at the prosecutor's screen and they're reading off ChatGPT lmao

• In reply to [email protected] (quoted above):

                                        [email protected] (#27) replied:

                                        AI can absolutely lie

• In reply to [email protected] (#21 above):

                                          [email protected] (#28) replied:

                                          Lying requires intent. Currently popular LLMs build responses one token at a time—when one starts writing a sentence, it doesn't know how it will end, and therefore can't have an opinion about the truth value of it. (I'd go further and claim it can't really "have an opinion" about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after that output has been generated, when generating the next token.

                                          "Admitting" that it's lying only proves that it has been exposed to "admission" as a pattern in its training data.
