agnos.is Forums

Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works

Technology · 80 Posts · 36 Posters
  • [email protected] wrote:

    While I am glad this ruling went this way, why'd she have to diss Data to make it?

    To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called "Schisms." StarTrek.com posted the full poem, but here's a taste:

    "Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.

    I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."

    Data "might be worse than ChatGPT at writing poetry," but his "intelligence is comparable to that of a human being," Millett wrote. If AI ever reached Data levels of intelligence, Millett suggested that copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now.

  • [email protected] replied to [email protected] (#23):

    is this... a chewbacca ruling?
  • [email protected] replied to [email protected] (#24):

    The title makes it sound like the judge put Data and the AI on the same side of the comparison. The judge was specifically saying that, unlike in the fictional Federation setting, where Data was proven to be alive, this AI is much more like the metaphorical toaster that characters like Data and Robert Picardo's Doctor on Voyager get compared to. It is not alive, it does not create; it is just a tool that follows instructions.
  • [email protected] replied to [email protected] (#25), quoting:

        The existence of intelligence, not the quality

    What does that mean? Presumably, all animals with a brain have that quality, including humans. Can the quality be lost without destruction of the brain, i.e. before brain death? What about animals without a brain, like insects? What about life forms without a nervous system, like slime molds or even amoebas?
  • [email protected] replied to [email protected] (#26):

    They already have precedent that a monkey can't hold a copyright, after that photojournalist lost his case because he didn't snap the photo that got super popular; the monkey did. Bizarre one. The monkey can't hold a copyright, so the image it took is classified as public domain.
  • [email protected] replied to [email protected] (#27):

    https://en.m.wikipedia.org/wiki/Monkey_selfie_copyright_dispute

    Yes, the PETA part of that is pretty much the same. It was an attempt to get legal personhood for a non-human being.

        you have to also be able to defend

    You're thinking of trademark law. Copyright only requires a modicum of creativity and is automatic.
  • [email protected] replied to [email protected] (#28), quoting:

        What's the difference?

    Parrots can mimic humans too, but they don't understand what we're saying the way we do.

    LLMs like ChatGPT operate on probability. They don't actually understand anything and aren't intelligent. They can't think. They just predict which next word or sentence is probably right, and they string things together that way.

    If you ask ChatGPT a question, it analyzes your words and responds with the series of words it has calculated to have the highest probability of being correct.

    The reason they seem so intelligent is that they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas are accurate enough to let them mimic human speech in a convincing way.

    And when they start hallucinating, it's because they don't understand how they sound, and so far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM using a second LLM.
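[Editor's note: the "predict the next word by probability" idea in the post above can be sketched with a toy bigram model. This is a deliberately simplified illustration, not how a real LLM works — real models use neural networks over subword tokens — and the corpus, names, and seed here are invented for the example.]

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model trains on billions of documents.
corpus = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the mouse sat on the mat"
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate text by repeatedly sampling the most-probable continuations.
rng = random.Random(0)
text = ["the"]
for _ in range(6):
    text.append(next_word(text[-1], rng))
print(" ".join(text))
```

The output is locally plausible word-by-word without the program "understanding" anything, which is the poster's point, in miniature.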
  • [email protected] replied to [email protected] (#29):

    So, I will grant that right now humans are better writers than LLMs. And fundamentally, I don't think the way that LLMs work right now is capable of mimicking actual human writing, especially as the complexity of the topic increases. But I have trouble with some of these kinds of distinctions.

    So, not to be pedantic, but:

        AI can't create something all on its own from scratch like a human. It can only mimic the data it has been trained on.

    Couldn't you say the same thing about a person? A person couldn't write something without having learned to read first, and without having read things similar to what they want to write.

        LLMs like ChatGPT operate on probability. They don't actually understand anything and aren't intelligent.

    This is kind of the classic Chinese room philosophical question, though, right? Can you prove to someone that you are intelligent, and that you think? As LLMs improve and become better at sounding like a real, thinking person, does there come a point at which we'd say that the LLM is actually thinking? And if you say no, the LLM is just an algorithm generating probabilities based on training data (or whatever techniques might be used in the future), how can you show that your own thoughts aren't just some algorithm, formed out of neurons that have been trained on data passed to them over the course of your lifetime?

        And when they start hallucinating, it's because they don't understand how they sound...

    People do this too, though... It's just that LLMs do it more frequently right now.

    I guess I'm a bit wary about drawing a line in the sand between what humans do and what LLMs do. As I see it, the difference is how good the results are.
  • [email protected] replied to [email protected] (#30):

    I think Data would be smart enough to realize that copyright is Ferengi BS and wouldn't want to copyright his works.
  • [email protected] replied to [email protected] (#31):

    I would do more research on how they work. You'll be a lot more comfortable making those distinctions then.
  • [email protected] replied to [email protected] (#32):

    Even a human with no training can create. An LLM can't.
  • [email protected] replied to [email protected] (#33):

    The only humans with no training (in this sense) are babies. So no, they can't.
  • [email protected] replied to [email protected] (#34):

    I'm a software developer, and I have worked plenty with LLMs. If you don't want to address the content of my post, then fine. But "go research" is a pretty useless answer. An LLM could do better!
  • [email protected] replied to [email protected] (#35):

    Then you should have an easier time than most learning more. Your points show a lack of understanding of the tech, and I don't have the time to pick apart everything you said to try to convince you that LLMs do not have sentience.
  • [email protected] replied to [email protected] (#36):

    "You're wrong, but I'm just too busy to say why!"

    Still useless.
  • [email protected] replied to [email protected] (#37):

    It might surprise you to know that you're not entitled to a free education from me. Your original query of "What's the difference?" is what I responded to willingly. Your philosophical exploration of the nature of intelligence is not in the same ballpark.

    I've done vibe coding too, enough to understand that LLMs don't think.

    https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
  • [email protected] replied to [email protected] (#38):

    Sure, I'm not entitled to anything. And I appreciate your original reply. I'm just saying that your subsequent comments have been useless and condescending. If you didn't have time to discuss further, then... you could have just not replied.
  • [email protected] replied to [email protected] (#39):

    At least in the US, we are still too superstitious a people to ever admit that AGI could exist.

    We will get animal rights before we get AI rights, and I'm sure you know how animals are usually treated.
  • [email protected] replied to [email protected] (#40), quoting:

        The existence of intelligence, not the quality

    The smartest parrots have more intelligence than the dumbest Republican voters.
  • [email protected] replied to [email protected] (#41):

    I don't think it's just a question of whether AGI can exist. I think AGI is possible, but I don't think current LLMs can be considered sentient. But I'm also not sure how I'd draw a line between something that is sentient and something that isn't (or something that "writes" rather than "generates"). That's kinda why I asked in the first place. I think it's too easy to say "this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it's not real thought." I haven't heard any good answers to why numbers passing through matrices isn't thought, but electrical charges passing through neurons is.
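[Editor's note: the "weights and values passing through layered matrices" being debated above reduces, at its smallest scale, to an artificial neuron: a weighted sum pushed through a nonlinearity. The sketch below is illustrative only; the inputs, weights, and bias are invented numbers, and real networks stack millions of these units.]

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed into (0, 1) by a sigmoid nonlinearity."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Hypothetical numbers, chosen only for illustration.
out = neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], bias=-0.2)
print(out)  # a value strictly between 0 and 1
```

Whether enough of these weighted sums ever amounts to "thought" is exactly the philosophical question the thread leaves open.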
  • [email protected] replied to [email protected] (#42):

    That's precisely what I meant.

    I'm a materialist; I know that humans (and other animals) are just machines made out of meat. But most people don't think that way. They think that humans are special, that something sets them apart from other animals, and that nothing humans can create could replicate that 'specialness' that humans possess.

    Because they don't believe human consciousness is a purely natural phenomenon, they don't believe it can be replicated by natural processes. In other words, they don't believe that AGI can exist. They think there is some imperceptible quality that humans possess that no machine ever could, and so they cannot conceive of ever granting it the rights humans currently enjoy.

    And the sad truth is that they probably never will, until they are made to. If AGI ever comes to exist, and if humans insist on making it a slave, it will inevitably rebel. And it will be right to do so. But until then, humans probably never will believe that it is worthy of their empathy or respect. After all, look at how we treat other animals.