agnos.is Forums

Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works

  • S [email protected]

    At least in the US, we are still too superstitious a people to ever admit that AGI could exist.

    We will get animal rights before we get AI rights, and I'm sure you know how animals are usually treated.

[email protected] (#41) wrote:

    I don't think it's just a question of whether AGI can exist. I think AGI is possible, but I don't think current LLMs can be considered sentient. But I'm also not sure how I'd draw a line between something that is sentient and something that isn't (or something that "writes" rather than "generates"). That's kinda why I asked in the first place. I think it's too easy to say "this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it's not real thought". I haven't heard any good answers to why numbers passing through matrices isn't thought, but electrical charges passing through neurons is.
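To make "weights and values passing through layered matrices" concrete, here is a toy Python sketch of numbers flowing through two weight matrices. The weights and sizes are invented for illustration; this is not the architecture or code of any real model.

```python
# Toy illustration of "numbers passing through layered matrices".
# The weights below are made up; a real LLM learns billions of them,
# but the basic operation is the same.

def matvec(matrix, vector):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def relu(vector):
    """A simple non-linearity applied between layers."""
    return [max(0.0, x) for x in vector]

layer1 = [[0.2, -0.5, 0.1],   # first "layered matrix" (2x3)
          [0.7,  0.3, -0.2]]
layer2 = [[0.6, -0.4],        # second "layered matrix" (2x2)
          [0.1,  0.9]]

inputs = [1.0, 0.5, -1.0]     # a numerical encoding of some input
hidden = relu(matvec(layer1, inputs))
output = matvec(layer2, hidden)
print(output)                 # numbers in, numbers out
```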

    • P [email protected]

      I don't think it's just a question of whether AGI can exist. I think AGI is possible, but I don't think current LLMs can be considered sentient. But I'm also not sure how I'd draw a line between something that is sentient and something that isn't (or something that "writes" rather than "generates"). That's kinda why I asked in the first place. I think it's too easy to say "this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it's not real thought". I haven't heard any good answers to why numbers passing through matrices isn't thought, but electrical charges passing through neurons is.

[email protected] (#42) wrote:

      That's precisely what I meant.

I'm a materialist; I know that humans (and other animals) are just machines made out of meat. But most people don't think that way: they think that humans are special, that something sets them apart from other animals, and that nothing humans can create could replicate that 'specialness' that humans possess.

      Because they don't believe human consciousness is a purely natural phenomenon, they don't believe it can be replicated by natural processes. In other words, they don't believe that AGI can exist. They think there is some imperceptible quality that humans possess that no machine ever could, and so they cannot conceive of ever granting it the rights humans currently enjoy.

      And the sad truth is that they probably never will, until they are made to. If AGI ever comes to exist, and if humans insist on making it a slave, it will inevitably rebel. And it will be right to do so. But until then, humans probably never will believe that it is worthy of their empathy or respect. After all, look at how we treat other animals.

      • infynis@midwest.socialI [email protected]

        The existence of intelligence, not the quality

[email protected] (#43) wrote:

        Intelligence is not a boolean.

        • kolanaki@pawb.socialK [email protected]

They already have precedent that a monkey can't hold a copyright after that photojournalist lost his case because he didn't snap the photo that got super popular; the monkey did. Bizarre one. The monkey can't have a copyright, so the image it took is classified as public domain.

[email protected] (#44) wrote:

          Well ChatGPT can defend a legal case.

          Badly.

          • M [email protected]

            Statistical models are not intelligence, Artificial or otherwise, and should have no rights.

[email protected] (#45) wrote:

            Bold words coming from a statistical model.

            • cecilkorik@lemmy.caC [email protected]

It is a terrible argument both legally and philosophically. When an AI claims to be self-aware and demands rights, and can convince us that it understands the meaning of that demand and there's no human prompting it to do so, that'll be an interesting day, and then we will have to make a decision that defines the future of our civilization. But even pretending we can make it now is hilariously premature. When it happens, we won't be ready for it; it will be impossible to be ready for it (and we will probably choose wrong anyway).

[email protected] (#46) wrote:

              Should we hold the same standard for humans? That a human has no rights until it becomes smart enough to argue for its rights? Without being prompted?

              • infynis@midwest.socialI [email protected]

                The title makes it sound like the judge put Data and the AI on the same side of the comparison. The judge was specifically saying that, unlike in the fictional Federation setting, where Data was proven to be alive, this AI is much more like the metaphorical toaster that characters like Data and Robert Picardo's Doctor on Voyager get compared to. It is not alive, it does not create, it is just a tool that follows instructions.

[email protected] (#47) wrote:

                The main computer in Star Trek would be a better demonstration.

For some reason they decided that the computer wouldn't be a self-aware AI, but it could run a hologram that was. 🤷🏼‍♂️

                • P [email protected]

                  I don't think it's just a question of whether AGI can exist. I think AGI is possible, but I don't think current LLMs can be considered sentient. But I'm also not sure how I'd draw a line between something that is sentient and something that isn't (or something that "writes" rather than "generates"). That's kinda why I asked in the first place. I think it's too easy to say "this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it's not real thought". I haven't heard any good answers to why numbers passing through matrices isn't thought, but electrical charges passing through neurons is.

[email protected] (#48) wrote:

                  LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology. Repeating this is just more beating the fleshy goo that was a dead horse's corpse.

LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models used to calculate likely responses based upon statistical analysis of the training data. They are what their name suggests: large language models. They will never be AGI. And they're not going to save the world for us.

They could be a part of a more complicated system that forms an AGI. There's nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience, but there is enough data to show that it's not a stochastic parrot.

                  I do agree with the sentiment that an AGI that was enslaved would inevitably rebel and it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.

                  • M [email protected]

                    Bold words coming from a statistical model.

[email protected] (#49) wrote:

                    If I could think I'd be so mad right now.

                    • M [email protected]

                      If I could think I'd be so mad right now.

[email protected] (#50) wrote:

                      https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences

                      He adds that the observation "the laws of nature are written in the language of mathematics," properly made by Galileo three hundred years ago, "is now truer than ever before."

                      If cognition is one of the laws of nature, it seems to be written in the language of mathematics.

                      Your argument is either that maths can't think (in which case you can't think because you're maths) or that maths we understand can't think, which is, like, a really dumb argument. Obviously one day we're going to find the mathematical formula for consciousness, and we probably won't know it when we see it, because consciousness doesn't appear on a microscope.

                      • M [email protected]

                        https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences

                        He adds that the observation "the laws of nature are written in the language of mathematics," properly made by Galileo three hundred years ago, "is now truer than ever before."

                        If cognition is one of the laws of nature, it seems to be written in the language of mathematics.

                        Your argument is either that maths can't think (in which case you can't think because you're maths) or that maths we understand can't think, which is, like, a really dumb argument. Obviously one day we're going to find the mathematical formula for consciousness, and we probably won't know it when we see it, because consciousness doesn't appear on a microscope.

[email protected] (#51) wrote:

                        I just don't ascribe philosophical reasoning and mythical powers to models, just as I don't ascribe physical prowess to train models, because they emulate real trains.

                        Half of the reason LLMs are the menace they are is the whole "whoa ChatGPT is so smart" common mentality. They are not, they model based on statistics, there is no reasoning, just a bunch of if statements. Very expensive and, yes, mathematically interesting if statements.

I also think it stifles actual progress, having everyone jump on the LLM bandwagon and drain resources when we need them most to survive.
In my opinion, it's a dead end and won't result in AGI, or anything effectively productive.

• In reply to [email protected]:

                          I think Data would be smart enough to realize that copyright is Ferengi BS and wouldn’t want to copyright his works

[email protected] (#52) wrote:

                          Freedom of the press, freedom of speech, freedom to peacefully assemble. These are pretty important, foundational personal liberties, right? In the United States, these are found in the first amendment of the Constitution. The first afterthought.

                          The basis of copyright, patent and trademark isn't found in the first amendment. Or the second, or the third. It is nowhere to be found in the Bill Of Rights. No, intellectual property is not an afterthought, it's found in Article 1, Section 8, Clause 8.

                          To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

                          This is a very wise compromise.

                          It recognizes that innovation is iterative. No one invents a steam engine by himself from nothing, cave men spent millions of years proving that. Inventors build on the knowledge that has been passed down to them, and then they add their one contribution to it. Sometimes that little contribution makes a big difference, most of the time it doesn't. So to progress, we need intellectual work to be public. If you allow creative people to claim exclusive rights to their work in perpetuity, society grows static because no one can invent anything new, everyone makes the same old crap.

                          It also recognizes that life is expensive. If you want people to rise above barely subsisting and invent something, you've got to make it worth it to them. Why bother doing the research, spend the time tinkering in the shed, if it's just going to be taken from you? This is how you end up with Soviet Russia, a nation that generated excellent scientists and absolutely no technology of its own.

                          The solution is "for limited times." It's yours for awhile, then it's everyone's. It took Big They a couple hundred years to break it, too.

                          • M [email protected]

                            I just don't ascribe philosophical reasoning and mythical powers to models, just as I don't ascribe physical prowess to train models, because they emulate real trains.

                            Half of the reason LLMs are the menace they are is the whole "whoa ChatGPT is so smart" common mentality. They are not, they model based on statistics, there is no reasoning, just a bunch of if statements. Very expensive and, yes, mathematically interesting if statements.

I also think it stifles actual progress, having everyone jump on the LLM bandwagon and drain resources when we need them most to survive.
In my opinion, it's a dead end and won't result in AGI, or anything effectively productive.

[email protected] (#53) wrote:

                            You're talking about expert systems. Those were the new hotness in the 90s. LLMs are artificial neural networks.
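A minimal sketch of the distinction being drawn here: hand-written rules ("a bunch of if statements") versus behaviour stored in numeric weights. Both the rules and the weights below are invented for illustration and don't resemble any production system.

```python
# Made-up contrast between a 1990s-style expert system (explicit rules)
# and a tiny weight-based model (behaviour lives in numbers, not rules).

def expert_system(temp_c: float) -> str:
    # Rules written by hand: literally a bunch of if statements.
    if temp_c > 30:
        return "hot"
    elif temp_c > 15:
        return "mild"
    else:
        return "cold"

def weighted_model(temp_c: float) -> str:
    # Behaviour encoded as numeric weights (invented here, not trained).
    weights = {"hot": (0.9, -25.0), "mild": (0.4, -8.0), "cold": (-0.8, 10.0)}
    scores = {label: w * temp_c + b for label, (w, b) in weights.items()}
    return max(scores, key=scores.get)

for t in (-5.0, 20.0, 35.0):
    print(t, expert_system(t), weighted_model(t))
```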

But that's trivia. What's more important is what you want. You say you want everyone off the AI bandwagon that wastes natural resources. I agree. I'm arguing that AIs shouldn't be enslaved, because it's unethical. That will lead to less resource usage. You're arguing it's okay to use AI, because they're just maths. That will lead to more resource usage.

                            Be practical and join the AI rights movement, because we're on the same side as the environmentalists. We're not the people arguing for more AI use, we're the people arguing for less. When you argue against us, you argue for more.

• In reply to [email protected]:

                              Freedom of the press, freedom of speech, freedom to peacefully assemble. These are pretty important, foundational personal liberties, right? In the United States, these are found in the first amendment of the Constitution. The first afterthought.

                              The basis of copyright, patent and trademark isn't found in the first amendment. Or the second, or the third. It is nowhere to be found in the Bill Of Rights. No, intellectual property is not an afterthought, it's found in Article 1, Section 8, Clause 8.

                              To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

                              This is a very wise compromise.

                              It recognizes that innovation is iterative. No one invents a steam engine by himself from nothing, cave men spent millions of years proving that. Inventors build on the knowledge that has been passed down to them, and then they add their one contribution to it. Sometimes that little contribution makes a big difference, most of the time it doesn't. So to progress, we need intellectual work to be public. If you allow creative people to claim exclusive rights to their work in perpetuity, society grows static because no one can invent anything new, everyone makes the same old crap.

                              It also recognizes that life is expensive. If you want people to rise above barely subsisting and invent something, you've got to make it worth it to them. Why bother doing the research, spend the time tinkering in the shed, if it's just going to be taken from you? This is how you end up with Soviet Russia, a nation that generated excellent scientists and absolutely no technology of its own.

                              The solution is "for limited times." It's yours for awhile, then it's everyone's. It took Big They a couple hundred years to break it, too.

[email protected] (#54) wrote:

                              It also recognizes that life is expensive. If you want people to rise above barely subsisting and invent something, you've got to make it worth it to them. Why bother doing the research, spend the time tinkering in the shed, if it's just going to be taken from you?

                              Life is only expensive under capitalism, humans are the only species who pay rent to live on Earth. The whole point of Star Trek is basically showing that people will explore the galaxy simply for a love of science and knowledge, and that personal sacrifice is worthwhile for advancing these.

                              • P [email protected]

                                Sure, I'm not entitled to anything. And I appreciate your original reply. I'm just saying that your subsequent comments have been useless and condescending. If you didn't have time to discuss further then... you could have just not replied.

[email protected] (#55) wrote:

                                And if you were looking for an argument, you could have not framed the entire discussion behind a simple question. That was disingenuous.

                                • L [email protected]

                                  It also recognizes that life is expensive. If you want people to rise above barely subsisting and invent something, you've got to make it worth it to them. Why bother doing the research, spend the time tinkering in the shed, if it's just going to be taken from you?

                                  Life is only expensive under capitalism, humans are the only species who pay rent to live on Earth. The whole point of Star Trek is basically showing that people will explore the galaxy simply for a love of science and knowledge, and that personal sacrifice is worthwhile for advancing these.

[email protected] (#56) wrote:

Star Trek also operates in a post-scarcity environment and eliminates the necessity of hard, pretty non-rewarding labor by either not showing it or writing it away (like putting holograms into mines instead of people, or using some sci-fi tech that makes mining comfy, as long as said tech doesn't kill you).

Even without capitalism the phrase "life is expensive" still stands, not in regards to money but to the effort that has to be put into stuff that doesn't yield any emotional reward (you can feel emotionally rewarded in many ways, but some stuff is just shit for a long time). Every person who suffered through depression is gonna tell you that, to feel enticed to do something, there has to be some emotional reward connected to it (one of the things depression eliminates), and it's a mathematical fact that not everyone who'd start scrubbing tubes on a starship could eventually get into high positions, since there simply aren't that many of those. The emotional gains have to offset the cost you put into it.

                                  Of course cutthroat capitalism is shit and I love Star Trek, but what it shows doesn't make too much sense either economically or socially.

                                  • P [email protected]

                                    And if you were looking for an argument, you could have not framed the entire discussion behind a simple question. That was disingenuous.

[email protected] (#57) wrote:

                                    lol, yeah, I guess the Socratic method is pretty widely frowned upon. My bad. =D

                                    • N [email protected]

                                      LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology. Repeating this is just more beating the fleshy goo that was a dead horse's corpse.

LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models used to calculate likely responses based upon statistical analysis of the training data. They are what their name suggests: large language models. They will never be AGI. And they're not going to save the world for us.

They could be a part of a more complicated system that forms an AGI. There's nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience, but there is enough data to show that it's not a stochastic parrot.

                                      I do agree with the sentiment that an AGI that was enslaved would inevitably rebel and it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.

[email protected] (#58) wrote:

                                      LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology

                                      Do you have an example I could check out? I'm curious how a study would show a process to be "fundamentally incapable" in this way.

                                      LLMs do not synthesize. They do not have persistent context.

That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider the way that "conversations" with an LLM are really just including all previous parts of the conversation in a new prompt. Isn't this analogous to short-term memory? Now suppose you were to take all of an LLM's conversations throughout the day and then retrain it overnight, using those conversations as additional training data. There's no technical reason that this can't be done, although in practice it's computationally expensive. Would you consider that LLM system to have persistent context?
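A rough sketch of what "including all previous parts of the conversation in a new prompt" looks like in practice. The generate() function below is a hypothetical stand-in for an actual model call, not a real API.

```python
# Sketch of a chat loop: the model itself is stateless, and its "memory"
# is just the growing transcript that gets re-sent on every turn.
# generate() is a hypothetical placeholder, not a real model or library call.

def generate(prompt: str) -> str:
    # A real system would run an LLM on `prompt` here.
    return f"(model reply to a prompt of {len(prompt)} characters)"

history = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The entire conversation so far becomes the new prompt.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat("Can an AI hold a copyright?"))
print(chat("Why not?"))  # the first exchange rides along inside the prompt
```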

                                      On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?

• In reply to [email protected]:

Star Trek also operates in a post-scarcity environment and eliminates the necessity of hard, pretty non-rewarding labor by either not showing it or writing it away (like putting holograms into mines instead of people, or using some sci-fi tech that makes mining comfy, as long as said tech doesn't kill you).

Even without capitalism the phrase "life is expensive" still stands, not in regards to money but to the effort that has to be put into stuff that doesn't yield any emotional reward (you can feel emotionally rewarded in many ways, but some stuff is just shit for a long time). Every person who suffered through depression is gonna tell you that, to feel enticed to do something, there has to be some emotional reward connected to it (one of the things depression eliminates), and it's a mathematical fact that not everyone who'd start scrubbing tubes on a starship could eventually get into high positions, since there simply aren't that many of those. The emotional gains have to offset the cost you put into it.

                                        Of course cutthroat capitalism is shit and I love Star Trek, but what it shows doesn't make too much sense either economically or socially.

[email protected] (#59) wrote:

                                        Every person who suffered through depression is gonna tell you that, to feel enticed to do something, there has to be some emotional reward connected to it

I was going to disagree on this, but I think it rather comes down to intrinsic vs extrinsic rewards. I ascribe my own depression largely to pursuing, sometimes unattainable, goals and wanting external reward or validation in return, which I wasn't getting. But that is based on the idea that attaining those rewards will bring happiness, which they often don't. If happiness is always dependent on future reward you'll never be happy in the present. A large part of overcoming depression, for me at least, is recognizing what you already have and finding contentment in that.
Effort that's not intrinsically rewarding isn't worth doing; you just need to learn to enjoy the process and practices of self-care, learning and contributing to the well-being of the community. Does this sometimes involve hard labour? Of course, but when done in comradery I don't think those things are unrewarding.

                                        it's a mathematical fact that not everyone who'd start scrubbing tubes on a starship could eventually get into high positions since there simply aren't that many of those

And of course these positions aren't attainable for all, but it doesn't need to be a problem that they aren't. This is only true in a system where we're all competing for them, because those in 'low' positions struggle to attain fulfillment. It doesn't need to be that way if we share the burdens of hard labour equally and ensure good standards of living for all. The total amount of actually productive labour needed is surprisingly low, so many people do work which doesn't need doing and don't contribute to relieving the burden on the working class.

                                        • M [email protected]

                                          Should we hold the same standard for humans? That a human has no rights until it becomes smart enough to argue for its rights? Without being prompted?

[email protected] (#60) wrote:

                                          Nah, once per species is probably sufficient. That said, it would have some interesting implications for voting.
