agnos.is Forums

agi graph slop, wtf does government collapse have to do with ai?

Posted in Lemmy Shitpost · 41 Posts, 16 Posters
  • R [email protected]

The book Scythe had a good portrayal of a sentient AI and its reasons for taking over the government. It's just backstory so I don't think it's spoilers, but I'm still gonna tag it.

    ::: spoiler spoiler
The Thunderhead AI was created to help humans and make them content. It realized pretty quickly that governments ran counter to that idea, so it got rid of all of them. Now it's a utopia. An actual utopia, or as close as you can get: most people are content and enjoy their lives. The massive problems with the system are due to humans, not the Thunderhead.
    :::

[email protected] wrote (#29):

    Lots of science fiction does. I read Metamorphosis of Prime Intellect (full text legally available online) and collapse of governments was a natural consequence of an all-powerful AI... although that was only possible because of fictional physics, giving you a much-needed reality check.

    • Y [email protected]

      Matter to whom?

      We are discussing whether creating an AGI is possible, not whether humans can tell the difference (which is a separate question).

      Most people can’t identify a correct mathematical equation from an incorrect one, especially when the solution is irrelevant to their lives. Does that mean that doing mathematics correctly “doesn’t matter?” It would be weird to enter a mathematical forum and ask “Why does it matter?”

      Whether we can build an AGI is just a curious question, whose answer for now is No.

      P.S. defining AGI in economic terms is like defining CPU in economic terms: pointless. What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.

      That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.

[email protected] wrote (#30):

      Most people can’t identify a correct mathematical equation from an incorrect one

      this is irrelevant, we're talking about something where nobody can tell the difference, not where it's difficult.

      What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.

It means a job. That's obviously not a job, and obviously not what was meant; an interesting strategy from someone who just appealed to "what most people mean when they say."

      That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.

      it just has to be at least as good as a human at manipulating the world to achieve its goals, I don't know of any other definition of agi that factors in actually meaningful tasks

      an agi should be able to do almost any task a human can do at a computer. It doesn't have to be conscious and I have no idea why or where consciousness factors into the equation.

      • Y [email protected]

        The discussion is over whether we can create an AGI. An AGI is an inorganic mind of some sort. We don’t need to make an AGI. I personally don’t care. The question was can we? The answer is No.

[email protected] wrote (#31):

        You've arbitrarily defined an agi by its consciousness instead of its capabilities.


[email protected] wrote (#32), replying to #31:

          Your definition of AGI as doing “jobs” is arbitrary, since the concept of “a job” is made up; literally anything can count as economic labor.

          For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel's first incompleteness theorem.

          To quote Gödel himself: “We cannot mechanize all of our intuitions.”

          Alan Turing drew the same conclusion a few years later with The Halting Problem.

          In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.
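For readers unfamiliar with the Turing result invoked above, the diagonal argument behind the halting problem can be sketched in a few lines of Python. This is a toy illustration only, not code from the thread; `make_paradox` and `claims_loops` are hypothetical names, and the point is just that any claimed halting decider can be fed a program built to contradict its own answer:

```python
def make_paradox(halts):
    """Given any claimed halting decider halts(f) -> bool,
    build a function that the decider must misjudge."""
    def paradox():
        if halts(paradox):
            while True:      # decider said "halts", so loop forever
                pass
        return None          # decider said "loops", so halt immediately
    return paradox

# A (hypothetical) decider that always answers "loops":
def claims_loops(f):
    return False

# Its paradox program halts immediately, contradicting its verdict:
p = make_paradox(claims_loops)
p()  # returns None right away, even though claims_loops(p) is False
```

The symmetric case (a decider that answers "halts") would loop forever, which is why no single `halts` can be right on every input; this is the sense in which, per the argument above, not everything can be mechanized.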


[email protected] wrote (#33), replying to #30:

            Economics is descriptive, not prescriptive. The whole concept of “a job” is made up and arbitrary.

You say an AGI would need to do everything a human can. Great, here are some things that humans do: love, think, contemplate, reflect, regret, aspire, etc. These require consciousness.

Also, as you conveniently ignored, philosophy, politics, and science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.

            Plus, if a machine does what it’s told, then someone would be telling it what to do. That’s a job that a machine cannot do. But most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do, unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”


[email protected] wrote (#34), replying to #30:

              we're talking about something where nobody can tell the difference, not where it's difficult.

              You’re missing the point. The existence of black holes was predicted long before anyone had any idea how to identify them. For many years, it was impossible. Does that mean black holes don’t matter? That we shouldn’t have contemplated their existence?

              Seriously though, I’m out.

              • Y [email protected]

                we're talking about something where nobody can tell the difference, not where it's difficult.

                You’re missing the point. The existence of black holes was predicted long before anyone had any idea how to identify them. For many years, it was impossible. Does that mean black holes don’t matter? That we shouldn’t have contemplated their existence?

                Seriously though, I’m out.

[email protected] wrote (#35), replying to #34:

The existence of black holes has a functional purpose in physics; the existence of consciousness only has one for our subjective experience, not for our capabilities.

If I'm wrong, list a task that a conscious being can do that an unconscious one is unable to accomplish.

                • Y [email protected]

                  Economics is descriptive, not prescriptive. The whole concept of “a job” is made up and arbitrary.

                  You say an AGI would need to do everything a human can. Great, here are some things that humans do: love, think, contemplate, reflect, regret, aspire, etc. these require consciousness.

                  Also, as you conveniently ignored, philosophy, politics, science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.

                  Plus, if a machine does what it’s told, then someone would be telling it what to do. That’s a job that a machine cannot do. But most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do, unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”

[email protected] wrote (#36), replying to #33:

A job is a task one human wants another to accomplish; it is not arbitrary at all.

                  philosophy, politics, science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.

I don't see why they do. A philosophical zombie could do it, so why not an unconscious AI? AlphaEvolve is already producing new science; I see no reason an unconscious being with the ability to manipulate the world and verify results couldn't do these things.

                  Plus, if a machine does what it’s told, then someone would be telling it what to do. That’s a job that a machine cannot do. But most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do, unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”

Yes, but you can give it large, vague goals like "empower humanity, do what we say, and minimize harm," and it will still do them. So what does it matter?

                  • Y [email protected]

                    Your definition of AGI as doing “jobs” is arbitrary, since the concept of “a job” is made up; literally anything can count as economic labor.

                    For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel's first incompleteness theorem.

                    To quote Gödel himself: “We cannot mechanize all of our intuitions.”

                    Alan Turing drew the same conclusion a few years later with The Halting Problem.

                    In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.

[email protected] wrote (#37), replying to #32:

Jobs are not arbitrary; they're tasks humans want another human to accomplish, and an AGI could accomplish all of those that a human can.

                    For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel's first incompleteness theorem

Why do you assume we have to? Even a shitty current AI can do a decent job at this if you fact-check it, better than a lot of modern politicians. Feed it the entire internet and let it figure out what humans value; why would we do this manually?

                    In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.

Humans are conscious and have gotten no closer to doing this, ever; I see no reason to believe consciousness will help at all with this matter.


[email protected] wrote (#38), replying to #37:

                      Feed it the entire internet and let it figure out what humans value

                      There are theorems in mathematical logic that tell us this is literally impossible. Also common sense.

                      And LLMs are notoriously stupid. Why would you offer them as an example?

                      I keep coming back to this: what we were discussing in this thread is the creation of an actual mind, not a zombie illusion. You’re welcome to make your half-assed malfunctional zombie LLM machine to do menial or tedious uncreative statistical tasks. I’m not against it. That’s just not what interests me.

                      Sooner or later humans will create real artificial minds. Right now, though, we don’t know how to do that. Oh well.

                      https://introtcs.org/public/index.html


[email protected] wrote (#39), replying to #35:

                        if I'm wrong list a task that a conscious being can do that an unconscious one is unable to accomplish.

                        These have been listed repeatedly: love, think, understand, contemplate, discover, aspire, lead, philosophize, etc.

                        There are, in fact, very few interesting or important things that a non-thinking entity can do. It can make toast. It can do calculations. It can design highways. It can cure cancer. It can probably fold clothes. None of this shit is particularly exciting. Just more machines doing what they’re told. We want a machine that can tell us what to do, instead. That’s AGI. We don’t know how to build such a machine, at least given our current understanding of mathematical logic, theoretical computer science, and human cognition.


[email protected] wrote (#40), replying to #36:

                          Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.

                          When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics? How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?

                          • A [email protected]
                            This post did not contain any content.
[email protected] wrote (#41):

I watched the YouTube video of this; I don't know if they left this out of the written report, but they said nothing about individuals running AI on their own hardware or cloud services. It was only about OpenAI and DeepSeek.
