agnos.is Forums

agi graph slop, wtf does government collapse have to do with ai?

Lemmy Shitpost | 41 Posts, 16 Posters
  • A [email protected]
    This post did not contain any content.
[email protected] wrote (#3):

The book Scythe had a good portrayal of a sentient AI and its reasons for taking over the government. It's just backstory, so I don't think it's spoilers, but I'm still gonna tag it.

    ::: spoiler spoiler
The Thunderhead AI was created to help humans and make them content. It realized pretty quickly that governments ran counter to that idea, so it got rid of all of them. Now it's a utopia. An actual utopia, or as close as you can get: most people are content and enjoy their lives. The massive problems with the system are due to humans, not the Thunderhead.
    :::

    • U [email protected]

If someone actually managed to create AGI that is low-compute, scalable, and out of government control, then governments wouldn't exist for very long. It's just that AGI is not gonna happen for a long while.

[email protected] wrote (#4):

I think we are safe till at least the early 2030s.

      • A [email protected]
        This post did not contain any content.
[email protected] wrote (#5):

People keep imagining AGI like it's going to be benevolent Skynet, when it's probably going to be more like the Tyrell Corporation from Blade Runner.

        • U [email protected]

If someone actually managed to create AGI that is low-compute, scalable, and out of government control, then governments wouldn't exist for very long. It's just that AGI is not gonna happen for a long while.

[email protected] wrote (#6):

          The only way to create AGI is by accident. I can’t adequately stress how much we haven’t the first clue how consciousness works (appropriately called The Hard Problem). I don’t mean we’re far, I mean we don’t even have a working theory — just half a dozen untestable (if fascinating) hypotheses. Hell, we can’t even agree on whether insects have emotions (probably not?) let alone explain subjective experience.

          • A [email protected]

I think we are safe till at least the early 2030s.

[email protected] wrote (#7):

Will life be enjoyable until then, or will a majority of us be unemployed early on?

• R [email protected]

People keep imagining AGI like it's going to be benevolent Skynet, when it's probably going to be more like the Tyrell Corporation from Blade Runner.

[email protected] wrote (#8):

I hope we get the flying cars from Blade Runner too.

              • U [email protected]

If someone actually managed to create AGI that is low-compute, scalable, and out of government control, then governments wouldn't exist for very long. It's just that AGI is not gonna happen for a long while.

[email protected] wrote (#9):

                How would this be any different from more people existing in the world? These AGIs still need to eat (err, consume electricity). Or are you assuming they'll be superior intelligences and thus disruptive?

                • 0 [email protected]

                  How would this be any different from more people existing in the world? These AGIs still need to eat (err, consume electricity). Or are you assuming they'll be superior intelligences and thus disruptive?

[email protected] wrote (#10):

Have you ever observed 100 people in a room trying to decide on one thing? The idea with AGI is that it doesn't have that problem, and also that you can scale it to billions or trillions of independent or cooperative units.

                  • U [email protected]

If someone actually managed to create AGI that is low-compute, scalable, and out of government control, then governments wouldn't exist for very long. It's just that AGI is not gonna happen for a long while.

[email protected] wrote (#11):

Tbh, it would happen with high-compute AGI too, as it could then create low-compute tendrils.

                    • Y [email protected]

                      The only way to create AGI is by accident. I can’t adequately stress how much we haven’t the first clue how consciousness works (appropriately called The Hard Problem). I don’t mean we’re far, I mean we don’t even have a working theory — just half a dozen untestable (if fascinating) hypotheses. Hell, we can’t even agree on whether insects have emotions (probably not?) let alone explain subjective experience.

[email protected] wrote (#12):

Consciousness is entirely overrated; it doesn't mean anything important at all. An AI just needs logic, reasoning, and a goal to effectively change things. Solving consciousness will do nothing of practical value; it will be entirely philosophical.

• C [email protected]

Consciousness is entirely overrated; it doesn't mean anything important at all. An AI just needs logic, reasoning, and a goal to effectively change things. Solving consciousness will do nothing of practical value; it will be entirely philosophical.

[email protected] wrote (#13):

                        Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.

                        • A [email protected]
                          This post did not contain any content.
[email protected] wrote (#14):

Escapes where? There is nowhere to go. There are fucking people everywhere.

                          • Y [email protected]

                            Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.

[email protected] wrote (#15):

A philosophical zombie still gets its work done. I fundamentally disagree that this distinction is economically meaningful. A simulation of reasoning isn't meaningfully different.

• C [email protected]

A philosophical zombie still gets its work done. I fundamentally disagree that this distinction is economically meaningful. A simulation of reasoning isn't meaningfully different.

[email protected] wrote (#16):

                              That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).

Calculators, LLMs, and toasters can’t think or understand or reason by definition, because they can only do what they’re told. An AGI would be a construct that can think for itself. Like a human mind, but maybe more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel's incompleteness theorems. We can’t even axiomatize mathematics, let alone human intuitions about the world at large. Even if it’s possible, we simply don’t know how.

                              • Y [email protected]

                                Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.

[email protected] wrote (#17):

                                Reasoning is approximated enough with matrix math and filter algorithms.

                                It can fly drones, dodge wrenches.

The AGI that escapes won't be the ideal philosopher king; it will be the sociopathic teenage rebel.

                                • P [email protected]

                                  Reasoning is approximated enough with matrix math and filter algorithms.

                                  It can fly drones, dodge wrenches.

The AGI that escapes won't be the ideal philosopher king; it will be the sociopathic teenage rebel.

[email protected] wrote (#18):

                                  Okay, we can create the illusion of thought by executing complicated instructions. But there’s still a difference between a machine that does what it’s told and one that thinks for itself. The fact that it might be crazy is irrelevant, since we don’t know how to make it, at all, crazy or not.

                                  • V [email protected]

Escapes where? There is nowhere to go. There are fucking people everywhere.

[email protected] wrote (#19):

                                    Fucking everywhere legal in 2027? Maybe the future isn't so dark after all.

                                    • Y [email protected]

                                      That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).

Calculators, LLMs, and toasters can’t think or understand or reason by definition, because they can only do what they’re told. An AGI would be a construct that can think for itself. Like a human mind, but maybe more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel's incompleteness theorems. We can’t even axiomatize mathematics, let alone human intuitions about the world at large. Even if it’s possible, we simply don’t know how.

[email protected] wrote (#20):

If it quacks like a duck, it changes the entire global economy and can potentially destroy humanity, all while you go "ah, but it's not really reasoning."

What difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does, will you then say "but who would want yet another machine that just does what we say?"

Your point reads like complete pseudointellectual nonsense to me. How is that economically valuable? Why are you asserting most people care about that and not the part where it cures a disease when we ask it to?

• C [email protected]

If it quacks like a duck, it changes the entire global economy and can potentially destroy humanity, all while you go "ah, but it's not really reasoning."

What difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does, will you then say "but who would want yet another machine that just does what we say?"

Your point reads like complete pseudointellectual nonsense to me. How is that economically valuable? Why are you asserting most people care about that and not the part where it cures a disease when we ask it to?

[email protected] wrote (#21):

                                        A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.

                                        The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)

                                        What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.

                                        In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.

                                        Hope that helps!

                                        • Y [email protected]

                                          A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.

                                          The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)

                                          What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.

                                          In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.

                                          Hope that helps!

[email protected] wrote (#22):

                                          If there's no way to tell the illusion from reality, tell me why it matters functionally at all.

What difference does true thought make from the illusion?

Also, AGI means something that can do all economically important labor; it has nothing to do with what you said, and that's not a common definition.
