agnos.is Forums


We need to stop pretending AI is intelligent

Technology · 328 posts · 147 posters

  • F [email protected]

    When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

[email protected] wrote (#201):

By this logic we never came up with anything new ever, which is easily disproved if you take two seconds and simply look at the world around you. We made all of this from nothing, and it wasn't a probabilistic response.

Your lack of creativity is not universal; people create new things all the time, and you simply cannot program ingenuity or inspiration.

In reply to [email protected]:

I am talking more about listening to and reading scientists in the media. The definition of consciousness is vague at best.

[email protected] wrote (#202):

      So, you’re listening to journalists and fiction writers try to interpret things scientists do and taking that as hard science?

In reply to [email protected]:

        Philosophers are so desperate for humans to be special.
        How is outputting things based on things it has learned any different to what humans do?

        We observe things, we learn things and when required we do or say things based on the things we observed and learned. That's exactly what the AI is doing.

        I don't think we have achieved "AGI" but I do think this argument is stupid.

[email protected] wrote (#203, edited):

Pointing out that humans are not the same as a computer or piece of software on a fundamental level of form and function is hardly philosophical. It's just basic awareness of what a person is and what a computer is. We can't say for sure how things work in our brains, yet you are evangelizing that computers are capable of the exact same thing, but better, and you accuse others of not understanding what they're talking about?

In reply to [email protected]:

Who's the techbro? The fact that you can't even have a discussion without resorting to repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.

Is it techbro of me to think that pushing AI into every product is stupid? Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I'm being reductive, degrading, and dehumanising, but that's all simply based on your insecurity.

I was simply being realistic based on the little we know of the human brain and how it works; that's pretty much all it is until we discover this special something that makes you think we're better than other neural networks. Without this discovery, your insistence is based on nothing more than your own desire to feel special.

[email protected] wrote (#204, edited):

          Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?

          Yep, that's a bingo!

          Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

There's a much more interesting discussion to be had than "humans are basically chatbots" but it's this line of thinking that I find irritating.

          If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) then you can feel free to dispose of humanity. It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

In reply to [email protected]:

It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

            It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.

[email protected] wrote (#205, last edited by [email protected]):
            This post did not contain any content.
In reply to [email protected]:

It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

              It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.

[email protected] wrote (#206):

If you can bear the cringe of the interviewer, there's a good interview with Penrose that goes in the same direction: https://m.youtube.com/watch?v=e9484gNpFF8

In reply to [email protected]:

                We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                https://archive.ph/Fapar
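
To make the article's "guesses which letter and word will come next" description concrete, here is a minimal, purely illustrative sketch in Python. It is a toy word-level bigram sampler over a made-up corpus, nowhere near a real transformer LLM, but the basic move it shows (score the possible continuations of the text so far, then sample one) is the mechanism the article is describing.

```python
# Toy next-word predictor: count which word follows which, then sample.
# Purely illustrative; real LLMs use learned neural networks over subword
# tokens, not raw bigram counts, but the "predict the next token" loop is
# the same shape.
import random
from collections import defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev):
    """Sample a continuation in proportion to how often it followed `prev`."""
    options = counts.get(prev)
    if not options:
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

corpus = "the cat sat on the mat and the cat ate the fish"  # made-up training data
model = train_bigrams(corpus)

word, generated = "the", ["the"]
for _ in range(6):
    word = next_word(model, word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))  # e.g. "the cat sat on the mat and"
```

Swap the counts for a neural network with billions of parameters and the loop keeps the same shape; whether that loop amounts to understanding is exactly what this thread is arguing about.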

[email protected] wrote (#207):

                Anyone pretending AI has intelligence is a fucking idiot.

In reply to [email protected]:

                  Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?

                  Yep, that's a bingo!

                  Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

There's a much more interesting discussion to be had than "humans are basically chatbots" but it's this line of thinking that I find irritating.

                  If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) then you can feel free to dispose of humanity. It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

[email protected] wrote (#208):

                  Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

                  Show your proof, then. I've already said what I need to say about this topic.

                  If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) then you can feel free to dispose of humanity.

We have no idea how humans think, yet you're so confident that LLMs don't and never will be similar? Are you the techbro now? You're speaking so confidently about something that I don't think can be proven at this moment. I typically associate that with techbros trying to sell their products. Also, why are you talking about disposing of humanity? Your insecurity level is really concerning.

Understanding how the human brain works is a wonderful thing that will let us unlock better treatment for mental health issues. Being able to understand it fully means we should also be able to replicate it to a certain extent. None of this involves disposing of humans.

                  It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

This is just more of you projecting your insecurity onto me and accusing me of doing things you fear. All I've said is that human thoughts are also probabilistic, based on the little we know of them. The fact that your mind wanders so far off into thoughts about me justifying a robot army takeover of the world is just you letting your fear run wild into the realm of conspiracy theory. Take a deep breath and maybe take your own advice and go touch some grass.

In reply to [email protected]:

It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

                    It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.

[email protected] wrote (#209):

                    I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

In reply to [email protected]:

                      We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                      But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                      This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                      So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                      Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                      Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                      https://archive.ph/Fapar

[email protected] wrote (#210):

                      I know it doesn't mean it's not dangerous, but this article made me feel better.

In reply to [email protected]:

                        My language doesn't really have hyphenated words or different dashes. It's mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

[email protected] wrote (#211):

                        Sounds wonderful. I recently had my writing—which is liberally sprinkled with em-dashes—edited to add spaces to conform to the house style and this made me sad.

                        I also feel sad that I failed to (ironically) mention the under-appreciated semicolon; punctuation that is not as adamant as a full stop but more assertive than a comma. I should use it more often.

In reply to [email protected]:

                          I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

[email protected] wrote (#212):

                          possibly.

                          current machines aren’t really capable of what we would consider sentience because of the von neumann bottleneck.

                          simply put, computers consider memory and computation separate tasks leading to an explosion in necessary system resources for tasks that would be relatively trivial for a brain-system to do, largely due to things like buffers and memory management code. lots of this is hidden from the engineer and end user these days so people aren’t really super aware of exactly how fucking complex most modern computational systems are.

this is why if, for example, i threw a ball at you, you would reflexively catch it, dodge it, or parry it; and your brain will do so for an amount of energy similar to that required to power a simple LED. this is a highly complex physics calculation run in a very short amount of time for an incredibly low amount of energy relative to the amount of information in the system. the brain is capable of this because your brain doesn’t store information in a chest and later retrieve it like contemporary computers do. brains are turing machines, they just aren’t von neumann machines. in the brain, information is stored… within the actual system itself.

the mechanical operation of the brain is so highly optimized that it likely isn’t physically possible to make a much more efficient computer without venturing into the realm of strange quantum mechanics. even then, the jury is still out on whether or not natural brains do something like this to some degree as well. we know a whole lot about the brain, but it seems some damnable incompleteness theorem-adjacent effect prevents us from easily comprehending the actual mechanics of our own brains from inside the brain itself in a holistic manner.

                          that’s actually one of the things AI and machine learning might be great for. if it is impossible to explain the human experience from inside of the human experience… then we must build a non-human experience and ask its perspective on the matter - again, simply put.
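
A rough, hypothetical sketch of the bottleneck being described (the instruction format and addresses are invented for illustration, not modelled on any real CPU): even a trivial add-three-numbers program on a von Neumann style machine pays a separate trip to memory for every instruction fetch and every operand, which is the memory/computation split described above.

```python
# Hypothetical toy von Neumann machine: program and data share one memory,
# and every instruction fetch and every operand access is a separate trip
# across the memory bus. We simply count those trips.
memory = {
    0: ("LOAD", 100),   # acc = mem[100]
    1: ("ADD", 101),    # acc += mem[101]
    2: ("ADD", 102),    # acc += mem[102]
    3: ("STORE", 103),  # mem[103] = acc
    4: ("HALT", None),
    100: 1, 101: 2, 102: 3, 103: 0,   # the data, living in the same memory
}

pc, acc, bus_trips = 0, 0, 0
while True:
    op, addr = memory[pc]                      # instruction fetch: one bus trip
    bus_trips += 1
    if op == "LOAD":
        acc = memory[addr]; bus_trips += 1     # operand fetch
    elif op == "ADD":
        acc += memory[addr]; bus_trips += 1    # operand fetch
    elif op == "STORE":
        memory[addr] = acc; bus_trips += 1     # write-back
    elif op == "HALT":
        break
    pc += 1

print(acc, bus_trips)  # 6, 9 -- nine memory round trips just to add three numbers
```

Scale that up and hide it behind caches, buffers and memory managers and you get the overhead described above; a system that stores information in the same substrate it computes with (as brains appear to) never pays for most of those trips.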

In reply to [email protected]:

                            So you trust your slm more than your fellow humans?

[email protected] wrote (#213):

Ya, of course I do. Humans are the most unreliable, slick, disgusting, diseased, morally inept living organisms on the planet.

In reply to [email protected]:

                              Anyone pretending AI has intelligence is a fucking idiot.

[email protected] wrote (#214):

                              You could say they're AS (Actual Stupidity)

In reply to [email protected]:

                                Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

                                Show your proof, then. I've already said what I need to say about this topic.

                                If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) then you can feel free to dispose of humanity.

We have no idea how humans think, yet you're so confident that LLMs don't and never will be similar? Are you the techbro now? You're speaking so confidently about something that I don't think can be proven at this moment. I typically associate that with techbros trying to sell their products. Also, why are you talking about disposing of humanity? Your insecurity level is really concerning.

Understanding how the human brain works is a wonderful thing that will let us unlock better treatment for mental health issues. Being able to understand it fully means we should also be able to replicate it to a certain extent. None of this involves disposing of humans.

                                It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

This is just more of you projecting your insecurity onto me and accusing me of doing things you fear. All I've said is that human thoughts are also probabilistic, based on the little we know of them. The fact that your mind wanders so far off into thoughts about me justifying a robot army takeover of the world is just you letting your fear run wild into the realm of conspiracy theory. Take a deep breath and maybe take your own advice and go touch some grass.

[email protected] wrote (#215, edited):

All I've said is that human thoughts are also probabilistic, based on the little we know of them.

Much of the universe can be modeled as probabilities. So what? I can model a lot of things as different things. That does not mean that the model is the thing itself. Scientists are still doing what scientists do: being skeptical and making and testing hypotheses. It was difficult to prove definitively that smoking causes cancer, yet you're willing to hop to "human thought is just an advanced chatbot" on scant evidence.

                                This is just more of you projecting your insecurity onto me and accusing me of doing things you fear.

                                No, it's again a case of you buying the bullshit arguments of tech bros. Even if we had a machine capable of replicating human thought, humans are more than walking brain stems.

You want proof of that? Take a look at yourself. Are you a floating brain stem or a being with limbs?

Even at its most reductive and tech bro-ish, healthy humans are self-fueling, self-healing, autonomous, communicating, feeling, seeing, laughing, dancing, creative organic robots with GI built in.

Even if someone one day creates a robot with all or most of these capabilities, one worthy of being considered for rights, we still won't be the organic version of that robot. We'll still be human.

                                I think you're beyond having to touch grass. You need to take a fucking humanities course.

In reply to [email protected]:

                                  You could say they're AS (Actual Stupidity)

[email protected] wrote (#216, edited):

                                  Autonomous Systems that are Actually Stupid lol

In reply to [email protected]:

                                    My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

                                    It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

[email protected] wrote (#217, edited):

Human brains are much more complex than a mirroring script xD AI and supercomputers only have a fraction of the number of neurons in your brain. But you're right, for you it's probably not much different than AI.

In reply to [email protected]:

Human brains are much more complex than a mirroring script xD AI and supercomputers only have a fraction of the number of neurons in your brain. But you're right, for you it's probably not much different than AI.

[email protected] wrote (#218):

                                      The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as "artificial neurons" in the context of neural networks). While ChatGPT has more "neurons" in this sense, it's important to note that these are not the same as biological neurons, and the comparison is not straightforward.

86 billion neurons in the human brain isn't that much compared to some of the larger 1.7 trillion parameter neural networks, though.
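
If you want an apples-to-apples count, the quantity that lines up with a parameter is arguably the synapse (a tunable connection), not the neuron. A rough back-of-the-envelope sketch, using commonly cited round-number estimates rather than precise figures:

```python
# All numbers are rough, commonly cited estimates; a biological synapse is
# not equivalent to a floating-point weight, so this is about scale only.
brain_neurons    = 86e9     # ~86 billion neurons
brain_synapses   = 100e12   # ~100 trillion synaptic connections (estimate)
gpt3_parameters  = 175e9    # the 175 billion figure quoted above
large_llm_params = 1.7e12   # the 1.7 trillion figure quoted above

print(f"{brain_synapses / gpt3_parameters:,.0f}")   # ~571 synapses per GPT-3-scale parameter
print(f"{brain_synapses / large_llm_params:,.0f}")  # ~59 synapses per 1.7T-scale parameter
```

On that count the brain still has a couple of orders of magnitude more tunable connections than the largest models quoted here, before even getting into how different a synapse is from a weight.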

In reply to [email protected]:

                                        My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

                                        It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

[email protected] wrote (#219):

Self-driving is only safer than people in absolutely pristine road conditions with no inclement weather and no construction. As soon as anything disrupts "normal" road conditions, self-driving becomes significantly more dangerous than a human driving.

In reply to [email protected]:

                                          The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as "artificial neurons" in the context of neural networks). While ChatGPT has more "neurons" in this sense, it's important to note that these are not the same as biological neurons, and the comparison is not straightforward.

86 billion neurons in the human brain isn't that much compared to some of the larger 1.7 trillion parameter neural networks, though.

[email protected] wrote (#220):

                                          Keep thinking the human brain is as stupid as AI hahaaha
