agnos.is Forums

We need to stop pretending AI is intelligent

Technology · 328 posts · 147 posters
  • K [email protected]

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

    This is not a good argument.

[email protected] (#195):

    philosopher

    Here's why. It's a quote from a pure academic attempting to describe something practical.

    • B [email protected]

      philosopher

      Here's why. It's a quote from a pure academic attempting to describe something practical.

[email protected] (#196):

The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn't do.

      Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.

      • E [email protected]

        I agreed with most of what you said, except the part where you say that real AI is impossible because it's bodiless or "does not experience hunger" and other stuff. That part does not compute.

        A general AI does not need to be conscious.

[email protected] (#197):

That, and there is literally no way to prove something is or isn't conscious. I can't even prove to another human being that I'm a conscious entity; you just have to assume I am because, from your own experience, you are, and therefore I must be too, right?

Not saying I consider AI in its current form to be conscious; it's more that the whole idea is silly and unfalsifiable.

        • I [email protected]

So couldn't we say LLMs aren't really AI? Cuz that's what I've come to terms with.

[email protected] (#198):

We can say whatever the fuck we want. This isn't any kind of real issue. Think about it: if you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit, and people are looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I'm so done with their crap.

          • A [email protected]

            Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?

In this discussion, it's a personal style thing combined with a desire to irritate you and your fellow "people are chatbots" dorks, and based upon the downvotes, I'd say it's working.

            And that irritation you feel is a step on the path to enlightenment if only you'd keep going down the path. I know why I'm irritated with your arguments: they're reductive, degrading, and dehumanizing. Do you know why you're so irritated with mine? Could it maybe be because it causes you to doubt your techbro mission statement bullshit a little?

[email protected] (#199):

Who's a techbro? The fact that you can't even have a discussion without repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.

            Is it techbro of me to think that pushing AI into every product is stupid? Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I'm being reductive, degrading, and dehumanising, but that's all simply based on your insecurity.

I was simply being realistic based on the little we know of the human brain and how it works; as far as we can tell, it pretty much works that way until we discover this special something that supposedly makes us better than other neural networks. Without that discovery, your insistence is based on nothing more than your own desire to feel special.

            • T [email protected]

              We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

              But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

              This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

              So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

              Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

              Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

              https://archive.ph/Fapar

[email protected] (#200):

              My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

              It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

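For readers who want to see concretely what the article's "guesses which letter and word will come next" amounts to, here is a deliberately minimal sketch of a next-token sampling loop. The bigram count table below is a toy stand-in invented for illustration, not how any production LLM is built, but the outer predict-then-sample loop has the same shape:

```python
import math
import random

def softmax(scores):
    # Turn raw scores into a probability distribution over the vocabulary.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(score_fn, vocab, prompt, max_new_tokens=10):
    """Score every vocabulary item given the text so far, sample one
    token from the resulting distribution, append it, and repeat."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = softmax([score_fn(tokens, cand) for cand in vocab])
        tokens.append(random.choices(vocab, weights=probs, k=1)[0])
    return tokens

# Toy stand-in for a trained model: a bigram count table from a tiny corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts[(prev, nxt)] = counts.get((prev, nxt), 0) + 1

def bigram_score(tokens, candidate):
    # How often did `candidate` follow the most recent token in the corpus?
    return counts.get((tokens[-1], candidate), 0)

print(" ".join(generate(bigram_score, vocab, ["the"])))
```

A real model swaps the count table for a neural network scoring subword tokens and adds tricks like temperature and top-k truncation, but it still produces a probability distribution over the vocabulary and picks the next token from it.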
              • F [email protected]

                When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

[email protected] (#201):

By this logic we never came up with anything new, ever, which is easily disproved if you take two seconds and simply look at the world around you. We made all of this from nothing, and it wasn't a probabilistic response.

Your lack of creativity is not universal; people create new things all the time, and you simply cannot program ingenuity or inspiration.

                • H [email protected]

I am talking more about listening to and reading scientists in the media. The definition of consciousness is vague at best.

[email protected] (#202):

                  So, you’re listening to journalists and fiction writers try to interpret things scientists do and taking that as hard science?

                  • H [email protected]

                    Philosophers are so desperate for humans to be special.
                    How is outputting things based on things it has learned any different to what humans do?

                    We observe things, we learn things and when required we do or say things based on the things we observed and learned. That's exactly what the AI is doing.

                    I don't think we have achieved "AGI" but I do think this argument is stupid.

[email protected] (#203):

Pointing out that humans are not the same as a computer or a piece of software, on a fundamental level of form and function, is hardly philosophical. It's just basic awareness of what a person is and what a computer is. We can't say at all for sure how things work in our brains, yet you're evangelizing that computers are capable of the exact same thing, but better, and you accuse others of not understanding what they're talking about?

                    • S [email protected]

                      Who's a techbro, the fact that you can't even have a discussion without resorting to repeating a meme two comments in a row and accusing someone with a label so you can stop thinking critically is really funny.

                      Is it techbro of me to think that pushing AI into every product is stupid? Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I'm being reductive, degrading, and dehumanising, but that's all simply based on your insecurity.

                      I was simply being realistic based on the little we know of the human brain and how it works, it is pretty much that until we discover this special something that makes you think we're better than other neural networks. Without this discovery, your insistence is based on nothing more than your own desire to feel special.

[email protected] (#204):

                      Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?

                      Yep, that's a bingo!

                      Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

There's a much more interesting discussion to be had than "humans are basically chatbots", but it's this line of thinking that I find irritating.

If humans are simply thought processes or our productive output, then once you have a machine capable of thinking similarly (btw, chatbots aren't that and likely never will be) you can feel free to dispose of humanity. It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

                      • K [email protected]

It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

                        It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.

[email protected] (#205):
                        This post did not contain any content.
                        • K [email protected]

                          It's hard to see that books argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

                          It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.

[email protected] (#206):

If you can bear the cringe of the interviewer, there's a good interview with Penrose that goes in the same direction: https://m.youtube.com/watch?v=e9484gNpFF8

                          • T [email protected]

                            We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                            But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                            This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                            So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                            Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                            Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                            https://archive.ph/Fapar

[email protected] (#207):

                            Anyone pretending AI has intelligence is a fucking idiot.

                            • A [email protected]

                              Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?

                              Yep, that's a bingo!

                              Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

                              There's a much more interesting discussion to be had than "humans are basically chatbot" but it's this line of thinking that I find irritating.

                              If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) then you can feel free to dispose of humanity. It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

[email protected] (#208):

                              Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

                              Show your proof, then. I've already said what I need to say about this topic.

                              If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) then you can feel free to dispose of humanity.

We have no idea how humans think, yet you're so confident that LLMs don't and never will think similarly? Are you the techbro now? You're speaking so confidently about something that I don't think can be proven at this moment, and I typically associate that with techbros trying to sell their products. Also, why are you talking about disposing of humanity? Your insecurity level is really concerning.

Understanding how the human brain works is a wonderful thing that will let us unlock better treatments for mental health issues. Being able to understand brains fully means we should also be able to replicate them to a certain extent. None of this involves disposing of humans.

                              It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

This is just more of you projecting your insecurity onto me and accusing me of doing things you fear. All I've said is that human thoughts are also probabilistic, based on the little we know of them. The fact that your mind wanders so far off into thoughts about me justifying a robot army takeover of the world is just you letting your fear run wild into the realm of conspiracy theory. Take a deep breath, and maybe take your own advice and go touch some grass.

                              • K [email protected]

                                It's hard to see that books argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

                                It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.

[email protected] (#209):

                                I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

                                • T [email protected]

                                  We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                                  But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                                  This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                                  So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                  Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                                  Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                                  https://archive.ph/Fapar

[email protected] (#210):

                                  I know it doesn't mean it's not dangerous, but this article made me feel better.

In reply to [email protected]:

                                    My language doesn't really have hyphenated words or different dashes. It's mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

[email protected] (#211):

                                    Sounds wonderful. I recently had my writing—which is liberally sprinkled with em-dashes—edited to add spaces to conform to the house style and this made me sad.

                                    I also feel sad that I failed to (ironically) mention the under-appreciated semicolon; punctuation that is not as adamant as a full stop but more assertive than a comma. I should use it more often.

                                    • S [email protected]

                                      I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

[email protected] (#212):

                                      possibly.

                                      current machines aren’t really capable of what we would consider sentience because of the von neumann bottleneck.

simply put, computers treat memory and computation as separate tasks, leading to an explosion in necessary system resources for tasks that would be relatively trivial for a brain-system to do, largely due to things like buffers and memory management code. lots of this is hidden from the engineer and end user these days, so people aren't really aware of exactly how fucking complex most modern computational systems are.

this is why if, for example, i threw a ball at you, you would reflexively catch it, dodge it, or parry it; and your brain would do so for an amount of energy similar to that required to power a simple LED. this is a highly complex physics calculation run in a very short amount of time for an incredibly low amount of energy relative to the amount of information in the system. the brain is capable of this because your brain doesn't store information in a chest and later retrieve it like contemporary computers do. brains are turing machines, they just aren't von neumann machines. in the brain, information is stored… within the actual system itself. the mechanical operation of the brain is so highly optimized that it likely isn't physically possible to make a much more efficient computer without venturing into the realm of strange quantum mechanics. even then, the jury is still out on whether natural brains don't do something like this to some degree as well. we know a whole lot about the brain, but it seems some damnable incompleteness-theorem-adjacent effect prevents us from easily comprehending the actual mechanics of our own brains from inside the brain itself in a holistic manner.

                                      that’s actually one of the things AI and machine learning might be great for. if it is impossible to explain the human experience from inside of the human experience… then we must build a non-human experience and ask its perspective on the matter - again, simply put.

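To put a rough number on the memory-versus-compute point above, here is a small back-of-the-envelope sketch; the 4-byte value size and the matrix-vector example are illustrative assumptions, not measurements of any particular machine:

```python
# Rough arithmetic-intensity estimate for a dense matrix-vector multiply,
# a common way to illustrate the von Neumann bottleneck: how much useful
# arithmetic you get per byte shuttled between memory and the processor.
# The 4-byte value size is an illustrative assumption (e.g. float32).

def matvec_cost(n, bytes_per_value=4):
    flops = 2 * n * n                                 # one multiply + one add per matrix entry
    bytes_moved = bytes_per_value * (n * n + 2 * n)   # matrix + input vector + output vector
    return flops, bytes_moved, flops / bytes_moved    # arithmetic intensity in FLOP/byte

if __name__ == "__main__":
    for n in (1_000, 10_000):
        flops, traffic, intensity = matvec_cost(n)
        print(f"n={n}: {flops:.2e} FLOPs, {traffic:.2e} bytes moved, "
              f"{intensity:.2f} FLOP/byte")
```

At roughly 0.5 FLOP per byte moved, a processor that can execute far more arithmetic than its memory system can deliver spends most of its time waiting on data rather than computing, which is one concrete face of the bottleneck described in the post above.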
                                      • J [email protected]

                                        So you trust your slm more than your fellow humans?

[email protected] (#213):

Ya, of course I do. Humans are the most unreliable, slick, disgusting, diseased, morally inept living organisms on the planet.

                                        • G [email protected]

                                          Anyone pretending AI has intelligence is a fucking idiot.

[email protected] (#214):

You could say they're AS (Actual Stupidity).
