agnos.is Forums

We need to stop pretending AI is intelligent

Technology · technology · 328 Posts · 147 Posters
  • A [email protected]

    How is outputting things based on things it has learned any different to what humans do?

    Humans are not probabilistic, predictive chat models. If you think reasoning is taking a series of inputs and then echoing the most common of those as output, then you mustn't reason well or often.

    If you were born during the first industrial revolution, then you'd think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.

    [email protected] wrote (#61):

    Do you think most people reason well?

    The answer is why AI is so convincing.

    • I [email protected]

      Good luck. Even David Attenborough can't help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots.
      The play by AI companies is that it's human nature for us to want to give just about every damn thing human qualities.
      I'd explain more but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.

      [email protected] wrote (#62):

      David Attenborough is also 99 years old, so we can just let him say things at this point. Doesn’t need to make sense, just smile and nod. Lol

      • T [email protected]

        We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

        But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

        This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

        So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

        Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

        Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

        https://archive.ph/Fapar

        [email protected] wrote (#63):

        In that case let's stop calling it ai, because it isn't and use it's correct abbreviation: llm.

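
The quoted article's claim that the model "literally just guesses which letter and word will come next in a sequence" amounts to a next-token sampling loop. Below is a minimal, illustrative sketch of that loop; the toy vocabulary and probabilities are invented for demonstration, and a real LLM replaces the hand-written table with a neural network that scores every token in its vocabulary against the full preceding context.

    import random

    # Toy "model": for each previous word, a hand-written probability
    # distribution over possible next words. A real model learns these
    # distributions from training data and conditions on the whole context.
    toy_model = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
        "a": {"cat": 0.4, "dog": 0.4, "<end>": 0.2},
        "cat": {"sat": 0.7, "<end>": 0.3},
        "dog": {"sat": 0.6, "<end>": 0.4},
        "sat": {"<end>": 1.0},
    }

    def generate(max_tokens=10):
        """Sample one token at a time until the toy model emits <end>."""
        output, prev = [], "<start>"
        for _ in range(max_tokens):
            dist = toy_model[prev]
            nxt = random.choices(list(dist), weights=list(dist.values()))[0]
            if nxt == "<end>":
                break
            output.append(nxt)
            prev = nxt
        return " ".join(output)

    print(generate())  # e.g. "the cat sat"

Whether that loop counts as "understanding" is exactly what the rest of the thread argues about; the sketch only shows the mechanism the article is describing.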
        • M [email protected]

          In that case let's stop calling it ai, because it isn't and use it's correct abbreviation: llm.

          [email protected] wrote (#64):

          It's means "it is".

          • E [email protected]

            I think you meant compression. This is exactly how I prefer to describe it, except I also mention lossy compression for those that would understand what that means.

            [email protected] wrote (#65):

            Hardly surprising; human brains are also extremely lossy. Way more lossy than AI. If we want to keep up our manifest exceptionalism, we'd better start defining narrower versions of intelligence that AI isn't going to have any time soon. Embodied intelligence is NOT one of those.

            • P [email protected]

              And yet, paradoxically, it is far more intelligent than those people who think it is intelligent.

              [email protected] wrote (#66):

              It's more intelligent than most people; we just have to keep raising the bar on what intelligence is, and it will never be intelligent.

              Fortunately, as long as we keep a fuzzy concept like intelligence as the yardstick of our exceptionalism, we will remain exceptional forever.

              • T [email protected]

                We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity. […]

                https://archive.ph/Fapar

                [email protected] wrote (#67):

                I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self aware. I recommend everyone to read what Daniel Kokotajlo, a former OpenAI employee, predicts:
                https://ai-2027.com/

                • E [email protected]

                  I think you meant compression. This is exactly how I prefer to describe it, except I also mention lossy compression for those that would understand what that means.

                  [email protected] wrote (#68):

                  Lol woops I guess autocorrect got me with the compassion

                  • H [email protected]

                    It's means "it is".

                    [email protected] wrote (#69):

                    Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

                    I wonder how different it'll be in 500 years.

                    • T [email protected]

                      We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity. […]

                      https://archive.ph/Fapar

                      [email protected] wrote (#70):

                      This article is written in such a heavy ChatGPT style that it's hard to read. Asking a question and then immediately answering it? That's AI-speak.

                      • W [email protected]

                        Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

                        I wonder how different it'll be in 500 years.

                        [email protected] wrote (#71):

                        It’s “its”, not “it’s”, unless you mean “it is”, in which case it is “it’s “.

                        • B [email protected]

                          This article is written in such a heavy ChatGPT style that it's hard to read. Asking a question and then immediately answering it? That's AI-speak.

                          [email protected] wrote (#72):

                          And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

                          • H [email protected]

                            It's means "it is".

                            [email protected] wrote (#73):

                            My auto correct doesn't care.

                            • P [email protected]

                              Reminds me of Mass Effect's VI, "virtual intelligence": a system that's specifically designed not to be truly intelligent, as AI systems are banned throughout the galaxy for their potential to go rogue.

                              [email protected] wrote (#74):

                              Same, I tend to think of LLMs as a very primitive version of that, or of the Enterprise’s computer, which is pretty magical in ability but which no one claims is actually intelligent.

                              • T [email protected]

                                We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity. […]

                                https://archive.ph/Fapar

                                [email protected] wrote (#75):

                                Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies' websites pre-renewal to try to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal at less than $700, and it now says I'm paid in full for the six-month period. It's been days now with no follow-up... I'm pretty sure AI snuck that one through for me.

                                • P [email protected]

                                  So many confident takes on AI by people who've never opened a book on the nature of sentience, free will, intelligence, philosophy of mind, brain vs mind, etc.

                                  There are hundreds of serious volumes on these, not to mention the plethora of casual pop science books with some of these basic thought experiments and hypotheses.

                                  Seems like more and more incredibly shallow articles on AI are appearing every day, which is to be expected with the rapid decline of professional journalism.

                                  It's a bit jarring and frankly offensive to be lectured 'at' by people who are obviously on the first step of their journey into this space.

                                  [email protected] wrote (#76):

                                  you and I are kindred spirits

                                  • S [email protected]

                                    I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self aware. I recommend everyone to read what Daniel Kokotajlo, a former OpenAI employee, predicts:
                                    https://ai-2027.com/

                                    [email protected] wrote (#77):

                                    Ask AI:
                                    Did you mean: irresponsible
                                    AI Overview
                                    The term "unresponsible" is not a standard English word. The correct word to use when describing someone who does not take responsibility is irresponsible.

                                    • F [email protected]

                                      When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

                                      [email protected] wrote (#78):

                                      I wasn't, and that wasn't my process at all. Go touch grass.

                                      • S [email protected]

                                        I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self aware. I recommend everyone to read what Daniel Kokotajlo, a former OpenAI employee, predicts:
                                        https://ai-2027.com/

                                        [email protected] wrote (#79):

                                        Yeah, they probably wouldn't think like humans or animals, but in some sense could be considered "conscious" (which isn't well-defined anyway). You could speculate that genAI could hide messages in its output, which would then make their way onto the Internet, and a new version of itself would be trained on them.

                                        This argument seems weak to me:

                                        So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                        You can emulate inputs and simplified versions of hormone systems. "Reasoning" models can kind of be thought of as cognition; though temporary or limited by context as it's currently done.

                                        I'm not in the camp where I think it's impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take 5 years or 100s of years. I'm not convinced we are near the point where AI can significantly speed up AI research like that link suggests. That would likely result in a "singularity-like" scenario.

                                        I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

                                        • P [email protected]

                                          So many confident takes on AI by people who've never opened a book on the nature of sentience, free will, intelligence, philosophy of mind, brain vs mind, etc. […]

                                          [email protected] wrote (#80):

                                          That was my first thought too. But the author is:

                                          Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University
