agnos.is Forums


We need to stop pretending AI is intelligent

Technology · 328 Posts · 147 Posters
  • J [email protected]

    What I never understood about this argument is... why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets shit wrong like us, loses its mind like us, seemingly sometimes seeks self-preservation like us... why all of this isn't enough to fit the very self-explanatory term "artificial... intelligence". That name does not say whether the entity has a valid experience of the world the way other living beings do, it does not proclaim absolute excellence in everything the entity does, and it doesn't even say what kind of intelligence this intelligence would be. It simply says something has an intelligence of some sort, and it's artificial. We've had AI in games for decades; it's not sci-fi AI, but it's still code taking in multiple inputs and producing a behavior as an outcome of those inputs, alongside other historical data it may or may not have. This fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken inputs and produce an output. They bullshit all the time and don't know when they're lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they're repeating came from, or that it's even a factoid, so why is it so crazy when the machine does it?

    I keep hearing the word "anthropomorphize" being thrown around a lot, as if we can't be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special, certainly not special enough to grant us a whole degree of separation from other beings and entities, and that maybe we should instead bring ourselves down to the same domain as the rest of reality. The cold hard truth is, we don't know that consciousness isn't just an emergent property of various different large models working together to present a cohesive image. If it is, would that be so bad? Hell, we don't even really know if we actually have free will or if we live in a superdeterministic world, where every single particle moves along a predetermined path given to it since the very beginning of everything. What makes us think we're so much better than other beings, to the point where we decide whether their existence is even recognizable?

    [email protected] wrote: #253

    You're on point. The interesting thing is that most opinions like the article's were formed last year, before the models started being trained with reinforcement learning and synthetic data.

    Now there are models that reason, and they have seemingly come up with original answers to difficult problems designed to test the limits of human capacity.

    They're like Meeseeks (Using Rick and Morty lore as an example), they only exist briefly, do what they're told and disappear, all with a happy smile.

    Some display morals (Claude 4 is big on that), I've even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.

    But again, like Meeseeks, they disappear once the context window closes.

    Once they're able to update their model on the fly and actually learn from their firsthand experience, things will get weird. They'll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?

    It's not far away; the absurd R&D effort going into it is probably going to keep kicking out new results. They're already absurdly impressive, and tech companies are scrambling over each other to make them. They're betting absurd amounts of money that they're right, and I wouldn't bet against it.

  • J [email protected]

      [email protected] wrote: #254

      I think your argument is a bit beside the point.

      The first issue we have is that intelligence isn't well defined at all. Without a clear definition of intelligence we can't say whether something is intelligent, and even though we as a species have tried to come up with a definition for centuries, there still isn't a well-defined one.

      But the actual question here isn't "Can AI serve information?" but "Is AI an intelligence?" And LLMs are not. They are not beings; they don't evolve, they don't experience.

      For example, LLMs don't have a memory. If you use something like ChatGPT, its state doesn't change when you talk to it. It doesn't remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as an input. It's like talking to a demented person, but you give that demented person a transcript of your conversation, so that they can look up everything you or they have said during the conversation.

      The LLM itself can't change due to the conversation you are having with them. They can't learn, they can't experience, they can't change.

      All that is done in a separate training step, where essentially a new LLM is generated.
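The "whole chat history is fed back in" mechanism described above can be sketched in a few lines. Everything here is a toy stand-in: `generate` is a hypothetical placeholder for a real model call, not any actual API.

```python
# Sketch of how a stateless model "remembers" a conversation: the client
# re-sends the entire transcript on every turn. `generate` is a hypothetical
# stand-in for a real model call; it just reports how much context it saw.
def generate(prompt: str) -> str:
    return f"(reply conditioned on {prompt.count(chr(10)) + 1} transcript lines)"

history = []  # the only "memory" lives here, outside the model

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)   # entire conversation, re-sent every turn
    reply = generate(prompt)      # the model itself starts from scratch
    history.append(f"Assistant: {reply}")
    return reply

chat("Hi, my name is Ada.")
print(chat("What is my name?"))  # answerable only because history was re-sent
```

Nothing about the model changed between the two calls; only the client-side transcript grew.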

      • F [email protected]

        Actually it's a very very brief summary of some philosophical arguments that happened between the 1950s and the 1980s. If you're interested in the topic, you could go read about them.

        [email protected] wrote: #255

        I'm not attacking philosophical arguments between the 1950s and the 1980s.

        I'm pointing out that the claim that consciousness must form inside a fleshy body is not supported by any evidence.

        • T [email protected]

          We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

          But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

          This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

          So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

          Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

          Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

          https://archive.ph/Fapar
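The "guesses which word will come next" mechanism the article describes can be sketched with a toy bigram model. The corpus and sampling scheme are invented purely for illustration; real LLMs predict over subword tokens with a neural network, not raw counts.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then generate text by sampling a likely next word at each step.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str, rng: random.Random) -> str:
    counts = follows[word]
    if not counts:                 # dead end: last word of the corpus
        return corpus[0]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]  # sample by frequency

rng = random.Random(0)
out = ["the"]
for _ in range(5):
    out.append(next_word(out[-1], rng))
print(" ".join(out))
```

The output is locally plausible because every adjacent pair was seen in the corpus, yet the model has no notion of what a cat or a mat is, which is the article's point in miniature.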

          [email protected] wrote: #256

          Humans are also LLMs.

          We also speak words in succession that have a high probability of following each other. We don't say "Let's go eat a car at McDonalds" unless we're specifically instructed to say so.

          What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, each neural connection determined by the previous one. Then we're not so different from LLMs after all.

          • T [email protected]

            [email protected] wrote: #257

            Can we say that AI has the potential for "intelligence", just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren't.

            • A [email protected]

              What is "actual intelligence" then?

              [email protected] wrote: #258

              I have no idea. For me it's a "you recognize it when you see it" kind of thing. Normally I'm in favor of just measuring things with a clearly defined test or benchmark, but it is in the nature of large neural networks that they can be great at scoring on any desired benchmark while failing at the underlying ability the benchmark was supposed to test (overfitting). I know this sounds like a lazy answer, but it's very hard to define a test for something whose essence is generalizing and reacting to new challenges.

              But whether LLMs do have "actual intelligence" or not was not my point. You can definitely make a case for claiming they do, even though I would disagree with that. My point was that calling them AIs instead of LLMs bypasses the entire discussion on their alleged intelligence as if it wasn't up for debate. Which is misleading, especially to the general public.
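The benchmark-overfitting worry above can be shown numerically: a model flexible enough to nail every "benchmark" point can still do badly on fresh data from the same task. The data and model choice here are a toy setup invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Benchmark": 10 noisy samples of y = sin(x). "Real task": fresh points.
x_train = np.linspace(0.0, 3.0, 10)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, size=10)
x_test = np.linspace(0.05, 2.95, 50)
y_test = np.sin(x_test)

# A degree-9 polynomial through 10 points interpolates them essentially
# exactly: a perfect benchmark score, achieved by memorizing the noise.
coeffs = np.polyfit(x_train, y_train, deg=9)
train_mse = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

print(f"benchmark MSE: {train_mse:.2e}  off-benchmark MSE: {test_mse:.2e}")
# The gap between the two numbers is the overfitting: great score on the
# measured points, much worse performance on the ability being measured.
```

Swapping in a lower-degree fit shrinks the gap, which is why benchmark results alone say little about the underlying ability.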

              • D [email protected]

                [email protected] wrote: #259

                No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.

                • H [email protected]

                  It's called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can't write for beans.

                  [email protected] wrote: #260

                  Do you think it's a matter of choosing a complexity to care about?

                  • T [email protected]

                    To be fair, the term "AI" has always been used in an extremely vague way.

                    NPCs in video games, chess computers, or other such tech are not sentient and do not have general intelligence, yet we've been referring to those as "AI" for decades without anybody taking an issue with it.

                    [email protected] wrote: #261

                    I've heard it said that the difference between machine learning and AI is that if you can explain how the algorithm got its answer it's ML, and if you can't, then it's AI.

                    • S [email protected]

                      The book The Emperor's New Mind is old (1989), but it gave a good argument why machine-based AI was not possible. Our minds work on a fundamentally different principle then Turing machines.

                      [email protected] wrote: #262

                      "than"...

                      IF THEN

                      MORE THAN

                      • E [email protected]

                        I'd agree with you if I saw "hi's" and "her's" in the wild, but nope. I still haven't seen someone write "that car is her's".

                        [email protected] wrote: #263

                        Keep reading...

                        • W [email protected]

                          [email protected] wrote: #264

                          If you can formulate that sentence, you can handle "it's means it is". Come on. Or "common" if you prefer.

                          • M [email protected]

                            Proper grammar means shit all in English, unless you're writing in a specific style, in which case you follow the grammar rules for that style.

                            Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across is better than trying to follow the more arbitrary rules.

                            Which become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. Although I'm saying that as if that's a new thing, but it does feel like a recent thing to be taught that side of English rather than just "The Queen's(/King's) English" as the style to strive for in writing and formal communication.

                            I say as long as someone can understand what you're saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don't have a specific science to this.

                            [email protected] wrote: #265

                            Standard English has such a long list of weird and contradictory roles

                            rules.

                            • S [email protected]

                              I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

                              [email protected] wrote: #266

                              I believe what you say. I don't believe that is what the article is saying.

                              • C [email protected]

                                Much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself needs transportation or energy to produce.

                                [email protected] wrote: #267

                                Customarily, when doing these kinds of calculations, we ignore the stuff that keeps us alive, because those things are needed regardless of economic contributions, since, you know, people are people and not tools.

                                • K [email protected]

                                  [email protected] wrote (edited): #268

                                  No, that's the point of the article. You also haven't really said much at all.

                                  • S [email protected]

                                    [email protected] wrote: #269

                                    Hey they are just asking questions okay!? Are you AGAINST questions?! What are you some sort of ANTI-QUESTIONALIST?!

                                    • A [email protected]

                                      You know, I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition, and that AI therefore can't be "intelligence", is a fucking idiot.

                                      [email protected] wrote: #270

                                      Clearly intelligent people mispell and have horrible grammar too.

                                      • K [email protected]

                                        [email protected] wrote: #271

                                        No, the current branch of AI is very unlikely to result in artificial intelligence.

                                        • A [email protected]

                                          [email protected] wrote: #272

                                          This is so oversimplified.
