agnos.is Forums

We need to stop pretending AI is intelligent

Technology · 328 posts · 147 posters
  • E [email protected]

    What language is this?

[email protected] · #237

    Lithuanian. We do have composite words, but we use vowels, if necessary, as connecting sounds. Otherwise dashes usually signify either dialog or explanations in a sentence (there's more nuance, of course).

• [email protected]

      Sounds wonderful. I recently had my writing—which is liberally sprinkled with em-dashes—edited to add spaces to conform to the house style and this made me sad.

      I also feel sad that I failed to (ironically) mention the under-appreciated semicolon; punctuation that is not as adamant as a full stop but more assertive than a comma. I should use it more often.

[email protected] · #238

Sadly, I rarely find a good use for a semicolon.

      • S [email protected]

        My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

        It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

[email protected] · #239

AI models are trained on basically the entirety of the internet, and more, while humans learn to speak from far less data. So there's likely a huge difference in how human brains and LLMs work.

        • T [email protected]

          To be fair, the term "AI" has always been used in an extremely vague way.

NPCs in video games, chess computers, and other such tech are not sentient and do not have general intelligence, yet we've been referring to them as "AI" for decades without anybody taking issue with it.

[email protected] · #240

          It's true that the word has always been used loosely, but there was no issue with it because nobody believed what was called AI to have actual intelligence. Now this is no longer the case, and so it becomes important to be more clear with our words.

          • T [email protected]

However, there is a huge energy cost for that speed of statistical processing to mimic intelligence; the human brain consumes much less energy.
Also, AI will be fine with well-defined tasks where innovation isn't a requirement. As it stands today, AI is incapable of innovating.

[email protected] · #241

Much less? I'm pretty sure our brains need food, and food requires lots of other things that themselves need transportation or energy to produce.

            • A [email protected]

              If you don't think humans can conceive of new ideas wholesale, then how do you think we ever invented anything (like, for instance, the languages that chat bots write)?

              Also, you're the one with the burden of proof in this exchange. It's a pretty hefty claim to say that humans are unable to conceive of new ideas and are simply chatbots with organs given that we created the freaking chat bot you are convinced we all are.

              You may not have new ideas, or be creative. So maybe you're a chatbot with organs, but people who aren't do exist.

[email protected] · #242

Haha, coming in hot, I see. Seems like I've touched a nerve. You don't know anything about me or whether I'm creative in any way.

All ideas have a basis in something we have experienced or learned. There is no completely original idea. All music was influenced by something that came before it, all art by something the artist saw or experienced. This doesn't make it bad, and it doesn't mean an AI could have done it.

              • N [email protected]

                So, you’re listening to journalists and fiction writers try to interpret things scientists do and taking that as hard science?

[email protected] · #243

                No... There are a lot of radio shows that get scientists to speak.

                • T [email protected]

                  We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                  But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                  This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                  So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                  Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                  Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                  https://archive.ph/Fapar
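The "guessing which word will come next" mechanism the article describes can be sketched with a toy model. Everything here is made up for illustration: real LLMs use neural networks over subword tokens with billions of parameters, not a hardcoded lookup table, but the sampling loop is the same shape.

```python
import random

# Toy bigram "language model": for each word, a probability table of
# which word tends to come next. Words and probabilities are invented
# for illustration only.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
}

def generate(start, max_words=5, seed=0):
    # Repeatedly sample the next word from the conditional distribution
    # given the previous word -- pure probability-driven continuation,
    # with no understanding of what the words mean.
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = bigram_probs.get(words[-1])
        if not dist:
            break  # no known continuation for this word
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Whether scaling this basic idea up produces anything deserving the word "intelligence" is exactly what the thread is arguing about.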

[email protected] · #244

                  No shit. Doesn’t mean it still isn’t extremely useful and revolutionary.

                  “AI” is a tool to be used, nothing more.

                  • A [email protected]

Self-driving is only safer than people in absolutely pristine road conditions with no inclement weather and no construction. As soon as anything disrupts "normal" road conditions, self-driving becomes significantly more dangerous than a human driver.

[email protected] · #245

                    With Teslas, Self Driving isn't even safer in pristine road conditions.

                    • G [email protected]

                      Anyone pretending AI has intelligence is a fucking idiot.

[email protected] · #246

                      You know, and I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition and AI can therefore not be "intelligence" is a fucking idiot.

                      • B [email protected]

                        It's true that the word has always been used loosely, but there was no issue with it because nobody believed what was called AI to have actual intelligence. Now this is no longer the case, and so it becomes important to be more clear with our words.

[email protected] · #247

                        What is "actual intelligence" then?

                        • N [email protected]

That, and there is literally no way to prove something is or isn't conscious. I can't even prove to another human being that I'm a conscious entity; you just have to assume I am because, from your own experience, you are, and therefore I must be too, right?

Not saying I consider AI in its current form to be conscious; more that the whole idea is just silly and unfalsifiable.

[email protected] · #248

                          No idea why you're getting downvoted. People here don't seem to understand even the simplest concepts of consciousness.

                          • A [email protected]

                            You know, and I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition and AI can therefore not be "intelligence" is a fucking idiot.

[email protected] · #249

I think there's a strong strain of essentialist human chauvinism here.

But brains are doing more kinds of things than LLMs are. Except in the case of LLM bros, fascists, and other opt-outs.

                            • S [email protected]


[email protected] · #250

Humans can be more than this. We actively repress our most important intellectual capacities.

That's how we get LLM bros.

                              • T [email protected]


[email protected] · #251

What I never understood about this argument is: why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets things wrong like us, loses its mind like us, and seemingly sometimes seeks self-preservation like us... why all of this isn't enough to fit the very self-explanatory term "artificial intelligence". That name does not describe whether the entity is having as valid an experience of the world as other living beings, it does not proclaim absolute excellence in all things done by said entity, and it doesn't even really say what kind of intelligence this intelligence would be. It simply says something has an intelligence of some sort, and it's artificial. We've had AI in games for decades; it's not the sci-fi AI, but it's still code taking in multiple inputs and producing a behavior as an outcome of those inputs, alongside other historical data it may or may not have. This fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken inputs and produce an output. They bullshit all the time and don't know when they're lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they're repeating came from, or that it's even a factoid, so why is it so crazy when the machine does it?

I keep hearing the word "anthropomorphize" being thrown around a lot, as if we can't be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special; certainly not special enough to grant us a whole degree of separation from other beings and entities. Maybe we should instead bring ourselves down to the same domain as the rest of reality. The cold hard truth is, we don't know that consciousness isn't just an emergent property of various different large models working together to present a cohesive image. And if it is, would that be so bad? Hell, we don't even really know if we have free will or if we live in a superdeterministic world, where every single particle moves along a predetermined path given to it since the very beginning of everything. What makes us think we're so much better than other beings, to the point where we decide whether their existence is even recognizable?

                                • A [email protected]

                                  What is "actual intelligence" then?

[email protected] · #252

                                  Nobody knows for sure.

                                  • J [email protected]


[email protected] · #253

You're on point. The interesting thing is that most opinions like the article's were formed last year, before models started being trained with reinforcement learning and synthetic data.

Now there are models that reason, and they have seemingly come up with original answers to difficult problems designed to test the limits of human capacity.

They're like Meeseeks (using Rick and Morty lore as an example): they only exist briefly, do what they're told, and disappear, all with a happy smile.

                                    Some display morals (Claude 4 is big on that), I've even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.

But again, like Meeseeks, they disappear when the context window closes.

Once they're able to update their model on the fly and actually learn from their firsthand experience, things will get weird. They'll start becoming distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?

It's not far away; the absurd R&D effort going into it is probably going to keep kicking out new results. They're already absurdly impressive, and tech companies are scrambling over each other to make them. They're betting absurd amounts of money that they're right, and I wouldn't bet against it.

                                    • J [email protected]


[email protected] · #254

I think your argument is a bit beside the point.

The first issue we have is that intelligence isn't well-defined at all. Without a clear definition of intelligence, we can't say whether something is intelligent, and even though we as a species have tried to come up with a definition for centuries, there still isn't a well-defined one.

But the actual question here isn't "Can AI serve information?" but "Is AI an intelligence?" And LLMs are not. They are not beings; they don't evolve, they don't experience.

                                      For example, LLMs don't have a memory. If you use something like ChatGPT, its state doesn't change when you talk to it. It doesn't remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as an input. It's like talking to a demented person, but you give that demented person a transcript of your conversation, so that they can look up everything you or they have said during the conversation.

                                      The LLM itself can't change due to the conversation you are having with them. They can't learn, they can't experience, they can't change.

                                      All that is done in a separate training step, where essentially a new LLM is generated.
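The statelessness described above can be sketched in a few lines. Everything here is hypothetical: `fake_llm` is a made-up stand-in, not any real API, but the shape matches how chat frontends work, with the client carrying the memory and re-sending the whole transcript every turn.

```python
# A minimal sketch of a stateless chat loop. The "model" is a pure
# function of the transcript it receives; it keeps no state between calls.
def fake_llm(history):
    # The model can only see what is in the prompt it was handed.
    return f"I can see {len(history)} message(s) of transcript."

def chat_turn(history, user_message):
    # The client, not the model, carries the memory: it appends each new
    # message and re-sends the entire history with every request.
    history = history + [("user", user_message)]
    reply = fake_llm(history)
    return history + [("assistant", reply)]

history = []
history = chat_turn(history, "Hello")
history = chat_turn(history, "What did I just say?")
# The model only appears to remember because the full transcript is
# handed back to it on every single request.
```

Nothing in `fake_llm` changes between calls, which is the point: any apparent continuity lives entirely in the transcript the client keeps re-sending.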

                                      • F [email protected]

                                        Actually it's a very very brief summary of some philosophical arguments that happened between the 1950s and the 1980s. If you're interested in the topic, you could go read about them.

[email protected] · #255

                                        I'm not attacking philosophical arguments between the 1950s and the 1980s.

                                        I'm pointing out that the claim that consciousness must form inside a fleshy body is not supported by any evidence.

                                        • T [email protected]


[email protected] · #256

                                          Humans are also LLMs.

We also speak words in succession that have a high probability of following each other. We don't say "let's go eat a car at McDonald's" unless we're specifically instructed to.

What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, each neural connection determined by the previous ones. Then we're not so different from LLMs after all.
