agnos.is Forums

We need to stop pretending AI is intelligent

Technology
328 Posts, 147 Posters
  • M [email protected]

    May I present to you:

The Merriam-Webster Dictionary

    https://www.merriam-webster.com/dictionary/artificial

    Definition #3b

[email protected]
#183

Word roots say they have a point though. Artifice, artificial, etc. I think the main problem with the way both of the people above you are using this terminology is that they're focusing on the wrong word and how that word is being conflated with something it's not.

LLMs are artificial. They are a man-made thing that is intended to fool man into believing they are something they aren't. What we're meant to be convinced they are is sapiently intelligent.

Mimicry is not sapience, and that's where the argument for LLMs being real, honest-to-God AI falls apart.

Sapience is missing from generative LLMs. They don't actually think. They don't actually have motivation. What we're doing when we anthropomorphize them is we are fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That's not what's happening. But some of us are convinced that it is, or that it's near enough that it doesn't matter.

    • A [email protected]

      Dude chatbots lie about their "internal reasoning process" because they don't really have one.

      Writing is an offshoot of verbal language, which during construction for people almost always has more to do with sound and personal style than the popularity of words. It's not uncommon to bump into individuals that have a near singular personal grammar and vocabulary and that speak and write completely differently with a distinct style of their own. Also, people are terrible at probabilities.

      As a person, I can also learn a fucking concept and apply it without having to have millions of examples of it in my "training data". Because I'm a person not a fucking statistical model.

      But you know, you have to leave your house, touch grass, and actually listen to some people speak that aren't talking heads on television in order to discover that truth.

[email protected]
#184

      Is that why you love saying touch grass so much? Because it's your own personal style and not because you think it's a popular thing to say?

      Or is it because you learned the fucking concept and not because it's been expressed too commonly in your "training data"? Honestly, it just sounds like you've heard too many people use that insult successfully and now you can't help but probabilistically express it after each comment lol.

      Maybe stop parroting other people and projecting that onto me and maybe you'd sound more convincing.

      • H [email protected]

        It is and it isn't. Again, the whole thing is super vague. Machine vision or pattern seeking algorithms do not try to imitate any human behavior, but they fall under AI.

        Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.

[email protected]
#185

From a programming POV, a definition of AI could be an algorithm or construct that can solve problems or perform tasks without the programmer specifically solving that problem or programming the steps of the task, but rather building something that can figure it out on its own.

        Though a lot of game AIs don't fit that description.
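A minimal sketch of that definition in generic Python (illustrative names, not any particular library's API): a perceptron that learns the AND function from labelled examples. The programmer writes a generic update rule, never the rule "output 1 only when both inputs are 1"; the weights that implement it come out of the data.

```python
# Minimal sketch: the programmer codes the update rule, not the task.
# The weights that implement AND are found from the examples alone.

def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (a, b), target in samples:
            pred = 1 if (w0 * a + w1 * b + bias) > 0 else 0
            err = target - pred          # +1, 0 or -1
            w0 += lr * err * a
            w1 += lr * err * b
            bias += lr * err
    return w0, w1, bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w0, w1, bias = train_perceptron(data)
for (a, b), _ in data:
    print(a, b, "->", 1 if (w0 * a + w1 * b + bias) > 0 else 0)
```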

        • M [email protected]

          May I present to you:

The Merriam-Webster Dictionary

          https://www.merriam-webster.com/dictionary/artificial

          Definition #3b

[email protected]
#186

          Thanks. I stand corrected.

• [email protected]

Are you really comparing my response to the tone when correcting minor grammatical errors to someone brushing off nearly killing someone right now?

[email protected]
#187

            That's a red herring, bro. It's an analogy. You know that.

            • S [email protected]

              Is that why you love saying touch grass so much? Because it's your own personal style and not because you think it's a popular thing to say?

              Or is it because you learned the fucking concept and not because it's been expressed too commonly in your "training data"? Honestly, it just sounds like you've heard too many people use that insult successfully and now you can't help but probabilistically express it after each comment lol.

              Maybe stop parroting other people and projecting that onto me and maybe you'd sound more convincing.

[email protected]
#188

              Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?

In this discussion, it's a personal style thing combined with a desire to irritate you and your fellow "people are chatbots" dorks, and based upon the downvotes I'd say it's working.

              And that irritation you feel is a step on the path to enlightenment if only you'd keep going down the path. I know why I'm irritated with your arguments: they're reductive, degrading, and dehumanizing. Do you know why you're so irritated with mine? Could it maybe be because it causes you to doubt your techbro mission statement bullshit a little?

              • H [email protected]

                It is and it isn't. Again, the whole thing is super vague. Machine vision or pattern seeking algorithms do not try to imitate any human behavior, but they fall under AI.

                Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.

[email protected]
#189

                I can agree with "things that try to imitate human intelligence" but not "human behavior". An Elmo doll laughs when you tickle it. That doesn't mean it exhibits artificial intelligence.

                • B [email protected]

That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself.

The model is data. It needs to be operated on to get information out. That means lots of JMPs.

If someone said viewing a GIF is just a bunch of if-elses, that's also true. That the data in the GIF isn't itself a bunch of if-elses isn't relevant.

Executing LLMs is particularly JMP heavy. It's why you need massive, fast RAM, because caching doesn't help them.

[email protected]
#190

You're correct, but that's like saying that manufacturing a car is just bolting and soldering a bunch of stuff together. It's technically true to some degree, but it's very disingenuous to make such a statement without being ironic. If you're making these claims, you're either incompetent or acting in bad faith.

I think there is a lot wrong with LLMs and how the public at large uses them, and even more so with how companies are developing and promoting them. But to spread misinformation and pollute an already overcrowded space with junk is irresponsible at best.
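A toy sketch of the distinction being argued over above (illustrative Python only, nothing like a real inference runtime): the "model" is just arrays of numbers, while the loops and comparisons that become jump instructions live entirely in the code that walks over those numbers.

```python
# The weights are pure data: there is no control flow "inside" them.
weights = [
    [0.2, -0.5, 0.1],
    [0.7,  0.0, -0.3],
]

def run_layer(weights, inputs):
    """One dense layer with ReLU, written out long-hand."""
    outputs = []
    for row in weights:                    # loop -> conditional jumps in the runtime
        total = 0.0
        for w, x in zip(row, inputs):      # another loop, more jumps
            total += w * x
        outputs.append(total if total > 0 else 0.0)  # branch: the ReLU
    return outputs

print(run_layer(weights, [1.0, 0.5, 2.0]))
```

The same sketch also hints at the RAM-bandwidth point: every weight has to be streamed through that loop for each pass, and at LLM scale the weight arrays are far too large to fit in CPU caches.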

                  • T [email protected]

                    We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                    https://archive.ph/Fapar

[email protected]
#191

This era of "AI" is basically word-problem databases with pictures. It is not intelligent. However, we should not assume it will remain that way, forever or even for long. I think it's possible the next era of AI may be much closer to faking intelligence. HG Modernism did a video on agentic AI, and with the next era focusing on error correction and adaptability, and increasingly complex LLM substructures under the hood replacing the typical large single model, it may be much harder to identify a machine "intelligence" in the near future. It will certainly make LLMs closer to what we imagine an AI to be like, and it will make them much more useful for scientific, industrial and administrative purposes.
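The quoted article's "guesses which letter and word will come next" is, mechanically, sampling the next token from a probability distribution conditioned on the context. A toy bigram version of that idea (illustrative only; real LLMs condition on long contexts with learned weights over subword tokens, not word counts):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    options = counts.get(prev)
    if not options:                      # dead end: no observed successor
        return None
    words = list(options)
    freqs = [options[w] for w in words]
    return random.choices(words, weights=freqs)[0]  # sample in proportion to frequency

word = "the"
generated = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))
```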

• [email protected]

I’m neurodivergent, and I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities…

                      E: I use it to give me ideas that I then test out solo.

[email protected]
#192

Give us an example of how it helped you learn something. I promise I'm not asking so I can be an ass. Just more curious and wondering what it's pulling from, or what inner problems it's addressing.

                      • E [email protected]

                        Compose key?

[email protected]
#193

                        https://en.wikipedia.org/wiki/Compose_key

It's a key that makes the next 2 or more keystrokes act as dead-key inserts that combine into some character otherwise impossible to type.

                        In my case, my keyboard had a ≣ Menu key which I never used, so I remapped it to Compose.

                        • S [email protected]

The book The Emperor's New Mind is old (1989), but it gave a good argument for why machine-based AI was not possible. Our minds work on a fundamentally different principle than Turing machines.

[email protected]
#194

It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.

                          • K [email protected]

                            So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

                            This is not a good argument.

[email protected]
#195

                            philosopher

                            Here's why. It's a quote from a pure academic attempting to describe something practical.

                            • B [email protected]

                              philosopher

                              Here's why. It's a quote from a pure academic attempting to describe something practical.

[email protected]
#196

The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn't do.

                              Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.

                              • E [email protected]

                                I agreed with most of what you said, except the part where you say that real AI is impossible because it's bodiless or "does not experience hunger" and other stuff. That part does not compute.

                                A general AI does not need to be conscious.

[email protected]
#197

That, and there is literally no way to prove something is or isn't conscious. I can't even prove to another human being that I'm a conscious entity; you just have to assume I am because, from your own experience, you are, so therefore I too must be, right?

Not saying I consider AI in its current form to be conscious; more so that the whole idea is just silly and unfalsifiable.

                                • I [email protected]

So couldn't we say LLMs aren't really AI? Cuz that's what I've come to terms with.

[email protected]
#198

We can say whatever the fuck we want. This isn't any kind of real issue. Think about it. If you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit and people looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I'm so done with their crap.

                                  • A [email protected]

                                    Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?

                                    In this discussion, it's a personal style thing combined with a desire to irritate you and your fellow "people are chatbots" dorks and based upon the downvotes I'd say it's working.

                                    And that irritation you feel is a step on the path to enlightenment if only you'd keep going down the path. I know why I'm irritated with your arguments: they're reductive, degrading, and dehumanizing. Do you know why you're so irritated with mine? Could it maybe be because it causes you to doubt your techbro mission statement bullshit a little?

[email protected]
#199

Who's a techbro? The fact that you can't even have a discussion without resorting to repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.

Is it techbro of me to think that pushing AI into every product is stupid? Is it techbro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I'm being reductive, degrading, and dehumanising, but that's all simply based on your insecurity.

I was simply being realistic based on the little we know of the human brain and how it works; it is pretty much that until we discover this special something that makes us better than other neural networks. Without this discovery, your insistence is based on nothing more than your own desire to feel special.

                                    • T [email protected]

                                      We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                                      But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                                      This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                                      So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                      Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                                      Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                                      https://archive.ph/Fapar

[email protected]
#200

                                      My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

                                      It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

                                      • F [email protected]

                                        When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

[email protected]
#201

                                        By this logic we never came up with anything new ever, which is easily disproved if you take two seconds and simply look at the world around you. We made all of this from nothing and it wasn't a probabilistic response.

Your lack of creativity is not universal; people create new things all of the time, and you simply cannot program ingenuity or inspiration.

                                        • H [email protected]

I am more talking about listening to and reading scientists in the media. The definition of consciousness is vague at best.

[email protected]
#202

                                          So, you’re listening to journalists and fiction writers try to interpret things scientists do and taking that as hard science?
