agnos.is Forums


We need to stop pretending AI is intelligent

Technology · 328 Posts · 147 Posters
[email protected] wrote:

    I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…

    E: I use it to give me ideas that I then test out solo.

[email protected] replied (#102):

This is very interesting... because the general saying is that AI is convincing to non-experts in whatever field it's speaking about. So in your specific case, you are actually saying that you aren't an expert on yourself, therefore the AI's assessment is convincing to you. Not trying to upset you; it's genuinely fascinating how that theory holds true here as well.

[email protected] wrote:

      We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

      But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

      This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
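That "guess which word comes next" loop can be sketched in a few lines. A toy sketch, purely illustrative: the bigram table below stands in for billions of learned weights, not any real model's code.

```python
import random

# Toy "language model": a table of which word follows which,
# counted purely from example text -- no understanding involved.
corpus = "the cat sat on the mat and the cat slept".split()
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start, n=5, seed=0):
    """Repeatedly sample the next word given only the previous one."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))  # pick by observed frequency
    return " ".join(out)

print(generate("the"))
```

Everything it can ever emit is recombined training data; scale it up by a few billion parameters and you have the gist of the statistical machine described above.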

      So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

      Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

      Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

      https://archive.ph/Fapar

[email protected] replied (#103):

The idea that RAG "extends their memory" is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface we only let chatbots use it.
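For what it's worth, the whole "retrieval-augmented" pipeline really is just search plus prompt-stuffing. A minimal sketch (the document set and the word-overlap "retriever" below are illustrative stand-ins for a real search index):

```python
# Minimal RAG sketch: the "extended memory" is just a search whose
# top result gets pasted into the prompt on every request.
docs = {
    "doc1": "The Eiffel Tower is in Paris.",
    "doc2": "Python was created by Guido van Rossum.",
}

def search(query):
    """Toy retriever: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    return max(docs.values(),
               key=lambda d: len(words & set(d.lower().split())))

def build_prompt(question):
    # The model never "remembers" anything: retrieved text is simply
    # prepended to the question, fresh, on every single call.
    context = search(question)
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Who created Python?"))
```

Nothing about the model changes between calls; only the text shoved in front of the question does.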

[email protected] wrote:

        This article is written in such a heavy ChatGPT style that it's hard to read. Asking a question and then immediately answering it? That's AI-speak.

[email protected] replied (#104):

        Asking a question and then immediately answering it? That's AI-speak.

        HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

[email protected] wrote:

          But your brain should.

[email protected] replied (#105):

          Yours didn't and read it just fine.

[email protected] wrote:

            It's called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can't write for beans.

[email protected] replied (#106):

Software engineer here. We often wish we could fix things we view as broken. Why is that surprising? Also, polymorphism is a concept in computer science as well.
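Since the word came up in both senses: in computer science, polymorphism just means one interface with many implementations. A minimal sketch (class and method names here are illustrative):

```python
class Animal:
    def speak(self):
        raise NotImplementedError

class Dog(Animal):
    def speak(self):
        return "woof"

class Cat(Animal):
    def speak(self):
        return "meow"

def chorus(animals):
    # One call site handles every subtype -- that's polymorphism.
    return [a.speak() for a in animals]

print(chorus([Dog(), Cat()]))
```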

[email protected] wrote:

              "…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.

[email protected] replied (#107):

              Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.

              However, that's on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

[email protected] wrote:

                I'm a computer scientist that has a child and I don't think AI is sentient at all. Even before learning a language, children have their own personality and willpower which is something that I don't see in AI.

                I left a well paid job in the AI industry because the mental gymnastics required to maintain the illusion was too exhausting. I think most people in the industry are aware at some level that they have to participate in maintaining the hype to secure their own jobs.

                The core of your claim is basically that "people who don't think AI is sentient don't really understand sentience". I think that's both reductionist and, frankly, a bit arrogant.

[email protected] replied (#108):

                Couldn't agree more - there are some wonderful insights to gain from seeing your own kids grow up, but I don't think this is one of them.

                Kids are certainly building a vocabulary and learning about the world, but LLMs don't learn.

[email protected] wrote:

As someone who's had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn't or, particularly, can't be sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don't have kids. The other side usually does.

                  When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                  People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age children gain sentience, but my year and a half old daughter is building an LLM with pattern recognition, tests, feedback, hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day his friend's dad was in jail for threatening her mom. That was true, but looked to me like another hallucination or more likely a misunderstanding.

                  And as funny as it would be to argue that they're both sapient, but not sentient, I don't think that's the case. I think you can make the case that without true volition, AI is sentient but not sapient. I'd love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

[email protected] replied (#109):

Your son and daughter will continue to learn new things as they grow up; an LLM cannot learn new things on its own. Sure, it can repeat things back to you that are within the context window (and even then, a context window isn't really inherent to an LLM - it's just a window of prior information being fed back to it with each request/response, or "turn" as I believe is the term), and what is in the context window can even influence its responses. But in order for an LLM to "learn" something, it needs to be retrained with that information included in the dataset.

Whereas if your kids were to, say, touch a sharp object that caused them even slight discomfort, they would eventually learn to stop doing that, because they'll know what the outcome is after repetition. You could argue that this looks similar to the training process of an LLM, but the difference is that an LLM cannot do this on its own (and I would not consider wiring up an LLM via an MCP to a script that can trigger a retrain + reload to be it acting on its own volition). At least, not in our current day. If anything, I think this is more of a "smoking gun" than the argument that "LLMs are just guessing the next best letter/word in a given sequence".
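The context window point is easy to demonstrate. A toy sketch of a chat loop (`dummy_model` and all names below are illustrative, not any real API): the "memory" is nothing but the client resending the whole conversation each turn, while the model itself stays frozen.

```python
# Why a chat LLM appears to "remember": the client resends the entire
# history with every request. Drop the history and the memory is gone.
history = []

def chat_turn(user_message, model):
    history.append({"role": "user", "content": user_message})
    reply = model(history)  # model only sees what it is handed
    history.append({"role": "assistant", "content": reply})
    return reply

# Dummy "model": a frozen function with no state of its own.
def dummy_model(context):
    return f"I can see {len(context)} prior message(s)."

print(chat_turn("hello", dummy_model))
print(chat_turn("remember me?", dummy_model))
```

The second reply "knows" about the first turn only because the first turn was pasted back into its input, which is exactly the window-of-prior-information mechanism described above.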

                  Don't get me wrong, I'm not someone who completely hates LLMs / "modern day AI" (though I do hate a lot of the ways it is used, and agree with a lot of the moral problems behind it), I find the tech to be intriguing but it's a ("very fancy") simulation. It is designed to imitate sentience and other human-like behavior. That, along with human nature's tendency to anthropomorphize things around us (which is really the biggest part of this IMO), is why it tends to be very convincing at times.

                  That is my take on it, at least. I'm not a psychologist/psychiatrist or philosopher.

[email protected] wrote:

                    Would you rather use the same contraction for both? Because "its" for "it is" is an even worse break from proper grammar IMO.

[email protected] replied (#110):

Proper grammar means shit all in English, unless you're writing for a specific style, in which case you follow the grammar rules for that style.

Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across in communication is better than trying to follow some more arbitrary rules.

                    Which become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. Although I'm saying that as if that's a new thing, but it does feel like a recent thing to be taught that side of English rather than just "The Queen's(/King's) English" as the style to strive for in writing and formal communication.

                    I say as long as someone can understand what you're saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don't have a specific science to this.

[email protected] wrote:

No, you think according to the chemical proteins floating around your head. You don't even know the decisions you're making when you make them.

                      https://www.unsw.edu.au/newsroom/news/2019/03/our-brains-reveal-our-choices-before-were-even-aware-of-them--st

                      You're a meat based copy machine with a built in justification box.

[email protected] replied (#111):

                      You're a meat based copy machine with a built in justification box.

                      Except of course that humans invented language in the first place. So uh, if all we can do is copy, where do you suppose language came from? Ancient aliens?

[email protected] wrote:

                        Are we twins? I do the exact same and for around a year now, I've also found it pretty helpful.

[email protected] replied (#112):

I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it's just an inner dialogue enhancer.

[email protected] wrote:

                          Yes, the first step to determining that AI has no capability for cognition is apparently to admit that neither you nor anyone else has any real understanding of what cognition* is or how it can possibly arise from purely mechanistic computation (either with carbon or with silicon).

                          Given the paramount importance of the human senses and emotion for consciousness to “happen”

                          Given? Given by what? Fiction in which robots can't comprehend the human concept called "love"?

                          *Or "sentience" or whatever other term is used to describe the same concept.

[email protected] replied (#113):

                          This is always my point when it comes to this discussion. Scientists tend to get to the point of discussion where consciousness is brought up then start waving their hands and acting as if magic is real.

[email protected] wrote:

As someone who's had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn't or, particularly, can't be sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don't have kids. The other side usually does.

                            When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                            People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age children gain sentience, but my year and a half old daughter is building an LLM with pattern recognition, tests, feedback, hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day his friend's dad was in jail for threatening her mom. That was true, but looked to me like another hallucination or more likely a misunderstanding.

                            And as funny as it would be to argue that they're both sapient, but not sentient, I don't think that's the case. I think you can make the case that without true volition, AI is sentient but not sapient. I'd love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

[email protected] replied (#114):

Not to get philosophical, but to answer you we first need to define what "sentient" means.

Is it just observable behavior? If so, then wouldn't Kermit the Frog be sentient?

Or does sentience require something more, maybe qualia or some other subjective quality?

If your son says "dad, I got to go potty", is that him just using an LLM to learn that those words equal going to the bathroom? Or is he doing something more?

[email protected] wrote:

As someone who's had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn't or, particularly, can't be sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don't have kids. The other side usually does.

                              When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                              People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age children gain sentience, but my year and a half old daughter is building an LLM with pattern recognition, tests, feedback, hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day his friend's dad was in jail for threatening her mom. That was true, but looked to me like another hallucination or more likely a misunderstanding.

                              And as funny as it would be to argue that they're both sapient, but not sentient, I don't think that's the case. I think you can make the case that without true volition, AI is sentient but not sapient. I'd love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

[email protected] replied (#115):

                              I'd love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

Not that person, but here's an interesting lecture on that topic.

[email protected] wrote:

Most people, evidently including you, can only ever recycle old ideas. Like modern "AI". Some of us can conceive new ideas.

[email protected] replied (#116):

                                What new idea exactly are you proposing?

[email protected] wrote:

                                  I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…

                                  E: I use it to give me ideas that I then test out solo.

[email protected] replied (#117):

                                  That sounds fucking dangerous... You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor...

[email protected] wrote:

                                    We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                                    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                                    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                                    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                                    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                                    https://archive.ph/Fapar

[email protected] replied (#118):

The machinery needed for human thought is certainly a part of AI. At most you can only claim it's not intelligent because intelligence is a specifically human trait.

[email protected] wrote:

                                      What new idea exactly are you proposing?

[email protected] replied (#119):

Wdym? That depends on what I'm working on. For pressing issues like rising energy consumption, CO2 emissions, and civil privacy / social engineering issues, I propose heavy data center tariffs for non-essentials (like "AI"). Humanity is going the wrong way on those issues, so we can have shitty memes and cheat at schoolwork until Earth spits us out. The cost is too damn high!

[email protected] wrote:

The machinery needed for human thought is certainly a part of AI. At most you can only claim it's not intelligent because intelligence is a specifically human trait.

[email protected] replied (#120):

We don't even have a clear definition of what "intelligence" even is. Yet a lot of people are claiming that they themselves are intelligent and AI models are not.

[email protected] wrote:

                                          Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.

                                          However, that's on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

[email protected] replied (#121):

                                          I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann's book, *Stop Stealing Sheep & Find Out How Type Works*.
