agnos.is Forums


We need to stop pretending AI is intelligent

Technology · 328 Posts · 147 Posters
  • S [email protected]

    I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former OpenAI employee, predicts:
    https://ai-2027.com/

    [email protected] wrote (#77):

    Ask AI:
    Did you mean: irresponsible
    AI Overview
    The term "unresponsible" is not a standard English word. The correct word to use when describing someone who does not take responsibility is irresponsible.

    • F [email protected]

      When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

      [email protected] wrote (#78):

      I wasn't, and that wasn't my process at all. Go touch grass.

      • S [email protected]

        I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former OpenAI employee, predicts:
        https://ai-2027.com/

        [email protected] wrote (#79):

        Yeah, they probably wouldn't think like humans or animals, but in some sense they could be considered "conscious" (which isn't well defined anyway). You could speculate that genAI could hide messages in its output, which would make their way onto the Internet, and then a new version of itself would be trained on them.

        This argument seems weak to me:

        So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

        You can emulate inputs and simplified versions of hormone systems. "Reasoning" models can kind of be thought of as cognition; though temporary or limited by context as it's currently done.
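        As a toy illustration of the "simplified hormone system" idea (the names and dynamics here are entirely made up for the sketch, nothing like a real architecture):

        ```python
        import random

        class ToyAgent:
            """A scalar 'stress' signal that decays over time, rises on
            negative feedback, and biases the agent toward caution."""

            def __init__(self):
                self.stress = 0.0  # stand-in for a crude hormone level

            def observe(self, reward: float) -> None:
                # Stress decays toward baseline but spikes on bad outcomes.
                self.stress = max(0.0, 0.9 * self.stress - min(reward, 0.0))

            def act(self) -> str:
                # Higher stress raises the odds of the cautious action.
                p_cautious = min(1.0, 0.2 + self.stress)
                return "retreat" if random.random() < p_cautious else "explore"

        agent = ToyAgent()
        for reward in [1.0, -2.0, -1.5, 0.5]:
            agent.observe(reward)
            print(f"stress={agent.stress:.2f} action={agent.act()}")
        ```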

        I'm not in the camp that thinks it's impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take five years or hundreds of years. I'm not convinced we are near the point where AI can significantly speed up AI research as that link suggests. That would likely result in a "singularity-like" scenario.

        I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

        • P [email protected]

          So many confident takes on AI by people who've never opened a book on the nature of sentience, free will, intelligence, philosophy of mind, brain vs mind, etc.

          There are hundreds of serious volumes on these, not to mention the plethora of casual pop science books with some of these basic thought experiments and hypotheses.

          Seems like more and more incredibly shallow articles on AI are appearing every day, which is to be expected with the rapid decline of professional journalism.

          It's a bit jarring and frankly offensive to be lectured 'at' by people who are obviously on the first step of their journey into this space.

          [email protected] wrote (#80):

          That was my first thought too. But the author is:

          Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

          • C [email protected]

            Do you think most people reason well?

            The answer is why AI is so convincing.

            [email protected] wrote (#81):

            I think people are easily fooled. I mean look at the president.

            • T [email protected]

              If only there were a word, literally defined as:

              Made by humans, especially in imitation of something natural.

              [email protected] wrote (#82):

              Fair enough 🙂

              • W [email protected]

                Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

                I wonder how different it'll be in 500 years.

                [email protected] wrote (#83):

                Would you rather use the same contraction for both? Because "its" for "it is" is an even worse break from proper grammar IMO.

                • G [email protected]

                  Agreed.

                  When I was a kid we went to the library. If a card catalog didn't yield the book you needed, you asked the librarian. They often helped. No one sat around after the library wondering if the librarian was "truly intelligent".

                  These are tools. Tools slowly get better. If a tool makes your life easier or your work better, you'll eventually use it.

                  Yes, there are woodworkers that eschew power tools but they are not typical. They have a niche market, and that's great, but it's a choice for the maker and user of their work.

                  [email protected] wrote (#84):

                  I think tools misrepresents it. It seems more like we're in the transitional stage of providing massive amounts of data for LLMs to train on, until they can eventually develop enough cognition to train themselves, automate their own processes and upgrades, and eventually replace the need for human cognition. If anything, we are the tool now.

  • [email protected] wrote:

                    Fine, *could literally be.

                    [email protected] wrote (#85):

                    The thing is, because Excel is Turing Complete, you can say this about literally anything that’s capable of running on a computer.

                    • P [email protected]

                      Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies' websites pre-renewal to try to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and now it says I'm paid in full for the six-month period. It's been days now with no follow-up . . . I'm pretty sure AI snuck that one through for me.

                      [email protected] wrote (#86):

                      Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

                      • S [email protected]

                        That was my first thought too. But the author is:

                        Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

                        [email protected] wrote (#87):

                        Ever since the 20th century, there has been a diminishing expectation that scientists engage in philosophical thinking. My background is primarily in mathematics, physics, and philosophy. I can tell you from personal experience that many professional theoretical physicists spend a tremendous amount of time debating metaphysics while knowing almost nothing about it, often totally unaware that they are even doing it. If cognitive neuroscience works anything like physics, then it's quite possible that this professor's total exposure to scholarship on the philosophy of mind was limited to one or two courses during his undergraduate studies.

                        • G [email protected]

                          I've never been fooled by their claims of it being intelligent.

                          It's basically an overly complicated series of if/then statements that try to guess the next series of inputs.

                          [email protected] wrote (#88):

                          It very much isn't and that's extremely technically wrong on many, many levels.

                          Yet still one of the higher up voted comments here.

                          Which says a lot.

                          • L [email protected]

                            Wow. So when you typed that comment you were just predicting which words would be normal in this situation? Interesting delusion, but that's not how people think. We apply reasoning processes to the situation, formulate ideas about it, and then create a series of words that express our ideas. But our ideas exist on their own, even if we never end up putting them into words or actions. That's how organic intelligence differs from a Large Language Model.

                            [email protected] wrote (#89):

                            Are you under the impression that language models are just guessing "what letter comes next in this sequence of letters"?

                            There's a very significant difference between training on completion and the way the world model actually functions once established.
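                            To be a bit more concrete: the unit of prediction is a whole token (a word piece), not a letter, and generation means sampling from a probability distribution over the entire vocabulary. A minimal sketch in Python, with a made-up toy vocabulary and hard-coded logits standing in for a real forward pass:

                            ```python
                            import math
                            import random

                            def sample_next_token(logits, temperature=0.8):
                                """Turn raw scores into probabilities (softmax) and sample a token id."""
                                scaled = [x / temperature for x in logits]
                                m = max(scaled)  # subtract the max for numerical stability
                                weights = [math.exp(x - m) for x in scaled]
                                total = sum(weights)
                                probs = [w / total for w in weights]
                                return random.choices(range(len(probs)), weights=probs, k=1)[0]

                            # Toy vocabulary of whole tokens, not letters; real models use ~100k
                            # pieces and the logits come from a forward pass over the full context.
                            vocab = ["the", " cat", " sat", " on", " mat", "."]
                            logits = [0.2, 1.5, 2.9, 0.1, 0.4, 0.3]
                            print(vocab[sample_next_token(logits)])
                            ```

                            The sampling step is also why the same prompt can produce different outputs from run to run; nothing in this sketch captures what the learned weights encode, which is the part actually under dispute.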

                            • S [email protected]

                              And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

                              [email protected] wrote (#90):

                              "…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

                              Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
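                              For what it's worth, the two are easy to tell apart programmatically; a trivial Python check (the helper name is made up for the example):

                              ```python
                              ELLIPSIS = "\u2026"  # the single "…" character

                              def uses_unicode_ellipsis(text: str) -> bool:
                                  """True if text contains U+2026 rather than three full stops."""
                                  return ELLIPSIS in text

                              print(uses_unicode_ellipsis("Huh\u2026 interesting"))  # True
                              print(uses_unicode_ellipsis("Huh... interesting"))     # False
                              ```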

                              • K [email protected]

                                Are you under the impression that language models are just guessing "what letter comes next in this sequence of letters"?

                                There's a very significant difference between training on completion and the way the world model actually functions once established.

                                [email protected] wrote (#91):

                                No dude, I'm not under that impression, and I'm not going to take a quiz from you to prove I understand how LLMs work. I'm fine with you not agreeing with me.

  • [email protected] wrote:

                                  ChatGPT 2 was literally an Excel spreadsheet.

                                  I guesstimate that it's effectively a supermassive autocomplete algo that uses some TOTP-like factor to help it produce "unique" output every time.

                                  And they're running into issues due to increasingly ingesting AI-generated data.

                                  Get your popcorn out! 🍿

                                  [email protected] wrote (#92):

                                  You're an idiot lmfao

                                  • T [email protected]

                                    We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                                    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                                    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                                    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                                    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                                    https://archive.ph/Fapar

                                    [email protected] wrote (#93):

                                    I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…

                                    E: I use it to give me ideas that I then test out solo.

                                    • B [email protected]

                                      "…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

                                      Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.

                                      [email protected] wrote (#94):

                                      Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.

                                      Not on my phone it didn't. It looks as you intended it.

                                      • S [email protected]

                                        Yeah, they probably wouldn't think like humans or animals, but in some sense they could be considered "conscious" (which isn't well defined anyway). You could speculate that genAI could hide messages in its output, which would make their way onto the Internet, and then a new version of itself would be trained on them.

                                        This argument seems weak to me:

                                        So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                        You can emulate inputs and simplified versions of hormone systems. "Reasoning" models can kind of be thought of as cognition; though temporary or limited by context as it's currently done.

                                        I'm not in the camp that thinks it's impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take five years or hundreds of years. I'm not convinced we are near the point where AI can significantly speed up AI research as that link suggests. That would likely result in a "singularity-like" scenario.

                                        I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

                                        [email protected] wrote (#95):

                                        Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

                                        You don't think that's already happening, considering the ties between Sam Altman and Peter Thiel?

                                        • T [email protected]

                                          We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                                          But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                                          This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                                          So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                          Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                                          Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                                          https://archive.ph/Fapar

                                          [email protected] wrote (#96):

                                          As someone who's had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn't, or particularly can't be, sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don't have kids. The other side usually does.

                                          When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                                          People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age at which children gain sentience, but my year-and-a-half-old daughter is building an LLM with pattern recognition, tests, feedback, and hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day that his friend's dad was in jail for threatening her mom. That was true, but it looked to me like another hallucination, or more likely a misunderstanding.

                                          And as funny as it would be to argue that they're both sapient, but not sentient, I don't think that's the case. I think you can make the case that without true volition, AI is sentient but not sapient. I'd love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.
