agnos.is Forums

We need to stop pretending AI is intelligent

Technology · 328 Posts · 147 Posters
  • O [email protected]

    Honestly, I don't think we'll have AGI until we can fully merge meatspace and cyberspace. Once we can simply plug our brains into a computer and fully interact with it, then we may see AGI.

    Obviously we're nowhere near that level of man-machine integration; I doubt we'll see even the slightest chance of it being possible for at least 10 years, at the very earliest. But when we do get there, there's a distinct chance it will be more of a Borg situation, where the computer takes a parasitic rather than a symbiotic role.

    But by the time we're able to fully integrate computers into our brains, I believe we will have trained AI systems to learn by interaction and observation. Plugged directly into the human brain, such a system could take prior knowledge from genome mapping and other related tasks and apply it to mapping our brains, and possibly to growing artificial brains that achieve self-awareness and independent thought.

    Or we'll just nuke ourselves out of existence and that will be that.

    [email protected] · #137

    Okay man.

    • S [email protected]

      But that's exactly how we learn stuff as well. Artificial neural networks are modelled after how our neurons affect each other while we learn and store memories.

      [email protected] · #138

      Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.

      I don't think anybody knows how we actually, really learn. I'm not a neuroscientist (I'm a computer scientist specialised in AI), but I don't think the mechanism of learning is that well understood.

      AI hype people will say that it's "like a neural network", but I really doubt that. There is no loss function in reality, and certainly no way for the brain to perform gradient descent.
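
      To make the jargon concrete: a "loss function" is an explicit error score, and gradient descent nudges every weight against the derivative of that score. A toy sketch in plain Python (my own illustration, fitting y = 2x; not anyone's real training code):

      # Toy gradient descent: learn w in y = w*x from examples made with w = 2.
      # Training needs an explicit, global error signal (the loss) and its
      # derivative; nothing like this is known to exist in the brain.
      data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
      w, lr = 0.0, 0.05                      # initial weight, learning rate

      for step in range(100):
          grad = 0.0
          for x, y in data:
              grad += 2 * (w * x - y) * x    # d/dw of squared error (w*x - y)^2
          grad /= len(data)                  # mean gradient of the loss
          w -= lr * grad                     # step downhill

      print(round(w, 3))                     # converges to ~2.0

      That explicit measure-the-error, differentiate, update-everything loop is exactly the part with no obvious biological counterpart.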

      • T [email protected]

        We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

        But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

        This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

        So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

        Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

        Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

        https://archive.ph/Fapar

        [email protected] · #139

        “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” ― Upton Sinclair

        AI companies' revenue depends on claiming that AI can replace human workers and take over their jobs. It will be very difficult to make them understand that AI is not ... good.

        • P [email protected]

          That sounds fucking dangerous... You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor...

          [email protected] · #140

          I mean, sure, but that's easier said than done. Good luck getting good mental healthcare for cheap in the vast majority of places.

          • B [email protected]

            I think we should start by not following this marketing speak. The sentence "AI isn't intelligent" makes no sense. What we mean is "LLMs aren't intelligent".

            [email protected] · #141

            I make a point of always referring to it as an LLM, exactly to make the point that it's not an intelligence.

            • L [email protected]

              Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

              [email protected] · #142

              AI didn't write the insurance policy. It only helped him search for the best deal. That's like saying your insurance company will cancel you because you used a phone to comparison shop.

              • A [email protected]

                I haven't noticed this behavior coming from scientists particularly frequently - the ones I've talked to generally accept that consciousness is somehow the product of the human brain, the human brain is performing computation and obeys physical law, and therefore every aspect of the human brain, including the currently unknown mechanism that creates consciousness, can in principle be modeled arbitrarily accurately using a computer. They see this as fairly straightforward, but they have no desire to convince the public of it.

                This does lead to some counterintuitive results. If you have a digital AI, does a stored copy of it have subjective experience despite the fact that its state is not changing over time? If not, does a series of stored copies representing, losslessly, a series of consecutive states of that AI? If not, does a computer currently in one of those states and awaiting an instruction to either compute the next state or load it from the series of stored copies? If not (or if the answer depends on whether it computes the state or loads it) then is the presence or absence of subjective experience determined by factors outside the simulation, e.g. something supernatural from the perspective of the AI? I don't think such speculation is useful except as entertainment - we simply don't know enough yet to even ask the right questions, let alone answer them.

                [email protected] · #143

                I'm talking more about listening to and reading scientists in the media. The definition of consciousness is vague at best.

                • M [email protected]

                  Wdym? That depends on what I'm working on. For pressing issues like rising energy consumption, CO2 emissions, and civil privacy / social engineering issues, I propose heavy data center tariffs for non-essentials (like "AI"). Humanity is going the wrong way on those issues, so we can have shitty memes and cheat at schoolwork until Earth spits us out. The cost is too damn high!

                  [email protected] · #144

                  What do you mean, what do I mean? You were the one who brought up ideas in the first place...

                  • K [email protected]

                    It very much isn't, and that's extremely technically wrong on many, many levels.

                    Yet it's still one of the higher-voted comments here.

                    Which says a lot.

                    [email protected] · #145

                    Given that the weights in a model are transformed into a set of conditional if statements (GPU or CPU JMP machine code), he's not technically wrong. Of course, it's more than just JMP; JMP stands in here for the entire class of jump instructions, like JE and JZ. Something needs to act on the results of the TMULs.

                    • [email protected] wrote:

                      We don't even have a clear definition of what "intelligence" even is. Yet a lot of people are claiming that they themselves are intelligent and AI models are not.

                      [email protected] · #146

                      Even if we did, a species that lives on this planet the way we do can hardly claim to be intelligent. Just look around and you will know why.

                      • K [email protected]

                        It very much isn't, and that's extremely technically wrong on many, many levels.

                        Yet it's still one of the higher-voted comments here.

                        Which says a lot.

                        [email protected] · #147

                        Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.

                        This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible 5 years ago.

                        5 years ago I would have laughed in your face if you had suggested I write code that summarizes a description entered by a user. Now I laugh and say: give me your wallet, because I need to call an API or buy a few GPUs.
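
                        For example, that "summarise whatever the user typed" feature is now roughly this much code. A sketch using the OpenAI Python client (the model name and prompts are illustrative placeholders; the key comes from your environment):

                        # Sketch: summarisation via a hosted LLM API.
                        # Assumes the `openai` package and an OPENAI_API_KEY
                        # environment variable; the model name is illustrative.
                        from openai import OpenAI

                        client = OpenAI()  # this is the "give me your wallet" part

                        def summarize(description: str) -> str:
                            response = client.chat.completions.create(
                                model="gpt-4o-mini",
                                messages=[
                                    {"role": "system",
                                     "content": "Summarize the user's text in two sentences."},
                                    {"role": "user", "content": description},
                                ],
                            )
                            return response.choices[0].message.content

                        print(summarize("Free-form description typed by a user..."))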

                        • B [email protected]

                          I think we should start by not following this marketing speak. The sentence "AI isn't intelligent" makes no sense. What we mean is "LLMs aren't intelligent".

                          [email protected] · #148

                          So couldn't we say LLMs aren't really AI? Cuz that's what I've come to terms with.

                          • [email protected] wrote:

                            I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.

                            [email protected] · #149

                            If it's just mirroring you, one could argue you don't really need it? Not trying to be a prick; if it's a good tool for you, use it! It sounds to me as though you're using it as a sounding board, and that's just about the perfect use for an LLM, if I could think of any.

                            • I [email protected]

                              So couldn't we say LLMs aren't really AI? Cuz that's what I've come to terms with.

                              [email protected] · #150

                              To be fair, the term "AI" has always been used in an extremely vague way.

                              NPCs in video games, chess computers, and other such tech are not sentient and do not have general intelligence, yet we've been referring to them as "AI" for decades without anybody taking issue with it.

                              • I [email protected]

                                So couldn't we say LLMs aren't really AI? Cuz that's what I've come to terms with.

                                [email protected] · #151

                                LLMs are one of the approximately one metric crap-ton of different technologies that fall under the rather broad umbrella of the field of study called AI. The definition of what is and isn't AI can be pretty vague, but I would argue that LLMs are definitely AI, because they exist with the express purpose of imitating human behavior.

                                • T [email protected]

                                  To be fair, the term "AI" has always been used in an extremely vague way.

                                  NPCs in video games, chess computers, and other such tech are not sentient and do not have general intelligence, yet we've been referring to them as "AI" for decades without anybody taking issue with it.

                                  [email protected] · #152

                                  I don't think the term AI has been used in a vague way; it's that there's a huge disconnect between how the technical fields use it and how the general populace understands it, and marketing groups heavily abuse that disconnect.

                                  Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it's a bunny with a costume on. LLMs on a technical level fit this definition.

                                  The other definition is man-made. Artificial diamonds are a great example of this: they're still diamonds at the end of the day, with the same chemical makeup and the same chemical and physical properties. The only difference is that they came from a laboratory staffed by adult workers rather than from child slave labor.

                                  My pet theory is that science fiction got the general populace to think of artificial intelligence under the "man-made" definition instead of the "fake" definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.

                                  • T [email protected]

                                    We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                                    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                                    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                                    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                                    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                                    https://archive.ph/Fapar

                                    [email protected] · #153

                                    It's only as intelligent as the people that control and regulate it.

                                    Given all the documented instances of Facebook and other social media using subliminal emotional manipulation, I honestly wonder if the recent cases of AI chat induced psychosis are related to something similar.

                                    Like we know they're meant to get you to continue using them, which is itself a bit of psychological manipulation. How far does it go? Could there also be things like using subliminal messaging/lighting? This stuff is all so new and poorly understood, but that usually doesn't stop these sacks of shit from moving full speed with implementing this kind of thing.

                                    It could be that certain individuals have unknown vulnerabilities that make them more susceptible to psychosis due to whatever manipulations are used to make people keep using the product. Maybe they're doing some things to users that are harmful, but didn't seem problematic during testing?

                                    Or equally as likely, they never even bothered to test it out, just started subliminally fucking with people's brains, and now people are going haywire because a bunch of unethical shit heads believe they are the chosen elite who know what must be done to ensure society is able to achieve greatness. It just so happens that "what must be done," also makes them a ton of money and harms people using their products.

                                    It's so fucking absurd to watch the same people jamming unregulated AI and automation down our throats while simultaneously forcing traditionalism and a legal system inspired by Catholic integralist beliefs on society.

                                    If you criticize the lack of regulations in the wild west of technology policy, or even suggest just using a little bit of fucking caution, then you're trying to hold back progress.

                                    However, all non-tech-related policy should be based on ancient traditions and biblical text, with arbitrary rules and restrictions that only make sense to, and only benefit, the people enforcing the law.

                                    What a stupid and convoluted way to express that you just don't like evidence-based policy or using critical thinking skills, and instead prefer to navigate life by relying on the basic signals from your lizard brain. Feels good, so keep moving toward it; feels bad, so run away; feels scary, so attack!

                                    Such is the reality of the chosen elite, steering us towards greatness.

                                    What's really "funny" (in a we're all doomed sort of way) is that while writing this all out, I realized the "chosen elite" controlling tech and policy actually perfectly embody the current problem with AI and bias.

                                    Rather than relying on intelligence to analyze a situation in the present and create the best, most appropriate response based on the information and evidence before them, they default to a set of preconceived rules written thousands of years ago with zero context for the current reality/environment and the problem at hand.

                                    • B [email protected]

                                      Given that the weights in a model are transformed into a set of conditional if statements (GPU or CPU JMP machine code), he's not technically wrong. Of course, it's more than just JMP; JMP stands in here for the entire class of jump instructions, like JE and JZ. Something needs to act on the results of the TMULs.

                                      [email protected] · #154

                                      That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself. There's no translation of weights into jumps in transformers and the underlying attention mechanism.

                                      I suggest reading https://en.m.wikipedia.org/wiki/Transformer_(deep_learning_architecture)
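
                                      To illustrate: the core of attention, stripped down to NumPy, is a few matrix multiplies and a softmax. This is only a sketch of the mechanism (toy shapes, no production kernel), but note that the learned weights never feed a branch condition:

                                      # Minimal scaled dot-product attention. The learned weights Wq, Wk, Wv
                                      # appear only as matmul operands; no if/jump depends on their values.
                                      import numpy as np

                                      def softmax(x, axis=-1):
                                          e = np.exp(x - x.max(axis=axis, keepdims=True))
                                          return e / e.sum(axis=axis, keepdims=True)

                                      def attention(X, Wq, Wk, Wv):
                                          Q, K, V = X @ Wq, X @ Wk, X @ Wv
                                          scores = Q @ K.T / np.sqrt(K.shape[-1])  # token-to-token similarity
                                          return softmax(scores) @ V               # weighted mixture of values

                                      rng = np.random.default_rng(0)
                                      X = rng.standard_normal((4, 8))              # 4 tokens, 8-dim embeddings
                                      Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
                                      print(attention(X, Wq, Wk, Wv).shape)        # (4, 8)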

                                      • [email protected] wrote:

                                        Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.

                                        However, that's on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

                                        [email protected] · #155

                                        I've been getting into the habit of also using em/en dashes on the computer through the Compose key. It's very convenient for typing arrows, inequality and other math signs, etc. I don't use it for ellipses, because they're neither visually clearer nor shorter to type.
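
                                        For what it's worth, the stock X11 compose table already ships sequences for both dashes, and you can add your own in ~/.XCompose. The lines below mirror the defaults (exact paths and tables vary by system):

                                        <Multi_key> <minus> <minus> <period> : "–"  U2013  # EN DASH
                                        <Multi_key> <minus> <minus> <minus>  : "—"  U2014  # EM DASH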

                                        • T [email protected]

                                          We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

                                          But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

                                          This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

                                          So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

                                          Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

                                          Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

                                          https://archive.ph/Fapar

                                          [email protected] · #156

                                          I agree with most of what you said, except the part where you say that real AI is impossible because it's bodiless or "does not experience hunger" and other stuff. That part does not compute.

                                          A general AI does not need to be conscious.
