We need to stop pretending AI is intelligent
-
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
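To make that mechanism concrete, here is a minimal toy sketch in Python; the vocabulary and probabilities are invented for illustration, and a real model conditions on long contexts and huge vocabularies, but the generation loop has the same shape:

import random

# Toy next-word table: P(next word | previous word). A real LLM
# conditions on thousands of prior tokens; the loop is the same.
model = {
    "the": {"cat": 0.5, "dog": 0.3, "mat": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.6, "ran": 0.4},
    "mat": {"sat": 1.0},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(word="the"):
    out = [word]
    while word != "<end>":
        options = model[word]
        # "Guess" the next word in proportion to its probability.
        word = random.choices(list(options), weights=list(options.values()))[0]
        if word != "<end>":
            out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the cat sat"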
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
This era of "AI" is basically word problem databases with pictures. It is not intelligent. However we should not assume it will remain that way, forever or even long. I think its possible the next era of AI may be much closer to faking intelligence. HG Modernism did a video on Agentic AI, and with the next era focusing on error correction and adaptability, and increasingly complex LLM substructures underneath the hood replacing the typical large single model, it may be much harder to identify a machine "intelligence" in the near future. It will certainly make LLMs closer to what we imagine an AI to be like, and it will make them much more useful for scientific, industrial and administrative purposes.
-
I’m neurodivergent, and I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me, because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities…
E: I use it to give me ideas that I then test out solo.
Give us an example of how it helped you learn something. I promise I'm not asking so I can be an ass. Just more curious and wondering what it's pulling from, or what inner problems it's addressing.
-
Compose key?
https://en.wikipedia.org/wiki/Compose_key
It's a key that makes the next two or more keystrokes act as dead-key inputs that combine into a character otherwise impossible to type.
In my case, my keyboard had a ≣ Menu key which I never used, so I remapped it to Compose.
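For anyone wanting to do the same: on X11 the stock xkeyboard-config options already include this mapping, so something like the following should work (I'm assuming an X11 setup here; Wayland desktops usually expose the same XKB option through their own keyboard settings):

setxkbmap -option compose:menu

After that, custom sequences can go in ~/.XCompose, for example:

include "%L"
<Multi_key> <l> <l> : "λ"

The first line pulls in your locale's default compose sequences; the second adds a custom one of your own.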
-
The book The Emperor's New Mind is old (1989), but it gave a good argument for why machine-based AI is not possible: our minds work on a fundamentally different principle than Turing machines do.
It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.
It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.
-
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.
This is not a good argument.
philosopher
Here's why. It's a quote from a pure academic attempting to describe something practical.
-
philosopher
Here's why. It's a quote from a pure academic attempting to describe something practical.
The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn't do.
Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.
-
I agreed with most of what you said, except the part where you say that real AI is impossible because it's bodiless or "does not experience hunger" and other stuff. That part does not compute.
A general AI does not need to be conscious.
That, and there is literally no way to prove something is or isn't conscious. I can't even prove to another human being that I'm a conscious entity; you just have to assume I am because, from your own experience, you are, so therefore I too must be, right?
Not saying I consider AI in its current form to be conscious, more that the whole idea is just silly and unfalsifiable.
-
So couldn't we say LLMs aren't really AI? Cuz that's what I've come to terms with.
We can say whatever the fuck we want. This isn't any kind of real issue. Think about it: if you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit and people looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I'm so done with their crap.
-
Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?
In this discussion, it's a personal style thing combined with a desire to irritate you and your fellow "people are chatbots" dorks, and based upon the downvotes I'd say it's working.
And that irritation you feel is a step on the path to enlightenment if only you'd keep going down the path. I know why I'm irritated with your arguments: they're reductive, degrading, and dehumanizing. Do you know why you're so irritated with mine? Could it maybe be because it causes you to doubt your techbro mission statement bullshit a little?
Who's a techbro? The fact that you can't even have a discussion without resorting to repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.
Is it techbro of me to think that pushing AI into every product is stupid? Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I'm being reductive, degrading, and dehumanising, but that's all simply based on your insecurity.
I was simply being realistic based on the little we know of the human brain and how it works; it is pretty much that until we discover this special something that makes you think we're better than other neural networks. Without this discovery, your insistence is based on nothing more than your own desire to feel special.
-
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”
It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???
-
When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.
By this logic we never came up with anything new ever, which is easily disproved if you take two seconds and simply look at the world around you. We made all of this from nothing and it wasn't a probabilistic response.
Your lack of creativity is not a universal, people create new things all of the time, and you simply cannot program ingenuity or inspiration.
-
I am more talking about listening to and reading scientists in the media. The definition of consciousness is vague at best.
So, you’re listening to journalists and fiction writers try to interpret things scientists do and taking that as hard science?
-
Philosophers are so desperate for humans to be special.
How is outputting things based on things it has learned any different to what humans do? We observe things, we learn things, and when required we do or say things based on the things we observed and learned. That's exactly what the AI is doing.
I don't think we have achieved "AGI" but I do think this argument is stupid.
Pointing out that humans are not the same as a computer or piece of software on a fundamental level of form and function is hardly philosophical. It’s just basic awareness of what a person is and what a computer is. We can’t say at all for sure how things work in our brains and you are evangelizing that computers are capable of the exact same thing, but better, yet you accuse others of not understanding what they’re talking about?
-
Who's a techbro? The fact that you can't even have a discussion without resorting to repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.
Is it techbro of me to think that pushing AI into every product is stupid? Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I'm being reductive, degrading, and dehumanising, but that's all simply based on your insecurity.
I was simply being realistic based on the little we know of the human brain and how it works; it is pretty much that until we discover this special something that makes you think we're better than other neural networks. Without this discovery, your insistence is based on nothing more than your own desire to feel special.
Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?
Yep, that's a bingo!
Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.
There's a much more interesting discussion to be had than "humans are basically chatbots", but it's this line of thinking that I find irritating.
If humans are simply thought processes or our productive output, then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) you can feel free to dispose of humanity. It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.
-
It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.
It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.
If you can bear the cringe of the interviewer, there's a good interview with Penrose that goes in the same direction: https://m.youtube.com/watch?v=e9484gNpFF8
-
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
Anyone pretending AI has intelligence is a fucking idiot.
-
Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?
Yep, that's a bingo!
Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.
There's a much more interesting discussion to be had than "humans are basically chatbots", but it's this line of thinking that I find irritating.
If humans are simply thought processes or our productive output, then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) you can feel free to dispose of humanity. It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.
Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.
Show your proof, then. I've already said what I need to say about this topic.
If humans are simply thought processes or our productive output, then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) you can feel free to dispose of humanity.
We have no idea how humans think, yet you're so confident that LLMs don't think similarly and never will? Are you the techbro now? Because you're speaking so confidently on something that I don't think can be proven at this moment, and I typically associate that with techbros trying to sell their products. Also, why are you talking about disposing of humanity? Your insecurity level is really concerning.
Understanding how the human brain works is a wonderful thing that will let us unlock better treatment for mental health issues. Being able to understand it fully means we should also be able to replicate it to a certain extent. None of this involves disposing of humans.
It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.
This is just more of you projecting your insecurity onto me and accusing me of doing things you fear. All I've said is that human thoughts are also probabilistic, based on the little we know of them. The fact that your mind wanders so far off into thoughts about me justifying a robot army takeover of the world is just you letting your fear run wild into the realm of conspiracy theory. Take a deep breath and maybe take your own advice and go touch some grass.
-
It's hard to see that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.
It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.
I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?
-
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
I know it doesn't mean it's not dangerous, but this article made me feel better.