We need to stop pretending AI is intelligent
-
Proper grammar means shit all in English, unless you're writing for a specific style, in which case you follow the grammar rules for that style.
Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across is better than trying to follow some of the more arbitrary rules.
And those rules become even more arbitrary as English becomes more and more of a melting pot of multicultural idioms and slang. I'm saying that as if it's a new thing, but it does feel like a recent development to be taught that side of English rather than just "The Queen's(/King's) English" as the style to strive for in writing and formal communication.
I say as long as someone can understand what you're saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don't have a specific science to this.
Standard English has such a long list of weird and contradictory roles
rules.
-
I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?
I believe what you say. I don't believe that is what the article is saying.
-
much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself needs transportation or energy to produce.
Customarily, when doing these kinds of calculations we ignore the stuff that keeps us alive, because those things are needed regardless of economic contribution; you know, people are people and not tools.
-
Can we say that AI has the potential for "intelligence", just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren't.
No, that's the point of the article. You also haven't really said much at all.
-
No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.
Hey they are just asking questions okay!? Are you AGAINST questions?! What are you some sort of ANTI-QUESTIONALIST?!
-
You know, I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition, and that AI therefore can't be "intelligence", is a fucking idiot.
Clearly intelligent people misspell and have horrible grammar too.
-
Can we say that AI has the potential for "intelligence", just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren't.
No, the current branch of AI is very unlikely to result in artificial intelligence.
-
Humans are also LLMs.
We also speak words in succession that have a high probability of following each other. We don't say "Let's go eat a car at McDonalds" unless we're specifically instructed to say so.
What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous one. Then we're not so different from LLMs after all.
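For what it's worth, the "high probability of following each other" part is easy to make concrete. Here's a minimal sketch of next-word prediction using a toy bigram table (the corpus is made up for illustration; real LLMs learn a neural network over tokens rather than counting word pairs, but the "pick a likely next word, repeat" loop is the same idea):

```python
import random
from collections import defaultdict

# Made-up toy corpus standing in for training data.
corpus = ("let's go eat a burger at mcdonalds "
          "let's go eat a salad at home").split()

# Count which word follows which word.
follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev):
    """Pick the next word in proportion to how often it followed `prev`."""
    candidates = follow_counts[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a continuation one likely word at a time.
word, sentence = "let's", ["let's"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "let's go eat a salad at home"
```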
This is so oversimplified.
-
You're on point; the interesting thing is that most of the opinions like the article's were formed last year, before the models started being trained with reinforcement learning and synthetic data.
Now there are models that reason, and they have seemingly come up with original answers to difficult problems designed to be at the limit of human capacity.
They're like Meeseeks (using Rick and Morty lore as an example): they only exist briefly, do what they're told, and disappear, all with a happy smile.
Some display morals (Claude 4 is big on that); I've even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when they're explained.
But again, like Meeseeks, they disappear when the context window closes.
Once they're able to update their model on the fly and actually learn from their firsthand experience, things will get weird. They'll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?
It's not far away; the absurd R&D effort going into it is probably going to keep kicking out new results. They're already absurdly impressive, and tech companies are scrambling over each other to make them. They're betting absurd amounts of money that they're right, and I wouldn't bet against it.
Read Apple's paper on AI and the reasoning models. While they are likely to get more things right, they still don't have intelligence.
-
You know, I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition, and that AI therefore can't be "intelligence", is a fucking idiot.
No, you're failing the Eliza test, and it is very easy for people to fall for it.
-
much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself needs transportation or energy to produce.
And we "need" none of that to live. We just choose to use it.
-
My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”
It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???
Get a self-driving car to drive in a snowstorm or a torrential downpour. People are really downplaying humans' abilities.
-
Human drivers are only safe when they're not distracted, emotionally disturbed, intoxicated, or physically impaired (vision, muscle control, etc.). 1% of the population has epilepsy, and a large number of them are in denial or simply don't realize that they have periodic seizures - until they wake up after their crash.
So, yeah, AI isn't perfect either - and it's not as good as an "ideal" human driver, but at what point will AI be better than a typical/average human driver? Not today, I'd say, but soon...
Not going to happen soon. It's the 90/10 problem.
-
Humans are also LLMs.
We also speak words in succession that have a high probability of following each other. We don't say "Let's go eat a car at McDonalds" unless we're specifically instructed to say so.
What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous one. Then we're not so different from LLMs after all.
The probabilities of our sentence structure are a consequence of our speech; we aren't just trying to statistically match appropriate-sounding words.
With enough use of LLMs, you will see that they are obviously not doing anything like conceptualizing the tokens they're working with, or "reasoning", even when they're marketed as "reasoning".
Sticking to textual content generation by LLMs, you'll see that what is emitted is first and foremost structurally appropriate; beyond that, it's mostly a bonus if it's narratively consistent, and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was the exact opposite of the explanation. Both parts were structurally sound, reasonable language, but there was no logical connection between the two portions of the emitted output.
-
No... There are a lot of radio shows that get scientists to speak.
Which ones are you listening to?
-
You're on point; the interesting thing is that most of the opinions like the article's were formed last year, before the models started being trained with reinforcement learning and synthetic data.
Now there are models that reason, and they have seemingly come up with original answers to difficult problems designed to be at the limit of human capacity.
They're like Meeseeks (using Rick and Morty lore as an example): they only exist briefly, do what they're told, and disappear, all with a happy smile.
Some display morals (Claude 4 is big on that); I've even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when they're explained.
But again, like Meeseeks, they disappear when the context window closes.
Once they're able to update their model on the fly and actually learn from their firsthand experience, things will get weird. They'll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?
It's not far away; the absurd R&D effort going into it is probably going to keep kicking out new results. They're already absurdly impressive, and tech companies are scrambling over each other to make them. They're betting absurd amounts of money that they're right, and I wouldn't bet against it.
Now there are models that reason,
Well, no, that's mostly a marketing term applied to expending more tokens on generating intermediate text. It's basically writing a fanfic of what thinking through a problem would look like. If you look at the "reasoning" steps, you'll see artifacts where the generated output just goes disjoint: structurally sound, but not logically connected to the bits around it.
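As a rough illustration of "expending more tokens on intermediate text": the generate() stub below is a made-up stand-in for any text-completion call, not a real API; the only difference between the two requests is the prompt and the budget for intermediate output.

```python
# Made-up stand-in for a text-completion call; nothing here touches a real API.
def generate(prompt: str, max_tokens: int) -> str:
    return f"<up to {max_tokens} tokens of model output for {prompt!r}>"

question = "A bat and a ball cost $1.10 in total. The bat costs $1.00 more..."

# "Plain" completion: ask for the answer directly, with a small token budget.
plain = generate(question + "\nGive only the final answer.", max_tokens=20)

# "Reasoning" completion: same model, same weights. The only real difference is
# a bigger budget for intermediate text emitted before the final answer - the
# part that gets marketed as the model "thinking".
reasoned = generate(
    question + "\nThink step by step, then give the final answer.",
    max_tokens=1000,
)
print(plain)
print(reasoned)
```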
-
With Teslas, Self Driving isn't even safer in pristine road conditions.
I think the self driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self driving is kind of dumb, but it's at least consistently paying attention, and literally has eyes in the back of its head.
However, there's so much data about how it fails in stupidly obvious ways that it shouldn't, so you still need human attention to cover the more anomalous scenarios that foul up self driving.
-
Human drivers are only safe when they're not distracted, emotionally disturbed, intoxicated, or physically impaired (vision, muscle control, etc.). 1% of the population has epilepsy, and a large number of them are in denial or simply don't realize that they have periodic seizures - until they wake up after their crash.
So, yeah, AI isn't perfect either - and it's not as good as an "ideal" human driver, but at what point will AI be better than a typical/average human driver? Not today, I'd say, but soon...
The thing about self driving is that it has been like 90-95% of the way there for a long time now. It made dramatic progress then plateaued, as approaches have failed to close the gap, with exponentially more and more input thrown at it for less and less incremental subjective improvement.
But your point is accurate: humans have lapses and AI has lapses. The nature of those lapses is largely disjoint, which creates an opportunity for AI systems to augment a human driver and get the best of both worlds. A consistently vigilant computer monitors and tends the steering, acceleration, and braking to do the 'right' thing in neutral conditions, while the human watches for the more anomalous situations that the AI tends to get confounded by, and makes the calls on navigating the intersections that the AI FSD still can't figure out. At least for me, the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.
I don't have a Tesla, but I have a competitor system and have found it useful, though not trustworthy. It's enough to greatly reduce the drain of driving, but I always have to be looking around, and I have to assert control if there's a traffic jam coming up (it might stop in time, but it certainly doesn't slow down soon enough) or if I have to do a lane change in some traffic (if traffic conditions are light, it can change lanes nicely, but without a whole lot of breathing room it won't do it, which is nice when I can afford to be stupidly cautious).
-
The other thing that most people don't focus on is how we train LLMs.
We're basically building something like a spider tailed viper. A spider tailed viper is a kind of snake that has a growth on its tail that looks a lot like a spider. It wiggles it around so it looks like a spider, convincing birds they've found a snack, and when the bird gets close enough the snake strikes and eats the bird.
Now, I'm not saying we're building something that is designed to kill us. But, I am saying that we're putting enormous effort into building something that can fool us into thinking it's intelligent. We're not trying to build something that can do something intelligent. We're instead trying to build something that mimics intelligence.
What we're effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What's crazy about that is that we're not building this to fool a predator so that we're not in danger. We're not doing it to fool prey, so we can catch and eat them more easily. We're doing it so we can fool ourselves.
It's like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn't work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn't intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.
To the extent it is people trying to fool people, it's rich people looking to fool poorer people for the most part.
To the extent it's actually useful, it's to replace certain systems.
Think of the humble phone tree, designed so that humans don't have to respond to, triage, and route calls. An AI system can significantly shorten that interaction: instead of navigating a tedious, long maze of options, a couple of sentences back and forth and you either get the piece of automated information that suffices or get routed to a human to take care of it. The same analogy applies to a lot of online interactions where you have to input way too much, or where the automated answer is a wall of text from which you'd like something to distill the 3 or 4 sentences relevant to your query.
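A toy sketch of that shorter phone-tree interaction; the departments and keyword scoring here are made up for illustration, and a real deployment would put a language model (or at least a proper classifier) in place of this lookup:

```python
# Made-up departments and keywords, purely for illustration.
ROUTES = {
    "billing": {"bill", "charge", "charged", "refund", "payment"},
    "outage":  {"down", "outage", "disconnected", "internet"},
}

def route(caller_text: str) -> str:
    """Route one free-form sentence to a department, or fall back to a human."""
    words = set(caller_text.lower().split())
    best, best_score = "human", 0
    for dept, keywords in ROUTES.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = dept, score
    return best

print(route("I was charged twice on my last bill"))   # -> billing
print(route("My internet has been down since noon"))  # -> outage
print(route("I want to change my mailing address"))   # -> human
```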
So there are useful interactions.
However, it's also true that it's dangerous, because the "make the user approve of the interaction" objective can bring out the worst in people when they feel like something is just always agreeing with them. Social media has been bad enough, but chatbots that by design want to please the end user, and look almost legitimate, really can inflame the worst in our minds.
-
Haha coming in hot I see. Seems like I've touched a nerve. You don't know anything about me or whether I'm creative in any way.
All ideas have a basis in something we have experienced or learned. There is no completely original idea. All music was influenced by something that came before it, all art by something the artist saw or experienced. This doesn't make it bad, and it doesn't mean an AI could have done it.
What language was the first language based upon?
What music influenced the first song performed?
What art influenced the first cave painter?