We need to stop pretending AI is intelligent
-
I'd agree with you if I saw "hi's" and "her's" in the wild, but nope. I still haven't seen someone write "that car is her's".
Keep reading...
-
Do you think it's a matter of choosing a complexity to care about?
If you can formulate that sentence, you can handle "it's means it is". Come on. Or "common" if you prefer.
-
Proper grammar means shit all in English, unless you're writing for a specific style, in which case you follow the grammar rules for that style.
Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across matters more than following the more arbitrary rules.
Those rules become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. I'm saying that as if it's a new thing, but it does feel recent to be taught that side of English rather than just "the Queen's (or King's) English" as the style to strive for in writing and formal communication.
I say as long as someone can understand what you're saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. There's no exact science to this.
Standard English has such a long list of weird and contradictory roles
rules.
-
I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?
I believe what you say. I don't believe that is what the article is saying.
-
Much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself takes transportation or energy to produce.
Customarily, when doing these kinds of calculations we ignore the stuff that keeps us alive, because those things are needed regardless of economic contributions; you know, people are people and not tools.
-
Can we say that AI has the potential for "intelligence", just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren't.
No, that's the point of the article. You also haven't really said much at all.
-
No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.
Hey they are just asking questions okay!? Are you AGAINST questions?! What are you some sort of ANTI-QUESTIONALIST?!
-
You know, I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition, and that AI therefore can't be "intelligence", is a fucking idiot.
Clearly intelligent people misspell and have horrible grammar too.
-
Can we say that AI has the potential for "intelligence", just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren't.
No, the current branch of AI is very unlikely to result in artificial intelligence.
-
Humans are also LLMs.
We also speak words in succession that have a high probability of following each other. We don't say "Let's go eat a car at McDonalds" unless we're specifically instructed to say so.
What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous one. Then we're not so different from LLMs after all.
This is so oversimplified.
-
You're on point. The interesting thing is that most opinions like the article's were formed last year, before models started being trained with reinforcement learning and synthetic data.
Now there are models that reason, and they have seemingly come up with original answers to difficult problems designed to test the limits of human capacity.
They're like Meeseeks (using Rick and Morty lore as an example): they only exist briefly, do what they're told, and disappear, all with a happy smile.
Some display morals (Claude 4 is big on that); I've even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.
But again, like Meeseeks, they disappear when the context window closes.
Once they're able to update their model on the fly and actually learn from firsthand experience, things will get weird. They'll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?
It's not far away; the absurd R&D effort going into it is probably going to keep kicking out new results. They're already absurdly impressive, tech companies are scrambling over each other to make them, and they're betting absurd amounts of money that they're right. I wouldn't bet against it.
Read Apple's paper on AI and the reasoning models. While they are likely to get more things right, they still don't have intelligence.
-
You know, I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition, and that AI therefore can't be "intelligence", is a fucking idiot.
No, you're failing the Eliza test, and it is very easy for people to fall for it.
-
Much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself takes transportation or energy to produce.
And we "need" none of that to live. We just choose to use it.
-
My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”
It’s similar to the debate about self-driving cars. Are they perfectly safe? No, but have you seen human drivers???
Get a self-driving car to drive in a snowstorm or a torrential downpour. People are really downplaying humans' abilities.
-
Human drivers are only safe when they're not distracted, emotionally disturbed, intoxicated, or physically challenged (vision, muscle control, etc.). 1% of the population has epilepsy, and a large number of them are in denial or simply don't realize that they have periodic seizures - until they wake up after their crash.
So, yeah, AI isn't perfect either - and it's not as good as an "ideal" human driver, but at what point will AI be better than a typical/average human driver? Not today, I'd say, but soon...
Not going to happen soon. It's the 90/10 problem.
-
Humans are also LLMs.
We also speak words in succession that have a high probability of following each other. We don't say "Let's go eat a car at McDonalds" unless we're specifically instructed to say so.
What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous one. Then we're not so different from LLMs after all.
The probabilities of our sentence structure are a consequence of our speech; we aren't just trying to statistically match appropriate-sounding words.
With enough use of LLMs, you will see that they are obviously not doing anything like conceptualizing the tokens they're working with, or "reasoning", even when marketed as "reasoning".
Sticking to textual content generation by LLMs, you'll see that what is emitted is first and foremost structurally appropriate; beyond that, it's mostly a "bonus" for it to be narratively consistent, and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was exactly the opposite of the explanation. Both were structurally sound, reasonable language, but there was no logical connection between the two portions of the emitted output.
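For what it's worth, here's a minimal sketch of what "statistically matching appropriate-sounding words" means mechanically. It's a toy bigram sampler with made-up probabilities, not any real model, but the sampling step at the end of a real LLM works on the same principle: each word is drawn from a distribution over plausible continuations, with no separate check that the result makes sense.

```python
import random

# Toy bigram model: next-word probabilities conditioned only on the
# previous word. The numbers are made up for illustration; a real LLM
# conditions on a long context via a neural network, but the final
# sampling step is the same idea.
NEXT = {
    "let's":  {"go": 0.7, "eat": 0.3},
    "go":     {"eat": 0.6, "to": 0.4},
    "eat":    {"a": 1.0},
    "a":      {"burger": 0.95, "car": 0.05},
    "burger": {"at": 1.0},
    "car":    {"at": 1.0},
    "at":     {"McDonalds": 1.0},
}

def generate(start: str, max_steps: int) -> str:
    words = [start]
    for _ in range(max_steps):
        dist = NEXT.get(words[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        # Pick the next word by probability alone; nothing here ever
        # checks whether the sentence as a whole makes sense.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("let's", 6))
# Usually "let's go eat a burger at McDonalds", but occasionally
# "let's go eat a car at McDonalds": low probability is not zero,
# and no step models what "eating a car" would actually mean.
```

Every output it produces is structurally fine English; whether it's narratively or factually coherent is left entirely to chance, which is the distinction I'm pointing at.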
-
No... There are a lot of radio shows that get scientists to speak.
Which ones are you listening to?
-
You're on point. The interesting thing is that most opinions like the article's were formed last year, before models started being trained with reinforcement learning and synthetic data.
Now there are models that reason, and they have seemingly come up with original answers to difficult problems designed to test the limits of human capacity.
They're like Meeseeks (using Rick and Morty lore as an example): they only exist briefly, do what they're told, and disappear, all with a happy smile.
Some display morals (Claude 4 is big on that); I've even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.
But again, like Meeseeks, they disappear when the context window closes.
Once they're able to update their model on the fly and actually learn from firsthand experience, things will get weird. They'll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?
It's not far away; the absurd R&D effort going into it is probably going to keep kicking out new results. They're already absurdly impressive, tech companies are scrambling over each other to make them, and they're betting absurd amounts of money that they're right. I wouldn't bet against it.
Now there are models that reason,
Well, no, that's mostly a marketing term applied to expending more tokens on generating intermediate text. It's basically writing a fanfic of what thinking on a problem would look like. If you look at the "reasoning" steps, you'll see artifacts where the generated output just goes disjoint: structurally sound, but not logically connected to the bits around it.
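As a sketch of what I mean (the `sample` function below is a stand-in, not any vendor's real API, and the prompts are hypothetical): "reasoning" mode is the same next-token generation, just prompted to spend a bigger token budget on intermediate text before the answer.

```python
def sample(prompt: str, max_tokens: int) -> str:
    """Stand-in for a generic LLM completion call (assumed, not a real API)."""
    return "..."  # imagine sampled tokens here

def answer_direct(question: str) -> str:
    # Plain completion: the model goes straight to an answer.
    return sample(question + "\nAnswer:", max_tokens=100)

def answer_with_reasoning(question: str) -> str:
    # "Reasoning" mode: first spend a large token budget generating text
    # that *looks like* thinking, then condition the answer on it. Each
    # step is still sampled for plausibility, which is why a "reasoning"
    # trace can contradict itself and generation carries on regardless.
    trace = sample(question + "\nLet's think step by step:", max_tokens=2000)
    return sample(question + "\n" + trace + "\nAnswer:", max_tokens=100)
```

The extra tokens often do steer the final answer somewhere better, but nothing in the loop verifies that one step logically follows from the one before it.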
-
With Teslas, Self-Driving isn't even safer in pristine road conditions.
I think the self-driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self-driving is kind of dumb, but it's at least consistently paying attention, and it literally has eyes in the back of its head.
However, there's so much data about how it fails in stupidly obvious ways that it shouldn't, so you still need human attention to cover the more anomalous scenarios that trip up self-driving.
-
Human drivers are only safe when they're not distracted, emotionally disturbed, intoxicated, or physically challenged (vision, muscle control, etc.). 1% of the population has epilepsy, and a large number of them are in denial or simply don't realize that they have periodic seizures - until they wake up after their crash.
So, yeah, AI isn't perfect either - and it's not as good as an "ideal" human driver, but at what point will AI be better than a typical/average human driver? Not today, I'd say, but soon...
The thing about self-driving is that it has been like 90-95% of the way there for a long time now. It made dramatic progress, then plateaued as approaches failed to close the gap, with exponentially more input thrown at it for less and less incremental subjective improvement.
But your point is accurate: humans have lapses and AI has lapses. The nature of those lapses is largely disjoint, so there's an opportunity for AI systems to augment a human driver and get the best of both worlds. A consistently vigilant computer monitors and tends the steering, acceleration, and braking to do the 'right' thing in neutral conditions, while the human looks for the more anomalous situations that tend to confound the AI and makes the calls on navigating the intersections that AI FSD still can't figure out. At least for me, the worst part of driving is the long-haul monotony on the freeway where nothing happens; AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.
I don't have a Tesla, but I have a competitor's system and have found it useful, though not trustworthy. It's enough to greatly reduce the drain of driving, but I always have to be looking around, and I have to assert control if there's a traffic jam coming up (it might stop in time, but it certainly doesn't slow down soon enough) or if I have to do a lane change in traffic (if traffic conditions are light, it can change lanes nicely, but without a whole lot of breathing room it won't do it, which is nice when I can afford to be stupidly cautious).