Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
-
Dog has a very clear definition, so when you call a sausage in a bun a "Hot Dog", you are actually a fool.
Smart has a very clear definition, so no, you do not have a "Smart Phone" in your pocket.
Also, that is not the definition of intelligence. But the crux of the issue is that you are making up a definition for AI that suits your needs.
Misconstruing how language works isn't an argument for what an existing and established word means.
I'm sure that argument made you feel super clever but it's nonsense.
I sourced my definition from authoritative sources. The fact that you didn't bother to verify that, or provide an alternative authoritative definition, tells me all I need to know about the value of further discussion with you.
-
Here’s the thing: I’m not against LLMs and diffusion models for the things they can actually be used for. They have potential for real applications, just not at all the things you pretend exist. Neural implants aren’t AI. An intelligence is self-aware; if we achieved AI, it wouldn’t be a program. You’re misconstruing virtual intelligence for artificial intelligence, and you don’t even understand what a virtual intelligence is. You’re simply delusional about what computer science and technology are, how they work, and what they’re capable of.
I’m not talking about neural interfaces. I’m talking about organoid intelligence.
I am a computer scientist with lab experience in this. I’m not pulling this out of my ass. I’m drawing from direct experience in development.
-
I don’t make money from it; it’s something I do for personal enjoyment. That’s the entire purpose of art, and it’s something I also use algorithmic processing for. I’m not going to hand over my enjoyment to have a servitor do something for me to take credit for. I prefer to use my brain, not replace it.
No one told you to hand it over. A technology being able to do something does not require you to use it. And people misusing the technology to feign talent is a reflection of the people, not the tech.
-
I’m not talking about neural interfaces. I’m talking about organoid intelligence.
I am a computer scientist with lab experience in this. I’m not pulling this out of my ass. I’m drawing from direct experience in development.
Yeah, that’s the problem with the field, too many delusional people trying to find god in a computer because they didn’t understand what Asimov was actually writing about.
-
No one told you to hand it over. A technology being able to do something does not require you to use it. And people misusing the technology to feign talent is a reflection of the people, not the tech.
It’s not even to feign talent, it’s people trying to replace the brain instead of using applicable tools to help us advance and progress, you’re just advertising a product.
-
Yeah, that’s the problem with the field, too many delusional people trying to find god in a computer because they didn’t understand what Asimov was actually writing about.
That it has to be nothing or everything with you, decision trees or God himself, is likely the foundation of your inability to have a simple, objective take on the existing technology and its capabilities. It’s giving bipolar.
Now I’m not uninformed, I’m too informed! Lol. That goalpost just shifted right across the field, and still you cannot admit to your ignorance.
-
Wow it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.
For me it kind of went the other way; I’m almost convinced that human intelligence is the same pattern repeating, just more general (so far).
-
It’s not even to feign talent, it’s people trying to replace the brain instead of using applicable tools to help us advance and progress, you’re just advertising a product.
People have been presenting the work of others as their own for all of history. All that’s changed is that a new tool was found to do it. But at least these are a form of derivative work, not just putting their name directly on someone else’s carbon copy.
-
That it has to be nothing or everything with you, decision trees or God himself, is likely the foundation of your inability to have a simple, objective take on the existing technology and its capabilities. It’s giving bipolar.
Now I’m not uninformed, I’m too informed! Lol. That goalpost just shifted right across the field, and still you cannot admit to your ignorance.
You haven’t made any point or even expressed an understanding of how these programs work. You’ve just been evangelizing about how great AI is. I genuinely don’t believe you understand what you’re talking about, because you’ve expressed no proper understanding or explanation of your points beyond a scene from I, Robot, which kind of makes you look like you entirely misconstrue the concepts you’re sucking the dick of.
What kind of computer science do you work with as a profession? What is your applicable lab work?
-
People have been presenting the work of others as their own for all of history. All that’s changed is that a new tool was found to do it. But at least these are a form of derivative work, not just putting their name directly on someone else’s carbon copy.
Tell that to Studio Ghibli. Also, people being shitty is not a good excuse for people to be shitty; you’re advocating making it easier for people to be shitty.
-
You haven’t made any point or even expressed an understanding of how these programs work. You’ve just been evangelizing about how great AI is. I genuinely don’t believe you understand what you’re talking about, because you’ve expressed no proper understanding or explanation of your points beyond a scene from I, Robot, which kind of makes you look like you entirely misconstrue the concepts you’re sucking the dick of.
What kind of computer science do you work with as a profession? What is your applicable lab work?
I’m not evangelizing. You incorrectly stated the limitations and development paths of the tech, and I corrected you.
Again with the religious verbiage from you. But I’m the one proselytizing?
It’s not nothing; it’s an impressive feat of technology that’s still in its infancy. It’s also not everything, and nowhere close to a reasoning mind at this point. You are obsessed with extremes.
-
I’m not evangelizing. You incorrectly stated the limitations and development paths of the tech, and I corrected you.
Again with the religious verbiage from you. But I’m the one proselytizing?
It’s not nothing; it’s an impressive feat of technology that’s still in its infancy. It’s also not everything, and nowhere close to a reasoning mind at this point. You are obsessed with extremes.
You didn’t answer my question. You’ve also yet to give any details on your reasoning.
-
Tell that to Studio Ghibli. Also, people being shitty is not a good excuse for people to be shitty; you’re advocating making it easier for people to be shitty.
Studio Ghibli does not have exclusive rights to their style, whether it’s used by a person or an AI to inspire a new image. Those are derivative works. Totally legal. Arguably ethical. If it’s not a direct copy, how has the studio been harmed? What work of theirs was diminished?
I’m advocating for tools. How people use those tools is on them.
-
You didn’t answer my question. You’ve also yet to give any details on your reasoning.
No, I’m not gonna dox myself.
Reasoning for what? What details do you need for clarification?
-
Studio Ghibli does not have exclusive rights to their style, whether it’s used by a person or an AI to inspire a new image. Those are derivative works. Totally legal. Arguably ethical. If it’s not a direct copy, how has the studio been harmed? What work of theirs was diminished?
I’m advocating for tools. How people use those tools is on them.
I disagree.
-
You didn’t answer my question. You’ve also yet to give any details on your reasoning.
Actually, you’re out of your depth, and I think you’ve been outed enough. We’re done, and I’m blocking.
-
No, I’m not gonna dox myself.
Reasoning for what? What details do you need for clarification?
Let’s start simple. How do these programs work? Where do they get their data, and how is it applied? And a general field of work is not doxxing; you’re just dodging accountability.
-
Actually, you’re out of your depth, and I think you’ve been outed enough. We’re done, and I’m blocking.
A sure sign of confidence; you’ve definitely shown me how stupid I am.
-
The architecture of these LRMs may make monkeys fly out of my butt. It hasn't been proven that the architecture doesn't allow it.
You are asking me to prove a negative. The onus is on you to show that the architecture can reason, not to prove that it can’t.
That’s very true. I’m just saying this paper did not eliminate the possibility, and is thus not as significant as it sounds. If they had accomplished that, the bubble would collapse; this will not meaningfully change anything, however.
Also, it’s not as unreasonable as that, because these are automatically assembled bundles of simulated neurons.
-
People think they want AI, but they don’t even know what AI is on a conceptual level.
They want something like the Star Trek computer or one of Tony Stark's AIs that were basically deus ex machinas for solving some hard problem behind the scenes. Then it can say "model solved" or they can show a test simulation where the ship doesn't explode (or sometimes a test where it only has an 85% chance of exploding when it used to be 100%, at which point human intuition comes in and saves the day by suddenly being better than the AI again and threads that 15% needle or maybe abducts the captain to go have lizard babies with).
AIs that are smarter than us but for some reason don't replace or even really join us (Vision being an exception to the 2nd, and Ultron trying to be an exception to the 1st).