Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
-
Other way around. The claimed meaningful change (reasoning) has not occurred.
Meaningful change is not happening because of this paper either; I don't know why you're playing semantic games with me, though.
-
I think it's an easy mistake to confuse sentience and intelligence. It happens in Hollywood all the time - "Skynet began learning at a geometric rate, on July 23 2004 it became self-aware" yadda yadda
But that's not how sentience works. We don't have to be as intelligent as Skynet supposedly was in order to be sentient. We don't start our lives as unthinking robots and then one day - once we've finally got a handle on calculus or a deep enough understanding of the causes of the fall of the Roman Empire - suddenly blink into consciousness. On the contrary, even the stupidest humans are accepted as being sentient. Even a young child, not yet able to walk or do anything more than vomit on their parents' new sofa, is considered a conscious individual.
So there is no reason to think that AI - whenever it should be achieved, if ever - will be conscious any more than the dumb computers that precede it.
Good point.
-
Meaningful change is not happening because of this paper either; I don't know why you're playing semantic games with me, though.
I don't know why you're playing semantic games
I'm trying to highlight the goal of this paper.
This is a knock-them-down paper by Apple justifying (to their shareholders) their non-investment in LLMs. It is not a build-them-up paper trying to make meaningful change and create a better AI.
-
I don't know why you're playing semantic games
I'm trying to highlight the goal of this paper.
This is a knock-them-down paper by Apple justifying (to their shareholders) their non-investment in LLMs. It is not a build-them-up paper trying to make meaningful change and create a better AI.
That's not the only way to make meaningful change; getting people to give up on LLMs would also be meaningful change. This does very little for anyone who isn't Apple.
-
I hate this analogy. As a throwaway whimsical quip it'd be fine, but it's specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it's lowered my tolerance for it as a topic even if you did intend it flippantly.
I don't mean it to extol LLMs but rather to denigrate humans. How many of us are self-imprisoned in echo chambers so we can have our feelings validated and avoid the discomfort of thinking critically and perhaps changing viewpoints?
Humans have the ability to actually think, unlike LLMs. But it's frightening how far we'll go to make sure we don't.
-
I'd encourage you to research this space and learn more about it.
As it is, the statement "Markov chains are still the basis of inference" doesn't make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that's also unrelated here, because these models are not RL agents; they're supervised learning models. And even if they were RL agents, the MDP would describe the training environment, not the model itself, so it's not really used for inference.
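To make the terminology concrete, here's a minimal sketch (my own toy illustration, not anything from the paper or from any real system) of what an actual Markov chain text generator looks like; the comments note how transformer inference differs:

```python
# Toy first-order Markov chain text generator (illustration only).
# Its entire "model" is a transition table counted from a corpus:
# the next token depends on the current token and nothing else.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def markov_generate(start, length=8):
    token, out = start, [start]
    for _ in range(length):
        candidates = transitions.get(token)
        if not candidates:          # dead end: token never seen mid-corpus
            break
        token = random.choice(candidates)  # memoryless step
        out.append(token)
    return " ".join(out)

print(markov_generate("the"))

# A transformer LLM, by contrast, computes P(next token | the entire context
# window) from learned embeddings and attention, not from a fixed lookup table
# keyed on the previous token - which is why "Markov chain" and "MDP" are the
# wrong labels for what happens at inference time.
```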
I mean this just as an invitation to learn more, and not pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.
Which method, then, is inference built on, if not the embeddings? And the question still stands: how does "AI" escape the inherent limits of statistical inference?
-
LOOK MAA I AM ON FRONT PAGE
WTF does the author think reasoning is
-
Except that wouldn't explain consciousness. There's absolutely no need for consciousness or an illusion(*) of consciousness. Yet we have it.
(*) arguably, consciousness can by definition not be an illusion. We either perceive "ourselves" or we don't.
How do you define consciousness?
-
How do you define consciousness?
It's the thing that only you yourself can know for sure you have. If you have to ask, I might have to assume you could be a biological machine.
-
It's the thing that only you yourself can know for sure you have. If you have to ask, I might have to assume you could be a biological machine.
Is that useful for completing tasks?