agnos.is Forums


Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

210 Posts 93 Posters 0 Views
  • A [email protected]

    This famous passage from the opening of Moby-Dick by Herman Melville is rich in literary style but not necessarily constructed as an argument, which makes identifying formal logical fallacies a nuanced task. That said, we can identify a few rhetorical or informal fallacies—not because the text is poorly reasoned, but because it makes some sweeping or questionable assumptions in a stylistically persuasive way. Here's a breakdown:


    1. Hasty Generalization

    "Almost all men in their degree, some time or other, cherish very nearly the same feelings towards the ocean with me."

    • Fallacy: The narrator generalizes his deeply personal and specific emotional reaction to the sea as something nearly all men experience.
    • Why it’s fallacious: No evidence is presented to support this broad claim. It is based purely on the narrator’s subjective experience and intuition.

    2. False Cause (Post Hoc)

    "It is a way I have of driving off the spleen and regulating the circulation."

    • Fallacy: Implies a causal relationship between going to sea and improved physical/emotional health.
    • Why it’s fallacious: While sailing may coincidentally improve his mood or circulation, the text offers no medical or psychological basis to prove a direct causal link.

    3. Appeal to Emotion

    "...especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people’s hats off..."

    • Fallacy: Uses vivid emotional language to justify behavior or state of mind rather than logical reasoning.
    • Why it’s fallacious: It appeals to the reader’s empathy or amusement to legitimize the narrator’s melancholy and coping strategy, rather than offering a rational justification.

    4. False Analogy

    "With a philosophical flourish Cato throws himself upon his sword; I quietly take to the ship."

    • Fallacy: Compares going to sea with committing suicide as if they are equivalent responses to despair.
    • Why it’s fallacious: The analogy is emotionally compelling, but equating self-harm with a change in setting (boarding a ship) overlooks major differences in motive and consequence.

    Summary:

    Fallacy | Example from Text | Explanation
    Hasty Generalization | "Almost all men... cherish very nearly the same feelings..." | Overgeneralizes a personal feeling
    False Cause | "...driving off the spleen and regulating the circulation." | Implies unproven health benefits
    Appeal to Emotion | "...prevent me from... knocking people's hats off." | Justifies behavior with emotion
    False Analogy | "Cato throws himself... I quietly take to the ship." | Equates suicide and sailing

    Would you like me to rewrite the passage without the fallacies, or is this for literary analysis?

    [email protected] #138

    Thank you! This passage is again drawn from René Descartes' Meditations on First Philosophy, and it outlines his method of radical doubt as a means to establish certainty in knowledge. While the reasoning is carefully crafted and influential in the history of philosophy, it still contains a few logical fallacies or reasoning weaknesses that can be critically assessed.


    🔍 Identified Logical Fallacies and Weak Reasoning:


    1. Hasty Generalization

    "Whatever I have accepted until now as most true has come to me through my senses. But occasionally I have found that they have deceived me, and it is unwise to trust completely those who have deceived us even once."

    • Why it’s a fallacy: The fact that the senses occasionally deceive (e.g. in illusions, dreams, or misperceptions) does not justify rejecting them entirely.
    • Type: Hasty generalization – concluding that because something is sometimes unreliable, it must always be unreliable.

    2. False Analogy

    "...it is unwise to trust completely those who have deceived us even once."

    • Why it’s a fallacy: Comparing senses (which are not agents but biological faculties) to people who deceive is not a valid analogy. People deceive with intent; senses make errors unintentionally.
    • Type: False analogy – treating unlike things as if they were meaningfully similar.

    3. Slippery Slope (Metaphorical)

    "Once the foundations of a building have been undermined, the rest collapses of its own accord..."

    • Why it’s a fallacy: This implies that if some foundational beliefs are doubted, then all other beliefs must collapse, without demonstrating that all knowledge is entirely dependent on those foundations.
    • Type: Slippery slope – assuming that undermining one part inevitably leads to a total collapse.

    4. Begging the Question (Circular Reasoning)

    "My reason tells me... I should withhold assent from [beliefs] that are not completely certain and indubitable."

    • Why it’s questionable: Descartes is relying on reason to justify doubting everything else, including the faculties (like reason) he has not yet proven to be trustworthy. This risks begging the question, assuming the reliability of reason while attempting to prove everything else, including reason itself.
    • Type: Begging the question – assuming what you're trying to prove.

    5. Inconsistency (Recognized, Not Fully Resolved)

    "Yet although the senses sometimes deceive us... that doesn’t apply to my belief that I am here, sitting by the fire..."

    • Why it’s a fallacy: Here, Descartes briefly exempts ordinary sensory experiences (like feeling warmth, seeing paper) from radical doubt—despite having just declared that any doubt justifies withholding belief. He later corrects for this in the next meditation (introducing the dream argument), but within this passage, it’s a moment of inconsistency.
    • Type: Special pleading or internal inconsistency.

    ✅ Summary Table of Fallacies

    Fallacy Type | Description
    Hasty Generalization | Rejects all sensory data due to occasional errors.
    False Analogy | Compares unreliable senses to untrustworthy people.
    Slippery Slope | Claims that doubting foundations causes collapse of all beliefs.
    Begging the Question | Uses reason to justify doubting everything, including reason itself.
    Inconsistency / Special Pleading | Makes an exception for immediate sensory beliefs without justification.

    Conclusion:

    Descartes' passage is methodologically rich, but from a logic and reasoning perspective, it contains:

    • Several classic fallacies used to motivate extreme skepticism.
    • Analogical and metaphorical reasoning that may be rhetorically persuasive but logically flawed.
    • Self-referential assumptions that are philosophically problematic (and which he tries to resolve in later meditations).

    Would you like a version of this reasoning revised to be more logically sound, or would you like help analyzing the next part of the Meditations?

    • M [email protected]

      I see a lot of misunderstandings in the comments 🫤

      This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

      Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.

      [email protected] #139

      There's probably a lot of misunderstanding because these grifters intentionally use misleading language: AI, reasoning, etc.

      If they stuck to scientifically descriptive terms, it would be much more clear and much less sensational.

      • communist@lemmy.frozeninferno.xyzC [email protected]

        It is, but this did not prove all architectures cannot reason, nor did it prove that all sets of weights cannot reason.

      Essentially, they did not prove the issue is fundamental. And the models tested all have pretty similar architectures: they're all transformers trained in a similar way, so I would not say they have different architectures.

        [email protected] #140

        Ah, gotcha

        • T [email protected]

          Yeah I often think about this Rick N Morty cartoon. Grifters are like, "We made an AI ankle!!!" And I'm like, "That's not actually something that people with busted ankles want. They just want to walk. No need for a sentient ankle." It's a real gross distortion of science how everything needs to be "AI" nowadays.

          [email protected] #141

          If we ever achieved real AI the immediate next thing we would do is learn how to lobotomize it so that we can use it like a standard program or OS, only it would be suffering internally and wishing for death. I hope the basilisk is real, we would deserve it.

          • M [email protected]

            Well - if you want to devolve into argument, you can argue all day long about "what is reasoning?"

            [email protected] #142

            This would be a much better paper if it addressed that question in an honest way.

            Instead they just parrot the misleading terminology that they're supposedly debunking.

            How that collegial boys' club undermines science...

            • R [email protected]

              What confuses me is that we seemingly keep pushing back what counts as reasoning. Not too long ago, some smart algorithms or a bunch of instructions for software (if/then) were officially, by definition, software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory, and even more advanced algorithms, it's no longer reasoning? I feel like at this point a more relevant question is "What exactly is reasoning?". Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

              https://en.wikipedia.org/wiki/Reasoning_system
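The older, rule-based sense of "computer reasoning" mentioned above can be sketched as a toy forward-chaining if/then engine. This is illustrative only; the facts and rule names are made up for the example:

```python
# Toy forward-chaining "reasoning system" of the classic if/then kind.
# Facts are strings; each rule maps a set of premises to one conclusion.

def forward_chain(facts, rules):
    """Repeatedly apply if/then rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # derive a new fact
                changed = True
    return facts

rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]
derived = forward_chain({"has_feathers", "lays_eggs", "can_fly"}, rules)
print(sorted(derived))
# prints: ['can_fly', 'can_migrate', 'has_feathers', 'is_bird', 'lays_eggs']
```

By the older definition, chaining rules like this counted as machine reasoning; the debate above is about whether statistical pattern-matching deserves the same label.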

              [email protected] #143

              Sure, these grifters are shady AF about their wacky definition of "reason"... But that's just a continuation of the entire "AI" grift.

              • M [email protected]

                I think as we approach the uncanny valley of machine intelligence, it's no longer a cute cartoon but a menacing creepy not-quite imitation of ourselves.

                [email protected] #144

                It's just the internet plus some weighted dice. Nothing to be afraid of.

                • B [email protected]

                  When are people going to realize that, in its current state, an LLM is not intelligent? It doesn't reason. It does not have intuition. It's a word predictor.

                  [email protected] #145

                  Intuition is about the only thing it has. It's a statistical system. The problem is it doesn't have logic. We assume that because it's computer-based it must be more logic-oriented, but it's the opposite. That's the problem. We can't get it to do logic very well because it basically feels out the next token by something like instinct. In particular, it doesn't mask out or disregard irrelevant information very well when two segments are near each other in embedding space, because proximity doesn't guarantee relevance. So the model just weighs all of this info, relevant or irrelevant, into a weighted feeling for the next token.

                  This is the core problem. People can handle fuzzy topics and discrete topics. But we really struggle to create any system that can do both like we can. Either we create programming logic that is purely discrete or we create statistics that are fuzzy.

                  Of course, this issue of masking out information that is close in embedding space but irrelevant to a logical premise is something many humans suck at too. But high-functioning humans don't, and we can't get these models to copy that ability. Too many people, sadly many on the left in particular, treat association not only as always relevant but sometimes as equivalence. E.g.: racism is associated with Nazism, Nazism is associated with patriarchy, patriarchy is historically related to the origins of capitalism, ∴ Nazism ≡ capitalism. Meanwhile, National Socialism was anti-capitalist. Associative thinking removes nuance, and sadly some people think this way. They 100% can be replaced by LLMs today, because the LLM at least mimics what logic looks like better, even though it's still built on blind association. It just has more blind associations, and fine-tuned weighting for summing them, than a human does, so it can carry the mask of being logical further than a human on the associative thought train can.
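The embedding-space point above can be illustrated with a toy cosine-similarity check: two statements can score as "near" each other without one being logically relevant to the other. The vectors below are made up for this sketch; real embeddings would come from a trained model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings (hypothetical values for illustration).
premise    = [0.9, 0.1, 0.3]    # a logical premise
relevant   = [0.8, 0.2, 0.35]   # a genuinely relevant statement
associated = [0.85, 0.15, 0.3]  # merely associated, logically irrelevant

# Both score nearly identically; proximity alone can't tell them apart.
print(round(cosine(premise, relevant), 3), round(cosine(premise, associated), 3))
```

Distance in embedding space measures associative closeness, not logical relevance, which is the gap the post is describing.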

                  • nostradavid@programming.devN [email protected]

                    OK, and? A car doesn't run like a horse either, yet they are still very useful.

                    I'm fine with the distinction between human reasoning and LLM "reasoning".

                    [email protected] #146

                    Cars are horses. How do you feel about that statement?

                    • F [email protected]

                      NOOOOOOOOO

                      SHIIIIIIIIIITT

                      SHEEERRRLOOOOOOCK

                      [email protected] #147

                      The funny thing about this "AI" griftosphere is how grifters will make some outlandish claim and then different grifters will "disprove" it. Plenty of grant/VC money for everybody.

                      • S [email protected]

                        What's hilarious/sad is the response to this article over on reddit's "singularity" sub, in which all the top comments are people who've obviously never got all the way through a research paper in their lives all trashing Apple and claiming their researchers don't understand AI or "reasoning". It's a weird cult.

                        [email protected] #148

                        ICYMI: A.I. is a Religious Cult with Karen Hao

                        • 1 [email protected]

                          The difference between reasoning models and normal models is that reasoning models work in two steps. To oversimplify a little: they first prompt "how would you go about responding to this?", then prompt "write the response".

                          It's still predicting the most likely thing to come next, but the difference is that it gives the model the chance to write the most likely instructions to follow for the task, then the most likely result of following those instructions. Both of those are much more conformant to patterns than a single jump from prompt to response.
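The two-step pattern described above can be sketched roughly as follows. The `generate()` function here is a hypothetical deterministic stand-in; a real reasoning model would call an LLM at both steps:

```python
def generate(prompt):
    """Hypothetical stand-in for a language model call (canned answers)."""
    if prompt.startswith("How would you go about"):
        # Step-1 style prompt: return a plan.
        return "1. Parse the numbers. 2. Add them. 3. Report the sum."
    # Step-2 style prompt: return the final answer.
    return "The sum is 5."

def two_step_answer(question):
    # Step 1: ask the model how it would respond to the question.
    plan = generate(f"How would you go about responding to: {question}")
    # Step 2: ask it to follow its own instructions to produce the answer.
    return generate(f"Follow these steps to answer '{question}':\n{plan}")

print(two_step_answer("What is 2 + 3?"))
# prints: The sum is 5.
```

Both passes are still next-token prediction; the intermediate plan just gives the second pass a more pattern-conformant prompt to complete.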

                          [email protected] #149

                          The difference between reasoning models and normal models is reasoning models are two steps,

                          That's a garbage definition of "reasoning". Someone who is not a grifter would simply call them two-step models (or similar), instead of promoting misleading anthropomorphic terminology.

                          • M [email protected]

                            Well - if you want to devolve into argument, you can argue all day long about "what is reasoning?"

                            [email protected] #150

                            You were starting a new argument. Let's stay on topic.

                            The paper implies "reasoning" is the application of logic. It shows that LRMs are great at copying logic but can't follow simple instructions they haven't seen before.

                            • communist@lemmy.frozeninferno.xyzC [email protected]

                              Those particular models. It does not prove the architecture doesn't allow it at all. It's still possible that this is solvable with a different training technique and that none of these models are using the right one; that's what they'd need to rule out.

                              This proves the issue is widespread, not fundamental.

                              [email protected] #151

                              The architecture of these LRMs may make monkeys fly out of my butt. It hasn't been proven that the architecture doesn't allow it.

                              You are asking to prove a negative. The onus is to show that the architecture can reason. Not to prove that it can't.

                              • X [email protected]

                                Even defining reason is hard and becomes a matter of philosophy more than science. For example, apply the same claims to people. Now I've given you something to think about. Or should I say the Markov chain in your head has a new topic to generate thought states for.

                                [email protected] #152

                                By many definitions, reasoning IS just a form of pattern recognition so the lines are definitely blurred.

                                • T [email protected]

                                  Yeah I often think about this Rick N Morty cartoon. Grifters are like, "We made an AI ankle!!!" And I'm like, "That's not actually something that people with busted ankles want. They just want to walk. No need for a sentient ankle." It's a real gross distortion of science how everything needs to be "AI" nowadays.

                                  [email protected] #153

                                  AI is just the new buzzword, just like blockchain was a while ago. Marketing loves these buzzwords because they can get away with charging more if they use them. They don't much care if their product even has it or could make any use of it.

                                  • M [email protected]

                                    I see a lot of misunderstandings in the comments 🫤

                                    This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

                                    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.

                                    [email protected] #154

                                    Cognitive scientist Douglas Hofstadter (1979) showed reasoning emerges from pattern recognition and analogy-making - abilities that modern AI demonstrably possesses. The question isn't if AI can reason, but how its reasoning differs from ours.

                                    • N [email protected]

                                      Impressive =/= substantial or beneficial.

                                      [email protected] #155

                                      These are almost the exact same talking points we used to hear about ‘why would anyone need a home computer?’ Wild how some people can be so consistently short-sighted again and again and again.

                                      What makes you think you’re capable of sentience, when your comments are all cliches and you’re incapable of personal growth or vision or foresight?

                                      • I [email protected]

                                        By many definitions, reasoning IS just a form of pattern recognition so the lines are definitely blurred.

                                        [email protected] #156

                                        And does it even matter anyway?

                                        For the sake of argument, let's say that somebody manages to create an AGI: does it matter whether it has reasoning abilities if it works anyway? No one has proven that sapience is required for intelligence; after all, we only have a sample size of one, and hardly any conclusions can really be drawn from that.

                                        • N [email protected]

                                          We actually have sentience, though, and are capable of creating new things and having realizations. AI isn't real, and LLMs and diffusion models are simply reiterating algorithmic patterns; no LLM or diffusion model can create anything original or expressive.

                                          Also, we aren't "evolved primates." We are just primates. The thing is, primates are the most socially and cognitively evolved species on the planet, so that's not a denigrating sentiment unless you're a pompous condescending little shit.

                                          [email protected] #157

                                          The denigration of simulated thought processes, paired with aggrandizing of wetware processing, is exactly my point. The same self-serving narcissism that’s colored so many biased & flawed arguments in biological philosophy putting humans on a pedestal above all other animals.

                                          It’s also hysterical and ironic that you insist on your own level of higher thinking, as you regurgitate an argument so unoriginal that a bot could’ve easily written it. Just absolutely no self-awareness.
