agnos.is Forums

Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

Technology · 210 Posts · 93 Posters
  • M [email protected]

    I think as we approach the uncanny valley of machine intelligence, it's no longer a cute cartoon but a menacing creepy not-quite imitation of ourselves.

    [email protected] (#144) replied:

    It's just the internet plus some weighted dice. Nothing to be afraid of.

    • [email protected] wrote:

      When are people going to realize that, in its current state, an LLM is not intelligent? It doesn’t reason. It does not have intuition. It’s a word predictor.

      [email protected] (#145) replied:

      Intuition is about the only thing it has. It's a statistical system. The problem is that it doesn't have logic. We assume that because it's computer-based it must be logic-oriented, but it's the opposite. That's the problem: we can't get it to do logic very well because it basically feels out the next token by something like instinct. In particular, it doesn't mask out or disregard irrelevant information very well. If two segments are near each other in embedding space, they get weighed together, and proximity doesn't guarantee relevance. So the model just weighs all of this info, relevant or irrelevant, into a weighted feeling for the next token.

      This is the core problem. People can handle fuzzy topics and discrete topics. But we really struggle to create any system that can do both like we can. Either we create programming logic that is purely discrete or we create statistics that are fuzzy.

      Of course, this issue of masking out information that is close in embedding space but irrelevant to a logical premise is something many humans suck at too. But high-functioning humans don't, and we can't get these models to copy that ability. Too many people, sadly many on the left in particular, will treat association not only as always relevant but sometimes as equivalence. E.g.: racism is associated with nazism, which is associated with patriarchy, which is historically related to the origins of capitalism, ∴ nazism ≡ capitalism. Meanwhile, national socialism was anti-capitalist. Associative thinking removes nuance, and sadly some people think this way. And they 100% can be replaced by LLMs today, because the LLM at least mimics what logic looks like better, though it's still built on blind association. It just has more blind associations, and fine-tuned weighting for summing them, than a human does. So it can carry that mask of logic further than a human on the associative thought train can.
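A minimal sketch of that "weighted feeling for the next token": scores for every candidate token are squashed through a softmax and then sampled from, so everything nearby contributes weight whether it is logically relevant or not. The tokens and scores below are invented purely for illustration:

```python
import math
import random

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw token scores into probabilities; every candidate
    contributes some weight, relevant or not."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next_token(scores: dict[str, float]) -> str:
    """'Weighted dice': the next token is drawn by chance, not deduced."""
    probs = softmax(scores)
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Invented scores for the continuation of "The sky is":
next_token = sample_next_token({"blue": 9.0, "falling": 2.0, "green": 0.5})
```

With these scores "blue" wins almost every draw, but "falling" still comes up rarely; nothing in the mechanism checks whether the draw is logical.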

      • [email protected] wrote:

        OK, and? A car doesn't run like a horse either, yet they are still very useful.

        I'm fine with the distinction between human reasoning and LLM "reasoning".

        [email protected] (#146) replied:

        Cars are horses. How do you feel about that statement?

        • [email protected] wrote:

          NOOOOOOOOO

          SHIIIIIIIIIITT

          SHEEERRRLOOOOOOCK

          [email protected] (#147) replied:

          The funny thing about this "AI" griftosphere is how grifters will make some outlandish claim and then different grifters will "disprove" it. Plenty of grant/VC money for everybody.

          • [email protected] wrote:

            What's hilarious/sad is the response to this article over on reddit's "singularity" sub, in which all the top comments are people who've obviously never got all the way through a research paper in their lives all trashing Apple and claiming their researchers don't understand AI or "reasoning". It's a weird cult.

            [email protected] (#148) replied:

            ICYMI: A.I. is a Religious Cult with Karen Hao

            • [email protected] wrote:

              The difference between reasoning models and normal models is that reasoning models run in two steps. To oversimplify a little: they first prompt "how would you go about responding to this?", then prompt "write the response".

              It's still predicting the most likely thing to come next, but the difference is that it gives the model the chance to write the most likely instructions to follow for the task, then the most likely result of following those instructions - both of which conform to patterns much better than a single jump from prompt to response.

              [email protected] (#149) replied:

              The difference between reasoning models and normal models is reasoning models are two steps,

              That's a garbage definition of "reasoning". Someone who is not a grifter would simply call them two-step models (or similar), instead of promoting misleading anthropomorphic terminology.
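The two-step scheme being argued over can be sketched as plain prompt chaining. `call_llm` below is a hypothetical stub standing in for any completion API (not a real client); both calls are ordinary next-token prediction, whatever label one prefers:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call a model API here.
    # For the sketch, just echo the first line of the prompt.
    return f"[completion for: {prompt.splitlines()[0]}]"

def two_step_answer(question: str) -> str:
    # Step 1: have the model write instructions for the task...
    plan = call_llm(f"How would you go about responding to this?\n{question}")
    # Step 2: ...then have it write the most likely result of following them.
    return call_llm(f"Follow this plan to answer.\nPlan: {plan}\nQuestion: {question}")
```

Whether that earns the name "reasoning" or just "two-step model" is exactly the disagreement here.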

              • [email protected] wrote:

                Well - if you want to devolve into argument, you can argue all day long about "what is reasoning?"

                [email protected] (#150) replied:

                You were starting a new argument. Let's stay on topic.

                The paper implies "reasoning" is the application of logic. It shows that LRMs are great at copying logic but can't follow simple instructions that haven't been seen before.

                • [email protected] wrote:

                  It proves that for those particular models. It does not prove the architecture doesn't allow it at all. It's still possible that this is solvable with a different training technique, and none of those models are using the right one. That's what they'd need to prove wrong.

                  This proves the issue is widespread, not fundamental.

                  [email protected] (#151) replied:

                  The architecture of these LRMs may make monkeys fly out of my butt. It hasn't been proven that the architecture doesn't allow it.

                  You are asking to prove a negative. The onus is to show that the architecture can reason. Not to prove that it can't.

                  • [email protected] wrote:

                    Even defining reason is hard and becomes a matter of philosophy more than science. For example, apply the same claims to people. Now I've given you something to think about. Or should I say the Markov chain in your head has a new topic to generate thought states for.

                    [email protected] (#152) replied:

                    By many definitions, reasoning IS just a form of pattern recognition so the lines are definitely blurred.

                    • [email protected] wrote:

                      Yeah I often think about this Rick N Morty cartoon. Grifters are like, "We made an AI ankle!!!" And I'm like, "That's not actually something that people with busted ankles want. They just want to walk. No need for a sentient ankle." It's a real gross distortion of science how everything needs to be "AI" nowadays.

                      [email protected] (#153) replied:

                      AI is just the new buzzword, just like blockchain was a while ago. Marketing loves these buzzwords because they can get away with charging more if they use them. They don't much care if their product even has it or could make any use of it.

                      • [email protected] wrote:

                        I see a lot of misunderstandings in the comments 🫤

                        This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

                        Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.

                        [email protected] (#154) replied:

                        Cognitive scientist Douglas Hofstadter (1979) showed reasoning emerges from pattern recognition and analogy-making - abilities that modern AI demonstrably possesses. The question isn't if AI can reason, but how its reasoning differs from ours.

                        • [email protected] wrote:

                          Impressive ≠ substantial or beneficial.

                          [email protected] (#155) replied:

                          These are almost the exact same talking points we used to hear about ‘why would anyone need a home computer?’ Wild how some people can be so consistently short-sighted again and again and again.

                          What makes you think you’re capable of sentience, when your comments are all cliches and you’re incapable of personal growth or vision or foresight?

                          • [email protected] wrote:

                            By many definitions, reasoning IS just a form of pattern recognition so the lines are definitely blurred.

                            [email protected] (#156) replied:

                            And does it even matter anyway?

                            For the sake of argument, let's say somebody manages to create an AGI. Does it matter whether it has reasoning abilities if it works anyway? No one has proven that sapience is required for intelligence; after all, we only have a sample size of one, and hardly any conclusions can really be drawn from that.

                            • [email protected] wrote:

                              We actually have sentience, though, and are capable of creating new things and having realizations. AI isn’t real, and LLMs and diffusion models are simply reiterating algorithmic patterns; no LLM or diffusion model can create anything original or expressive.

                              Also, we aren’t “evolved primates.” We are just primates. The thing is, primates are the most socially and cognitively evolved species on the planet, so that’s not a denigrating sentiment unless you’re a pompous, condescending little shit.

                              [email protected] (#157) replied:

                              The denigration of simulated thought processes, paired with aggrandizing of wetware processing, is exactly my point. The same self-serving narcissism that’s colored so many biased & flawed arguments in biological philosophy putting humans on a pedestal above all other animals.

                              It’s also hysterical and ironic that you insist on your own level of higher thinking, as you regurgitate an argument so unoriginal that a bot could’ve easily written it. Just absolutely no self-awareness.

                              • [email protected] wrote:

                                These are almost the exact same talking points we used to hear about ‘why would anyone need a home computer?’ Wild how some people can be so consistently short-sighted again and again and again.

                                What makes you think you’re capable of sentience, when your comments are all cliches and you’re incapable of personal growth or vision or foresight?

                                [email protected] (#158) replied:

                                What makes you think you’re capable of sentience when you’re asking machines to literally think for you?

                                • [email protected] wrote:

                                  The denigration of simulated thought processes, paired with aggrandizing of wetware processing, is exactly my point. The same self-serving narcissism that’s colored so many biased & flawed arguments in biological philosophy putting humans on a pedestal above all other animals.

                                  It’s also hysterical and ironic that you insist on your own level of higher thinking, as you regurgitate an argument so unoriginal that a bot could’ve easily written it. Just absolutely no self-awareness.

                                  [email protected] (#159) replied:

                                  It’s not higher thinking, it’s just actual thinking. Computers are not capable of that and never will be. It’s not a matter of fighting progress, or whatever you are trying to get at; it’s just a realistic understanding of computers and technology. You’re jerking off a pipe dream, you don’t even understand how the technology you’re talking about works, and calling a brain “wetware” perfectly outlines that. You’re working on a script writer’s level of understanding of how computers, hardware, and software work. You lack the grasp to even know what you’re talking about; this isn’t Johnny Mnemonic.

                                  • [email protected] wrote:

                                    What makes you think you’re capable of sentience when you’re asking machines to literally think for you?

                                    [email protected] (#160) replied:

                                    LoL. Am I less sentient for using a calculator?

                                    You’re astoundingly confident in your own sentience, for someone who seems to struggle to form an original thought. It’s like the convo was lifted straight out of that I, Robot interrogation scene. You hold the machines to standards you can’t meet yourself.

                                    • [email protected] wrote:

                                      LoL. Am I less sentient for using a calculator?

                                      You’re astoundingly confident in your own sentience, for someone who seems to struggle to form an original thought. It’s like the convo was lifted straight out of that I, Robot interrogation scene. You hold the machines to standards you can’t meet yourself.

                                      [email protected] (#161) replied:

                                      Funny you should use that example, I am actually a musician and composer, so yes. You’ve proved nothing other than your own assumptions that everyone else is as limited in their ability to create, learn, and express themselves as you are. I’m not looking for a crutch, and you’re using a work of intentionally flawed fictional logic to attempt to make a point. The point you’ve established is you live in a fantasy world, but you don’t understand that because it involves computers.

                                      • [email protected] wrote:

                                        I mean… “proving” is also just marketing speak. There is no clear definition of reasoning, so there’s also no way to prove or disprove that something/someone reasons.

                                        [email protected] (#162) replied:

                                        Claiming it's just marketing fluff indicates you do not know what you're talking about.

                                        They published a research paper on it. You are free to publish your own paper disproving theirs.

                                        At the moment, you sound like one of those "I did my own research" people except you didn't even bother doing your own research.

                                        • [email protected] wrote:

                                          It’s not higher thinking, it’s just actual thinking. Computers are not capable of that and never will be. It’s not a level of fighting progress, or whatever you are trying to get at, it’s just a realistic understanding of computers and technology. You’re jerking off a pipe dream, you don’t even understand how the technology you’re talking about works, and calling a brain “wetware” perfectly outlines that. You’re working on a script writers level of understanding how computers, hardware, and software work. You lack the grasp to even know what you’re talking about, this isn’t Johnny Mnemonic.

                                          [email protected] (#163) replied:

                                          I call the brain “wetware” because there are companies already working with living neurons to be integrated into AI processing, and it’s an actual industry term.

                                          That you so confidently declare machines will never be capable of processes we haven’t even been able to clearly define ourselves, paired with your almost religious fervor in opposition to its existence, really speaks to where you’re coming from on this. This isn’t coming from an academic perspective. This is clearly personal for you.
