agnos.is Forums

Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

Technology · 210 Posts · 93 Posters
  • M [email protected]

    I'm not trained or paid to reason, I am trained and paid to follow established corporate procedures. On rare occasions my input is sought to improve those procedures, but the vast majority of my time is spent executing tasks governed by a body of (not quite complete, sometimes conflicting) procedural instructions.

    If AI can execute those procedures as well as, or better than, human employees, I doubt employers will care if it is reasoning or not.

    [email protected]
    #107

    Sure. We weren't discussing if AI creates value or not. If you ask a different question then you get a different answer.

    • K [email protected]

      By that metric, you can argue Kasparov isn’t thinking during chess

      Kasparov's thinking fits pretty much all biological definitions of thinking. Which is the entire point.

      [email protected]
      #108

      Is thinking necessarily biological?

      • E [email protected]

        LLMs deal with tokens. Essentially, predicting a series of bytes.

        Humans do much, much, much, much, much, much, much more than that.

        [email protected]
        #109

        No. They don't. We just call them proteins.

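The quoted claim that LLMs boil down to "predicting a series of tokens" can be made concrete with a toy example. This is purely illustrative and not from the thread or any real model: a bigram predictor that picks the most frequent token seen after the current one. Real LLMs use learned weights over huge contexts, but the interface is the same: context in, next token out.

```python
# Toy sketch of next-token prediction: a bigram model built from a tiny corpus.
# For each token, count which tokens followed it, then predict greedily.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# follow[w] counts every token that immediately followed w in the corpus.
follow = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follow[cur][nxt] += 1

def predict_next(token):
    # Greedy decoding: the most common successor seen in training.
    return follow[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" followed "the" twice, "mat" only once
```

The hypothetical `predict_next` helper is the whole "reasoning" loop of this toy: it has no model of cats or mats, only successor counts, which is the distinction the thread is arguing over.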
        • A [email protected]

          LOOK MAA I AM ON FRONT PAGE

          [email protected]
          #110

          Wow it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

          • [email protected]

            OK, and? A car doesn't run like a horse either, yet they are still very useful.

            I'm fine with the distinction between human reasoning and LLM "reasoning".

            [email protected]
            #111

            The guy selling the car doesn't tell you it runs like a horse; the guy selling you AI is telling you it has reasoning skills. AI absolutely has utility, but the guys making it are saying its utility is nearly limitless, because Tesla has demonstrated there's no actual penalty for lying to investors.

            • K [email protected]

              Lots of us who have done some time in search and relevancy early on knew ML was always largely breathless, overhyped marketing. It was endless buzzwords and misframing from the start, but it raised our salaries. Anything that an exec doesn't understand is profitable and worth doing.

              [email protected]
              #112

              Ragebait?

              I'm in robotics and find plenty of use for ML methods. Think of image classifiers: how would you approach those without oversimplified problem settings?
              Or take control and coordination problems, which can become NP-hard. Even though not optimal, ML methods are quite solid at learning patterns in high-dimensional NP-hard problem settings, often beating hand-crafted suboptimal solvers on computation effort vs. solution quality, and especially outperforming (asymptotically) optimal solvers time-wise, even if with merely "good enough" solutions. (To be fair, suboptimal solvers do that as well, but since ML methods can outperform them, I see ML as an attractive middle ground.)

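The compute-vs-quality trade-off described above can be sketched on a tiny NP-hard instance. This is an illustrative stand-in, not the poster's ML setup: a classical nearest-neighbour heuristic on a 5-city TSP, compared against exact O(n!) enumeration. The heuristic plays the role of the cheap "good enough" solver; a learned solver would sit in the same position of the trade-off.

```python
# Sketch of the "solution quality vs. compute effort" trade-off on a toy TSP.
# Exact enumeration is optimal but factorial-time; nearest neighbour is
# quadratic-time but may return a longer (suboptimal) tour.
from itertools import permutations
import math

points = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 1)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Length of the closed tour visiting points in the given order.
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact solver: try every permutation. Optimal, but explodes as n grows.
best = min(permutations(range(len(points))), key=tour_length)

# Heuristic solver: always hop to the nearest unvisited city. Far cheaper.
unvisited = set(range(1, len(points)))
tour = [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(points[tour[-1]], points[j]))
    tour.append(nxt)
    unvisited.remove(nxt)

print(tour_length(list(best)), tour_length(tour))  # heuristic is never shorter
```

On this instance the greedy tour comes out slightly longer than the optimum, which is exactly the "good enough, much cheaper" behaviour the post attributes to learned solvers on larger problems.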
                • [email protected]

                Wow it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

                [email protected]
                #113

                This! Capitalism is going to be the end of us all. OpenAI has gotten away with IP theft, disinformation regarding AI, and maybe even the murder of their whistleblower.

                • R [email protected]

                  What confuses me is that we keep redefining what counts as reasoning. Not too long ago, some smart algorithms, or a bunch of if/then instructions in software, officially counted by definition as software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI does that with pattern recognition, memory, and even more advanced algorithms, it's no longer reasoning? I feel like at this point the more relevant question is "What exactly is reasoning?" Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

                  https://en.wikipedia.org/wiki/Reasoning_system

                  [email protected]
                  #114

                  If you want to boil down human reasoning to pattern recognition, the sheer amount of stimuli and associations built off of that input absolutely dwarfs anything an LLM will ever be able to handle. It's like comparing PhD reasoning to a dog's reasoning.

                  While a dog can learn some interesting tricks, and the smartest dogs can solve simple novel problems, there are hard limits. They simply lack strong metacognition and the ability to make simple logical inferences (e.g. why they fail at the shell game).

                  Now we make that chasm even larger by cutting the stimuli to a fixed token limit. An LLM can do some clever tricks within that limit, but it's designed to do exactly those tricks and nothing more. To get anything resembling human ability you would have to design something to match human complexity, and we don't have the tech to make a synthetic human.

                  • K [email protected]

                    Not "this particular model": frontier LRMs such as OpenAI's o1/o3, DeepSeek-R1, Claude 3.7 Sonnet Thinking, and Gemini Thinking.

                    The paper shows that Large Reasoning Models as defined today cannot interpret instructions. Their architecture does not allow it.

                    [email protected]
                    #115

                    Those particular models. It does not prove the architecture doesn't allow it at all. It's still possible that this is solvable with a different training technique, and that none of those models are using the right one; that's what they would need to prove wrong.

                    This proves the issue is widespread, not fundamental.

                    • Z [email protected]

                      No. They don't. We just call them proteins.

                      [email protected]
                      #116

                      You are either vastly overestimating the Language part of an LLM or simplifying human physiology back to the Greeks' Four Humours theory.

                      • Z [email protected]

                        No. They don't. We just call them proteins.

                        [email protected]
                        #117

                        "They".

                        What are you?

                        • S [email protected]

                          That’s absolutely what it is. It’s a pattern on here. Any acknowledgment of humans being animals or less than superior gets hit with pushback.

                          [email protected]
                          #118

                          I didn't say we aren't animals or that we don't follow physics rules.

                          But what you're saying is the equivalent of "everything that goes up will eventually go down - that's how physics works and you don't see that, you're in denial!!!11!!!1"

                          • C [email protected]

                            Proving it matters. Science constantly tests things people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will believe things long after science has proven them false.

                            [email protected]
                            #119

                            I mean… “proving” is also just marketing speak. There is no clear definition of reasoning, so there’s also no way to prove or disprove that something/someone reasons.

                            • C [email protected]

                              While a fair idea, there are still two issues with that: hallucinations and the cost of running the models.

                              Unfortunately, it takes significant compute resources to produce even simple responses, and those responses can be totally made up while still looking completely real. It's gotten much better, sure, but blindly trusting these things (which many people do) can have serious consequences.

                              [email protected]
                              #120

                              Hallucinations and the cost of running the models.

                              So, inaccurate information in books is nothing new. Agreed that the rate of hallucinations needs to decline, a lot, but there has always been a need for a veracity filter - just because it comes from "a book" or "the TV" has never been an indication of absolute truth, even though many people stop there and assume it is. In other words: blind trust is not a new problem.

                              The cost of running the models is an interesting one: how does it compare with publishing on paper, shipping globally, and storing in environmentally controlled libraries that require individuals to physically travel to and from them to access the information? What's the price of the increased ignorance of the general population due to the high cost of information access?

                              What good is a bunch of knowledge stuck behind a search engine when people don't know how to access it, or access it efficiently?

                              Granted, search engines already take us 95% (IMO) of the way from paper libraries to what AI is almost succeeding in being today, but ease of access of information has tremendous value - and developing ways to easily access the information available on the internet is a very valuable endeavor.

                              Personally, I feel more emphasis should be put on establishing the veracity of the information before we go making all the garbage easier to find.

                              I also worry that "easy access" to automated interpretation services is going to lead to a bunch of information encoded in languages that most people don't know because they're dependent on machines to do the translation for them. As an example: shiny new computer language comes out but software developer is too lazy to learn it, developer uses AI to write code in the new language instead...

                              • K [email protected]

                                Sure. We weren't discussing if AI creates value or not. If you ask a different question then you get a different answer.

                                [email protected]
                                #121

                                Well - if you want to devolve into argument, you can argue all day long about "what is reasoning?"

                                • B [email protected]

                                  When are people going to realize that, in its current state, an LLM is not intelligent? It doesn't reason. It does not have intuition. It's a word predictor.

                                  [email protected]
                                  #122

                                  I agree with you. In its current state, an LLM is not sentient, and thus not "intelligence".

                                  • B [email protected]

                                    When are people going to realize that, in its current state, an LLM is not intelligent? It doesn't reason. It does not have intuition. It's a word predictor.

                                    [email protected]
                                    #123

                                    And that's pretty damn useful, but it's obnoxious when expectations are set so wildly incorrectly.

                                    • [email protected]

                                      Those particular models. It does not prove the architecture doesn't allow it at all. It's still possible that this is solvable with a different training technique, and that none of those models are using the right one; that's what they would need to prove wrong.

                                      This proves the issue is widespread, not fundamental.

                                      [email protected]
                                      #124

                                      Isn't "model" defined as architecture + weights? Those models certainly don't share the same architecture. I might just be confused about your point, though.

                                      • B [email protected]

                                        When are people going to realize, in its current state , an LLM is not intelligent. It doesn’t reason. It does not have intuition. It’s a word predictor.

                                        [email protected]
                                        #125

                                        People think they want AI, but they don’t even know what AI is on a conceptual level.

                                        • S [email protected]

                                          Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

                                          [email protected]
                                          #126

                                          We actually have sentience, though, and are capable of creating new things and having realizations. AI isn't real; LLMs and diffusion models are simply reiterating algorithmic patterns, and no LLM or diffusion model can create anything original or expressive.

                                          Also, we aren't "evolved primates." We are just primates. The thing is, primates are the most socially and cognitively evolved species on the planet, so that's not a denigrating sentiment unless you're a pompous, condescending little shit.
