agnos.is Forums

Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

Technology · 210 Posts · 93 Posters
  • F [email protected]

    That's not really a valid argument for why, but yes the models which use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

    [email protected] wrote (#17):

    TBH idk how people can convince themselves otherwise.

    They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed to convince them not only that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who is going to be “left behind”.

  • In reply to [email protected]:

      "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." -Pamela McCorduck.
      It's called the AI Effect.

      As Larry Tesler put it, "AI is whatever hasn't been done yet."

      [email protected] wrote (#18):

      Yesterday I asked an LLM "how much energy is stored in a grand piano?" It responded by saying there is no energy stored in a grand piano because it doesn't have a battery.

      Any reasoning human would have understood that question to be referring to the tension in the strings.

      Another example is asking "does lime cause kidney stones?". It didn't assume I meant lime the mineral and went with lime the citrus fruit instead.

      Once again a reasoning human would assume the question is about the mineral.

      Ask these questions again in a slightly different way and you might get a correct answer, but it won't be because the LLM was thinking.

  • In reply to [email protected]:

        LOOK MAA I AM ON FRONT PAGE

        [email protected] wrote (#19):

        You assume humans do the opposite? We literally institutionalize humans who do not follow set patterns.

  • In reply to [email protected]:

          Literally what I'm talking about. They have been pushing anti-AI propaganda to alienate the left from embracing it while the right embraces it. You have such a blind spot about this that you can't even see you're making my argument for me.

          [email protected] wrote (#20):

          That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that's actually supposed to mean).

  • In reply to [email protected]:

            You assume humans do the opposite? We literally institutionalize humans who do not follow set patterns.

            [email protected] wrote (#21):

            Maybe you failed all your high school classes, but that ain't got none to do with me.

  • In reply to [email protected]:

              That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that's actually supposed to mean).

              [email protected] wrote (#22):

              What isn't there to gain?

              Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

              We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

              Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

              Sure, it has flaws. But rejecting it outright while the right embraces it? That’s beyond shortsighted; it’s self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

  • In reply to [email protected]:

                Yesterday I asked an LLM "how much energy is stored in a grand piano?" It responded by saying there is no energy stored in a grand piano because it doesn't have a battery. […]

                [email protected] wrote (#23):

                I'm not sure how you arrived at lime the mineral being a more likely question than lime the fruit. I'd expect someone asking about kidney stones would also be asking about foods that are commonly consumed.

                This kind of just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but today's AIs will just happily spit out the first answer that comes up. LLMs are extremely "good" at making up answers to leading questions, even if they're completely false.

  • In reply to [email protected]:

                  LOOK MAA I AM ON FRONT PAGE

                  [email protected] wrote (#24):

                  Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

  • In reply to [email protected]:

                    LOOK MAA I AM ON FRONT PAGE

                    [email protected] wrote (#25):

                    Of course; that is obvious to anyone with basic knowledge of neural networks, no?

  • In reply to [email protected]:

                      Maybe you failed all your high school classes, but that ain't got none to do with me.

                      [email protected] wrote (#26):

                      Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

  • In reply to [email protected]:

                        lol is this news? I mean, we call it AI, but it’s just LLMs and variants; it doesn’t think.

                        [email protected] wrote (#27):

                        Proving it matters. Science is constantly proving things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will believe things long after science has proven them false.

  • In reply to [email protected]:

                          No, it shows how certain people misunderstand the meaning of the word.

                          You have called NPCs in video games "AI" for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

                          [email protected] wrote (#28):

                          Intelligence has a very clear definition.

                          It requires the ability to acquire knowledge, understand knowledge and use knowledge.

                          No one has been able to create a system that can understand knowledge, therefore none of it is artificial intelligence. Each generation is merely a more and more complex knowledge model. Useful in many ways, but never intelligent.

  • In reply to [email protected]:

                            LOOK MAA I AM ON FRONT PAGE

                            [email protected] wrote (#29):

                            Most humans don't reason. They just parrot shit too. The design is very human.

  • In reply to [email protected]:

                              What isn't there to gain?

                              Its power lies in ingesting language and producing infinite variations. […]

                              [email protected] wrote (#30):

                              I have no idea what sort of AI you've used that could do any of the stuff you've listed. A program that doesn't reason won't expose logical fallacies with any rigour or refine anyone's ideas. It will link to credible research that you could already find on Google, but will also add some hallucinations to the summary. And so on; it's completely divorced from how the stuff currently works.

                              Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

                              That's a misguided view of how art is created. Supposed "brilliant ideas" are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don't make it visual; write a story or an essay instead.

                              Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

                              For now I see no particular benefits that the right-wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).

  • In reply to [email protected]:

                                I have no idea what sort of AI you've used that could do any of the stuff you've listed. […]

                                [email protected] wrote (#31):

                                Here is ChatGPT doing what you said it can't: finding all the logical fallacies in what you write.

                                You're raising strong criticisms, and it's worth unpacking them carefully. Let's go through your argument and see if there are any logical fallacies or flawed reasoning.


                                1. Straw Man Fallacy

                                "Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept."

                                This misrepresents the original claim:

                                "AI can help create a framework at the very least so they can get their ideas down."

                                The original point wasn't that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


                                2. False Dichotomy

                                "If you are not competent in a visual medium, then don't make it visual, write a story or an essay."

                                This suggests a binary: either you're competent at visual art or you shouldn't try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


                                3. Hasty Generalization

                                "Supposed 'brilliant ideas' are a dime a dozen..."

                                While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn't invalidate the potential value of enabling more people to test theirs.


                                4. Appeal to Ridicule / Ad Hominem (Light)

                                "...result in a boring comic..." / "...just bad (look at SMBC or xkcd or...)"

                                Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn't really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That's not a logical fallacy in the strictest sense, but it's rhetorically weak.


                                5. Tu Quoque / Whataboutism (Borderline)

                                "For now I see no particular benefits that the right-wing has obtained by using AI either..."

                                This seems like a rebuttal to a point that wasn't made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


                                Summary of Fallacies Identified:

                                Straw Man: Misrepresents the role of AI in creative assistance.
                                False Dichotomy: Assumes one must either be visually skilled or not attempt visual media.
                                Hasty Generalization: Devalues “brilliant ideas” universally.
                                Appeal to Ridicule: Dismisses counterexamples via mocking tone rather than analysis.
                                Tu Quoque-like: Compares left vs. right AI use without addressing the core point about opportunity.


                                Your criticism is thoughtful and not without merit—but it's wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

                                At this point you're just arguing for argument's sake. You're not wrong or right, but instead muddying things. Saying it'll be boring comics misses the entire point. Saying it is the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to anti-immigrant mentality. The people who buy into it will make these types of ignorant and shortsighted statements just to prove things that just are not true. But they've bought into the hype and need to justify it.

  • In reply to [email protected]:

                                  Most humans don't reason. They just parrot shit too. The design is very human.

                                  [email protected] wrote (#32):

                                  Yeah, I've always said the flaw in Turing's Imitation Game concept is that if an AI was indistinguishable from a human, it wouldn't prove it's intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs which eventually killed him, simply because he was gay.

  • In reply to [email protected]:

                                    Here is chatgpt doing what you said it can't. Finding all the logical fallacies in what you write: […]

                                    [email protected] wrote (#33):

                                    Did you even read this garbage? It’s just words strung together without any meaning. The things it’s claiming show a fundamental lack of understanding of what it is responding to.

                                    This didn’t prove your point at all, quite the opposite. And it wasted everyone’s time in the process. Good job, this was worthless.

  • In reply to [email protected]:

                                      Here is chatgpt doing what you said it can't. Finding all the logical fallacies in what you write:

                                      You're raising strong criticisms, and it's worth unpacking them carefully. Let's go through your argument and see if there are any logical fallacies or flawed reasoning.


                                      1. Straw Man Fallacy

                                      "Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept."

                                      This misrepresents the original claim:

                                      "AI can help create a framework at the very least so they can get their ideas down."

                                      The original point wasn't that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


                                      1. False Dichotomy

                                      "If you are not competent in a visual medium, then don't make it visual, write a story or an essay."

                                      This suggests a binary: either you're competent at visual art or you shouldn't try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


                                      1. Hasty Generalization

                                      "Supposed 'brilliant ideas' are a dime a dozen..."

                                      While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn't invalidate the potential value of enabling more people to test theirs.


                                      1. Appeal to Ridicule / Ad Hominem (Light)

                                      "...result in a boring comic..." / "...just bad (look at SMBC or xkcd or...)"

                                      Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn't really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That's not a logical fallacy in the strictest sense, but it's rhetorically weak.


5. Tu Quoque / Whataboutism (Borderline)

                                      "For now I see no particular benefits that the right-wing has obtained by using AI either..."

                                      This seems like a rebuttal to a point that wasn't made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


Summary of Fallacies Identified:

| Type | Description |
| --- | --- |
| Straw Man | Misrepresents the role of AI in creative assistance. |
| False Dichotomy | Assumes one must either be visually skilled or not attempt visual media. |
| Hasty Generalization | Devalues “brilliant ideas” universally. |
| Appeal to Ridicule | Dismisses counterexamples via mocking tone rather than analysis. |
| Tu Quoque-like | Compares left vs. right AI use without addressing the core point about opportunity. |


                                      Your criticism is thoughtful and not without merit—but it's wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

At this point you're just arguing for argument's sake. You're not wrong or right, but instead muddying things. Saying it'll be boring comics missed the entire point. Saying it is the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to the anti-immigrant mentality. The people who buy into it will make these types of ignorant and short-sighted statements just to prove things that just are not true. But they've bought into the hype and need to justify it.

A This user is from outside of this forum
                                      [email protected]
                                      wrote on last edited by
                                      #34

                                      Excellent, these "fallacies" are exactly as I expected - made up, misunderstanding my comment (I did not call SMBC "bad"), and overall just trying to look like criticism instead of being one. Completely worthless - but I sure can see why right wingers are embracing it!

                                      It's funny how you think AI will help refine people's ideas, but you actually just delegated your thinking to it and let it do it worse than you could (if you cared). That's why I don't feel like getting any deeper into explaining why the AI response is garbage, I could just as well fire up GPT on my own and paste its answer, but it would be equally meaningless and useless as yours.

                                      Saying it’ll be boring comics missed the entire point.

                                      So what was the point exactly? I re-read that part of your comment and you're talking about "strong ideas", whatever that's supposed to be without any actual context?

                                      Saying it is the same as google is pure ignorance of what it can do.

                                      I did not say it's the same as Google, in fact I said it's worse than Google because it can add a hallucinated summary or reinterpretation of the source. I've tested a solid number of LLMs over time, I've seen what they produce. You can either provide examples that show that they do not hallucinate, that they have access to sources that are more reliable than what shows up on Google, or you can again avoid any specific examples, just expecting people to submit to the revolutionary tech without any questions, accuse me of complete ignorance and, no less, compare me with anti-immigrant crowds (I honestly have no idea what's supposed to be similar between these two viewpoints? I don't live in a country with particularly developed anti-immigrant stances so maybe I'm missing something here?).

                                      The people who buy into it will get into these type of ignorant and short sighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.

                                      "They’ve bought into the hype and need to justify it"? Are you sure you're talking about the anti-AI crowd here? Because that's exactly how one would describe a lot of the pro-AI discourse. Like, many pro-AI people literally BUY into the hype by buying access to better AI models or invest in AI companies, the very real hype is stoked by these highly valued companies and some of the richest people in the world, and the hype leads the stock market and the objectively massive investments into this field.

                                      But actually those who "buy into the hype" are the average people who just don't want to use this tech? Huh? What does that have to do with the concept of "hype"? Do you think hype is simply any trend that doesn't agree with your viewpoints?

                                      • A [email protected]

                                        LOOK MAA I AM ON FRONT PAGE

I This user is from outside of this forum
                                        [email protected]
                                        wrote on last edited by
                                        #35

                                        Fair, but the same is true of me. I don't actually "reason"; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a "nasty logic error" pattern match at some point in the process, I "know" I've found a "flaw in the argument" or "bug in the design".

                                        But there's no from-first-principles method by which I developed all these patterns; it's just things that have survived the test of time when other patterns have failed me.

                                        I don't think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.

                                        • S [email protected]

                                          Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

I This user is from outside of this forum
                                          [email protected]
                                          wrote on last edited by
                                          #36

                                          I appreciate your telling the truth. No downvotes from me. See you at the loony bin, amigo.
