agnos.is Forums


Can we trust LLM CALCULATIONS?

Ask Lemmy (asklemmy) · 69 Posts · 48 Posters · 3 Views
  • F [email protected]

    Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

    [email protected] wrote (#42):

    No, LLMs are designed to drive up user engagement, nothing else; they're programmed to present what you want to hear, not actual facts. Plus, they're straight up not designed to do math.

    • [email protected]:

      The whole "two r's in strawberry" thing is enough of an argument for me. If things like that happen at such a low level, it's completely impossible that it won't make mistakes with problems that are exponentially more complicated than that.

      [email protected] wrote (#43):

      The problem with that is that it isn't actually counting the R's.

      You'd probably have better luck asking it to write a script for you that returns the number of instances of a letter in a string of text, then getting it to explain to you how to get it running and how it works. You'd get the answer that way, and also then have a script that could count almost any character and text of almost any size.

      That's much more complicated, impressive, and useful, imo.
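
      A minimal sketch of the kind of script described above (the function name and example are illustrative, not what any particular model would emit):

      ```python
      def count_char(text: str, char: str) -> int:
          """Count case-insensitive occurrences of a single character in text."""
          return text.lower().count(char.lower())

      print(count_char("strawberry", "r"))  # prints 3
      ```

      The point stands: the script deterministically counts characters, which the LLM itself does not do when answering in prose.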

      • [email protected]:

        LLMs don't and can't do math. They don't calculate anything, that's just not how they work. Instead, they do this:

        2 + 2 = ? What comes after that? Oh, I remember! It's '4'!

        It could be right, it could be wrong. If there's enough pattern in the training data, it could remember the correct answer. Otherwise it'll just place a plausible-looking value there (behavior known as AI hallucination). So, you cannot "trust" it.

        [email protected] wrote (#44):

        Every LLM answer is a hallucination.

        • [email protected]:

          Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

          [email protected] wrote (#45):

          L-L-Mentalist!

          • [email protected]:

            Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

            [email protected] wrote (#46):

            Maybe? I'd be looking all over for some convergent way to fuck it up, though.

            If it's just one model or the answers are only close, lol no.

            • [email protected]:

              Every LLM answer is a hallucination.

              [email protected] wrote (#47):

              Some are just realistic to the point of being correct. It frightens me how many users have no idea about any of that.

              • [email protected]:

                short answer: no.

                Long answer: They are still (mostly) statistics-based and can't do real math. You can use the answers from LLMs as a starting point, but you have to rigorously verify the answers they give.

                [email protected] wrote (#48):

                A calculator as a tool for an LLM, though, that works, at least mostly, and could get better as the kinks are worked out.

                [email protected] wrote (#49):

                  Finally, an intelligent comment. So many comments in here don't realize that most LLMs are bundled with calculators that just do the math.

                  • [email protected]:

                    Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

                    [email protected] wrote (#50):

                    Most LLMs now call functions in the background. Most calculations are just simple Python expressions.
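
                    As a sketch of what such a background function might look like (an illustrative safe evaluator for arithmetic expressions, not any vendor's actual tool):

                    ```python
                    import ast
                    import operator

                    # Operators we allow the "calculator tool" to evaluate.
                    OPS = {
                        ast.Add: operator.add, ast.Sub: operator.sub,
                        ast.Mult: operator.mul, ast.Div: operator.truediv,
                        ast.Pow: operator.pow, ast.USub: operator.neg,
                    }

                    def safe_eval(expr: str) -> float:
                        """Evaluate a basic arithmetic expression without exec/eval."""
                        def ev(node):
                            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                                return node.value
                            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                                return OPS[type(node.op)](ev(node.left), ev(node.right))
                            if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
                                return OPS[type(node.op)](ev(node.operand))
                            raise ValueError("unsupported expression")
                        return ev(ast.parse(expr, mode="eval").body)

                    print(safe_eval("2 + 2 * 10"))  # prints 22
                    ```

                    The model generates the expression; the deterministic evaluator does the arithmetic, which is why tool-calling sidesteps the "LLMs can't count" problem.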

                    • [email protected]:

                      Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

                      [email protected] wrote (#51):

                      Yes, with absolute certainty.

                      For example: 2 + 2 = 5

                      It's absolutely correct, and if you dispute it, big bro is gonna have to re-educate you on that.

                      • [email protected]:

                        Yes, with absolute certainty.

                        For example: 2 + 2 = 5

                        It's absolutely correct, and if you dispute it, big bro is gonna have to re-educate you on that.

                        [email protected] wrote (#52):

                        I NEED TO consult every LLM VIA TELEKINESIS QUANTUM ELECTRIC GRAVITY A AND B WAVE.

                        • [email protected]:

                          Most LLMs now call functions in the background. Most calculations are just simple Python expressions.

                          [email protected] wrote (#53):

                          Yes, I was aware of that, but I was manipulated by an analog device.

                          • [email protected]:

                            Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

                            [email protected] wrote (#54):

                            I test my local LLM for the first few weeks. Every "answer" it gives me I still look up elsewhere to confirm. Then, when I see it is accurate, I still check every 10 or so questions just to make sure. Unfortunately, I feel like they are making search engines worse on purpose so that the AI, or in this case server or local LLMs, can replace them. This is the sweet spot. I wouldn't advise getting any newer LLMs that come out in the next few months (next generation).

                            • [email protected]:

                              Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

                              [email protected] wrote (#55):

                              No, because there is randomness involved.

                              • [email protected]:

                                Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

                                [email protected] wrote (#56):

                                Here's an interesting post that gives a pretty good quick summary of when an LLM may be a good tool.

                                Here's one key:

                                Machine learning is amazing if:

                                • The problem is too hard to write a rule-based system for or the requirements change sufficiently quickly that it isn't worth writing such a thing and,
                                • The value of a correct answer is much higher than the cost of an incorrect answer.

                                The second of these is really important.

                                So if your math problem is unsolvable by conventional tools, or sufficiently complex that designing an expression is more effort than the answer is worth... AND ALSO it's more valuable to have an answer than it is to have a correct answer (there is no real cost for being wrong), THEN go ahead and trust it.

                                If it is important that the answer is correct, or if another tool can be used, then you're better off without the LLM.

                                The bottom line is that the LLM is not making a calculation. It could end up with the right answer. Different models could end up with the same answer. It's very unclear how much underlying technology is shared between models anyway.

                                For example, if the problem is something like, "Here is all of our sales data and market indicators for the past 5 years. Project how much of each product we should stock in the next quarter." Sure, an LLM may be appropriately close to a professional analysis.

                                If the problem is like, "Given these bridge schematics, what grade steel do we need in the central pylon?" then, well, you are probably going to be testifying in front of Congress one day.

                                • [email protected]:

                                  No, because there is randomness involved.

                                  [email protected] wrote (#57):

                                  That's why you ask 6 of them, and if they all come to the same conclusion, then chances are it's either right or a common pitfall.
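
                                  The cross-checking idea described here is essentially majority voting (sometimes called self-consistency); a toy sketch with invented answers:

                                  ```python
                                  from collections import Counter

                                  def majority_answer(answers):
                                      """Return the most common answer and its share of the votes."""
                                      best, n = Counter(answers).most_common(1)[0]
                                      return best, n / len(answers)

                                  # Hypothetical outputs from six models for the same problem:
                                  best, share = majority_answer(["42", "42", "42", "41.9", "42", "42"])
                                  print(best, share)  # prints: 42 0.8333333333333334
                                  ```

                                  Note the caveat from the comment above: agreement raises confidence but cannot distinguish a right answer from a pitfall all models share, e.g. from common training data.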

                                  • [email protected]:

                                    Ok, you have a moderately complex math problem you needed to solve. You gave the problem to 6 LLMs, all paid versions. All 6 get the same numbers. Would you trust the answer?

                                    [email protected] wrote (#58):

                                    I mean, I don't know why you wouldn't just use something other than an LLM in that case.

                                    • [email protected]:

                                      LLMs don't and can't do math. They don't calculate anything, that's just not how they work. Instead, they do this:

                                      2 + 2 = ? What comes after that? Oh, I remember! It's '4'!

                                      It could be right, it could be wrong. If there's enough pattern in the training data, it could remember the correct answer. Otherwise it'll just place a plausible-looking value there (behavior known as AI hallucination). So, you cannot "trust" it.

                                      [email protected] wrote (#59):

                                      They don’t calculate anything

                                      They calculate the statistical probability of the next token in an array of previous tokens.
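
                                      Concretely, a model turns raw scores (logits) over its vocabulary into a probability distribution and then picks or samples the next token; a toy sketch with invented numbers:

                                      ```python
                                      import math

                                      def softmax(logits):
                                          """Convert raw scores into a probability distribution."""
                                          m = max(logits)  # subtract the max for numerical stability
                                          exps = [math.exp(x - m) for x in logits]
                                          total = sum(exps)
                                          return [e / total for e in exps]

                                      # Toy vocabulary and logits for the context "2 + 2 =" (values invented):
                                      vocab = ["4", "5", "22"]
                                      probs = softmax([5.0, 1.0, 0.5])
                                      print(vocab[probs.index(max(probs))])  # prints 4
                                      ```

                                      Sampling from that distribution, rather than always taking the maximum, is where the randomness mentioned earlier in the thread comes from.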

                                      • [email protected]:

                                        I did, dozens of times. Same calculations.

                                        [email protected] wrote (#60):

                                        That doesn't change the logic I gave.

                                        • [email protected]:

                                          LLMs don't and can't do math. They don't calculate anything, that's just not how they work. Instead, they do this:

                                          2 + 2 = ? What comes after that? Oh, I remember! It's '4'!

                                          It could be right, it could be wrong. If there's enough pattern in the training data, it could remember the correct answer. Otherwise it'll just place a plausible-looking value there (behavior known as AI hallucination). So, you cannot "trust" it.

                                          [email protected] wrote (#61):

                                          A good one will interpret what you are asking and then write code (often Python, I notice) and let that do the math and return the answer. A math problem should use a math engine, and that's how it gets around it.

                                          But really, why bother? Go ask Wolfram Alpha or just write the math problem in code yourself.
