agnos.is Forums

AI chatbots unable to accurately summarise news, BBC finds

Technology
133 Posts 74 Posters 1.2k Views
#1 [email protected] wrote:

This post did not contain any content.
#2 [email protected], replying to [email protected]:

As always, never rely on LLMs for anything factual. They're only good for things with a high tolerance for error, such as entertainment (e.g. RPGs).
      • misk@sopuli.xyzM [email protected]
        This post did not contain any content.
        tal@lemmy.todayT This user is from outside of this forum
        tal@lemmy.todayT This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #3

        They are, however, able to inaccurately summarize it in GLaDOS's voice, which is a point in their favor.

        jackgreenearth@lemm.eeJ johnedwa@sopuli.xyzJ 2 Replies Last reply
        0
#4 [email protected], replying to [email protected]:

Surely you'd need TTS for that one, too? Which one do you use, and is it open weights?
          • misk@sopuli.xyzM [email protected]
            This post did not contain any content.
            S This user is from outside of this forum
            S This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #5

            BBC finds lol. No, we slresdy knew about that

            1 Reply Last reply
            0
            • misk@sopuli.xyzM [email protected]
              This post did not contain any content.
              ininewcrow@lemmy.caI This user is from outside of this forum
              ininewcrow@lemmy.caI This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #6

              The owners of LLMs don't care about 'accurate' ... they care about 'fast' and 'summary' ... and especially 'profit' and 'monetization'.

              As long as it's quick, delivers instant content and makes money for someone ... no one cares about 'accurate'

              E 1 Reply Last reply
              0
              • misk@sopuli.xyzM [email protected]
                This post did not contain any content.
                B This user is from outside of this forum
                B This user is from outside of this forum
                [email protected]
                wrote on last edited by
                #7

                What temperature and sampling settings? Which models?

                I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools works under the hood. They also default to bad, cheap models.

                I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

                My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.

                paraphrand@lemmy.worldP E 1 jrs100000@lemmy.worldJ M 5 Replies Last reply
                0
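The "low temperature for summarization" point above can be made concrete with a toy sketch of what the temperature knob does to the next-token distribution. The numbers here are invented for illustration and not tied to any particular model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax.

    Low temperature sharpens the distribution (near-greedy, sticks
    to the most likely token); high temperature flattens it (more
    'creative', more likely to wander off the source text)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # toy next-token scores
cold = softmax_with_temperature(logits, 0.2)   # summarization-style setting
hot = softmax_with_temperature(logits, 2.0)    # flattened, exploratory

print(cold[0] > hot[0])  # → True: the top token dominates at low temperature
```

This is why a summary run at a low temperature hews closer to the source: the sampler almost always takes the highest-probability continuation instead of rolling the dice on a flattened distribution.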
                • misk@sopuli.xyzM [email protected]
                  This post did not contain any content.
                  chemical_cutthroat@lemmy.worldC This user is from outside of this forum
                  chemical_cutthroat@lemmy.worldC This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #8

                  Which is hilarious, because most of the shit out there today seems to be written by them.

                  1 Reply Last reply
                  0
#9 [email protected], replying to [email protected]:

Zephyra just came out; seems sick:

https://huggingface.co/Zyphra

There are also some "native" TTS LLMs like GLM 9B, which "capture" more information in the output than pure text input.
#10 [email protected], replying to [email protected]:

Nonsense, I use it a ton for science and engineering; it saves me SO much time!
#11 [email protected], replying to [email protected]:

Especially after the open-source release of DeepSeek... What...?
                        • B [email protected]

                          What temperature and sampling settings? Which models?

                          I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools works under the hood. They also default to bad, cheap models.

                          I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

                          My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.

                          paraphrand@lemmy.worldP This user is from outside of this forum
                          paraphrand@lemmy.worldP This user is from outside of this forum
                          [email protected]
                          wrote on last edited by
                          #12

                          I don’t think giving the temperature knob to end users is the answer.

                          Turning it to max for max correctness and low creativity won’t work in an intuitive way.

                          Sure, turning it down from the balanced middle value will make it more “creative” and unexpected, and this is useful for idea generation, etc. But a knob that goes from “good” to “sort of off the rails, but in a good way” isn’t a great user experience for most people.

                          Most people understand this stuff as intended to be intelligent. Correct. Etc. Or they At least understand that’s the goal. Once you give them a knob to adjust the “intelligence level,” you’ll have more pushback on these things not meeting their goals. “I clearly had it in factual/correct/intelligent mode. Not creativity mode. I don’t understand why it left our these facts and invented a back story to this small thing mentioned…”

                          Not everyone is an engineer. Temp is an obtuse thing.

                          E B 2 Replies Last reply
                          0
                          • B [email protected]

                            What temperature and sampling settings? Which models?

                            I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools works under the hood. They also default to bad, cheap models.

                            I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

                            My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.

                            E This user is from outside of this forum
                            E This user is from outside of this forum
                            [email protected]
                            wrote on last edited by
                            #13

                            Rare that people here argument for LLMs like that here, usually it is the same kind of "uga suga, AI bad, did not already solve world hunger".

                            B N H 3 Replies Last reply
                            0
#14 [email protected], replying to [email protected]:

This is really a non-issue: the LLM itself should have no problem setting a reasonable value. The user wants a summary? Obviously maximally factual. They want gaming ideas? And so on.
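The idea above (the system, not the user, picking sampler settings per task) could be sketched roughly as a task-to-preset table. Every category name and value here is hypothetical, invented for illustration rather than taken from any real product:

```python
# Hypothetical sampler presets keyed by task type. In a real system the
# classification step would itself be done by the model or a router;
# here it's just a dictionary lookup to show the shape of the idea.
PRESETS = {
    "summary":    {"temperature": 0.2, "min_p": 0.1},   # stick to the source
    "brainstorm": {"temperature": 1.0, "min_p": 0.05},  # allow wilder picks
    "code":       {"temperature": 0.3, "min_p": 0.1},   # mostly deterministic
}

def pick_preset(task: str) -> dict:
    """Return the tuned settings for a task, falling back to a
    middle-of-the-road default for anything unrecognized."""
    return PRESETS.get(task, {"temperature": 0.7, "min_p": 0.05})

print(pick_preset("summary")["temperature"])  # → 0.2
```

The user never sees a temperature knob; they just pick (or the model infers) what kind of request it is.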
                              • misk@sopuli.xyzM [email protected]
                                This post did not contain any content.
                                paradox@lemdro.idP This user is from outside of this forum
                                paradox@lemdro.idP This user is from outside of this forum
                                [email protected]
                                wrote on last edited by
                                #15

                                Funny, I find the BBC unable to accurately convey the news

                                addie@feddit.ukA bilb@lem.monsterB 2 Replies Last reply
                                0
                                • misk@sopuli.xyzM [email protected]
                                  This post did not contain any content.
                                  B This user is from outside of this forum
                                  B This user is from outside of this forum
                                  [email protected]
                                  wrote on last edited by
                                  #16

                                  Why, where they trained using MAIN STREAM NEWS? That could explain it.

                                  1 Reply Last reply
                                  0
                                  • misk@sopuli.xyzM [email protected]
                                    This post did not contain any content.
                                    M This user is from outside of this forum
                                    M This user is from outside of this forum
                                    [email protected]
                                    wrote on last edited by
                                    #17

                                    Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.

                                    1 Reply Last reply
                                    0
#18 [email protected], replying to [email protected]:

• Temperature isn't even "creativity" per se; it's more a band-aid to patch looping and dryness in long responses.

• Lower temperature is much better with modern sampling algorithms, e.g. min-p, DRY, and maybe dynamic temperature like Mirostat. Ideally structured output, too. Unfortunately, corporate APIs usually don't offer these.

• It can be mitigated by finetuning against looping/repetition/slop, but most models are the opposite: massively overtuned on their own output, which "inbreeds" the model.

• And yes, domain-specific queries are best. Basically, the user needs separate prompt boxes for coding, summaries, creative suggestions, and so on, each with its own tuned settings (and ideally tuned models). You're right that this is a much better idea than offering the user a temperature knob, but... most UIs don't even do this, for some reason?

What I'm getting at is that this is not a problem companies seem interested in solving.
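As a rough illustration of the min-p truncation mentioned above: it keeps only tokens whose probability is at least some fraction of the top token's probability, so the cutoff adapts to how peaked the distribution is. This is a toy sketch of the general idea with made-up numbers, not any specific API's implementation:

```python
def min_p_filter(probs, min_p=0.1):
    """Zero out tokens whose probability is below min_p times the top
    token's probability, then renormalize the survivors. With a peaked
    distribution almost everything is pruned; with a flat one, little is."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

probs = [0.6, 0.25, 0.1, 0.04, 0.01]   # toy next-token probabilities
filtered = min_p_filter(probs, min_p=0.1)
# threshold is 0.06, so the last two tokens are pruned before renormalizing
print(filtered)
```

This is why low temperature pairs well with samplers like this: the tail junk that temperature would otherwise amplify is simply gone before sampling happens.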
#19 [email protected], replying to [email protected]:

The issue for RPGs is that they have such "small" context windows, and a big point of RPGs is that anything could be important, be investigated, or just come up later.

Although, similar to how DeepSeek uses two stages ("how would you solve this problem," then "solve this problem following this train of thought"), you could feed in recent conversations plus a private/unseen "notebook" that is modified and appended to based on recent events. That would need a whole new model to be done properly, which likely wouldn't be profitable short term, although I imagine the same infrastructure could be used for any LLM usage where fine details over a long period matter more than specific wording, including factual things.
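The "private notebook plus recent window" idea could be sketched like this. Everything here is hypothetical (the names, the character budget standing in for a token budget, the hand-written notebook entries); the point is only that distilled facts survive in the prompt even after the raw chat has scrolled out of the window:

```python
def build_context(notebook, recent_messages, budget_chars=2000):
    """Combine a persistent 'notebook' of distilled facts with as many
    of the most recent conversation turns as fit in the budget
    (newest kept first, then restored to chronological order)."""
    header = "Notebook:\n" + "\n".join(f"- {fact}" for fact in notebook)
    remaining = budget_chars - len(header)
    kept = []
    for msg in reversed(recent_messages):  # walk newest to oldest
        if len(msg) + 1 > remaining:
            break
        kept.append(msg)
        remaining -= len(msg) + 1
    return header + "\n\nRecent:\n" + "\n".join(reversed(kept))

notebook = ["The innkeeper owes the party 50 gold.",
            "The cellar door is trapped."]
recent = ["Player: I open the cellar door.",
          "GM: You hear a click..."]
ctx = build_context(notebook, recent, budget_chars=500)
print("cellar door is trapped" in ctx)  # → True: old facts persist
```

In the real version the notebook would be written and pruned by the model itself after each exchange, which is the part the post says would need new training to do well.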
                                        • B [email protected]

                                          What temperature and sampling settings? Which models?

                                          I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools works under the hood. They also default to bad, cheap models.

                                          I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

                                          My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.

                                          1 This user is from outside of this forum
                                          1 This user is from outside of this forum
                                          [email protected]
                                          wrote on last edited by
                                          #20

                                          I've found Gemini overwhelmingly terrible at pretty much everything, it responds more like a 7b model running on a home pc or a model from two years ago than a medium commercial model in how it completely ignores what you ask it and just latches on to keywords... It's almost like they've played with their tokenisation or trained it exclusively for providing tech support where it links you to an irrelevant article or something

                                          B I 2 Replies Last reply
                                          0