agnos.is Forums

AI chatbots unable to accurately summarise news, BBC finds

technology
133 Posts 74 Posters 1.2k Views
  • B [email protected]

    What temperature and sampling settings? Which models?

I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

    I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

    My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.
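As a rough illustration of what the temperature setting the post refers to actually does: it divides the model's logits before the softmax, so low values concentrate probability on the top token (near-deterministic, which suits summarization), while high values flatten the distribution and make sampling more random. A toy sketch with made-up logits, not tied to any particular model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to sampling probabilities, scaled by temperature.
    Lower temperature sharpens the distribution toward the top token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]

low = softmax_with_temperature(logits, 0.2)   # near-greedy: top token dominates
high = softmax_with_temperature(logits, 1.5)  # flatter: far more randomness

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At temperature 0.2 the top token takes essentially all the probability mass; at 1.5 the same logits give it well under half, which is why the default sampling in consumer UIs can produce looser, less faithful summaries.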

[email protected] #41

They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn't even note which versions of the other models were used.

    • B [email protected]

      Whoops, yeah, should have linked the blog.

      I didn't want to link the individual models because I'm not sure hybrid or pure transformers is better?

[email protected] #42

      Looks pretty interesting, thanks for sharing it

      • H [email protected]

        Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.

        It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

        It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

        Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

        Introduced factual errors

        Yeah that's . . . that's bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 Billion please.

[email protected] #43

        Is it worse than the current system of editors making shitty click bait titles?

        • H [email protected]


[email protected] #44

Do you dislike AI?

          • H [email protected]


[email protected] #45

Alternatively: 49% had no significant issues and 81% had no factual errors. It's not perfect, but it's cheap, quick and easy.

            • misk@sopuli.xyzM [email protected]
              This post did not contain any content.
[email protected] #46

              You don't say.

              • D [email protected]


[email protected] #47

                It's easy, it's quick, and it's free: pouring river water in your socks.
                Fortunately, there are other possible criteria.

                • paradox@lemdro.idP [email protected]

                  Funny, I find the BBC unable to accurately convey the news

[email protected] #48

Dunno why you're being downvoted. If you're wanting a somewhat right-wing, pro-establishment, slightly superficial take on the news, mixed in with lots of "celebrity" frippery, then the BBC have got you covered. Their chairmen have historically been a list of old Tories, but that has never stopped the Tory party from accusing their news of being "left-leaning" when it's blatantly not.

                  • M [email protected]

                    That response doesn't make sense. Please clarify.

[email protected] #49

A human can move; a car can move. A human can't move at such speed; a car can. The former is a qualitative difference, as I meant it; the latter is quantitative.

                    Anyway, that's how I used those words.

                    • H [email protected]


[email protected] #50

                      It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

                      How good are the human answers? I mean, I expect that an AI's error rate is currently higher than an "expert" in their field.

                      But I'd guess the AI is quite a bit better than, say, the average Republican.

                      • H [email protected]


[email protected] #51

I'll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.

                        • misk@sopuli.xyzM [email protected]
                          This post did not contain any content.
[email protected] #52

                          But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.

                          • S [email protected]

Do you dislike AI?

[email protected] #53

                            I don't necessarily dislike "AI" but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.

Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.

                            • D [email protected]


[email protected] #54

Flip a coin every time you read an article: that's about your odds of getting quick-and-easy significant issues.

                              • R [email protected]


[email protected] #55

                                Ooooooh. Ok that makes sense. Correct use of words, just was not connecting those dots.

                                • tal@lemmy.todayT [email protected]

                                  They are, however, able to inaccurately summarize it in GLaDOS's voice, which is a point in their favor.

[email protected] #56

                                  Yeah, out of all the generative AI fields, voice generation at this point is like 95% there in its capability of producing convincing speech even with consumer level tech like ElevenLabs. That last 5% might not even be solvable currently, as it's those moments it gets the feeling, intonation or pronunciation wrong when the only context you give it is a text input.

                                  Especially voice cloning - the DRG Cortana Mission Control mod is one of the examples I like to use.

                                  • E [email protected]

                                    How could I blindly trust anything in this context?

[email protected] #57

                                    Y'know, a lot of the hate against AI seems to mirror the hate against Wikipedia, search engines, the internet, and even computers in the past.

                                    Do you just blindly believe whatever it tells you?

                                    It's not absolutely perfect, so it's useless.

                                    It's all just garbage information!

                                    This is terrible for jobs, society, and the environment!

                                    • misk@sopuli.xyzM [email protected]
                                      This post did not contain any content.
[email protected] #58

                                      I learned that AI chat bots aren't necessarily trustworthy in everything. In fact, if you aren't taking their shit with a grain of salt, you're doing something very wrong.

                                      • misk@sopuli.xyzM [email protected]
                                        This post did not contain any content.
[email protected] #59

                                        BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline

                                        • M [email protected]


[email protected] #60

                                          The roadblock, to my understanding (data science guy not biologist), is the time it takes to discover these things/how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

                                          Yes.

                                          But it’s the nature of AI to remain within (or close to within) the corpus of knowledge they were trained on.

                                          That's fundamentally solvable.

                                          I'm not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it's something general, we in fact want something thinking like a human.

What all these companies like DeepSeek and OpenAI have been doing lately with "chain-of-thought" models is, in my opinion, what they should have been focused on all along: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms from those syllogisms? There seems to be a chicken-and-egg problem between logic and algebra: each seems necessary for the other in such a system, yet they depend on each other (for a machine, that is; humans keep a few things constant for most of our existence). And the predictor into which they've invested so much data is a minor part which doesn't have to be so powerful.
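The "generate and check syllogisms" idea can be sketched in the simplest possible way by modelling categorical statements as set inclusion. This is a toy illustration of checking a Barbara-form syllogism over small finite sets, not anything these companies actually ship:

```python
def all_are(subset, superset):
    """'All X are Y' holds (in a finite model) when every member of X is in Y."""
    return subset <= superset

# Toy universes for the classic syllogism: all humans are mortal,
# all Greeks are human => all Greeks are mortal.
greeks  = {"socrates", "plato"}
humans  = {"socrates", "plato", "hypatia"}
mortals = {"socrates", "plato", "hypatia", "fido"}

premise1 = all_are(humans, mortals)   # All humans are mortal
premise2 = all_are(greeks, humans)    # All Greeks are human
conclusion = all_are(greeks, mortals) # All Greeks are mortal

print(premise1 and premise2 and conclusion)  # → True
```

A real symbolic system would work with quantified formulas rather than enumerated sets, but the point stands: validity here is mechanically checkable, which is exactly what the statistical predictor on its own doesn't give you.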
