agnos.is Forums


AI chatbots unable to accurately summarise news, BBC finds

133 Posts 74 Posters 1.2k Views
  • E [email protected]

    This is really a non-issue, as the LLM itself should have no problem setting a reasonable value itself. User wants a summary? Obviously maximum factual. They want gaming ideas? Etc.

    [email protected] replied (#24):

    For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to "categorize" text... which no one has really worked on.

    I don't think the corporate APIs or UIs even do this.

    You are not wrong, but it's just not done for some reason.
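
(A minimal sketch of the "tiny categorizer" idea discussed above, assuming a local OpenAI-compatible server such as llama.cpp or vLLM; the endpoint URL, model names, and the two-way FACTUAL/CREATIVE split are all placeholders, not an existing tool:)

```python
import openai

# Local OpenAI-compatible endpoint; the URL and model names are placeholders.
client = openai.OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def pick_temperature(user_prompt: str) -> float:
    """One cheap pass through a small model just to label the task type."""
    label = client.chat.completions.create(
        model="tiny-classifier",  # hypothetical small "categorizer" model
        temperature=0.0,
        messages=[{
            "role": "user",
            "content": "Answer with one word, FACTUAL or CREATIVE: " + user_prompt,
        }],
    ).choices[0].message.content.strip().upper()
    # Low temperature for summaries and factual tasks, higher for brainstorming.
    return 0.2 if "FACTUAL" in label else 0.9

def answer(user_prompt: str) -> str:
    return client.chat.completions.create(
        model="main-model",  # whatever model the server has loaded
        temperature=pick_temperature(user_prompt),
        messages=[{"role": "user", "content": user_prompt}],
    ).choices[0].message.content
```

The trade-off is the one the reply describes: an extra round-trip per request, and if the pre-pass changes the prompt sent to the main model, a local server's prompt cache no longer helps.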

    • 1 [email protected]

      I've found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a mid-tier commercial model, in how it completely ignores what you ask and just latches on to keywords. It's almost like they've played with their tokenisation, or trained it exclusively for providing tech support, where it links you to an irrelevant article or something.

      [email protected] replied (#25):

      Gemini Flash Thinking from earlier this year was very good for its speed/price, but it regressed a ton.

      Gemini 1.5 is literally better than the new 2.0 in some of my tests, especially long-context ones.

      • [email protected] wrote:

        As always, never rely on LLMs for anything factual. They're only good for things with a high tolerance for error, such as entertainment (e.g. RPGs).

        [email protected] replied (#26):

        I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it in at random spots. It quickly became absolutely useless once I didn't need that thing included.

        Sorry for being vague, I just didn't want to post my home town on here

        • B [email protected]

          Zephyra just came out, seems sick:

          https://huggingface.co/Zyphra

          There are also some "native" TTS LLMs, like GLM 9B, which "capture" more information in the output than pure text input.

          [email protected] replied (#27):

          A website with zero information, and barely anything on their huggingface page. What’s exciting about this?

          • [email protected] wrote:

            As always, never rely on LLMs for anything factual. They're only good for things with a high tolerance for error, such as entertainment (e.g. RPGs).

            [email protected] replied (#28):

            Or at least use it as an assistant in a field you're an expert in. I love using it for boilerplate at work (tech).

            • [email protected] wrote:
              This post did not contain any content.
              [email protected] replied (#29):

              Summary of the Article

              Title: AI chatbots unable to accurately summarise news, BBC finds
              Date: February 11, 2025 (Published 3 hours ago)
              Author: Imran Rahman-Jones, Technology Reporter

              Key Findings:

              The BBC conducted a study on four major AI chatbots—OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI—to assess their ability to accurately summarize news content. The findings revealed significant inaccuracies and distortions in the chatbots' summaries, raising concerns about misinformation.

              51% of AI-generated summaries contained significant issues.

              19% of responses citing BBC content included factual errors, such as incorrect dates, numbers, or statements.

              The AI chatbots struggled to differentiate between fact and opinion, often editorializing or omitting crucial context.

              Examples of AI-generated misinformation:

              Gemini falsely stated that the NHS does not recommend vaping as a smoking cessation aid.

              ChatGPT and Copilot claimed Rishi Sunak and Nicola Sturgeon were still in office after they had stepped down.

              Perplexity AI misquoted BBC News on the Middle East conflict, saying Iran initially showed "restraint" and that Israel’s actions were "aggressive"—misrepresenting the original reporting.

              BBC's Response & Call for Change:

              Deborah Turness, CEO of BBC News and Current Affairs, warned of the risks posed by AI-generated misinformation and called on AI companies to take action. She urged developers to "pull back" their AI news summarization features, citing Apple's decision to pause its AI news summaries after complaints from the BBC.

              The BBC briefly allowed AI bots access to its site for testing in December 2024 but generally blocks them. It now seeks to work with AI companies to improve accuracy while ensuring publishers maintain control over their content.

              AI Companies' Response:

              OpenAI stated that it aims to support publishers by improving citation accuracy and respecting content restrictions via tools like robots.txt (which allows websites to block AI bots).

              The other companies (Microsoft, Google, Perplexity) have not yet commented on the BBC’s findings.
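
(An aside for illustration, not from the article: robots.txt is just a plain-text file served at a site's root, and OpenAI documents GPTBot as its crawler's user-agent, so a minimal file that blocks it while leaving other crawlers alone looks like this:)

```
# Block OpenAI's GPTBot from the entire site
User-agent: GPTBot
Disallow: /

# All other crawlers: unrestricted
User-agent: *
Disallow:
```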

              Conclusion:

              The BBC’s research underscores serious reliability issues in AI-generated news summaries, with some models performing worse than others. Microsoft’s Copilot and Google’s Gemini had more significant accuracy problems compared to OpenAI’s ChatGPT and Perplexity. The study raises concerns about the potential real-world harm caused by AI misinformation and emphasizes the need for AI developers to improve transparency and accountability in news summarization.

              It's not that bad. I don't really use it for this so maybe I got lucky but saying they are unable to seems like a stretch.

              • 1 [email protected]

                I've found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a mid-tier commercial model, in how it completely ignores what you ask and just latches on to keywords. It's almost like they've played with their tokenisation, or trained it exclusively for providing tech support, where it links you to an irrelevant article or something.

                [email protected] replied (#30):

                Bing/ChatGPT is just as bad. It loves to tell you it's doing something and then just ignores you completely.

                • G [email protected]

                  Summary of the Article

                  Title: AI chatbots unable to accurately summarise news, BBC finds
                  Date: February 11, 2025 (Published 3 hours ago)
                  Author: Imran Rahman-Jones, Technology Reporter

                  Key Findings:

                  The BBC conducted a study on four major AI chatbots—OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI—to assess their ability to accurately summarize news content. The findings revealed significant inaccuracies and distortions in the chatbots' summaries, raising concerns about misinformation.

                  51% of AI-generated summaries contained significant issues.

                  19% of responses citing BBC content included factual errors, such as incorrect dates, numbers, or statements.

                  The AI chatbots struggled to differentiate between fact and opinion, often editorializing or omitting crucial context.

                  Examples of AI-generated misinformation:

                  Gemini falsely stated that the NHS does not recommend vaping as a smoking cessation aid.

                  ChatGPT and Copilot claimed Rishi Sunak and Nicola Sturgeon were still in office after they had stepped down.

                  Perplexity AI misquoted BBC News on the Middle East conflict, saying Iran initially showed "restraint" and that Israel’s actions were "aggressive"—misrepresenting the original reporting.

                  BBC's Response & Call for Change:

                  Deborah Turness, CEO of BBC News and Current Affairs, warned of the risks posed by AI-generated misinformation and called on AI companies to take action. She urged developers to "pull back" their AI news summarization features, citing Apple's decision to pause its AI news summaries after complaints from the BBC.

                  The BBC briefly allowed AI bots access to its site for testing in December 2024 but generally blocks them. It now seeks to work with AI companies to improve accuracy while ensuring publishers maintain control over their content.

                  AI Companies' Response:

                  OpenAI stated that it aims to support publishers by improving citation accuracy and respecting content restrictions via tools like robots.txt (which allows websites to block AI bots).

                  The other companies (Microsoft, Google, Perplexity) have not yet commented on the BBC’s findings.

                  Conclusion:

                  The BBC’s research underscores serious reliability issues in AI-generated news summaries, with some models performing worse than others. Microsoft’s Copilot and Google’s Gemini had more significant accuracy problems compared to OpenAI’s ChatGPT and Perplexity. The study raises concerns about the potential real-world harm caused by AI misinformation and emphasizes the need for AI developers to improve transparency and accountability in news summarization.

                  It's not that bad. I don't really use it for this so maybe I got lucky but saying they are unable to seems like a stretch.

                  [email protected] replied (#31):

                  It is stated as 51% problematic, so maybe your coin flip was successful this time.

                  • A [email protected]

                    A website with zero information, and barely anything on their huggingface page. What’s exciting about this?

                    [email protected] replied (#32):

                    Whoops, yeah, should have linked the blog.

                    I didn't want to link the individual models because I'm not sure whether hybrid or pure transformers are better.

                    • E [email protected]

                      Nonsense, I use it a ton for science and engineering, it saves me SO much time!

                      [email protected] replied (#33):

                      Do you blindly trust the output or is it just a convenience and you can spot when there's something wrong? Because I really hope you don't rely on it.

                      • [email protected] wrote:
                        This post did not contain any content.
                        [email protected] replied (#34):

                        Fuckin news!

                        • A [email protected]

                          Do you blindly trust the output or is it just a convenience and you can spot when there's something wrong? Because I really hope you don't rely on it.

                          [email protected] replied (#35):

                          How could I blindly trust anything in this context?

                          • R [email protected]

                            Yes, I think it would be naive to expect humans to design something capable of what humans are not.

                            [email protected] replied (#36):

                            We do that all the time. It's kind of humanity's thing. I can't run 60mph, but my car sure can.

                            • K [email protected]

                              I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it in at random spots. It quickly became absolutely useless once I didn't need that thing included.

                              Sorry for being vague, I just didn't want to post my home town on here

                              [email protected] replied (#37):

                              You can say Space Needle. We get it.

                              • [email protected] wrote:
                                This post did not contain any content.
                                [email protected] replied (#38):

                                Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.

                                It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

                                It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

                                Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

                                Introduced factual errors

                                Yeah that's . . . that's bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 Billion please.

                                • M [email protected]

                                  We do that all the time. It's kind of humanity's thing. I can't run 60mph, but my car sure can.

                                  [email protected] replied (#39):

                                  Qualitatively.

                                  • R [email protected]

                                    Qualitatively.

                                    [email protected] replied (#40):

                                    That response doesn't make sense. Please clarify.

                                    • B [email protected]

                                      What temperature and sampling settings? Which models?

                                      I've noticed that the AI giants seem to be encouraging "AI ignorance," as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

                                      I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

                                      My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.

                                      [email protected] replied (#41):

                                      They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn't even note what versions of the other models were used.
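
(For concreteness, a sketch of the local low-temperature summarization setup brucethemoose describes in the quoted post, assuming an OpenAI-compatible local server; the URL, model name, and sampling values are placeholders, not settings from the BBC study:)

```python
import openai

# Point the client at a locally hosted OpenAI-compatible server
# (llama.cpp, vLLM, etc.).
client = openai.OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def summarize(article: str) -> str:
    resp = client.chat.completions.create(
        model="qwq-32b",   # e.g. a local QwQ or Deepseek 32B quantization
        temperature=0.2,   # low temperature: fewer creative liberties
        top_p=0.9,
        messages=[
            {"role": "system",
             "content": "Summarize the article below. Use only facts stated "
                        "in it; do not add opinions or outside information."},
            {"role": "user", "content": article},
        ],
    )
    return resp.choices[0].message.content
```

Whether settings like these beat the defaults in the commercial UIs is exactly the kind of detail the study leaves unspecified.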

                                      • B [email protected]

                                        Whoops, yeah, should have linked the blog.

                                        I didn't want to link the individual models because I'm not sure whether hybrid or pure transformers are better.

                                        [email protected] replied (#42):

                                        Looks pretty interesting, thanks for sharing it

                                        • H [email protected]

                                          Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.

                                          It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

                                          It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

                                          Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

                                          Introduced factual errors

                                          Yeah that's . . . that's bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 Billion please.

                                          [email protected] replied (#43):

                                          Is it worse than the current system of editors making shitty clickbait titles?
