AI chatbots unable to accurately summarise news, BBC finds
-
Summary of the Article
Title: AI chatbots unable to accurately summarise news, BBC finds
Date: February 11, 2025
Author: Imran Rahman-Jones, Technology Reporter
Key Findings:
The BBC conducted a study on four major AI chatbots—OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI—to assess their ability to accurately summarize news content. The findings revealed significant inaccuracies and distortions in the chatbots' summaries, raising concerns about misinformation.
51% of AI-generated summaries contained significant issues.
19% of responses citing BBC content included factual errors, such as incorrect dates, numbers, or statements.
The AI chatbots struggled to differentiate between fact and opinion, often editorializing or omitting crucial context.
Examples of AI-generated misinformation:
Gemini falsely stated that the NHS does not recommend vaping as a smoking cessation aid.
ChatGPT and Copilot claimed Rishi Sunak and Nicola Sturgeon were still in office after they had stepped down.
Perplexity AI misquoted BBC News on the Middle East conflict, saying Iran initially showed "restraint" and that Israel’s actions were "aggressive"—misrepresenting the original reporting.
BBC's Response & Call for Change:
Deborah Turness, CEO of BBC News and Current Affairs, warned of the risks posed by AI-generated misinformation and called on AI companies to take action. She urged developers to "pull back" their AI news summarization features, citing Apple's decision to pause its AI news summaries after complaints from the BBC.
The BBC briefly allowed AI bots access to its site for testing in December 2024 but generally blocks them. It now seeks to work with AI companies to improve accuracy while ensuring publishers maintain control over their content.
AI Companies' Response:
OpenAI stated that it aims to support publishers by improving citation accuracy and respecting content restrictions via tools like robots.txt, which lets websites block AI crawlers (a minimal example follows below).
The other companies (Microsoft, Google, Perplexity) have not yet commented on the BBC’s findings.
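For illustration, here is a minimal robots.txt sketch of the kind of blocking the article describes. GPTBot is OpenAI's documented crawler token; Google-Extended and PerplexityBot are the other vendors' published tokens as best I know, so check each company's current documentation before relying on them.

    # Ask OpenAI's crawler to stay out of the whole site
    User-agent: GPTBot
    Disallow: /

    # Ask Google's AI-training crawler token to stay out
    User-agent: Google-Extended
    Disallow: /

    # Ask Perplexity's crawler to stay out
    User-agent: PerplexityBot
    Disallow: /

Note that robots.txt is advisory: compliance is voluntary on the crawler's side, which is one reason the BBC also wants direct arrangements with AI companies rather than relying on blocking alone.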
Conclusion:
The BBC’s research underscores serious reliability issues in AI-generated news summaries, with some models performing worse than others. Microsoft’s Copilot and Google’s Gemini had more significant accuracy problems compared to OpenAI’s ChatGPT and Perplexity. The study raises concerns about the potential real-world harm caused by AI misinformation and emphasizes the need for AI developers to improve transparency and accountability in news summarization.
It's not that bad. I don't really use it for this, so maybe I got lucky, but saying they're "unable to" seems like a stretch.
-
Bing/ChatGPT is just as bad. It loves to tell you it's doing something and then just ignores you completely.
-
It is stated as 51% problematic, so maybe your coin flip was successful this time.
-
Whoops, yeah, should have linked the blog.
I didn't want to link the individual models because I'm not sure whether hybrid or pure transformers are better.
-
Do you blindly trust the output, or is it just a convenience where you can spot when something's wrong? Because I really hope you don't rely on it.
-
Fuckin news!
-
How could I blindly trust anything in this context?
-
We do that all the time. It's kind of humanity's thing. I can't run 60mph, but my car sure can.
-
You can say Space Needle. We get it.
-
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's... that's bad. As in, not good. As in, it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
-
Qualitatively.
-
That response doesn't make sense. Please clarify.
-
They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn't even note what versions of the other models were used.
-
Looks pretty interesting, thanks for sharing it.
-
Is it worse than the current system of editors writing shitty clickbait titles?
-
Do you dislike AI?
-
Alternatively: 49% had no significant issues and 81% had no factual errors. It's not perfect, but it's cheap, quick, and easy.
-
You don't say.
-
It's easy, it's quick, and it's free: pouring river water in your socks.
Fortunately, there are other possible criteria.
-
Dunno why you're being downvoted. If you're wanting a somewhat right-wing, pro-establishment, slightly superficial take on the news, mixed in with lots of "celebrity" frippery, then the BBC have got you covered. Their chairmen have historically been a list of old Tories, but that has never stopped the Tory party from accusing their news of being "left leaning" when it's blatantly not.