AI chatbots unable to accurately summarise news, BBC finds
-
Yes, I think it would be naive to expect humans to design something capable of what humans are not.
We do that all the time. It's kind of humanity's thing. I can't run 60mph, but my car sure can.
-
I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn't need that thing included.
Sorry for being vague; I just didn't want to post my hometown on here.
You can say Space Needle. We get it.
-
This post did not contain any content.
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's... that's bad. As in, not good. As in: it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
-
We do that all the time. It's kind of humanity's thing. I can't run 60mph, but my car sure can.
Qualitatively.
-
Qualitatively.
That response doesn't make sense. Please clarify.
-
What temperature and sampling settings? Which models?
I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.
I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.
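To make that concrete, here's a rough sketch of what overriding the defaults looks like. It assumes an OpenAI-compatible local server (Ollama, llama.cpp, and similar tools expose one); the endpoint URL, model name, and sampling values are placeholder choices of mine, not anything from the BBC study:

```python
# Minimal sketch: news summarization against a locally hosted model with
# explicit sampling settings, instead of whatever a chat UI defaults to.
# Assumes an OpenAI-compatible server on localhost (e.g. Ollama's /v1
# endpoint); base_url, api_key, and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # hypothetical local endpoint
    api_key="unused",                      # local servers ignore the key
)

article = "..."  # the article text you want summarized

response = client.chat.completions.create(
    model="qwq",      # placeholder: whichever local model you actually run
    temperature=0.2,  # low temperature: fewer creative liberties
    top_p=0.9,        # mildly restrictive nucleus sampling
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the article below. Use only facts stated in it; "
                "do not add numbers, dates, or names it does not contain."
            ),
        },
        {"role": "user", "content": article},
    ],
)

print(response.choices[0].message.content)
```

The point is that temperature and top_p are knobs you can actually see and set when you host the model yourself; the consumer chat apps never even tell you what they're using.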
They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn't even note which versions of the other models were used.
-
Whoops, yeah, should have linked the blog.
I didn't want to link the individual models because I'm not sure whether hybrid or pure transformers are better.
Looks pretty interesting, thanks for sharing it
-
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's... that's bad. As in, not good. As in: it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
Is it worse than the current system of editors making shitty clickbait titles?
-
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's... that's bad. As in, not good. As in: it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
Do you dislike ai?
-
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's... that's bad. As in, not good. As in: it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
alternatively: 49% had no significant issues and 81% had no factual errors. It's not perfect, but it's cheap, quick, and easy.
-
This post did not contain any content.
You don't say.
-
alternatively: 49% had no significant issues and 81% had no factual errors. It's not perfect, but it's cheap, quick, and easy.
It's easy, it's quick, and it's free: pouring river water in your socks.
Fortunately, there are other possible criteria.
-
Funny, I find the BBC unable to accurately convey the news
Dunno why you're being downvoted. If you want a somewhat right-wing, pro-establishment, slightly superficial take on the news, mixed in with lots of "celebrity" frippery, then the BBC have got you covered. Their chairmen have historically been a list of old Tories, but that has never stopped the Tory party from accusing their news of being "left-leaning" when it's blatantly not.
-
That response doesn't make sense. Please clarify.
A human can move, and a car can move; a human can't move at that speed, but a car can. The former is a qualitative difference as I meant it, the latter a quantitative one.
Anyway, that's how I used those words.
-
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's... that's bad. As in, not good. As in: it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
How good are the human answers? I mean, I expect that an AI's error rate is currently higher than that of an "expert" in their field.
But I'd guess the AI is quite a bit better than, say, the average Republican.
-
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's... that's bad. As in, not good. As in: it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
I'll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.
-
This post did not contain any content.
But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.
-
Do you dislike ai?
I don't necessarily dislike "AI" but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.
Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.
-
alternatively: 49% had no significant issues and 81% had no factual errors. It's not perfect, but it's cheap, quick, and easy.
Flip a coin every time you read an article to see whether your cheap, quick, and easy summary has significant issues.
-
A human can move, and a car can move; a human can't move at that speed, but a car can. The former is a qualitative difference as I meant it, the latter a quantitative one.
Anyway, that's how I used those words.
Ooooooh. OK, that makes sense. Correct use of the words; I just wasn't connecting those dots.