AI chatbots unable to accurately summarise news, BBC finds
-
You can say Space Needle. We get it.
-
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's . . . that's bad. As in, not good. As in, it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
-
They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn't even note which versions of the other models were used.
-
Looks pretty interesting, thanks for sharing it
-
Is it worse than the current system of editors writing shitty clickbait titles?
-
Do you dislike AI?
-
You don't say.
-
It's easy, it's quick, and it's free: pouring river water in your socks.
Fortunately, there are other possible criteria.
-
Dunno why you're being downvoted. If you want a somewhat right-wing, pro-establishment, slightly superficial take on the news, mixed in with lots of "celebrity" frippery, then the BBC have got you covered. Their chairmen have historically been a list of old Tories, but that has never stopped the Tory party from accusing their news of being "left leaning" when it's blatantly not.
-
A human can move, a car can move. A human can't move at such speed, a car can. The former is a qualitative difference as I meant it, the latter quantitative.
Anyway, that's how I used those words.
-
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
How good are the human answers? I mean, I expect that an AI's error rate is currently higher than that of an "expert" in their field.
But I'd guess the AI is quite a bit better than, say, the average Republican.
-
I'll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.
-
But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.
-
I don't necessarily dislike "AI" but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.
Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.
-
Flip a coin every time you read an article to see whether you get significant issues: quick and easy.
-
Ooooooh. Ok that makes sense. Correct use of words, just was not connecting those dots.
-
Yeah, out of all the generative AI fields, voice generation is at this point like 95% of the way to producing convincing speech, even with consumer-level tech like ElevenLabs. That last 5% might not even be solvable currently, as it's those moments where it gets the feeling, intonation or pronunciation wrong when the only context you give it is a text input.
Especially voice cloning - the DRG Cortana Mission Control mod is one of the examples I like to use.