AI chatbots unable to accurately summarise news, BBC finds
-
Turns out, spitting out words when you don't know what anything means, or what "means" means, is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's . . . that's bad. As in, not good. As in: it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
I'll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.
-
This post did not contain any content.
But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.
-
Do you dislike ai?
I don't necessarily dislike "AI" but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.
Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.
-
Alternatively: 49% had no significant issues and 81% had no factual errors. It's not perfect, but it's cheap, quick, and easy.
So flip a coin every time you read an article to see whether your quick-and-easy answer comes with significant issues.
-
A human can move, and a car can move; a human can't move at such speed, but a car can. The former is a qualitative difference, as I meant it; the latter is quantitative.
Anyway, that's how I used those words.
Ooooooh. Ok that makes sense. Correct use of words, just was not connecting those dots.
-
They are, however, able to inaccurately summarize it in GLaDOS's voice, which is a point in their favor.
Yeah, out of all the generative-AI fields, voice generation at this point is like 95% of the way to producing convincing speech, even with consumer-level tech like ElevenLabs. That last 5% might not even be solvable currently, since those are the moments it gets the feeling, intonation, or pronunciation wrong when the only context you give it is a text input.
Especially voice cloning - the DRG Cortana Mission Control mod is one of the examples I like to use.
-
How could I blindly trust anything in this context?
Y'know, a lot of the hate against AI seems to mirror the hate against Wikipedia, search engines, the internet, and even computers in the past.
Do you just blindly believe whatever it tells you?
It's not absolutely perfect, so it's useless.
It's all just garbage information!
This is terrible for jobs, society, and the environment!
-
This post did not contain any content.
I learned that AI chat bots aren't necessarily trustworthy in everything. In fact, if you aren't taking their shit with a grain of salt, you're doing something very wrong.
-
This post did not contain any content.
BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline
-
Ooooooh. Ok that makes sense. Correct use of words, just was not connecting those dots.
The roadblock, to my understanding (data science guy, not biologist), is the time it takes to discover these things / how long it would take evolution to get there. Admittedly that's still somewhat quantitative.
Yes.
But it’s the nature of AI to remain within (or close to within) the corpus of knowledge they were trained on.
That's fundamentally solvable.
I'm not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it's something general, we in fact want something thinking like a human.
What all these companies like DeepSeek and OpenAI have been doing lately with "chain-of-thought" models is, in my opinion, what they should have been focused on: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms based on those syllogisms? There seems to be a chicken-and-egg problem between logic and algebra: each seems necessary for the other in such a system, yet they depend on each other (for a machine; humans keep a few things constant for most of our existence). And the predictor into which they've invested so much data is a minor part that doesn't have to be so powerful.
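For what it's worth, "checking syllogisms" can be made mechanical in a very simple way. This is my own toy sketch, not anything from the companies mentioned: it brute-force verifies the classic Barbara form ("all M are P; all S are M; therefore all S are P") over every possible model of a tiny domain.

```python
from itertools import combinations, product

def barbara_holds(domain, M, P, S):
    """True if 'all M are P' and 'all S are M' imply 'all S are P' in this model."""
    all_m_p = all(x in P for x in M)  # premise 1: all M are P
    all_s_m = all(x in M for x in S)  # premise 2: all S are M
    all_s_p = all(x in P for x in S)  # conclusion: all S are P
    # An argument form is valid when the conclusion holds wherever the premises do.
    return not (all_m_p and all_s_m) or all_s_p

domain = [0, 1, 2]
# Every subset of the domain is a possible extension for each predicate.
subsets = [set(c) for r in range(len(domain) + 1) for c in combinations(domain, r)]
# Exhaustively check every assignment of M, P, S.
valid = all(barbara_holds(domain, M, P, S) for M, P, S in product(subsets, repeat=3))
print(valid)  # True: Barbara holds in every model of this domain
```

Obviously a real system would need actual theorem proving rather than enumerating models, but it shows the check itself is trivially automatable; the hard part is everything around it.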
-
This post did not contain any content.
But every techbro on the planet told me it's exactly what LLMs are good at. What the hell!? /s
-
This post did not contain any content.
News station finds that AI is unable to perform the job of a news station
-
How could I blindly trust anything in this context?
In which case you probably aren't saving time. Checking bullshit usually takes more effort and more time than just researching the shit yourself. Or it should, if you do your due diligence.
-
It's rare that people here argue for LLMs like that; usually it's the same kind of "uga suga, AI bad, didn't already solve world hunger."
What a nuanced representation of the position, I just feel trustworthiness oozes out of the screen.
In case you're using a random-word-generation machine to summarise this comment for you: that was sarcasm, and I meant the opposite.
-
This post did not contain any content.
Neither are my parents
-
In which case you probably aren't saving time. Checking bullshit usually takes more effort and more time than just researching the shit yourself. Or it should, if you do your due diligence.
It's nice that you inform people that they can't tell whether something is saving them time without knowing what their job is or how they're using the tool.
-
The roadblock, to my understanding (data science guy, not biologist), is the time it takes to discover these things / how long it would take evolution to get there. Admittedly that's still somewhat quantitative.
Yes.
But it’s the nature of AI to remain within (or close to within) the corpus of knowledge they were trained on.
That's fundamentally solvable.
I'm not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it's something general, we in fact want something thinking like a human.
What all these companies like DeepSeek and OpenAI have been doing lately with "chain-of-thought" models is, in my opinion, what they should have been focused on: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms based on those syllogisms? There seems to be a chicken-and-egg problem between logic and algebra: each seems necessary for the other in such a system, yet they depend on each other (for a machine; humans keep a few things constant for most of our existence). And the predictor into which they've invested so much data is a minor part that doesn't have to be so powerful.
I’m not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it’s something general, we in fact want something thinking like a human.
Agreed. The techbros pretending that the stochastic parrots they've created are general AI annoys me to no end.
While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output layer for a greater system, analogous to the Wernicke/Broca areas of the brain. It seems like they're trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the giveaway that they've promised this one technique (more or less; I know it's more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know it's hypothetically a universal function approximator, but the gap between hypothesis and practice is very large, and we're dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
Now that I've used a whole lot of cheap metaphor on someone who casually dropped "syllogism" into a conversation, I'm feeling like a freshman in a grad-level class. I'll admit I'm nowhere near up to date on specific models and bleeding-edge techniques.
-
This post did not contain any content.
But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **
-
I learned that AI chat bots aren't necessarily trustworthy in everything. In fact, if you aren't taking their shit with a grain of salt, you're doing something very wrong.
This is my personal take. As long as you're careful and thoughtful whenever using them, they can be extremely useful.
-
This post did not contain any content.
I just tried it on DeepSeek; it did fine and gave a source for everything it mentioned as well.