AI chatbots unable to accurately summarise news, BBC finds
-
I'm more interested in the technology itself, rather than its current application.
I feel like I am watching a toddler taking her first steps; wondering what she will eventually accomplish in her lifetime. But the loudest voices aren't cheering her on: they're sitting in their recliners, smugly claiming she's useless. She can't even participate in a marathon, let alone compete with actual athletes!
Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.
It wasn’t that people simply decided to hate AI: the sensationalist media hyped it up to the point of scaring people (“it’ll take all your jobs”), and companies shoved it down our throats by putting it in every product, even when it gets in the way of the functionality people actually want to use. Even my company “forces” us all to use X prompts every week as a sign of being “productive”. The result couldn’t be any different.
-
But every techbro on the planet told me it's exactly what LLMs are good at. What the hell!? /s
Not only techbros though. Most of my friends are not into computers, but they all think AI is magical and will change the whole world for the better. I always ask, "how can a black box that throws up random crap and runs on the computers of big companies outside the country change anything?" They don't know what to say, but they still believe something will happen and a program can magically become sentient. Sometimes they can be fucking dumb, but I still love them.
-
Rare that people here argue for LLMs like that; usually it's the same kind of "uga suga, AI bad, did not already solve world hunger".
Your comment would be acceptable if AI was not advertised as solving all our problems, like world hunger.
-
"I can calculate powers with decimal values in the exponent and if you can not do that on paper but instead use these machines, your calculations are worthless and you are not an engineer"
You seem to fail to see that this new tool has unique strengths. As the other guy said, it is just like people ranting about Wikipedia. Absurd.
You can also just have an application designed to do that do it more accurately.
If you can't do that, you're not an engineer. If you don't recommend that, you're not an engineer.
-
I just tried it on DeepSeek; it did fine and gave the source for everything it mentioned as well.
Now ask it whether Taiwan is a country.
-
While not as academically cogent as your response
An elegant way to make someone feel ashamed for using many smart words, ha-ha.
I know that it’s supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
The metaphor is apt; I think it's some social mechanism making them choose a brute force solution first. Spending more resources to achieve the same result would usually be a downside, but if it's a resource otherwise not in demand, one that only the stronger parties, like corporations and governments, possess in sufficient amounts, then it may be an upside for someone by changing the balance.
And LLMs appear good enough to make captcha-solving machines, proof image or video faking machines, fraudulent chatbot machines, or machines predicting someone's (or some crowd's) responses well enough to play them. So I'd say commercially they already are successful.
Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.
We-ell, it's just hard to describe the idea without using that word, but I haven't even finished my BS yet (lots of procrastinating, running away, and long interruptions), and the only bit of up-to-date knowledge I had was what DeepSeek prints when answering, so.
An elegant way to make someone feel ashamed for using many smart words, ha-ha.
Unintentional I assure you.
I think it’s some social mechanism making them choose a brute force solution first.
I feel like it's simpler than that. Ye olde "when all you have is a hammer, everything's a nail". Or in this case, when you've built the most complex hammer in history, you want everything to be a nail.
So I’d say commercially they already are successful.
Definitely. I'll never write another cover letter. In their use-case, they're solid.
but I haven’t even finished my BS yet
Currently working on my master's after being in industry for a decade. The paper is nice, but actually applying the knowledge is poorly taught (IMHO, YMMV), and being willing to learn independently has served me better than my BS in EE.
-
What a nuanced representation of the position, I just feel trustworthiness oozes out of the screen.
In case you're using a random-words-generation machine to summarise this comment for you: it was sarcasm, and I meant the opposite. So many arguments... Wow!
-
Your comment would be acceptable if AI was not advertised as solving all our problems, like world hunger.
So the ads are the problem? Do you have a link to such an ad?
-
So the ads are the problem? Do you have a link to such an ad?
Not ads; whole governments talking about it and funding that crap, like Altman/Musk in the USA or Macron in Europe.
-
Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah, that's . . . that's bad. As in, not good. As in, it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.
That's the core problem, though, isn't it? They are just predictive text machines that don't understand what they are saying, yet we are treating them as if they were some amazing solution to all our problems.
-
Now ask it whether Taiwan is a country.
That depends on whether you ask the online app (which will cut you off or give you a CCP-sanctioned answer) or run it locally.
-
That's the core problem, though, isn't it? They are just predictive text machines that don't understand what they are saying, yet we are treating them as if they were some amazing solution to all our problems.
Well, "we" arent' but there's a hype machine in operation bigger than anything in history because a few tech bros think they're going to rule the world.
-
Not only techbros though. Most of my friends are not into computers, but they all think AI is magical and will change the whole world for the better. I always ask, "how can a black box that throws up random crap and runs on the computers of big companies outside the country change anything?" They don't know what to say, but they still believe something will happen and a program can magically become sentient. Sometimes they can be fucking dumb, but I still love them.
The more you know what you are doing, the less impressed you are by AI. Calling people who trust AI idiots is not a good start to a conversation, though.
-
This is my personal take. As long as you're careful and thoughtful whenever using them, they can be extremely useful.
Extremely?
-
What temperature and sampling settings? Which models?
I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.
I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.
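To make the low-temperature point concrete: most local servers (llama.cpp, Ollama, Open Web UI) expose an OpenAI-compatible endpoint that accepts a per-request `temperature`, so you aren't stuck with the UI default. A minimal sketch of the request body; the model name `qwq-32b` and the endpoint URL in the comment are placeholders for whatever your server actually serves:

```python
import json

def summarization_request(article: str, model: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload.

    A low temperature keeps the sampler close to the source text;
    chat UIs often default to something like 0.7-1.0 instead.
    """
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system",
             "content": "Summarize the article faithfully. Do not add facts."},
            {"role": "user", "content": article},
        ],
    }

# This payload would be POSTed to e.g. http://localhost:8080/v1/chat/completions
payload = summarization_request("<article text here>", model="qwq-32b")
print(json.dumps(payload, indent=2))
```

The same payload shape works against hosted APIs too; only the base URL and the API key differ.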
I have been pretty impressed by Gemini 2.0 Flash.
It's slightly worse than the very best on the benchmarks I have seen, but it's pretty much instant and incredibly cheap. Maybe a loss leader?
Anyways, which model of the commercial ones do you consider to be good?
-
I have been pretty impressed by Gemini 2.0 Flash.
It's slightly worse than the very best on the benchmarks I have seen, but it's pretty much instant and incredibly cheap. Maybe a loss leader?
Anyways, which model of the commercial ones do you consider to be good?
benchmarks
Benchmarks are so gamed, even Chatbot Arena is kinda iffy. TBH you have to test them with your prompts yourself.
Honestly I am getting incredible/creative responses from Deepseek R1, the hype is real. Tencent's API is a bit under-rated. If llama 3.3 70B is smart enough for you, Cerebras API is super fast.
MiniMax is ok for long context, but I still tend to lean on Gemini for this.
-
But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.
Why do you say that? I have had no reason to doubt their reporting
-
Why do you say that? I have had no reason to doubt their reporting
Look at their reporting of the Employment Tribunal for the nurse from Fife who was sacked for abusing a doctor. They refused to gender the doctor correctly in any article, to the point where the only pronoun that appears is the sacked transphobe referring to her as "him". They also very much paint it as if Dr Upton is on trial and not Ms Peggie.
-
benchmarks
Benchmarks are so gamed, even Chatbot Arena is kinda iffy. TBH you have to test them with your prompts yourself.
Honestly I am getting incredible/creative responses from Deepseek R1, the hype is real. Tencent's API is a bit under-rated. If llama 3.3 70B is smart enough for you, Cerebras API is super fast.
MiniMax is ok for long context, but I still tend to lean on Gemini for this.
So there aren't any trustworthy benchmarks I can currently use to evaluate them? That, in combination with my personal anecdotes, is how I have been evaluating them.
I was pretty impressed with Deepseek R1.
I used their app, but not for anything sensitive. I don't like that OpenAI defaults to a model I can't pick; I have to select it each time, and even when I use a special URL it will change after the first request.
I am having a hard time deciding which models to use, besides a random mix of o3-mini-high, o1, Sonnet 3.5 and Gemini 2 Flash.
-
So there aren't any trustworthy benchmarks I can currently use to evaluate them? That, in combination with my personal anecdotes, is how I have been evaluating them.
I was pretty impressed with Deepseek R1.
I used their app, but not for anything sensitive. I don't like that OpenAI defaults to a model I can't pick; I have to select it each time, and even when I use a special URL it will change after the first request.
I am having a hard time deciding which models to use, besides a random mix of o3-mini-high, o1, Sonnet 3.5 and Gemini 2 Flash.
Heh, only obscure ones that they can't game, and only if they fit your use case. One example is the ones in EQ bench: https://eqbench.com/
…And again, the best mix of models depends on your use case.
I can suggest using something like Open Web UI with APIs instead of native apps. It gives you a lot more control, more powerful tooling to work with, and the ability to easily select and switch between models.
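To illustrate why the API route makes mixing models so easy: against any OpenAI-compatible endpoint, switching models is just a different `model` string in an otherwise identical request. A hedged sketch; the task-to-model mapping below is purely an example, and the names must match whatever your server or provider lists under its models endpoint:

```python
# Route each task to a different model behind one OpenAI-compatible API.
# These model names are placeholders; check what your endpoint serves.
MODEL_FOR_TASK = {
    "summarize": "deepseek-r1",
    "code": "llama-3.3-70b",
    "long_context": "gemini-2.0-flash",
}

def chat_payload(task: str, prompt: str) -> dict:
    """Same request shape for every task; only the model name changes."""
    return {
        "model": MODEL_FOR_TASK[task],
        "messages": [{"role": "user", "content": prompt}],
    }

print(chat_payload("summarize", "Summarize: ...")["model"])  # deepseek-r1
print(chat_payload("code", "Write a function ...")["model"])  # llama-3.3-70b
```

Front-ends like Open Web UI do essentially this for you, with a dropdown instead of a dict.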