AI chatbots unable to accurately summarise news, BBC finds
-
Not only techbros though. Most of my friends are not into computers, but they all think AI is magical and will change the whole world for the better. I always ask, "how could a black box that throws up random crap and runs on the computers of big companies out of the country change anything?" They don't know what to say, but they still believe something will happen and a program can magically become sentient. Sometimes they can be fucking dumb, but I still love them.
The more you know what you are doing, the less impressed you are by AI. Calling people that trust AI idiots is not a good start to a conversation, though.
-
This is my personal take. As long as you're careful and thoughtful whenever using them, they can be extremely useful.
Extremely?
-
What temperature and sampling settings? Which models?
I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.
I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.
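For anyone wondering what "low temperature" means in practice: most local servers (llama.cpp, Ollama, LM Studio, etc.) expose an OpenAI-compatible endpoint, so you can set the sampling parameters yourself instead of taking whatever a chat UI defaults to. A rough sketch, where the model name and the localhost port are placeholder assumptions, not recommendations:

```python
import json
import urllib.request

def build_summary_request(article_text, model="qwq-32b", temperature=0.2, top_p=0.9):
    """Build an OpenAI-compatible chat payload with explicit sampling settings.

    Low temperature keeps the model close to the source text, which is what
    you want for summarization; chat UIs often default much higher.
    """
    return {
        "model": model,
        "temperature": temperature,
        "top_p": top_p,
        "messages": [
            {"role": "system",
             "content": "Summarize the article. Do not add facts that are not in it."},
            {"role": "user", "content": article_text},
        ],
    }

def summarize(article_text, base_url="http://localhost:8080/v1"):
    """Send the request to a local OpenAI-compatible server (llama.cpp, Ollama, ...)."""
    payload = json.dumps(build_summary_request(article_text)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The point is just that when you own the stack, temperature and top_p are parameters you pass, not settings buried (or hidden entirely) in someone's subscription app.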
I have been pretty impressed by Gemini 2.0 Flash.
It's slightly worse than the very best on the benchmarks I have seen, but it's pretty much instant and incredibly cheap. Maybe a loss leader?
Anyways, which model of the commercial ones do you consider to be good?
-
I have been pretty impressed by Gemini 2.0 Flash.
It's slightly worse than the very best on the benchmarks I have seen, but it's pretty much instant and incredibly cheap. Maybe a loss leader?
Anyways, which model of the commercial ones do you consider to be good?
benchmarks
Benchmarks are so gamed, even Chatbot Arena is kinda iffy. TBH you have to test them with your prompts yourself.
Honestly I am getting incredible/creative responses from Deepseek R1, the hype is real. Tencent's API is a bit under-rated. If llama 3.3 70B is smart enough for you, Cerebras API is super fast.
MiniMax is ok for long context, but I still tend to lean on Gemini for this.
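"Test them with your prompts yourself" can be as simple as a throwaway harness: a fixed set of your own prompts, each with a few facts the answer must contain, run against every model you're considering. A minimal sketch, where the keyword-matching scoring rule is just one crude way to do it:

```python
def score_output(output, required_facts):
    """Fraction of required facts that actually appear in the model's output."""
    text = output.lower()
    hits = sum(1 for fact in required_facts if fact.lower() in text)
    return hits / len(required_facts)

def run_eval(ask_model, cases):
    """Average score across your own test cases.

    ask_model: callable prompt -> output, e.g. a thin wrapper around any API.
    cases: list of (prompt, required_facts) pairs drawn from your real use case.
    """
    scores = [score_output(ask_model(prompt), facts) for prompt, facts in cases]
    return sum(scores) / len(scores)
```

Swap `ask_model` between providers and compare the averages. It's crude, but unlike public leaderboards, nobody can train on your private prompt set.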
-
But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.
Why do you say that? I have had no reason to doubt their reporting
-
Why do you say that? I have had no reason to doubt their reporting
Look at their reporting of the Employment Tribunal for the nurse from Fife who was sacked for abusing a doctor. They refused to correctly gender the doctor in every article, to the point where the only pronoun that appears is the sacked transphobe referring to her as "him". They also very much paint it like it is Dr Upton on trial and not Ms Peggie.
-
benchmarks
Benchmarks are so gamed, even Chatbot Arena is kinda iffy. TBH you have to test them with your prompts yourself.
Honestly I am getting incredible/creative responses from Deepseek R1, the hype is real. Tencent's API is a bit under-rated. If llama 3.3 70B is smart enough for you, Cerebras API is super fast.
MiniMax is ok for long context, but I still tend to lean on Gemini for this.
So there aren't any trustworthy benchmarks I can currently use to evaluate? That, in combination with my personal anecdotes, is how I have been evaluating them.
I was pretty impressed with Deepseek R1.
I used their app, but not for anything sensitive. I don't like that OpenAI defaults to a model I can't pick. I have to select it each time, and even when I use a special URL it will change after the first request.
I am having a hard time deciding which models to use besides a random mix between o3-mini-high, o1, Sonnet 3.5 and Gemini 2 Flash
-
So there aren't any trustworthy benchmarks I can currently use to evaluate? That, in combination with my personal anecdotes, is how I have been evaluating them.
I was pretty impressed with Deepseek R1.
I used their app, but not for anything sensitive. I don't like that OpenAI defaults to a model I can't pick. I have to select it each time, and even when I use a special URL it will change after the first request.
I am having a hard time deciding which models to use besides a random mix between o3-mini-high, o1, Sonnet 3.5 and Gemini 2 Flash
Heh, only obscure ones that they can't game, and only if they fit your use case. One example is the ones in EQ bench: https://eqbench.com/
…And again, the best mix of models depends on your use case.
I can suggest using something like Open Web UI with APIs instead of native apps. It gives you a lot more control, more powerful tooling to work with, and the ability to easily select and switch between models.
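One nice side effect of going through an API frontend instead of a native app is that model choice becomes an explicit parameter rather than a buried setting. A hedged sketch of the kind of per-task routing that enables; the model names and task labels here are examples, not recommendations:

```python
# Hypothetical per-task routing table; fill in whatever works for your prompts.
MODEL_FOR_TASK = {
    "summarize": "deepseek-chat",
    "code": "llama-3.3-70b",
    "long-context": "gemini-2.0-flash",
}

def pick_model(task, default="deepseek-chat"):
    """Return the configured model for a task, falling back to a default.

    With an OpenAI-compatible client, the returned name just gets passed as
    the `model` field of the request, so switching providers is one edit.
    """
    return MODEL_FOR_TASK.get(task, default)
```

Frontends like Open Web UI do essentially this for you interactively, which is the "easily select and switch between models" part.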
-
So many arguments... Wow!
Ask a forest-burning machine to read the surrounding threads for you, then you will find the arguments you're looking for. You have at least an 80% chance it will produce something coherent, and an unknown chance of there being something correct, but hey, reading is hard, amirite?
-
This post did not contain any content.
That's why I avoid them like the plague. I've even changed almost every platform I'm using to get away from the AI-pocalypse.
-
Why do you say that? I have had no reason to doubt their reporting
It's a "how the mighty have fallen" kind of thing. They are well into the click-bait farm mentality now - have been for a while.
It's present on the news sites, but far worse on things where they know they steer opinion and discourse.
They used to ensure political parties had coverage in line with their support, but for about 10 years prior to Brexit they gave Farage and his jackasses hugely disproportionate coverage, like 20x more than their base warranted. This was at a time when the SNP were doing very well yet were frequently seen less than in 2006 to 2009.
Current reporting is heavily spun. They definitely aren't the worst in the world, but they are also definitely not the bastion of unbiased news I grew up with.
Until relatively recently you could see the deterioration by flipping to the world service, but that's fallen into line now.
If you have the time to follow independent journalists, the problem becomes clearer; if not, look at output from parody news sites. It's telling that Private Eye and Newsthump manage the criticism that the BBC can't seem to get to.
Go look at the bylinetimes.com front page, grab a random story, and compare coverage with the BBC. One of these is crowd-funded reporters and the other a national news site with great funding and legal obligations to report in the public interest.
I don't hate them, they just need to be better.
-
I just tried it on DeepSeek; it did it fine and gave the source for everything it mentioned as well.
Do you mean you rigorously went through a hundred articles, asking DeepSeek to summarise them and then got relevant experts in the subject of the articles to rate the quality of answers? Could you tell us what percentage of the summaries that were found to introduce errors then? Literally 0?
Or do you mean that you tried having DeepSeek summarise a couple of articles, didn't see anything obviously problematic, and figured it is doing fine? That would be replacing rigorous research and journalism by humans with a couple of quick AI prompts, which is the core of the issue that the article is getting at. Because if so, please reconsider how you evaluate (or trust others' evaluations of) information tools which might help, or help destroy, democracy.
-
Funny, I find the BBC unable to accurately convey the news
Yeah, haha
Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" and described Israel's actions as "aggressive"
Perplexity did fail to summarize the article, but it did correct it.
-
I work in tech and can confirm that the vast majority of engineers "dislike AI" and are disillusioned with AI tools. Even ones that work on AI/ML tools. It's fewer and fewer people the higher up the pay scale you go.
There isn't a single complex coding problem an AI can solve. If you don't understand something and it helps you write it I'll close the MR and delete your code since it's worthless. You have to understand what you write. I do not care if it works. You have to understand every line.
"But I use it just fine and I'm an..."
Then you're not an engineer and you shouldn't have a job. You lack the intelligence, dedication, and knowledge needed to be one. You are a detriment to your team and company.
That's some weird gatekeeping. Why stop there? Whoever is using a linter is obviously too stupid to write clean code right off the bat. Syntax highlighting is for noobs.
I wholeheartedly dislike people who think they need to define some arcane rules for how a task is achieved instead of just looking at the output.
Accept that you probably already have merged code that was generated by AI and it's totally fine as long as tests are passing and it fits the architecture.
-
BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline
Some examples of inaccuracies found by the BBC included:
Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" *and described Israel's actions as "aggressive"*
-
Ask a forest-burning machine to read the surrounding threads for you, then you will find the arguments you're looking for. You have at least an 80% chance it will produce something coherent, and an unknown chance of there being something correct, but hey, reading is hard, amirite?
"If you try hard you might find arguments for my side"
What kind of meta-argument is that supposed to be?
-
That's why I avoid them like the plague. I've even changed almost every platform I'm using to get away from the AI-pocalypse.
I can't stand the corporate double think.
Despite the mountains of evidence that AI is not capable of something even as basic as reading an article and telling you what it is about, it's still apparently going to replace humans. How do they come to that conclusion?
The world won't be destroyed by AI, It will be destroyed by idiot venture capitalist types who reckon that AI is the next big thing. Fire everyone, replace it all with AI; then nothing will work and nobody will be able to buy anything because nobody has a job.
Cue global economic collapse.
-
The more you know what you are doing, the less impressed you are by AI. Calling people that trust AI idiots is not a good start to a conversation, though.
It's not like they're flat earthers; they are not conspiracy theorists. They have been told by the media, businesses, and every goddamn YouTuber that AI is the future.
I don't think they are idiots, I just think they are being lied to and are a bit gullible. But it's not worth having the argument with them; AI is going to fail on its own, it doesn't matter what they think.
-
This is my personal take. As long as you're careful and thoughtful whenever using them, they can be extremely useful.
Could you tell me what you use it for because I legitimately don't understand what I'm supposed to find helpful about the thing.
We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I'm heading IT, so I'm supposed to be able to come up with some kind of answer, and yet I have nothing. Even putting aside the fact that it probably doesn't work as advertised, I still can't really think of a use for it.
The main problem is it won't be able to operate our ancient and convoluted ticketing system, so it can't actually help.
Everyone I've ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.
-
I learned that AI chatbots aren't necessarily trustworthy in everything. In fact, if you aren't taking their shit with a grain of salt, you're doing something very wrong.
Treat LLMs like a super knowledgeable, enthusiastic, arrogant, unimaginative intern.