AI chatbots unable to accurately summarise news, BBC finds
-
BBC finds, lol. No, we already knew about that
-
-
What temperature and sampling settings? Which models?
I've noticed that the AI giants seem to be encouraging "AI ignorance": they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.
I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
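For what it's worth, here's a minimal sketch of what I mean, assuming a local OpenAI-compatible server (llama.cpp, Ollama, vLLM all expose this shape); the URL, model name, and numbers are placeholders for whatever you run:

```python
# Summarization against a local OpenAI-compatible endpoint, with the
# sampling pinned low instead of whatever the UI defaults to.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

def summarize(text: str) -> str:
    resp = client.chat.completions.create(
        model="qwq-32b",  # assumption: whatever model your server loaded
        messages=[
            {"role": "system", "content": "Summarize the article faithfully. Do not add facts."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,  # low temp: stick to high-probability tokens
        top_p=0.9,
    )
    return resp.choices[0].message.content
```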
My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.
-
Which is hilarious, because most of the shit out there today seems to be written by them.
-
Zephyra just came out, seems sick:
There are also some "native" TTS LLMs like GLM 9B, which "capture" more information in the output than a pure text pipeline would.
-
Nonsense, I use it a ton for science and engineering, it saves me SO much time!
-
Especially after the open source release of DeepSeek... What...?
-
I don’t think giving the temperature knob to end users is the answer.
Turning it to max expecting maximum correctness and minimum creativity won't work in an intuitive way.
Sure, turning it up from the balanced middle value will make the output more "creative" and unexpected, and this is useful for idea generation, etc. But a knob that goes from "good" to "sort of off the rails, but in a good way" isn't a great user experience for most people.
Most people understand this stuff as intended to be intelligent. Correct. Etc. Or at least they understand that's the goal. Once you give them a knob to adjust the "intelligence level," you'll get more pushback on these things not meeting their goals. "I clearly had it in factual/correct/intelligent mode, not creativity mode. I don't understand why it left out these facts and invented a backstory for this small thing it mentioned…"
Not everyone is an engineer. Temp is an obtuse thing.
-
Rare that people argue for LLMs like that here; usually it's the same kind of "uga suga, AI bad, did not already solve world hunger".
-
This is really a non-issue, as the LLM itself should have no problem setting a reasonable value. User wants a summary? Obviously maximum factual. They want gaming ideas? Etc.
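To illustrate (a hedged sketch, not any vendor's actual implementation; the model name and preset numbers are made up): a cheap first pass classifies the request, then the real call uses a matching preset.

```python
# Two-pass self-routing: the model labels the request, the label picks
# the sampling settings. Preset values are illustrative guesses.
PRESETS = {
    "factual":  {"temperature": 0.2, "top_p": 0.9},
    "creative": {"temperature": 0.9, "top_p": 0.95},
}

def route(client, user_msg: str) -> dict:
    label = client.chat.completions.create(
        model="local-model",  # hypothetical model name
        messages=[{"role": "user", "content":
            "Answer with one word, factual or creative: is this request "
            f"asking for facts or for ideas?\n\n{user_msg}"}],
        temperature=0.0,
        max_tokens=2,
    ).choices[0].message.content.strip().lower()
    return PRESETS.get(label, PRESETS["factual"])

def answer(client, user_msg: str) -> str:
    params = route(client, user_msg)
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": user_msg}],
        **params,
    )
    return resp.choices[0].message.content
```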
-
Funny, I find the BBC unable to accurately convey the news
-
Why, were they trained using MAINSTREAM NEWS? That could explain it.
-
Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.
-
-
Temperature isn't even "creativity" per se; it's more a band-aid to patch looping and dryness in long responses.
-
Lower temperature works much better with modern sampling algorithms, e.g., MinP, DRY, maybe dynamic temperature like Mirostat and such. Ideally, structured output, too. Unfortunately, corporate APIs usually don't offer this.
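MinP is simple enough to sketch in a few lines. The cutoff and temperature values below are illustrative, not anyone's defaults:

```python
import numpy as np

def min_p_sample(logits: np.ndarray, min_p: float = 0.1,
                 temperature: float = 0.7) -> int:
    """MinP: drop tokens whose probability falls below min_p times the
    probability of the single most likely token, then sample the rest."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()    # the MinP cutoff
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()                   # renormalize the survivors
    return int(np.random.choice(len(probs), p=probs))
```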
-
It can be mitigated with finetuning against looping/repetition/slop, but most models are the opposite: massively overtuned on their own output, which "inbreeds" the model.
-
And yes, domain-specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions, and such, each with its own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but... most UIs don't even do this for some reason?
What I'm getting at is that this isn't a problem companies seem interested in solving.
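Something like this per-task preset table is all it would take; every name and number here is made up for illustration, not any UI's real config:

```python
# One "prompt box" per task, each with its own system prompt, model,
# and sampling settings. All values are hypothetical examples.
TASK_PRESETS = {
    "code":     {"model": "coder-32b",   "temperature": 0.1,
                 "system": "You are a precise coding assistant."},
    "summary":  {"model": "general-32b", "temperature": 0.2,
                 "system": "Summarize faithfully; never invent details."},
    "creative": {"model": "general-32b", "temperature": 1.0,
                 "system": "Brainstorm freely; variety over precision."},
}

def build_request(task: str, user_msg: str) -> dict:
    p = TASK_PRESETS[task]
    return {
        "model": p["model"],
        "temperature": p["temperature"],
        "messages": [{"role": "system", "content": p["system"]},
                     {"role": "user", "content": user_msg}],
    }
```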
-
-
The issue for RPGs is that they have such "small" context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later.
Although, similar to how DeepSeek uses two stages ("how would you solve this problem," then "solve this problem following this train of thought"), you could feed in the recent conversation plus a private/unseen "notebook" that gets modified/appended based on recent events. That would need a whole new model to be done properly, which likely wouldn't be profitable short term. Although I imagine the same infrastructure could be used for any LLM usage where fine details over a long period matter more than specific wording, including factual things.
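A rough sketch of the notebook idea, assuming an OpenAI-compatible client; the prompts, model name, and window size are just guesses:

```python
# After each exchange, a second call updates a hidden running note of
# facts worth remembering; only the notebook plus recent turns go into
# the next prompt. Everything here is hypothetical, not a real product.
RECENT_TURNS = 8  # assumption: how many raw turns to keep verbatim

def update_notebook(client, notebook: str, last_turn: str) -> str:
    return client.chat.completions.create(
        model="local-model",  # hypothetical model name
        messages=[{"role": "user", "content":
            "You keep a terse GM notebook of facts, items, and NPCs that "
            f"may matter later.\n\nNotebook so far:\n{notebook}\n\n"
            f"Latest exchange:\n{last_turn}\n\n"
            "Return the updated notebook only."}],
        temperature=0.2,
    ).choices[0].message.content

def build_context(notebook: str, history: list[dict]) -> list[dict]:
    system = {"role": "system", "content": f"GM notebook (private):\n{notebook}"}
    return [system] + history[-RECENT_TURNS:]
```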
-
I've found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a medium commercial model, in how it completely ignores what you ask and just latches on to keywords... It's almost like they've played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something.
-
Lemmy is understandably sympathetic to self-hosted LLMs, but I get chewed out or even banned literally anywhere else.
In this fandom I'm in, there used to be enthusiasm for a "community enhancement" of a show, since the official release looked terrible. Years later, I don't even mention the word "AI," just the idea of restoration (now that we have the tools to do it), and I get bombed and threadlocked.
-
Yes, I think it would be naive to expect humans to design something capable of what humans are not.
-
The problem is that the "train of thought" is also hallucinations. It might make the model better with more compute, but it's diminishing returns.
RPGs can use LLMs because they're not critical. If the LLM spews out nonsense you don't like, you just ask it to redo, because it's all subjective.
-
For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to "categorize" text... which no one has really worked on.
I don't think the corporate APIs or UIs even do this.
You are not wrong, but it's just not done for some reason.