AI chatbots unable to accurately summarise news, BBC finds
-
-
Which is hilarious, because most of the shit out there today seems to be written by them.
-
-
-
Especially after the open source release of DeepSeek... What...?
-
-
Rare that people argue for LLMs like that here, usually it is the same kind of "uga suga, AI bad, did not already solve world hunger".
-
This is really a non-issue, as the LLM should have no problem setting a reasonable value itself. User wants a summary? Obviously maximum factual. He wants gaming ideas? Etc.
-
Funny, I find the BBC unable to accurately convey the news
-
Why, were they trained using MAIN STREAM NEWS? That could explain it.
-
Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.
-
-
Temperature isn't even "creativity" per se, it's more a band-aid to patch looping and dryness in long responses.
-
Lower temperature works much better with modern sampling algorithms, e.g. MinP, DRY, maybe dynamic temperature like Mirostat and such. Ideally, structured output, too. Unfortunately, corporate APIs usually don't offer this.
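For anyone curious what MinP actually does, here is a minimal sketch (the function name and defaults are my own, not from any particular inference library): tokens whose probability falls below some fraction of the top token's probability are dropped, then the rest are renormalized and sampled.

```python
import math
import random

def min_p_sample(logits, min_p=0.1, temperature=0.7):
    """Sample a token index: drop tokens whose probability is below
    min_p * (probability of the most likely token), then sample from
    the renormalized remainder."""
    # Softmax with temperature applied to the raw logits.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep only tokens above the dynamic threshold.
    threshold = min_p * max(probs)
    kept = [(i, p) for i, p in enumerate(probs) if p >= threshold]

    # Sample from the renormalized survivors.
    r = random.random() * sum(p for _, p in kept)
    acc = 0.0
    for i, p in kept:
        acc += p
        if acc >= r:
            return i
    return kept[-1][0]
```

Because the cutoff scales with the top token's probability, a confident distribution prunes aggressively while a flat one keeps more options, which is why it pairs well with lower temperatures.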
-
It can be mitigated with finetuning against looping/repetition/slop, but most models are the opposite, massively overtuned on their own output which "inbreeds" the model.
-
And yes, domain specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions and such each with their own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but... most UIs don't even do this for some reason?
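Something like the per-task tuning described above could be as simple as a preset table in the UI layer; the task names and numbers below are purely illustrative, not from any real API:

```python
# Hypothetical per-task sampling presets. Values are illustrative guesses,
# not recommendations from any model vendor.
TASK_PRESETS = {
    "summarize": {"temperature": 0.2, "min_p": 0.1},   # maximally factual
    "code":      {"temperature": 0.3, "min_p": 0.1},   # precise but not frozen
    "creative":  {"temperature": 0.9, "min_p": 0.05},  # looser, more variety
}

DEFAULT = {"temperature": 0.7, "min_p": 0.1}

def settings_for(task: str) -> dict:
    """Return sampling settings for a task type, falling back to a
    middle-of-the-road default for anything unrecognized."""
    return TASK_PRESETS.get(task, DEFAULT)
```

Each "prompt box" in the UI would just pass its task name here, so the user never sees a raw temperature knob.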
What I am getting at is this is not a problem companies seem interested in solving.
-
-
The issue for RPGs is that they have such "small" context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later.
Although, similar to how DeepSeek uses two stages ("how would you solve this problem", then "solve this problem following this train of thought"), you could have an input of recent conversations plus a private/unseen "notebook" which is modified/appended to based on recent events. That would need a whole new model to be done properly, which likely wouldn't be profitable short term. Still, I imagine the same infrastructure could be used for any LLM usage where fine details over a long period are more important than specific wording, including factual things.
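The notebook idea could be wired up with plain orchestration code, no new model required for a rough version; here `summarize` stands in for a second LLM call and everything else (names, window size) is a made-up sketch:

```python
# Sketch of the "private notebook" memory scheme: recent turns stay
# verbatim, older turns get distilled into an append-only notebook.
RECENT_WINDOW = 6  # number of turns kept word-for-word (arbitrary choice)

def update_memory(notebook: str, turns: list[str], summarize) -> tuple[str, list[str]]:
    """Fold turns that overflow the recent window into the notebook.
    `summarize` is a stand-in for a second model call that extracts
    the facts worth remembering."""
    if len(turns) <= RECENT_WINDOW:
        return notebook, turns
    overflow, recent = turns[:-RECENT_WINDOW], turns[-RECENT_WINDOW:]
    notebook = (notebook + "\n" + summarize(overflow)).strip()
    return notebook, recent

def build_prompt(notebook: str, recent: list[str], user_msg: str) -> str:
    """Assemble the context the player-facing model actually sees."""
    return (f"[NOTEBOOK]\n{notebook}\n[RECENT]\n"
            + "\n".join(recent) + f"\nUser: {user_msg}")
```

The notebook never reaches the player, so it can hold blunt factual notes ("the innkeeper is secretly the villain") without spoiling anything in the visible text.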
-
I've found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7b model running on a home PC, or a model from two years ago, than a medium commercial model, in how it completely ignores what you ask and just latches onto keywords. It's almost like they've played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something.
-
Lemmy is understandably sympathetic to self-hosted LLMs, but I get chewed out or even banned literally anywhere else.
In this fandom I'm in, there used to be enthusiasm for a "community enhancement" of a show since the official release looks terrible. Years later, I don't even mention the word "AI," just the idea of restoration (now that we have the tools to do it), and I get bombed and threadlocked.
-
Yes, I think it would be naive to expect humans to design something capable of what humans are not.
-
The problem is that the "train of thought" is also hallucinations. It might make the model better with more compute, but it's diminishing returns.
RPGs can use LLMs because they're not critical. If the LLM spews out nonsense you don't like, you just ask it to redo it, because it's all subjective.
-
For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to "categorize" text... which no one has really worked on.
I don't think the corporate APIs or UIs even do this.
You are not wrong, but it's just not done for some reason.
-
Gemini Flash Thinking from earlier this year was very good for its speed/price, but it regressed a ton.
Gemini 1.5 is literally better than the new 2.0 in some of my tests, especially long-context ones.
-
I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn't need that thing included.
Sorry for being vague, I just didn't want to post my home town on here