AI chatbots unable to accurately summarise news, BBC finds
-
Anyone blindly saying a tool is ineffective for every situation that exists in the world is a tool themselves.
Lame platitude
-
If you read what people write, you will understand what they're trying to tell you. Shocking concept, I know. It's much easier to imagine someone in your head, paint him as a soyjack and yourself as a chadjack and epicly win an argument.
Wrong thread?
-
I'm pretty sure that every user of Apple Intelligence could've told you that. If AI is good at anything, it isn't things that require nuance and factual accuracy.
-
I noticed that. When I ask it about things that I'm knowledgeable about, or simply wish to troubleshoot, I often find myself having to correct it. This does make me hesitant to follow its instructions on something I DON'T know much about.
Oh yes. The LLM will lie to you, confidently.
-
That's why I avoid them like the plague. I've even changed almost every platform I'm using to get away from the AI-pocalypse.
No better time to get into self hosting!
-
That's some weird gatekeeping. Why stop there? Whoever is using a linter is obviously too stupid to write clean code right off the bat. Syntax highlighting is for noobs.
I wholeheartedly dislike people who think they need to define some arcane rules for how a task is achieved instead of just looking at the output.
Accept that you probably already have merged code that was generated by AI, and it's totally fine as long as the tests are passing and it fits the architecture.
You're supposed to gatekeep code. There is nothing wrong with gatekeeping things that aren't hobbies.
If someone can't explain every change they're making and why they chose to do it that way, they're getting denied. The bar is low.
-
Oh yes. The LLM will lie to you, confidently.
Exactly. I think this is a good barometer for gauging whether or not you can trust it. Ask it about things you know you're good at or knowledgeable about. If it gives good information, the kind you would give out, then it's probably OK. If it's bullshitting you or making you go 'uhh, no, actually...', then you need to do more old-school research.
-
It's a race, and bullshitting brings venture capital and therefore an advantage.
99.9% of AI companies will go belly up when investors start asking for results.
Yeah seriously just look at Sam Bankman-Fried and that Theranos dipshit. Both bullshitted their way into millions. Only difference is that Altman and Musk's bubbles haven't popped yet.
-
Um… yeah? It's not supposed to? Let's ignore how dangerous and foolish it would be to give an LLM admin control of a system. The thing that prevents it from doing that is, well, that the LLM has no mechanism to do it. The best it could do is ask you to open a command line and give you some code to paste in. It's kinda like asking Siri to preheat your oven: it doesn't have access to your oven's systems.
You COULD get a digital-only stove, and the LLM could be changed to let it reach outside itself, but it's not there yet, and with how much Siri misinterprets things, there would be a lot more fires.
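To make that concrete, here's a hand-wavy sketch of where the boundary sits (`ask_llm` is just a stand-in, not any real API):

```python
# Hand-wavy sketch: a chat model only ever returns text. "ask_llm" stands in
# for whatever chat API is actually being called; nothing here gives the model
# any access to the machine it's being asked about.

def ask_llm(prompt: str) -> str:
    # Imagine an HTTP call to a chat endpoint here: text goes out, text comes back.
    return "Run this: sudo systemctl restart nginx"

suggestion = ask_llm("My web server is down, how do I fix it?")

# The "answer" is just a string. It only touches the system if a human (or the
# wrapping application) deliberately chooses to execute it.
print("The model suggests:", suggestion)
# subprocess.run([...])  # <- this step always lives on our side, never the model's
```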
It wouldn't have administrative access. You don't need admin access to use a computer system; you need admin access to configure stuff, and there's no reason for the AI to have that.
Anyway, if AI is going to be useful to businesses, it needs to be able to interface with their legacy applications.
-
Some examples of inaccuracies found by the BBC included:
- Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
- ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
- Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" *and described Israel's actions as "aggressive"*
Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”
I hadn't even read that far, but wow, the BBC really went there openly.
-
Could you tell me what you use it for? Because I legitimately don't understand what I'm supposed to find helpful about the thing.
We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I'm heading IT, so I'm supposed to be able to come up with some kind of answer, and yet I have nothing. Even putting aside the fact that it probably doesn't work as advertised, I still can't really think of a use for it.
The main problem is it won't be able to operate our ancient and convoluted ticketing system, so it can't actually help.
Everyone I've ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.
@echodot @Redex68 Off the top of my head: script generation, making content more readable, dictating a brain dump while walking and having it spit out a cohesive summary.
It's all about the prompt you put in: shit in, shit out. And making sure you check/understand what it spits out, and that sometimes it's garbage.
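For the brain-dump-to-summary part, it's basically just a script like this (rough sketch assuming a local Ollama server on its default port; the model name, file name and prompt wording are placeholders, and the output still needs a human read-through):

```python
# Rough sketch: send a dictated brain dump to a locally hosted model and get a
# tidied-up summary back. Assumes an Ollama server on its default port; the
# model name, file name and prompt are placeholders.
import requests

def summarise(transcript: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",
            "stream": False,
            "messages": [
                {"role": "system",
                 "content": "Rewrite this rambling dictation as a short, structured summary."},
                {"role": "user", "content": transcript},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    with open("walk_braindump.txt") as f:
        print(summarise(f.read()))
```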
-