AI chatbots unable to accurately summarise news, BBC finds
-
This post did not contain any content.
I recently had one chatbot refuse to answer a couple of questions, and another delete my question after warning me that it was verging on breaking its rules... never happened before, thought it was interesting.
-
Great for turning complex into simple.
Bad for turning simple into complex.
I think my largest gripe with it is that it can't actually do anything. It can just tell you about stuff.
I can ask it how to change the desktop background on my computer and it will 100% be able to tell me, but if you then prompt it to change the background itself, it won't be able to. It has zero ability to interact with the computer; this is even the case with AI run locally.
It can't move the mouse around and it can't send keyboard commands.
-
I can't stand the corporate doublethink.
Despite the mountains of evidence that AI is not capable of something even as basic as reading an article and telling you what it's about, it's still apparently going to replace humans. How do they come to that conclusion?
The world won't be destroyed by AI; it will be destroyed by idiot venture capitalist types who reckon that AI is the next big thing. Fire everyone, replace it all with AI; then nothing will work and nobody will be able to buy anything because nobody has a job.
Cue global economic collapse.
It's a race, and bullshitting brings venture capital and therefore an advantage.
99.9% of AI companies will go belly up when investors start asking for results.
-
"If you try hard you might find arguments for my side"
What kind of meta-argument is that supposed to be?
If you read what people write, you will understand what they're trying to tell you. Shocking concept, I know. It's much easier to imagine someone in your head, paint him as a soyjack and yourself as a chadjack, and epically win an argument.
-
I think my largest gripe with it is that it can't actually do anything. It can just tell you about stuff.
I can ask it how to change the desktop background on my computer and it will 100% be able to tell me, but if you then prompt it to change the background itself, it won't be able to. It has zero ability to interact with the computer; this is even the case with AI run locally.
It can't move the mouse around and it can't send keyboard commands.
Um… yea? It's not supposed to? Let's ignore how dangerous and foolish it would be to give LLMs admin control of a system. The thing that prevents it from doing that is, well, that the LLM has no mechanism to do it. The best it could do is ask you to open a command line and give you some code to put in. It's kinda like asking Siri to preheat your oven: it doesn't have access to your oven's systems.
You COULD get a digital-only stove, and the LLM could be changed to let it reach outside itself, but it's not there yet, and with how much Siri misinterprets things, there would be a lot more fires.
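For what it's worth, the "give you some code to put in" route for the desktop-background example above looks roughly like this. A minimal sketch, assuming a GNOME desktop (where the background is a standard gsettings key); the image path is made up, and the point stands: the model can only print something like this, a human still has to run it.

```python
# Sketch: the kind of snippet a chatbot might hand back for "change my wallpaper".
# Assumes a GNOME desktop, where the background is the standard
# org.gnome.desktop.background gsettings key; the image path is hypothetical.
# The model can only produce this text; someone still has to paste it in and run it.
import subprocess
from pathlib import Path

wallpaper = Path.home() / "Pictures" / "wallpaper.jpg"  # hypothetical image

subprocess.run(
    ["gsettings", "set", "org.gnome.desktop.background",
     "picture-uri", wallpaper.as_uri()],
    check=True,
)
```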
-
If they think AI is working for them, then they can. If you think AI is an effective tool for any profession, you are a clown. If my son's preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked it what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.
AI "works" because you're asking questions you don't know the answers to, and it's just putting words together so that they make sense, without regard to accuracy. It's a hard limit of "AI" that we've hit. It won't get better in our lifetimes.
Anyone blindly saying a tool is ineffective for every situation that exists in the world is a tool themselves.
-
Treat LLMs like a super knowledgeable, enthusiastic, arrogant, unimaginative intern.
I noticed that. When I ask it about things that I am knowledgeable about, or simply wish to troubleshoot, I often find myself having to correct it. This does make me hesitant to follow the instructions it gives on something I DON'T know much about.
-
Could you tell me what you use it for because I legitimately don't understand what I'm supposed to find helpful about the thing.
We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I'm heading up IT, so I'm supposed to be able to come up with some kind of answer, and yet I have nothing. Even putting aside the fact that it probably doesn't work as advertised, I still can't really think of a use for it.
The main problem is it won't be able to operate our ancient and convoluted ticketing system, so it can't actually help.
Everyone I've ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.
I am a creative writer (as in, I write stories and stuff), or at least I used to be. Sometimes talking to ChatGPT about ideas for writing can be interesting, but other times it is kinda annoying, since I am more into fine-tuning what I have than having it inundate me with ideas that I don't find particularly interesting.
-
Anyone blindly saying a tool is ineffective for every situation that exists in the world is a tool themselves.
Lame platitude
-
If you read what people write, you will understand what they're trying to tell you. Shocking concept, I know. It's much easier to imagine someone in your head, paint him as a soyjack and yourself as a chadjack, and epically win an argument.
Wrong thread?
-
This post did not contain any content.
I'm pretty sure that every user of Apple Intelligence could've told you that. If AI is good at anything, it isn't things that require nuance and factual accuracy.
-
I noticed that. When I ask it about things that I am knowledgeable about, or simply wish to troubleshoot, I often find myself having to correct it. This does make me hesitant to follow the instructions it gives on something I DON'T know much about.
Oh yes. The LLM will lie to you, confidently.
-
That's why I avoid them like the plague. I've even changed almost every platform I'm using to get away from the AI-pocalypse.
No better time to get into self hosting!
-
That's some weird gatekeeping. Why stop there? Whoever is using a linter is obviously too stupid to write clean code right off the bat. Syntax highlighting is for noobs.
I wholeheartedly dislike people who think they need to define some arcane rules for how a task is achieved instead of just looking at the output.
Accept that you have probably already merged code that was generated by AI, and that it's totally fine as long as the tests are passing and it fits the architecture.
You're supposed to gatekeep code. There is nothing wrong with gatekeeping things that aren't hobbies.
If someone can't explain every change they're making and why they chose to do it that way, they're getting denied. The bar is low.
-
Oh yes. The LLM will lie to you, confidently.
Exactly. I think this is a good barometer for gauging whether or not you can trust it. Ask it about things you know you're good at or knowledgeable about. If it is giving good information, the type you would give out, then it is probably OK. If it is bullshitting you or making you go 'uhh, no, actually...', then you need to do more old-school research.
-
It's a race, and bullshitting brings venture capital and therefore an advantage.
99.9% of AI companies will go belly up when investors start asking for results.
Yeah, seriously, just look at Sam Bankman-Fried and that Theranos dipshit. Both bullshitted their way into millions. The only difference is that Altman and Musk's bubbles haven't popped yet.
-
Um… yea? It's not supposed to? Let's ignore how dangerous and foolish it would be to give LLMs admin control of a system. The thing that prevents it from doing that is, well, that the LLM has no mechanism to do it. The best it could do is ask you to open a command line and give you some code to put in. It's kinda like asking Siri to preheat your oven: it doesn't have access to your oven's systems.
You COULD get a digital-only stove, and the LLM could be changed to let it reach outside itself, but it's not there yet, and with how much Siri misinterprets things, there would be a lot more fires.
It wouldn't have administrative access. You don't need admin access to use a computer system; you need admin access to configure stuff, but there's no reason for the AI to have that.
Anyway, if AI is going to be useful to businesses, it needs to be able to interface with their legacy applications.
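For anyone wondering what "interface with their legacy applications" tends to mean in practice, here's a minimal sketch of the usual shape: the model never gets a shell or a mouse, it can only propose calls to a short whitelist of functions that run under an ordinary, non-admin service account. Everything here is hypothetical (the TICKET_API_URL, the endpoints, the two tool names); it's an illustration of tool calling in general, not any particular vendor's API.

```python
# Sketch of tool calling against a legacy system: the model can only request
# one of a few named, narrow actions; a normal (non-admin) service account
# actually executes them. All names and endpoints here are hypothetical.
import json
import urllib.request

TICKET_API_URL = "http://legacy-ticketing.internal/api"  # hypothetical endpoint


def get_ticket(ticket_id: str) -> dict:
    """Read-only lookup against the (hypothetical) ticketing API."""
    with urllib.request.urlopen(f"{TICKET_API_URL}/tickets/{ticket_id}") as resp:
        return json.load(resp)


def add_comment(ticket_id: str, text: str) -> dict:
    """Append a comment to a ticket: a narrow write, nothing close to admin."""
    req = urllib.request.Request(
        f"{TICKET_API_URL}/tickets/{ticket_id}/comments",
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# The only actions the model is allowed to ask for, by name.
TOOLS = {"get_ticket": get_ticket, "add_comment": add_comment}


def run_tool_call(call_json: str) -> dict:
    """Run a model-proposed call like {"tool": "get_ticket", "args": {"ticket_id": "123"}}."""
    call = json.loads(call_json)
    tool = TOOLS[call["tool"]]  # anything outside the whitelist raises KeyError
    return tool(**call["args"])
```

The whitelist is the important bit: the model gets a handful of verbs you chose to expose, not the keyboard and mouse.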
-
Some examples of inaccuracies found by the BBC included:
- Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
- ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
- Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" *and described Israel's actions as "aggressive"*
Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”
I did not even read that far, but wow, the BBC really went there openly.
-
Could you tell me what you use it for because I legitimately don't understand what I'm supposed to find helpful about the thing.
We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I'm heading up IT, so I'm supposed to be able to come up with some kind of answer, and yet I have nothing. Even putting aside the fact that it probably doesn't work as advertised, I still can't really think of a use for it.
The main problem is it won't be able to operate our ancient and convoluted ticketing system, so it can't actually help.
Everyone I've ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.
@echodot @Redex68 Off the top of my head: script generation, making content more readable, dictating a brain dump while walking and having it spit out a cohesive summary.
It's all about the prompt you put in: shit in, shit out. And making sure you check/understand what it spits out, and that sometimes it's garbage.
-