AI chatbots unable to accurately summarise news, BBC finds
-
Neither are my parents
-
It's nice that you inform people that they can't tell if something is saving them time or not without knowing what their job is or how they are using a tool.
-
I'm not against attempts at general artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it's something general, what we in fact want is something that thinks like a human.
Agreed. The techbros pretending that the stochastic parrots they've created are general AI annoys me to no end.
While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output to a greater system, analogous to the Wernicke/Broca areas of the brain. It seems like they're trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the giveaway that they've promised this one technique (more or less, I know it's more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know that it's supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large and we're dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together). A rough toy sketch of that last idea is below.
Now that I've used a whole lot of cheap metaphor on someone who casually dropped 'syllogism' into a conversation, I'm feeling like a freshman in a grad-level class. I'll admit I'm nowhere near up to date on specific models and bleeding-edge techniques.
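To make the bridge idea slightly more concrete, here is a minimal toy sketch of what "specialized models that work together" could look like, with the language model used only as the front-end that routes requests to specialist components. Everything in it is a made-up stand-in (the function names, the routing rule, the two "specialists"), not a real architecture or API.

```python
# Toy sketch only: a language front-end that just interprets the request,
# while specialized, deterministic components do the actual work.
# All names here are invented stand-ins, not real models or APIs.

def language_frontend(utterance: str) -> tuple[str, str]:
    """Stand-in for an LLM used purely as a language interface:
    turn free text into a (task, payload) pair for a specialist."""
    if any(ch.isdigit() for ch in utterance):
        return "arithmetic", utterance
    return "lookup", utterance

def arithmetic_specialist(payload: str) -> str:
    """A deterministic specialist: no next-word guessing, just computation."""
    numbers = [float(t) for t in payload.split() if t.replace(".", "", 1).isdigit()]
    return str(sum(numbers))

def lookup_specialist(payload: str) -> str:
    """Stand-in for a retrieval component that would cite its sources."""
    return f"(retrieved answer for {payload!r}, with citation)"

SPECIALISTS = {"arithmetic": arithmetic_specialist, "lookup": lookup_specialist}

def answer(utterance: str) -> str:
    task, payload = language_frontend(utterance)
    return SPECIALISTS[task](payload)

print(answer("add 2 and 40"))           # 42.0
print(answer("who wrote the article"))  # (retrieved answer ..., with citation)
```

The toy logic isn't the point; the shape is: the language model handles language, and anything that has to be correct gets delegated to something verifiable.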
-
But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **
-
This is my personal take. As long as you're careful and thoughtful whenever using them, they can be extremely useful.
-
I just tried it on DeepSeek; it did it fine and gave the source for everything it mentioned as well.
-
ShockedPikachu.svg
-
I work in tech and can confirm that the vast majority of engineers "dislike AI" and are disillusioned with AI tools. Even ones that work on AI/ML tools. It's fewer and fewer people the higher up the pay scale you go.
There isn't a single complex coding problem an AI can solve. If you don't understand something and it helps you write it, I'll close the MR and delete your code, since it's worthless. You have to understand what you write. I do not care if it works. You have to understand every line.
"But I use it just fine and I'm an..."
Then you're not an engineer and you shouldn't have a job. You lack the intelligence, dedication, and knowledge needed to be one. You are a detriment to your team and company.
-
If they think AI is working for them, then he can. If you think AI is an effective tool for any profession, you are a clown. If my son's preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked AI what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.
AI "works" because you're asking questions you don't know the answers to, and it's just putting words together so they make sense, without regard to accuracy. It's a hard limit of "AI" that we've hit. It won't get better in our lifetimes.
-
I guess you don't get the issue. You give the AI some text and ask it to summarize the key points. The AI gives you wrong info in some percentage of those summaries.
There's no point in comparing this to a human, since this is usually done for automation, that is, to handle a lot of people or a large quantity of articles (quick back-of-the-envelope below).
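To put a rough number on the automation angle, here's a tiny back-of-the-envelope sketch. Both the 10% per-summary error rate and the batch size are assumptions picked for illustration, not figures from the BBC study.

```python
# Back-of-the-envelope: a modest per-summary error rate compounds quickly
# once summarization runs unattended over many articles.
# Both numbers below are assumed for illustration, not measured.
per_summary_error_rate = 0.10
articles_per_day = 200

expected_faulty = per_summary_error_rate * articles_per_day
prob_at_least_one_faulty = 1 - (1 - per_summary_error_rate) ** articles_per_day

print(f"Expected faulty summaries per day: {expected_faulty:.0f}")
print(f"Chance of at least one faulty summary: {prob_at_least_one_faulty:.2%}")
```

The point is scale: even a small per-item error rate all but guarantees faulty output somewhere in an unattended batch, and nobody is checking each one.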
-
Surprisingly, yes
-
You know what... now that you say it, it really is just like the anti-Wikipedia stuff.
-
I'm more interested in the technology itself, rather than its current application.
I feel like I am watching a toddler taking her first steps, wondering what she will eventually accomplish in her lifetime. But the loudest voices aren't cheering her on: they're sitting in their recliners, smugly claiming she's useless. She can't even participate in a marathon, let alone compete with actual athletes!
Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.
-
> While not as academically cogent as your response
An elegant way to make someone feel ashamed of using too many smart words, ha-ha.
> I know that it's supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large and we're dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
The metaphor is apt; I think it's some social mechanism making them reach for the brute-force solution first. Normally, spending more resources to achieve the same result would be a downside, but if it's a resource that's otherwise not in demand, and that only the stronger parties, like corporations and governments, possess in sufficient amounts, then it can be an upside for someone, because it shifts the balance.
And LLMs already appear good enough to build captcha-solving machines, machines for faking proof images or video, fraudulent chatbots, or machines that predict a person's (or a crowd's) responses well enough to play them. So I'd say they are already a commercial success.
> Now that I've used a whole lot of cheap metaphor on someone who casually dropped 'syllogism' into a conversation, I'm feeling like a freshman in a grad-level class. I'll admit I'm nowhere near up to date on specific models and bleeding-edge techniques.
We-ell, it's just hard to describe the idea without using that word, but I haven't even finished my BS yet (lots of procrastinating, running away, and long interruptions), and the only bit of up-to-date knowledge I had was what DeepSeek prints when answering, so.
-
It's not that people simply decided to hate on AI; it was the sensationalist media hyping it up to the point of scaring people (“it'll take all your jobs”), and companies shoving it down our throats by putting it in every product, even when it gets in the way of the actual functionality people want to use. Even my company “forces” us all to use X prompts every week as a sign of being “productive”. The result couldn't have been any different.
-
Not only techbros, though. Most of my friends are not into computers, but they all think AI is magical and will change the whole world for the better. I always ask, "How can a black box that throws up random crap and runs on the computers of big companies outside the country change anything?" They don't know what to say, but they still believe something will happen and that a program can magically become sentient. Sometimes they can be fucking dumb, but I still love them.
-
Your comment would be acceptable if AI were not advertised as solving all our problems, like world hunger.
-
You can also just have an application that's designed to do that, and it will do it more accurately (rough sketch below).
If you can't do that, you're not an engineer. If you don't recommend that, you're not an engineer.
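For example, a classic extractive summarizer is deterministic and can only return sentences that actually appear in the source, so it cannot invent facts. A minimal sketch, purely illustrative (the `summarize` helper and its scoring rule are mine, not a production approach):

```python
# Minimal extractive summarizer sketch: score each sentence by the
# frequency of the words it contains, then return the top-scoring
# sentences in their original order. Deterministic, and it can only
# output sentences that already exist in the source text.
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 3) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    word_freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(word_freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    return " ".join(s for s in sentences if s in top)

# Example:
# print(summarize(open("article.txt").read(), max_sentences=2))
```

It won't read as smoothly as an LLM's summary, but accurate-and-checkable is a different trade-off from fluent.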
-
Now ask it whether Taiwan is a country.