AI chatbots unable to accurately summarise news, BBC finds
-
But every techbro on the planet told me it's exactly what LLMs are good at. What the hell!? /s
-
News station finds that AI is unable to perform the job of a news station
-
In which case you probably aren't saving time. Checking bullshit usually takes longer and is harder than just researching the shit yourself. Or it should, if you do your due diligence.
-
What a nuanced representation of the position, I just feel trustworthiness oozes out of the screen.
In case you're using a random-word-generation machine to summarise this comment for you: it was sarcasm, and I meant the opposite.
-
Neither are my parents
-
It's nice that you inform people that they can't tell if something is saving them time or not, without knowing what their job is or how they're using the tool.
-
I’m not against attempts at general artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it’s something general, what we in fact want is something that thinks like a human.
Agreed. The techbros pretending that the stochastic parrots they've created are general AI annoys me to no end.
While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output to a greater system analogous to the Wernicke/Broca areas of the brain. It seems like they're trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the giveaway that they've promised that this one technique (more or less, I know it's more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know it's supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large, and we're dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
Now that I've used a whole lot of cheap metaphor on someone who casually dropped 'syllogism' into a conversation, I'm feeling like a freshman in a grad-level class. I'll admit I'm nowhere near up to date on specific models and bleeding-edge techniques.
-
But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **
-
This is my personal take. As long as you're careful and thoughtful whenever using them, they can be extremely useful.
-
I just tried it on DeepSeek; it did it fine and gave the source for everything it mentioned as well.
-
ShockedPikachu.svg
-
I work in tech and can confirm that the vast majority of engineers "dislike AI" and are disillusioned with AI tools. Even ones that work on AI/ML tools. It's fewer and fewer people the higher up the pay scale you go.
There isn't a single complex coding problem an AI can solve. If you don't understand something and it helps you write it I'll close the MR and delete your code since it's worthless. You have to understand what you write. I do not care if it works. You have to understand every line.
"But I use it just fine and I'm an..."
Then you're not an engineer and you shouldn't have a job. You lack the intelligence, dedication, and knowledge needed to be one. You are a detriment to your team and company.
-
If they think AI is working for them, then they can. If you think AI is an effective tool for any profession, you are a clown. If my son's preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked it what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.
AI "works" because you're asking questions you don't know the answers to, and it's just putting words together so they make sense, without regard to accuracy. It's a hard limit of "AI" that we've hit. It won't get better in our lifetimes.
-
I guess you don’t get the issue. You give the AI some text to summarize into key points. The AI gives you wrong info in a percentage of those summaries.
There’s no point in comparing this to a human, since this is usually something done for automation, that is, to serve a lot of people or process a large quantity of articles.
-
Surprisingly, yes
-
You know what... now that you say it, it really is just like the anti-Wikipedia stuff.
-
I'm more interested in the technology itself, rather than its current application.
I feel like I am watching a toddler taking her first steps, wondering what she will eventually accomplish in her lifetime. But the loudest voices aren't cheering her on: they're sitting in their recliners, smugly claiming she's useless. "She can't even run a marathon, let alone compete with actual athletes!"
Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.