Why I am not impressed by A.I.
-
[email protected] replied to [email protected] last edited by
Correct.
-
[email protected] replied to [email protected] last edited by
Yeah, I don't get why so many people seem to not get that.
The disconnect is that those people use their tools differently: they want to rely on the output, not use it as a starting point.
I'm one of those people; reviewing AI slop is much harder for me than just summarizing the material myself.
I find function name suggestions useful because that's a lookup tool. A summary tool isn't the same thing: it doesn't help me find a needle in a haystack, it just hands me a needle when I already have access to many needles. I want the good/best needle, and it can't do that.
-
[email protected] replied to [email protected] last edited by
Because you're using it wrong.
No, I think you mean to say it’s because you’re using it for the wrong use case.
Well, this tool has been marketed as if it would handle such use cases.
I don’t think I’ve actually seen any AI marketing that was honest about what it can do.
I personally think image recognition is the best use case as it pretty much does what it promises.
-
[email protected] replied to [email protected] last edited by
If you think of LLMs as something with actual intelligence you're going to be very unimpressed
Artificial sugar is still sugar.
Artificial intelligence implies there is intelligence in some shape or form.
-
[email protected] replied to [email protected] last edited by
There is an alternative reality out there where LLMs were never marketed as AI and were instead marketed as random generators.
In that world, tech-savvy people would embrace this tech instead of constantly having to educate people that it is, in fact, not intelligence.
-
[email protected] replied to [email protected] last edited by
I mean, I would argue that the answer in the OP is a good one. No human asking that question honestly wants to know the sum total of Rs in the word, they either want to know how many in "berry" or they're trying to trip up the model.
-
[email protected] replied to [email protected] last edited by
Same, I was making a pun.
-
[email protected] replied to [email protected] last edited by
Something that pretends to be, or looks like, intelligence but actually isn't at all is a perfectly valid interpretation of the word.
-
[email protected] replied to [email protected] last edited by
A guy is driving around the back woods of Montana and he sees a sign in front of a broken down shanty-style house: 'Talking Dog For Sale.'
He rings the bell and the owner appears and tells him the dog is in the backyard.
The guy goes into the backyard and sees a nice looking Labrador Retriever sitting there.
"You talk?" he asks.
"Yep" the Lab replies.
After the guy recovers from the shock of hearing a dog talk, he says, "So, what's your story?"
The Lab looks up and says, "Well, I discovered that I could talk when I was pretty young. I wanted to help the government, so I told the CIA. In no time at all they had me jetting from country to country, sitting in rooms with spies and world leaders, because no one figured a dog would be eavesdropping. I was one of their most valuable spies for eight years running... but the jetting around really tired me out, and I knew I wasn't getting any younger, so I decided to settle down. I signed up for a job at the airport to do some undercover security, wandering near suspicious characters and listening in. I uncovered some incredible dealings and was awarded a batch of medals. I got married, had a mess of puppies, and now I'm just retired."
The guy is amazed. He goes back in and asks the owner what he wants for the dog.
"Ten dollars" the guy says.
"Ten dollars? This dog is amazing! Why on Earth are you selling him so cheap?"
"Because he's a liar. He's never been out of the yard."
-
[email protected] replied to [email protected] last edited by
Oh, I see! Apologies.
-
[email protected] replied to [email protected] last edited by
Sure buddy
-
[email protected] replied to [email protected] last edited by
Noted, I'll be giving that a proper read after work. Thank you.
-
[email protected] replied to [email protected] last edited by
Me too. More than once on a language learning subreddit for my first language: "I asked ChatGPT whether this was correct grammar in German, it said no, but I read this counterexample", then everyone correctly responded "why the fuck are you asking ChatGPT about this".
-
[email protected] replied to [email protected] last edited by
Artificial sugar is still sugar.
Because it contains sucrose, fructose, or glucose? Because it metabolises the same way and matches the glycemic index of sugar?
Those are all wrong. So what are your criteria?
-
[email protected] replied to [email protected] last edited by
No apologies needed. Enjoy your day and keep the good vibes up!
-
[email protected] replied to [email protected] last edited by
It's not good for summaries. It often gets important bits wrong, like embedded instructions that can't be summarized.
-
[email protected] replied to [email protected] last edited by
My experience has been very different, though I do sometimes have to add to what it summarized. The Bsky account I mentioned is a good example: most of the posts are summarized very well, but every now and then there's one that isn't as accurate.