Why I am not impressed by A.I.
-
[email protected] replied to [email protected]
i'm still not entirely sold on them but since i'm currently using one that the company subscribes to i can give a quick opinion:
i had an idea for a code snippet that could save me some headache (a mock for primitives in lua, to be specific), but i foresaw some issues with commutativity (aka how to make sure that
a + b == b + a
). so i asked about this, and the llm created some boilerplate to test this code. i've been chatting with it for about half an hour, and had it expand the idea to all possible metamethods available on primitive types, together with about 50 test cases with descriptive assertions. i've now run into an issue where the __eq metamethod isn't firing correctly when one of the operands is a primitive rather than a mock, and after having the llm link me to the relevant part of the docs, that seems to be a feature of the language rather than a bug.
so in 30 minutes i've gone from a loose idea to a well-documented proof-of-concept to a roadblock that can't really be overcome. complete exploration and feasibility study, fully tested, in less than an hour.
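For anyone curious what that roadblock looks like in practice, here is a minimal sketch of such a mock, with illustrative names (Mock, unwrap) rather than the actual code from the chat. Arithmetic metamethods like __add are tried on both operands, so the commutativity concern can be handled, but Lua only selects __eq when both operands are tables (or both full userdata), so a comparison against a plain primitive never reaches the metamethod:

```lua
-- minimal illustrative sketch, not the poster's actual code
local Mock = {}

local function unwrap(v)
  -- return the underlying primitive whether v is a mock or a plain value
  if type(v) == "table" and getmetatable(v) == Mock then
    return v.value
  end
  return v
end

function Mock.new(value)
  return setmetatable({ value = value }, Mock)
end

-- For arithmetic, Lua tries the left operand's metamethod first, then the
-- right one, so a + b and b + a both land here even for a mixed mock/number pair.
Mock.__add = function(a, b) return Mock.new(unwrap(a) + unwrap(b)) end

-- __eq is only consulted when both operands are tables (or both full
-- userdata), so a mixed mock/primitive comparison never reaches it.
Mock.__eq = function(a, b) return unwrap(a) == unwrap(b) end

local m = Mock.new(2)
print((m + 3).value, (3 + m).value)  --> 5  5    (commutativity holds)
print(Mock.new(5) == Mock.new(5))    --> true    (__eq fires: both operands are mocks)
print(m == 2)                        --> false   (__eq never called: mixed types)
```

That last line is the "feature, not a bug" mentioned above: the Lua reference manual restricts the __eq event to operands that are both tables or both full userdata, so there is no metamethod hook that can make mock == 2 behave.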
-
[email protected] replied to [email protected]
That makes sense as long as you're not writing code that needs to know how to do something as complex as ...checks original post... count.
-
[email protected] replied to [email protected]
That depends on how you use it. If you need the information from an article but don't want to read it, I agree, an LLM is probably the wrong tool. If you have several articles and want to decide which one has the information you need, an LLM is a pretty good option.
-
[email protected] replied to [email protected]
Most of what I ask it about are things I already have a general idea of, and AI is good at giving short explanations of complex things. So it's typically easy to spot a hallucination, and the pieces I don't already know are easy to verify with a quick Google search.
Basically I get a shorter response with the same outcome, and validating those small pieces saves a lot of time (I no longer have to read a 100-page white paper; instead I read a few paragraphs and then verify the small bits).
-
[email protected] replied to [email protected]
I know, right? It's not a fruit, it's a vegetable!
-
[email protected] replied to [email protected]
I've already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I show them how it can sometimes lie and explain that it's not a primary source for anything, I just get a blank stare like I have two heads.
-
[email protected] replied to [email protected]
Just playing, friend.
-
[email protected] replied to [email protected]
Not mental acrobatics, just common sense.
-
[email protected] replied to [email protected]
That's a very different problem from the one in the OP.
-
[email protected] replied to [email protected]
I use AI like that, except I'm not using the same shit everyone else is on. I use a Dolphin fine-tuned model with tool use, hooked up to an embedder and SearXNG. Every claim it makes is sourced.
-
[email protected] replied to [email protected]
Correct.
-
[email protected] replied to [email protected]
Yeah, I don't get why so many people seem to not get that.
The disconnect is that those people use their tools differently: they want to rely on the output, not use it as a starting point.
I'm one of those people; reviewing AI slop is much harder for me than just writing the summary myself.
I find function name suggestions useful because that's a lookup tool. A summary tool is different: it doesn't help me find a needle in a haystack, it just hands me a needle when I already have access to plenty of needles. I want the good/best needle, and it can't do that.
-
[email protected] replied to [email protected]
Because you're using it wrong.
No, I think you mean to say it’s because you’re using it for the wrong use case.
Well, this tool has been marketed as if it would handle such use cases.
I don’t think I’ve actually seen any AI marketing that was honest about what it can do.
I personally think image recognition is the best use case as it pretty much does what it promises.
-
[email protected] replied to [email protected]
If you think of LLMs as something with actual intelligence, you're going to be very unimpressed.
Artificial sugar is still sugar.
Artificial intelligence implies there is intelligence in some shape or form.
-
[email protected] replied to [email protected]
There is an alternative reality out there where LLMs were never marketed as AI and were instead marketed as random generators.
In that world, tech-savvy people would embrace this tech instead of having to constantly educate people that it is in fact not intelligence.
-
[email protected] replied to [email protected]
I mean, I would argue that the answer in the OP is a good one. No human asking that question honestly wants to know the sum total of Rs in the word; they either want to know how many are in "berry" or they're trying to trip up the model.
-
[email protected] replied to [email protected]
Same, I was making a pun.
-
[email protected] replied to [email protected]
Something that pretends to be or looks like intelligence, but actually isn't at all, is a perfectly valid interpretation of the word.
-
[email protected] replied to [email protected]
A guy is driving around the back woods of Montana and he sees a sign in front of a broken down shanty-style house: 'Talking Dog For Sale.'
He rings the bell and the owner appears and tells him the dog is in the backyard.
The guy goes into the backyard and sees a nice looking Labrador Retriever sitting there.
"You talk?" he asks.
"Yep" the Lab replies.
After the guy recovers from the shock of hearing a dog talk, he says, "So, what's your story?"
The Lab looks up and says, "Well, I discovered that I could talk when I was pretty young. I wanted to help the government, so I told the CIA. In no time at all they had me jetting from country to country, sitting in rooms with spies and world leaders, because no one figured a dog would be eavesdropping. I was one of their most valuable spies for eight years running... but the jetting around really tired me out, and I knew I wasn't getting any younger, so I decided to settle down. I signed up for a job at the airport to do some undercover security, wandering near suspicious characters and listening in. I uncovered some incredible dealings and was awarded a batch of medals. I got married, had a mess of puppies, and now I'm just retired."
The guy is amazed. He goes back in and asks the owner what he wants for the dog.
"Ten dollars" the guy says.
"Ten dollars? This dog is amazing! Why on Earth are you selling him so cheap?"
"Because he's a liar. He's never been out of the yard."