Why I am not impressed by A.I.
-
i'm still not entirely sold on them but since i'm currently using one that the company subscribes to i can give a quick opinion:
i had an idea for a code snippet that could save me some headache (a mock for primitives in lua, to be specific) but i foresaw some issues with commutativity (aka how to make sure that
a + b == b + a
). so i asked about this, and the llm created some boilerplate to test this code. i've been chatting with it for about half an hour, and had it expand the idea to all possible metamethods available on primitive types, together with about 50 test cases with descriptive assertions. i've now run into an issue where the __eq
metamethod isn't firing correctly when one of the operands is a primitive rather than a mock, and after having the llm link me to the relevant part of the docs, that seems to be a feature of the language rather than a bug. so in 30 minutes i've gone from a loose idea to a well-documented proof-of-concept to a roadblock that can't really be overcome. complete exploration and feasibility study, fully tested, in less than an hour.
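for anyone curious, here's a minimal sketch of the kind of mock and commutativity check described above (names like `mock` and `unwrap` are my own, not from the actual snippet), including the `__eq` roadblock — lua only consults `__eq` when *both* operands are tables (or both full userdata), so a mock can never compare equal to a raw primitive:

```lua
-- shared metatable so every mock uses the same metamethod handlers
local mt = {}

-- pull the underlying value out of a mock; pass primitives through
local function unwrap(x)
  return (getmetatable(x) == mt) and x.value or x
end

-- unwrap either operand, so mock+number and number+mock both work
mt.__add = function(a, b) return unwrap(a) + unwrap(b) end
mt.__mul = function(a, b) return unwrap(a) * unwrap(b) end
-- __eq only fires when BOTH operands are tables; comparing a table
-- to a number returns false before this handler is ever consulted
mt.__eq  = function(a, b) return unwrap(a) == unwrap(b) end

local function mock(value)
  return setmetatable({ value = value }, mt)
end

local a, b = mock(2), 3
assert(a + b == b + a)            -- commutativity holds: both sides are 5
assert(mock(3) == mock(3))        -- __eq fires: both operands are tables
assert((mock(3) == 3) == false)   -- __eq never called: mixed table/number
print("all checks passed")
```

the last assertion is the "feature, not a bug": the language short-circuits equality on mismatched types, and no metamethod can override that.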
-
That makes sense as long as you're not writing code that needs to know how to do something as complex as ...checks original post... count.
-
That depends on how you use it. If you need the information from an article, but don't want to read it, I agree, an LLM is probably the wrong tool. If you have several articles and want to decide which one has the information you need, an LLM is a pretty good option.
-
Most of what I'm asking it are things I already have a general idea of, and AI is good at making short explanations of complex things. So typically it's easy to spot a hallucination, and the pieces that I don't already know are easy to Google and verify.
Basically I can get a shorter response to the same outcome and validate those small pieces, which saves a lot of time (I no longer have to read a 100-page white paper, just a few paragraphs, and then verify small bits).
-
I know right? It's not a fruit it's a vegetable!
-
I've already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I show them how it can sometimes lie, and explain that it's not a primary source for anything, I just get that blank stare like I have two heads.
-
Just playing, friend.
-
Not mental acrobatics, just common sense.
-
That's a very different problem than the one in the OP
-
I use AI like that, except I'm not using the same shit everyone else is on. I use a Dolphin fine-tuned model with tool use, hooked up to an embedder and searxng. Every claim it makes is sourced.
-
Correct.
-
Yeah, I don't get why so many people seem to not get that.
The disconnect is that those people use their tools differently, they want to rely on the output, not use it as a starting point.
I’m one of those people: reviewing AI slop is much harder for me than just summarizing it myself.
I find function name suggestions useful because that’s a lookup task; it’s not the same as a summary tool. It doesn’t help me find a needle in a haystack, it just hands me a needle when I already have access to many needles. I want the good/best needle, and it can’t do that.
-
I mean, I would argue that the answer in the OP is a good one. No human asking that question honestly wants to know the sum total of Rs in the word, they either want to know how many in "berry" or they're trying to trip up the model.
-
Same, I was making a pun
-
Something that pretends to be or looks like intelligence, but actually isn't at all, is a perfectly valid interpretation of the word.
-