Why I am not impressed by A.I.
-
Yeah, and you know, I always hated this: screwdrivers make really bad hammers.
-
That was reality, very briefly. Remember AI Dungeon and the other clones that were popular prior to the mass ML marketing campaigns of the last 2 years?
-
I asked Gemini if the Quest has an SD slot. It doesn't, but Gemini said it did. Checking the source, it was pulling info from the Vive user manual.
-
You rang?
-
This was an interesting read, thanks for sharing.
-
Fair enough - sounds like they might not be ready for prime time though.
Oh well, at least while the bugs get ironed out we're not using them for anything important.
-
And apparently they still can't get an accurate result with such a basic query.
And yet...
https://futurism.com/openai-signs-deal-us-government-nuclear-weapon-security
-
But to be fair, as people we wouldn't ask "how many Rs does strawberry have", but rather "how many Rs do you spell strawberry with" or "do you spell strawberry with 1 R or 2 Rs".
-
They are not random per se. They are statistical, with some degree of randomization.
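As a toy illustration of what I mean (made-up numbers, not any real model's internals): the model scores candidate next tokens, turns the scores into probabilities, and samples from them, so the output leans heavily statistical but has randomness mixed in.

```python
import numpy as np

# Toy next-token sampling: statistical, with some randomization.
# The tokens and logits here are invented for illustration.
logits = {"two": 2.0, "three": 1.5, "ten": -1.0}
temperature = 0.8  # lower = more deterministic, higher = more random

tokens = list(logits)
scaled = np.array([logits[t] / temperature for t in tokens])
probs = np.exp(scaled - scaled.max())  # numerically stable softmax
probs /= probs.sum()

# Weighted random draw: usually "two", occasionally something else.
print(np.random.choice(tokens, p=probs))
```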
-
Exactly. The naming of the technology would make you assume it's intelligent. It's not.
-
Sure, but I definitely wouldn’t confidently answer “two”.
-
I have thought a lot about it. The LLM per se would not know whether a question is answerable, as it doesn't know if its own output is good or bad.
So there are various approaches to this issue:
-
The classic approach, and the one used for censoring: keywords. When the LLM receives a certain keyword, or can extract one by digesting the text input, it gives back a hard-coded answer. The problem is that while censoring cases are limited, hard-to-answer questions are unlimited, so it's hard to hard-code them all.
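Something like this, roughly (the keyword list and the `llm` callable are made-up stand-ins, not any real API):

```python
# Hard-coded responses keyed on keywords; everything else goes to the model.
CANNED = {
    "sd slot": "Not sure; please check the official hardware manual.",
}

def answer(prompt: str, llm) -> str:
    lowered = prompt.lower()
    for keyword, canned in CANNED.items():
        if keyword in lowered:
            return canned  # hard-coded answer; the model never runs
    return llm(prompt)  # normal path for everything else
```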
-
Self-check answers. For every question, the LLM could process it 10 times with different seeds, then analyze the results and see if they are equivalent. If they are not, just answer that it's unsure about the answer. Problem: multiplication of resource usage. And for some questions, like the one in the post, it's possible that the multiple randomized answers all agree on the same wrong result, so it would still have a decent failure rate.
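Roughly like this (`llm(prompt, seed=...)` is a hypothetical callable, and "equivalent" here is just naive exact string matching, which a real system would have to relax):

```python
from collections import Counter

def self_check(prompt: str, llm, n: int = 10, agree: float = 0.8) -> str:
    # Sample the same prompt n times with different seeds (n times the cost).
    answers = [llm(prompt, seed=i) for i in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    if count < n * agree:  # answers disagree too much: admit uncertainty
        return "I'm not sure about this one."
    return best  # note: n identical-but-wrong answers still slip through
```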
-
Why would it not know? It certainly “knows” that it’s an LLM and it presumably “knows” how LLMs work, so it could piece this together if it were capable of self-reflection.