AGI achieved 🤖
-
Interesting….troubleshooting is going to be interesting in the future
-
interesting
I'm not involved in LLMs, but apparently the way it works is that the sentence is broken into words, each word is assigned a unique number, and that's how the information is stored. So the LLM never sees the actual word.
-
The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end
Ah a fellow stanley parable enjoyer, love to see it!
*the end is never the end is never the end
-
I'm not involved in LLMs, but apparently the way it works is that the sentence is broken into words, each word is assigned a unique number, and that's how the information is stored. So the LLM never sees the actual word.
Adding to this, each word and the words around it are given a statistical percentage. In other words, what are the odds that word 2 follows word 1? Scale that out across a sentence and you can see that LLMs are just huge math equations that put words together based on their statistical probability.
This is key because, and I can't emphasize this enough, AI does not think. We (humans) anthropomorphize them, giving them human characteristics when they are little more than number crunchers.
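A rough sketch of the idea above, with a made-up toy corpus and simple counting (real models use learned subword tokenizers and neural networks, not lookup tables, but the "words become numbers, then probabilities" intuition holds):

```python
from collections import Counter

# Invented toy corpus for illustration only.
corpus = "the cat sat on the mat the cat ran".split()

# Step 1: assign each word a unique number (a crude stand-in for tokenization).
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# Step 2: count which word follows which (a bigram model).
pairs = Counter(zip(corpus, corpus[1:]))
after_the = {w2: n for (w1, w2), n in pairs.items() if w1 == "the"}
total = sum(after_the.values())
probs = {w: n / total for w, n in after_the.items()}

print(vocab)  # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4, 'ran': 5}
print(probs)  # P(next word | previous word == "the"): cat is twice as likely as mat
```

After "the", this toy model would pick "cat" two thirds of the time, purely because that's how often it appeared in the counts.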
-
Agi lost
Henceforth, AGI should be called "almost general intelligence"
-
Biggest threat to humanity
I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy
-
Henceforth, AGI should be called "almost general intelligence"
Happy cake day
-
This post did not contain any content.
I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words have which letter from texts describing these words, but normally it shouldn't be expected.
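To make that concrete, here's a toy sketch of how a word turns into token numbers before the model ever sees it (the vocabulary and splits here are invented; real tokenizers like BPE learn their subword pieces from data):

```python
# Invented subword vocabulary for illustration only -- not a real tokenizer's.
toy_vocab = {"straw": 496, "berry": 19772, "miss": 1831, "issippi": 22555}

def toy_tokenize(word):
    """Greedily match the longest known piece from the left."""
    pieces = []
    while word:
        match = max((p for p in toy_vocab if word.startswith(p)), key=len)
        pieces.append(toy_vocab[match])
        word = word[len(match):]
    return pieces

print(toy_tokenize("strawberry"))   # [496, 19772]
print(toy_tokenize("mississippi"))  # [1831, 22555]
```

The model receives something like `[496, 19772]`, not the letters s-t-r-a-w-b-e-r-r-y, so "how many r's" is not directly visible in its input.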
-
Happy cake day
Thanks! Time really flies.
-
This post did not contain any content.
It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)
-
I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy
And if it messed up on the other word, we could say because it’s pronounced Louisianer.
-
Next step: how many r's in Lollapalooza?
-
Try it with o3, maybe it needs time to think.
-
I don't get it
-
I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words have which letter from texts describing these words, but normally it shouldn't be expected.
True, and I agree with you, yet we are being told all jobs are going to disappear, AGI is coming tomorrow, etc. As usual, the truth is more balanced.
-
I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words have which letter from texts describing these words, but normally it shouldn't be expected.
I've actually messed with this a bit. The problem is more that it can't count to begin with. If you ask it to spell out each letter individually (i.e., each letter becomes its own token), it still gets the count wrong.
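For comparison, the counting itself is trivial in ordinary code, which is why the failure reads as a counting problem rather than just a spelling one (the words below are only examples):

```python
# Plain character counting -- no tokenization involved.
words = ["strawberry", "mississippi", "lollapalooza"]
for word in words:
    # Tally every distinct letter in the word.
    print(word, {letter: word.count(letter) for letter in sorted(set(word))})

print("strawberry".count("r"))   # 3
print("mississippi".count("s"))  # 4
```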
-
Next step: how many r's in Lollapalooza?
Apparently, this robot is Japanese.
-
It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)
Fair point, but a big part of "intelligence" tasks is memorization.
-
It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)
It's marketed like it's AGI, so we should treat it like AGI to show that it isn't. Lots of people buy the bullshit.
-
Fair point, but a big part of "intelligence" tasks are memorization.
Computers, for all intents and purposes, have perfect recall, so since it was trained on a large data set you'd expect much better intelligence. But in reality, what we consider intelligence is extrapolating from existing knowledge, which is what "AI" has shown to be pretty shit at.