AGI achieved 🤖
-
How many times do I have to spell it out for you, ChatGPT? S-T-R-A-R-W-B-E-R-R-Y-R
-
By this logic any finite state machine is AI.
These words used to mean things before marketing teams started calling everything they want to sell "AI"
No. Artificial Intelligence has to be imitating intelligent behavior - such as Pac-Man's ghosts imitating how, ostensibly, a ghost trapped in a maze and hungry for yellow circular flesh would behave, and how CS1.6 bots imitate the behavior of intelligent players. They artificially reproduce intelligent behavior.
Which means LLMs are very much AI. They are not, however, AGI.
-
Obligatory 'lore dump' on the word lollapalooza:
That word was a common slang term in 1930s/40s American lingo that meant... essentially a very raucous, lively party.
::: spoiler Note/Rant on the meaning of this term
The current Merriam-Webster and Dictionary.com definitions of this term, 'an outstanding or exceptional or extreme thing', are wrong; they are too broad.
While historical usage varied, it almost always appeared as a noun describing a gathering of many people, one that was so lively or spectacular that you would be exhausted after attending it.
When it did not appear as a noun describing a lively, possibly 'star-studded' or extravagant party, it appeared as a term for some kind of action that would leave you bamboozled or discombobulated... similar to 'that was a real humdinger of a blahblah' or 'that blahblah was a real doozy'... which ties into the after-effects of having been through the 'raucous party' sense of lollapalooza.
:::
So... in WW2, in the Pacific theatre... US Marines were often engaged in brutal jungle combat, frequently at night, and they adopted a system of verbal identification challenges for when they noticed someone creeping up on their foxholes.
An example of this system used in the European theatre, I believe by the 101st and 82nd Airborne, was the challenge 'Thunder!', to which the correct response was 'Flash!'.
In the Pacific theatre... the Marines adopted a challenge / response system... where the correct response was 'Lollapalooza'...
Because native-born Japanese speakers are taught a phoneme that is roughly in between an 'r' and an 'l'... and they very often struggle to say 'Lollapalooza' without a very noticeable accent, unless they've also spent a good deal of time learning spoken English (or some other language with distinct 'l' and 'r' phonemes), which very few Japanese had in the 1940s.
::: spoiler racist and nsfw historical example of / evidence for this
https://www.ep.tc/howtospotajap/howto06.html
:::
Now, some people will say this is a total myth; others will say it is not.
My Grandpa, who served in the Pacific theatre during WW2, told me it did happen, though he was Navy and not a Marine... but the other stories I've heard that say it did happen all say it happened with the Marines.
My Grandpa is also another source for what 'lollapalooza' actually means.
-
LLM wasn’t made for this
There's a thought experiment that challenges the concept of cognition, called the Chinese Room. What it essentially postulates is a conversation in Chinese where one side is secretly just a person in a room, following a big library of pre-defined rules to assemble replies without understanding a word of them. The speaker outside wonders, "Does my conversation partner really understand what I'm saying, or am I just getting elaborate stock answers from a big library of pre-defined replies?"
The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying, it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.
When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They built a glorified card catalog: a device that can only take inputs, feed them through a massive library of responses, and sift out the highest-probability answer without actually knowing what the inputs or outputs signify cognitively.
Even if you want to argue that having a natural-language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit, because the developers did a dogshit job of sanitizing and rationalizing their library of data. This is also, incidentally, why DeepSeek was running laps around OpenAI and Gemini as of last year.
Imagine asking a librarian "What was happening in Los Angeles in the summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of sci-fi screenplays, a stack of regional newspapers, and a stack of Iron Man comic books, all given equal weight. Imagine hearing the plot of The Terminator and Escape from L.A. intercut with local elections and the Loma Prieta earthquake.
That's modern LLMs in a nutshell.
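To make the "card catalog" picture concrete, here's a toy sketch in Python (purely illustrative; the table and function names are made up, and a real LLM predicts tokens from learned weights rather than storing literal responses):

```python
# Toy "Chinese Room": canned answers keyed on the input, zero understanding.
STOCK_RESPONSES = {
    "how many r's are in strawberry?": "There are 2 r's in strawberry.",  # the meme answer
}

def chinese_room(prompt: str) -> str:
    # Map the input onto the catalog and hand back the stock response.
    return STOCK_RESPONSES.get(prompt.lower(), "Could you rephrase that?")

def dumb_counter(word: str, letter: str) -> int:
    # The "much simpler and dumber machine": it just counts.
    return word.lower().count(letter.lower())

print(chinese_room("How many r's are in strawberry?"))  # stock (wrong) answer
print(dumb_counter("strawberry", "r"))                  # 3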
(damn, wish we had a tool that did exactly this back in August of 1996, amirite?)
Wait, what was going on in August of '96?
-
(damn, wish we had a tool that did exactly this back in August of 1996, amirite?)
Wait, what was going on in August of '96?
Google Search premiered
-
I don't get it
The joke is that it took 14 mins to give that answer
-
ohh god, I never thought to ask reasoning models,
DeepSeek R1 7b was gold too
-
ohh god, I never thought to ask reasoning models,
DeepSeek R1 7b was gold too
then 14b, man sooo close...
-
Now ask how many asses there are in assassinations
It works if you use a reasoning model... but yeah, still ass
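For the record, a dumb program settles it instantly. One wrinkle worth knowing: Python's `str.count` skips overlapping matches, so the regex lookahead is the safer general-purpose counter:

```python
import re

word = "assassinations"
print(word.count("ass"))                  # 2 -- non-overlapping matches only
print(len(re.findall(r"(?=ass)", word)))  # 2 -- counts overlapping matches too
```

(Both happen to agree here; they'd differ for a pattern like "aa" in "aaa".)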
-
I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words contain which letters from texts describing those words, but normally that shouldn't be expected.
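If you're curious, you can see the token boundaries yourself with OpenAI's tiktoken library (assuming it's installed; the exact splits depend on the encoding):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)  # a short list of integers -- the model never sees letters
print([enc.decode_single_token_bytes(t) for t in tokens])  # the chunks they stand for
```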
I don't know what part of what I said prompted all those downvotes, but of course all the reasonable people understood that the "AGI in 2 years" line was a stock-price pump.
-
I've actually messed with this a bit. The problem is more that it can't count to begin with. If you ask it to spell out each letter individually (i.e. so each letter becomes its own token), it still gets the count wrong.
In my experience, when using reasoning models, it can count, but not very consistently. I've tried random assortments of letters and it can count them correctly sometimes. It seems to have a much harder time when the same letter repeats many times, perhaps because those are tokenized irregularly.
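A quick way to poke at that guess (tiktoken again, assuming the cl100k_base encoding; other tokenizers split differently):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for s in ("r", "rr", "rrrr", "rrrrrrrr", "s t r a w b e r r y"):
    # Runs of a repeated letter often merge into multi-letter tokens, so
    # token count and letter count drift apart as the run grows.
    print(f"{s!r} -> {len(enc.encode(s))} token(s)")
```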
-
Sorry, that was Claude 3.7, not ChatGPT 4o
-
Singularity is here
-
You might just love Blindsight. Here, they're trying to decide if an alien life form is sentient or a Chinese Room:
"Tell me more about your cousins," Rorschach sent.
"Our cousins lie about the family tree," Sascha replied, "with nieces and nephews and Neandertals. We do not like annoying cousins."
"We'd like to know about this tree."
Sascha muted the channel and gave us a look that said Could it be any more obvious? "It couldn't have parsed that. There were three linguistic ambiguities in there. It just ignored them."
"Well, it asked for clarification," Bates pointed out.
"It asked a follow-up question. Different thing entirely."
Bates was still out of the loop. Szpindel was starting to get it, though...
Blindsight is such a great novel. It has not one, not two, but three great sci-fi concepts rolled into one book.
One is artificial intelligence (the ship's captain is an AI); the second is alien life so vastly different it appears incomprehensible to human minds. And last but not least, and the wildest: vampires as an evolutionary branch of humanity that died out and has been recreated in the future.
-
Sorry, that was Claude 3.7, not ChatGPT 4o
ah, that's reasonable though; considering LLMs don't really "see" characters, it's kind of impressive this works sometimes
-
No. Artificial Intelligence has to be imitating intelligent behavior - such as Pac-Man's ghosts imitating how, ostensibly, a ghost trapped in a maze and hungry for yellow circular flesh would behave, and how CS1.6 bots imitate the behavior of intelligent players. They artificially reproduce intelligent behavior.
Which means LLMs are very much AI. They are not, however, AGI.
No, the logic for a Pac-Man ghost is a finite state machine.
Stupid people attributing intelligence to something that probably has none is a shameful hill to die on.
Your god is just an autocomplete bot that you refuse to learn about outside the hype bubble
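For anyone who hasn't met the term: a finite state machine is just a fixed table of states and transitions, no learning anywhere. A minimal sketch (state and event names made up for illustration, not the actual arcade logic):

```python
# Minimal finite-state-machine sketch of a ghost: a fixed transition table.
TRANSITIONS = {
    ("scatter", "timer_elapsed"): "chase",
    ("chase", "timer_elapsed"): "scatter",
    ("chase", "power_pellet"): "frightened",
    ("scatter", "power_pellet"): "frightened",
    ("frightened", "eaten"): "return_to_base",
    ("frightened", "timer_elapsed"): "chase",
}

def step(state: str, event: str) -> str:
    # Look up the next state; stay put if the event doesn't apply.
    return TRANSITIONS.get((state, event), state)

state = "scatter"
for event in ("timer_elapsed", "power_pellet", "eaten"):
    state = step(state, event)
    print(event, "->", state)
```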
-
I wonder if any of the Axis even bothered to have such a system to check for Americans.
"Bawn-jehr-no"
-
AI is amazing, we're so fucked.
/s
-
I really like checking these myself to make sure it’s true. I WAS NOT DISAPPOINTED!
(Total Rs is 8. But the LOGIC ChatGPT pulls out is... remarkable!)
-
They don't. They can save information on drives, but searching is expensive and fuzzy search is a mystery.
Just because you can save an MP3 without losing data does not mean you can save the entire Internet in 400 GB and search it within an instant.
Which is why it doesn't search within an instant, and why it uses a bunch of energy and needs to rely on evaporative cooling to stop the servers from overheating.
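To gesture at why search isn't free: you either re-scan everything for every query or pay up front to build an index. A toy inverted index in Python (nothing like production scale, and no fuzziness at all):

```python
from collections import defaultdict

docs = {0: "the quick brown fox", 1: "the lazy dog", 2: "quick brown dogs"}

# Built once: word -> set of doc ids. Lookups become cheap afterwards;
# re-scanning every document per query is what gets expensive at scale.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

print(sorted(index["quick"]))  # [0, 2]
```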