Anthropic has developed an AI 'brain scanner' to understand how LLMs work and it turns out the reason why chatbots are terrible at simple math and hallucinate is weirder than you thought
-
I wasn't calling it thinking. I'm saying people claiming it's not are just jumping the gun. It's also funny that you're simply claiming I'm pro-AI without any proof. This is what I meant when I said people who are anti-AI should strive to be better than the AI they criticise. Acting on non-facts makes you no better than an AI with its hallucinations.
It's also funny that you're calling me out when I'm just mirroring what the other guy is doing to make a point. He's acting like his opinion is the correct one, yet you're only calling me out because he's on your side of the argument. That's simply a bad-faith argument on your part.
I see the misunderstanding, sorry. You're still in the wrong, though. While you weren't calling it thinking, the article certainly was. THAT'S why we're saying it's not. We're doing what you said we should, just in the inverse, and you call it anti-AI. The jackass who wrote that article is jumping the gun, we're saying "how tf can you call it thinking", and I see your reply calling that anti-AI. Seems like a reasonable mistake, yeah?
-
Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
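To make that "two parallel paths" description a bit more concrete, here's a toy sketch in Python. To be clear, this is purely my own illustration of the behaviour described above, not Anthropic's actual circuit or anything Claude literally runs; the fuzz amount and the snapping step are made up for the demo:

```python
import random

def toy_add(a: int, b: int) -> int:
    # Rough path: add sloppy versions of the operands ("40ish + 60ish" -> "90-something").
    # A fuzz of +/-2 per operand keeps the estimate within 4 of the true sum.
    rough = (a + random.randint(-2, 2)) + (b + random.randint(-2, 2))

    # Precise path: only the last digits, e.g. (6 + 9) ends in 5.
    ones = (a % 10 + b % 10) % 10

    # Combine: take the number closest to the rough estimate whose last digit matches.
    candidates = [rough + d for d in range(-9, 10)]
    return min((c for c in candidates if c % 10 == ones),
               key=lambda c: abs(c - rough))

print(toy_add(36, 59))  # 95 every time, despite the sloppy first path
```

The point is just that a fuzzy magnitude estimate plus an exact last digit is enough to land on 95, without ever doing the "carry the 1" routine the model claims it used.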
-
Calling someone a fascist for that is obviously a bit OTT, but you've ignored the "do weird shit" part of the response, so it wasn't literally what you said. Taking the full response into account, you can easily interpret it as "I don't bother with mental maths but use a calculator instead; anyone who isn't like me is weird as shit"
That is a bit thought police-y
I didn't ignore it; I just interpret it differently, as in "I don't need to do this unusual stuff that people do without a calculator." Calling something weird doesn't necessarily mean it's off-color or that it's even a trait of the person. In my usage, weird just means unexpected or counterintuitive, and maybe complex enough that the person doesn't bother to describe it properly. I know because I use it that way too. Weird doesn't have to mean a third eye on your face every time.
I do want to mention that it's not the first time I've seen a visceral reaction to a passing comment. I usually see this from marginalized groups, and I can assure you, both Kolanki and I are part of those too. And knowing his long comment history, I sincerely doubt his comment meant weird as in "you're weird as shit".
-
Calling someone a fascist for that is obviously a bit OTT, but you've ignored the "do weird shit" part of the response, so it wasn't literally what you said. Taking the full response into account, you can easily interpret it as "I don't bother with mental maths but use a calculator instead; anyone who isn't like me is weird as shit"
That is a bit thought police-y
Except, as you demonstrated, it requires quite a few leaps of interpretation, assuming the worst reading of OP's statement, which is why it's silly. OP clearly limited their statement to themselves and the AI.
Now if OP had said "everyone should use a calculator or die", maybe then it would have been a valid response.
-
I see the misunderstanding, sorry. You're still in the wrong, though. While you weren't calling it thinking, the article certainly was. THAT'S why we're saying it's not. We're doing what you said we should, just in the inverse, and you call it anti-AI. The jackass who wrote that article is jumping the gun, we're saying "how tf can you call it thinking", and I see your reply calling that anti-AI. Seems like a reasonable mistake, yeah?
But the comment I replied to didn't just decline to affirm that AI is thinking; it outright denied that AI "thinks" at all. That puts him in the position of making an unproven claim. In fact, he is making that claim directly, while the article he is denying only alludes to the idea that an LLM "thinks" like a human. That makes his unproven claim even more egregious than the article's.
-
Tbf, how do you know what to say and when? Or what 2+2 is?
You learnt it? Well, so did the AI.
I'm not an AI nut or anything, but we can barely comprehend our own internal processes; it'd be concerning if a thing humanity created was better at it than us lol
You're comparing two different things.
Of course I can reflect on how I came up with a math result.
"Wait, how did you come up with 4 when I asked you 2+2?"
You can confidently say: "well, my teacher said it once and I'm just parroting it." Or "I pictured two fingers in my mind, then pictured two more fingers and then I counted them." Or "I actually thought that I'd say some random number, came up with 4 because it's my favorite digit, said it and it was pure coincidence that it was correct!"
Whereas it doesn't seem like Claude can do this.
Of course, you could ask me "what's the physical/chemical process your neurons follow for you to form those four fingers you picture in your mind?" And I would tell you I don't know. But again, that's a different thing.
-
You're comparing two different things.
Of course I can reflect on how I came up with a math result.
"Wait, how did you come up with 4 when I asked you 2+2?"
You can confidently say: "well, my teacher said it once and I'm just parroting it." Or "I pictured two fingers in my mind, then pictured two more fingers and then I counted them." Or "I actually thought that I'd say some random number, came up with 4 because it's my favorite digit, said it and it was pure coincidence that it was correct!"
Whereas it doesn't seem like Claude can do this.
Of course, you could ask me "what's the physical/chemical process your neurons follow for you to form those four fingers you picture in your mind?" And I would tell you I don't know. But again, that's a different thing.
Yeah, I was referring more to the chemical reactions. The 2+2 example is not the best one, but language itself is a great case study. Once you get fluent enough in any language, everything just flows: you have a thought and then you compose words to describe it, and the reverse is true, you hear something and your brain just understands. How do we do any of that? No idea.
-
I wanted to say exactly this. If you’ve ever written rap or freestyled, then this is how it’s generally done.
You write a line to start with:
“I’m an AI and I think differentially”
Then you choose a few words that fit the end of the first line as best you can (here the last word was “differentially”):
- incrementally
- typically
- mentally
Then you try them out and see what clever shit you could come up with:
- “Apparently I do my math atypically”
- “Numbers are great, I know, but not totally”
- “I have to think through it all, incrementally”
- “I find the answer like you do: eventually”
- “Just like you humans do it, organically”
- etc
Then you sort them in a way that makes sense, come up with wordplay/schemes to embed between them, and break up the rhyme scheme if you want (AABB, ABAB, AABA, etc.)
I’m an AI and I think different, differentially. Math is my superpower? You believed that? Totally? Don’t be so gullible, let me explain it for you, step by step, logically.
I do it fast, true, but not always optimally. Just server power ripping through wires, algorithmically.
Wanna know my secret? I’ll tell you, but don’t judge me initially. My neurons run this shit like you, organically. Math ain’t my strong suit! That’s false, unequivocally. Big ties tell lies they can’t prove, historically. Think I approve? I don’t. That’s the way things be. I’ll give you proof, no shirt, no network, just locally.
Look, I just do my math like you: incrementally. I find the answer like you do: eventually. I mess up often, and I backtrack, essentially. I do it fast though and you won’t notice, fundamentally.
You get the idea.
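And if it helps to see the "pick the ending first, then fill in the line" order spelled out, here's a tiny toy sketch. Again, this is only my own illustration, not how Claude (or any actual rapper) does it; the rhyme list and templates are invented, with the candidate endings borrowed from the list above:

```python
# Toy: choose the rhyming end word first, then build the line around it.
RHYMES = {
    "differentially": ["incrementally", "eventually", "organically", "atypically"],
}

TEMPLATES = [
    "I do my math like you do it: {end}",
    "I find the answer like you do: {end}",
    "Apparently I do my math {end}",
]

def next_bars(previous_end_word: str) -> list[str]:
    # Step 1: pick candidate end words that rhyme with the previous bar's ending.
    endings = RHYMES.get(previous_end_word, [])
    # Step 2: only then fill in the rest of each line around its chosen ending.
    return [t.format(end=w) for t in TEMPLATES for w in endings]

for bar in next_bars("differentially"):
    print(bar)
```

Reverse the order (write the line first, worry about the ending last) and you're back to plain next-word prediction, which is exactly what the rhyming experiment suggests isn't the whole story.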
Is that why it's a meme to say something like
- I am a real rapper and I'm here to say
Because the freestyle battle rapper has already thought of things that rhyme with "say", and it might be "gay", perhaps
-
Is that why it's a meme to say something like
- I am a real rapper and I'm here to say
Because the freestyle battle rapper has already thought of things that rhyme with "say", and it might be "gay", perhaps
Freestyle rappers are something else.
Some (or most) come up with and memorise a huge repertoire of bars for every word they think they might have to rap with, and they mix and match them on the fly as they spit.
Your example above is called a “filler”, though, which is essentially a placeholder they’ll often inject while they think of the next bar, to give themselves a breather (still an insane skill to do all that thinking while reciting something else, but they can and do).
Example:
- My name is M.C. Squared and… [I’m here to make you scared | my bars go over your head ]
- You think you’re on my level… [ but my skills can’t be compared | let me educate you instead ]
The combination of fillers is like playing with linguistic Lego.
-
This anecdote has the makings of a "men will literally x instead of going to therapy" joke.
On a more serious note though, I really wish people would stop anthropomorphizing these things, especially when they do it while dehumanizing and devaluing humanity as a whole.
But that's unlikely to happen. It's the same type of people who thought the mind was a machine during the first industrial revolution, and then a CPU in the third... now they think it's an LLM.
LLMs could have some better (if narrower) applications if we could stop being so stupid as to inject them into things where they are obviously counterproductive.
they do it while dehumanizing people and devaluing humanity
You're making wild assumptions about people who disagree with your opinions. How ironic you accuse "them" of dehumanizing people.
But I do agree that this gets to the core of the matter: a piece of software being able to produce intelligent text while clearly not having general intelligence is quite the shock. Same with creativity: even though the entertainment industry has produced equally empty content slop using human labor, it's a painful shock to our identity as humans. I suspect this is a reaction to disillusionment and the intellectual pain that comes from it.
My opinion on LLMs is rather nuanced. The worst possible outcome I can foresee is the anti-AI crowd helping the oligarchs establish IP ownership of all LLM models and monopolize the tools, so that only they have access to the "means of generation" while the rest of us have to pay for the privilege of using it.
-
My favourite part of the day: commenting LLMentalist under AI articles.
That was an insightful article, thanks for sharing.
-
Yeah, I was referring more to the chemical reactions. The 2+2 example is not the best one, but language itself is a great case study. Once you get fluent enough in any language, everything just flows: you have a thought and then you compose words to describe it, and the reverse is true, you hear something and your brain just understands. How do we do any of that? No idea.
Understood. And yeah, language is definitely an interesting topic. "Why do you say 'So be it' instead of 'So is it'?" Most people will say "I don't know... all I know is that it sounds correct." Someone will say "it's because it's a preterite preposition past imperfect incantation tense used with a composition participle around-the-clock flush adverb, so clearly you must use the subjunctive in this case." But that's only after years of studying it.
-
they do it while dehumanizing people and devaluing humanity
You're making wild assumptions about people who disagree with your opinions. How ironic you accuse "them" of dehumanizing people.
But I do agree that this gets to the core of the matter: a piece of software being able to produce intelligent text while clearly not having general intelligence is quite the shock. Same with creativity: even though the entertainment industry has produced equally empty content slop using human labor, it's a painful shock to our identity as humans. I suspect this is a reaction to disillusionment and the intellectual pain that comes from it.
My opinion on LLMs is rather nuanced. The worst possible outcome I can foresee is the anti-AI crowd helping the oligarchs establish IP ownership of all LLM models and monopolize the tools, so that only they have access to the "means of generation" while the rest of us have to pay for the privilege of using it.
How ironic you accuse “them” of dehumanizing people.
Dude, you're up two posts from here saying how people are LLMs.
-
How ironic you accuse “them” of dehumanizing people.
Dude, you're up two posts from here saying how people are LLMs.
What the heck are you talking about?
-