Anthropic has developed an AI 'brain scanner' to understand how LLMs work and it turns out the reason why chatbots are terrible at simple math and hallucinate is weirder than you thought
-
you can't trust its explanations as to what it has just done.
I might have had a lucky guess, but this was basically my assumption. You can't ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no 'internal' experience.
Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their 'output voice' as it is to us.
Anyone who has used them for even a limited amount of time will tell you that the thing can give you a correct, detailed explanation of how to do something and then produce a broken result. And vice versa. Digging into it by asking more questions has zero chance of being useful.
-
I will never understand how ppl survive without ad blockers. Tried it once recently and it was a horrific experience.
Same way you survive live TV. You learn to mentally block out ads.
-
This is what the ARC-AGI test by Chollet has also shown regarding current AI / LLMs. They have a tendency to approach problems with this trial and error method and can be extremely inefficient (in their current form) with anything involving abstract / deductive reasoning.
Most LLMs do terribly at the test, with the most recent breakthrough coming from reasoning models. But even the reasoning models struggle.
ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.
The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.
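To make the grid-and-rule format concrete, here's a minimal Python sketch using the article's hypothetical rule (a blue tile always surrounded by orange tiles). The colour names and the apply_rule helper are made up for illustration; real ARC-AGI tasks are JSON grids of integers, and this says nothing about how any model solves them.

```python
BLUE, ORANGE, EMPTY = "blue", "orange", "empty"

def apply_rule(grid):
    """Complete the picture: wherever a blue tile sits, fill its neighbours
    with orange, the way a test-taker would after deducing the rule."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(h):
        for c in range(w):
            if grid[r][c] == BLUE:
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (dr or dc) and 0 <= rr < h and 0 <= cc < w:
                            out[rr][cc] = ORANGE
    return out

incomplete = [
    [EMPTY, EMPTY, EMPTY],
    [EMPTY, BLUE,  EMPTY],
    [EMPTY, EMPTY, EMPTY],
]
for row in apply_rule(incomplete):
    print(row)
```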
It's funny because I approach life with a trial and error method too; not efficient, but I get the job done in the end. I always see others who don't and just give up, like all the people bad at computers who ask the company's tech support to fix the problem instead of thinking about it for two secs, and I wonder where life went wrong.
-
I use a calculator. Which an AI should also use, and not need to do weird shit to do math.
Yes, you shove it off onto something else to do it for you instead of doing it yourself, and the AI doesn't.
-
To understand what's actually happening, Anthropic's researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.
Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it's a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.
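Anthropic hasn't released circuit tracing as something you can just call, so the snippet below is only a loose analogy for "spotting which parts are firing": standard PyTorch forward hooks on a toy network, recording which units are most active on a given input. The toy model and the activation metric are assumptions for the sketch, not Anthropic's method.

```python
import torch
import torch.nn as nn

# Tiny stand-in model; the real target would be an LLM such as Claude 3.5 Haiku,
# whose internals aren't public, so this is purely illustrative.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Record how strongly each unit fired during this forward pass.
        activations[name] = output.detach().abs().mean(dim=0)
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

model(torch.randn(1, 8))  # one "thought" passing through the network

# The "scan": which components were most active for this input?
for name, act in activations.items():
    print(name, torch.topk(act, k=3).indices.tolist())
```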
This is why LLMs are so patchy at math. (Image credit: Anthropic)
Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
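As a rough mental model of those two parallel paths (a toy reconstruction of the description above, not Anthropic's actual circuits), the fuzzy_add sketch below computes an approximate magnitude and the exact last digit separately, then reconciles them at the end:

```python
def fuzzy_add(a, b):
    """Toy sketch of the two parallel paths described above; the model's real
    internal circuits are, of course, far messier than this."""
    # Path 1: a rough magnitude estimate ("36 plus 60ish" lands in the mid-90s).
    rough = a + round(b, -1)                  # 96 for 36 + 59

    # Path 2: only the last digits (6 + 9 = 15, so the answer must end in 5).
    last_digit = (a % 10 + b % 10) % 10       # 5

    # Reconcile: snap the rough estimate to the nearest value with that last digit.
    candidates = [rough - 9 + i for i in range(19)]
    return min((c for c in candidates if c % 10 == last_digit),
               key=lambda c: abs(c - rough))

print(fuzzy_add(36, 59))  # 95
```

The point isn't the code itself, but that the fuzzy magnitude and the exact last digit are computed separately and only combined at the end.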
But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
In other words, not only does the model use a very, very odd method to do the maths, you can't trust its explanations as to what it has just done. That's significant and shows that model outputs can not be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."
Anthropic discovered that their Claude LLM didn't just predict the next word. (Image credit: Anthropic)
Anthropic also found, among other things, that Claude "sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal 'language of thought'."
Anywho, there's apparently a long way to go with this research. According to Anthropic, "it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words." And the research doesn't explain how the structures inside LLMs are formed in the first place.
But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don't understand—actually work. And that has to be a good thing.
So it does the math in its head, gives the correct answer, and copies the answer sheet from the teacher's book into the "show your work" section. Pretty much what I would have done as a kid if I could have; instead I had to fight them and take a hit to my score for not showing my work.
-
Same way you survive live TV. You learn to mentally block out ads.
You watch live tv?
-
You watch live tv?
Not anymore, of course. But nothing beats that brain rot apart from sites that hijack your controls.
-
This anecdote has the makings of a "men will literally x instead of going to therapy" joke.
On a more serious note though, I really wish people would stop anthropomorphizing these things, especially when they do it while dehumanizing and devaluing humanity as a whole.
But that's unlikely to happen. It's the same type of people that thought the mind was a machine in the first industrial revolution, and then a CPU in the third...now they think it's an LLM.
LLMs could have some better (if narrower) applications if we could stop being so stupid as to inject them into things where they are obviously counterproductive.
Hard agree on every point.
-
"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."
How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results, of course it makes absolute sense to internally think ahead, come up with the full sentence you're gonna say, and then just output the next token necessary to continue that sentence. It's going to redo that process for every single token, which wastes a lot of energy, but for the quality of the results this is the best approach you can take, and that's something I felt was kind of obvious these models must be doing on one level or another.
I'd be interested to see whether there's massive potential for efficiency improvements by letting the model access and reuse the "thinking" it has already done for previous tokens.
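For what it's worth, transformer implementations already reuse part of that per-token work via the key/value cache, though that only avoids re-encoding the prefix; it doesn't carry over any higher-level "plan". A toy sketch of the idea, with expensive_state standing in (hypothetically) for the per-token computation a real model would cache:

```python
def expensive_state(token):
    # Stand-in for the per-token computation a transformer would cache
    # (in real models: the attention keys and values for that position).
    return sum(ord(ch) for ch in token)

def generate(prompt, steps=5):
    tokens = prompt.split()
    cache = [expensive_state(t) for t in tokens]   # compute each prefix token once
    for _ in range(steps):
        # Each new step reuses the cached prefix states instead of recomputing
        # every earlier token from scratch.
        context_score = sum(cache)
        next_token = f"w{context_score % 7}"        # toy stand-in for "prediction"
        tokens.append(next_token)
        cache.append(expensive_state(next_token))
    return " ".join(tokens)

print(generate("the model plans ahead"))
```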
Well, because when you say things like "it plans ahead" or "our method is inspired by brain scanners", etc., it makes a connection between AI and real thinking and generates hype.
-
Wow, interesting.
Not unexpectedly, the LLM failed to explain its own thought process correctly.
tbf, how do you know what to say and when? or what 2+2 is?
you learnt it? well so did AI
i'm not an AI nut or anything, but we can barely comprehend our own internal processes, it'd be concerning if a thing humanity created was better at it than us lol
-
The reply was literally "*I* use a calculator" followed by "AI should use one too". Are you suggesting that you're an LLM or how did you cut a piece of cloth for yourself out of that?
Calling someone a fascist for that is obviously a bit OTT but you've ignored the "do weird shit" part of the response so it wasn't literally what you said. Taking the full response into account you can easily interpret it as "I don't bother with mental maths but use a calculator instead, anyone who isn't like me is weird as shit"
That is a bit thought police-y
-
Compare that to a human, who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I've used, except to make sure I'm following the rules of grammar.
Interesting that...
Anthropic also found, among other things, that Claude "sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal 'language of thought'."
Wow, an AI researcher overhyping his own product. He's just waxing poetic.
we don't even have a good sense of what thought IS, please tell Claude to call the philosophers because apparently he's figured out consciousness
-
Anybody who claims they don't "think" before we even completely figure out how they work, or even how human thoughts work, is just spreading anti-AI sentiment beyond what is considered logical.
You should become a better example than an AI by only arguing based on facts rather than things you hallucinate if you want to prove your own position on this matter.
shouldn't you say the inverse is true lol why call it thinking if we don't know what thinking is or what it's doing?
why are you cool with pro ai and against anti ai sentiments? either way it's a value judgment, quit acting like yours is the correct opinion
-
Maybe? Didn't seem like a sales job at the time, more like a warning. You could be right though.
they post articles like that all the time. warnings make great clickbait
-
shouldn't you say the inverse is true lol why call it thinking if we don't know what thinking is or what it's doing?
why are you cool with pro ai and against anti ai sentiments? either way it's a value judgment, quit acting like yours is the correct opinion
I wasn't calling it thinking. I'm saying that people claiming it's not are just jumping the gun. It's also funny that you're simply claiming I am pro-AI without needing any proof. This is what I meant when I said people who are anti-AI should strive to be better than the AI they criticise. Acting based on non-facts makes you no better than an AI with its hallucinations.
It's also funny that you're calling me out when I'm just mirroring what the other guy is doing to make a point. He's acting like his is the correct opinion, yet you're only calling me out because the guy is on your side of the argument. That's simply a bad-faith argument on your part.
-
I wasn't calling it thinking. I'm saying that people claiming it's not are just jumping the gun. It's also funny that you're simply claiming I am pro-AI without needing any proof. This is what I meant when I said people who are anti-AI should strive to be better than the AI they criticise. Acting based on non-facts makes you no better than an AI with its hallucinations.
It's also funny that you're calling me out when I'm just mirroring what the other guy is doing to make a point. He's acting like his is the correct opinion, yet you're only calling me out because the guy is on your side of the argument. That's simply a bad-faith argument on your part.
I see the misunderstanding, sorry. You're still in the wrong though. while you weren't calling it thinking, the article certainly was. THAT'S why we're saying it's not. we're doing what you said we should, but it's the inverse, and you call it anti-AI. the jackass who wrote that article is jumping the gun and we're saying "how tf can you call it thinking" and i see your reply calling that anti AI, seems like a reasonable mistake ye?
-
This post did not contain any content.
Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
-
Calling someone a fascist for that is obviously a bit OTT but you've ignored the "do weird shit" part of the response so it wasn't literally what you said. Taking the full response into account you can easily interpret it as "I don't bother with mental maths but use a calculator instead, anyone who isn't like me is weird as shit"
That is a bit thought police-y
I didn't ignore it, I just interpret it differently, as in: "I don't need to do this unusual stuff that people do without a calculator." Calling something weird doesn't necessarily mean it's off-color or that it's even a trait of the person. In my usage, weird just means unexpected or counterintuitive, and maybe complex enough that the person doesn't bother describing it properly. I know because I use it that way too. Weird doesn't have to mean a third eye on your face every time.
I do want to mention that it's not the first time I've seen a visceral reaction to a passing comment. I usually see this from marginalized groups, and I can assure you, both Kolanki and I are part of those too. And knowing his long comment history, I sincerely doubt his comment meant weird as in "you're weird as shit".
-
Calling someone a fascist for that is obviously a bit OTT but you've ignored the "do weird shit" part of the response so it wasn't literally what you said. Taking the full response into account you can easily interpret it as "I don't bother with mental maths but use a calculator instead, anyone who isn't like me is weird as shit"
That is a bit thought police-y
Except as you demonstrated, it requires quite a few leaps of interpretation, assuming the worst interpretations of OP's statement, which is why it's silly. OP clearly limited their statement to themselves and AI.
Now if OP said, "everyone should use a calculator or die", maybe then it would have been a valid response.
-
I see the misunderstanding, sorry. You're still in the wrong though. while you weren't calling it thinking, the article certainly was. THAT'S why we're saying it's not. we're doing what you said we should, but it's the inverse, and you call it anti-AI. the jackass who wrote that article is jumping the gun and we're saying "how tf can you call it thinking" and i see your reply calling that anti AI, seems like a reasonable mistake ye?
But the comment I replied to didn't just dispute the claim that AI is thinking; it asserted that AI doesn't "think" at all. That puts him in the position of making an unproven claim. In fact, he is making that claim directly, while the article he is criticising only alludes to the LLM "thinking" like a human. That makes his unproven claim even more egregious than the article's.