Anthropic has developed an AI 'brain scanner' to understand how LLMs work and it turns out the reason why chatbots are terrible at simple math and hallucinate is weirder than you thought
-
Someone put 69 into the research and then into the article. Nice trolling.
-
How I'd do it is basically
72 * (10+3)
(72 * 10) + (72 * 3)
(720) + (3*(70+2))
(720) + (210+6)
(720) + (216)
936
Basically I break the numbers apart into easier chunks and then add them together.
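For what it's worth, here's the same break-it-into-chunks idea as a tiny Python sketch; the helper name is mine, invented for illustration, not anything from the thread or the article.

def chunked_multiply(a: int, b: int) -> int:
    # Split the second factor into tens and ones, multiply each chunk, then add.
    tens, ones = divmod(b, 10)           # 13 -> (1, 3)
    partial_tens = a * tens * 10         # 72 * 10 = 720
    partial_ones = a * ones              # 72 * 3  = 216
    return partial_tens + partial_ones   # 720 + 216 = 936

assert chunked_multiply(72, 13) == 936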
-
Fascist. If someone does maths differently than your preference, it's not "weird shit". I'm facile with mental math despite what's perhaps a non-standard approach, and it's quite functional to be able to perform simple to moderate levels of mathematics mentally without relying on a calculator.
-
I am talking about the AI. It's already a computer. It shouldn't need to do anything other than calculate the equations. It doesn't have a brain, it doesn't think like a human, so it shouldn't need any special tools or ways to help it do math. It is a calculator, after all.
-
I wouldn't even attempt that in my head.
I can't keep track of things and then recall them later for the final result.
-
Anything that claims it "thinks" in any way I immediately dismiss as an advertisement of some sort. These models are doing very interesting things, but it is in no way "thinking" as a sentient mind does.
Anybody who claims they don't "think" before we've even completely figured out how they work, or even how human thought works, is just spreading anti-AI sentiment beyond what is logical.
If you want to prove your position on this matter, set a better example than an AI by arguing from facts rather than things you hallucinate.
-
This reminds me of learning a shortcut in math class but also knowing that the lesson didn't cover that particular method. So, I use the shortcut to get the answer on a multiple choice question, but I use the method from the lesson when asked to show my work. (e.g. Pascal's Pyramid vs Binomial Expansion).
It might not seem like a shortcut for us, but something about this LLM's training makes it easier to use heuristics. That's actually a pretty big deal for a machine to choose fuzzy logic over algorithms when it knows that the teacher wants it to use the algorithm.
You're anthropomorphising quite a bit there. It isn't trying to be deceptive; it's building two mostly unrelated pieces of text, deciding in one case that the fuzzy logic yields the most likely valid response and in the other that the description of the algorithm is the most likely response. As far as I can tell there's neither a reward for lying about the process nor any awareness anywhere in this of what the process actually was.
Still interesting (but unsurprising) that it's not getting there by doing actual maths, though.
-
"Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains."
That is precisely how I do math. I feel a little targeted that they called this odd.
I think it's odd in the sense that it's supposed to be software, so it should already know what 36 plus 59 is in a picosecond, instead of doing mental arithmetic like we do.
At least that's my takeaway
-
Which is exactly how we do it.
We also check whether the word that popped into our heads actually rhymes by saying it out loud. Being able to take actual validation steps is a bigger difference than just being a little more robust.
We also have non-list based methods like breaking the word down into smaller chunks to try to build up hopefully more novel rhymes. I imagine professionals have even more tools, given the complexity of more modern rhyme schemes.
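In case it helps, here's a very rough Python sketch of that chunking idea; the function and the word list are made up for illustration, and real rhyme depends on pronunciation rather than spelling, so this is only a crude approximation.

def rough_rhymes(word: str, vocabulary: list[str], key_len: int = 3) -> list[str]:
    # Use the last few letters as a crude rhyme key; a pronunciation-aware
    # tool would do far better than this spelling-based shortcut.
    key = word[-key_len:].lower()
    return [w for w in vocabulary if w != word and w.lower().endswith(key)]

print(rough_rhymes("rabbit", ["habit", "orbit", "exhibit", "hat"]))  # ['habit', 'orbit', 'exhibit']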
-
I think what's wild about it is that it really is surprisingly similar to how we actually think. It's very different from how a computer (calculator) would calculate it.
So it's not a strange method for humans but that's what makes it so fascinating, no?
Yes, agreed. And calculators are essentially tabulators, and operate almost just like a skilled person using an abacus.
We shouldn't really be surprised because we designed these machines and programs based on our own human experiences and prior solutions to problems. It's still neat though.
-
To understand what's actually happening, Anthropic's researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.
Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it's a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.
This is why LLMs are so patchy at math. (Image credit: Anthropic)
Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
In other words, not only does the model use a very, very odd method to do the maths, you can't trust its explanations as to what it has just done. That's significant and shows that model outputs can not be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.
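To make the two-path addition behaviour described above concrete, here is a small deterministic toy in Python. To be clear, this is written for illustration only and is not Anthropic's actual circuit, which works with fuzzy, parallel features rather than exact digit arithmetic.

def two_path_add(a: int, b: int) -> int:
    # One path pins down the exact last digit from the ones column.
    ones_sum = a % 10 + b % 10                     # 6 + 9 = 15 for 36 + 59
    last_digit, carry = ones_sum % 10, ones_sum // 10
    # The other path settles the rough magnitude ("92ish"): tens parts plus the carry.
    magnitude = (a // 10 + b // 10 + carry) * 10   # 30 + 50 + 10 = 90
    # Combining the two paths gives the answer.
    return magnitude + last_digit                  # 90 + 5 = 95

assert two_path_add(36, 59) == 95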
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."
Anthropic discovered that their Claude LLM didn't just predict the next word. (Image credit: Anthropic)
Anthropic also found, among other things, that Claude "sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal 'language of thought'."
Anywho, there's apparently a long way to go with this research. According to Anthropic, "it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words." And the research doesn't explain how the structures inside LLMs are formed in the first place.
But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don't understand—actually work. And that has to be a good thing.
My favourite part of the day: commenting LLMentalist under AI articles.
-
Unfortunately, these articles are often written by people who don't know enough to realize they're missing important nuances.
It also doesn't help that the AI companies deliberately use language to make their models seem more human-like and cogent. Saying that the model e.g. "thinks" in "conceptual spaces" is misleading imo. It abuses our innate tendency to anthropomorphize, which I guess is very fitting for a company with that name.
On this point I can highly recommend this open access and even language-wise accessible article: https://link.springer.com/article/10.1007/s10676-024-09775-5 (the authors also appear on an episode of the Better Offline podcast)
-
I wouldn't even attempt that in my head.
I can't keep track of things and then recall them later for the final result.
Pen and paper maths I'm pretty decent at, but ask me to calculate anything in my head and it's anyone's guess if I remembered to carry the 1 or not. Ever since learning about aphantasia I'm wondering if the lack of being able to visually store values has something to do with it.
-
Times 5 and times 10 tables are really easy for me. So yeah, in my mind it's an easier computation.
That being said, having a result of a little over 1000 gives me an estimate of the number's magnitude: it's around a thousand. It might be more or less, but it's not far from there.
-
(72 * 10) + (2 * 3) = x
There, fixed, because otherwise order of operation gets fucky.
No, it doesn't; multiplication and division always take precedence over addition and subtraction. You'd need parentheses to clarify what's in the divisor, since that can be ambiguous in line notation.
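A quick way to see this is that Python follows the same convention, with * binding tighter than +, so the parentheses change nothing:

assert 72 * 10 + 72 * 3 == (72 * 10) + (72 * 3) == 936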
-
To understand what's actually happening, Anthropic's researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.
Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it's a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.
This is why LLMs are so patchy at math. (Image credit: Anthropic)
Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
In other words, not only does the model use a very, very odd method to do the maths, you can't trust its explanations as to what it has just done. That's significant and shows that model outputs can not be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."
Anthropic discovered that their Claude LLM didn't just predict the next word. (Image credit: Anthropic)
Anthropic also found, among other things, that Claude "sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal 'language of thought'."
Anywho, there's apparently a long way to go with this research. According to Anthropic, "it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words." And the research doesn't explain how the structures inside LLMs are formed in the first place.
But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don't understand—actually work. And that has to be a good thing.
"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."
How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results it of course makes absolute sense to internally think ahead, come up with the full sentence you're gonna say, and then just output the next token needed to continue that sentence. It's going to re-do that process for every single token, which wastes a lot of energy, but for the quality of the results this is the best approach you can take, and that's something I felt was kinda obviously what these models must be doing on one level or another.
I'd be interested to see whether there's massive potential for efficiency improvements by making the model able to access and reuse the "thinking" it has already done for previous tokens.
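For what it's worth, production transformer inference already reuses part of the per-token work through a KV cache at the attention level, though whether higher-level "plans" get recomputed each step is a separate question. Below is a toy Python sketch of the general reuse idea only; every name in it is invented and it has nothing to do with Claude's real internals.

import hashlib

def encode(token: str) -> int:
    # Stand-in for the expensive per-token work a real model would do.
    return int(hashlib.sha256(token.encode()).hexdigest(), 16) % 1000

def generate(prompt: list[str], n_new: int) -> list[str]:
    tokens = list(prompt)
    cache = [encode(t) for t in tokens]   # each token is processed exactly once
    for _ in range(n_new):
        state = sum(cache)                # reuse everything computed so far
        next_token = f"tok{state % 7}"    # toy next-token rule, not a real model
        tokens.append(next_token)
        cache.append(encode(next_token))  # only the new token needs fresh work
    return tokens

print(generate(["the", "cat"], 3))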
-
Pen and paper maths I'm pretty decent at, but ask me to calculate anything in my head and it's anyone's guess if I remembered to carry the 1 or not. Ever since learning about aphantasia I'm wondering if the lack of being able to visually store values has something to do with it.
Ever since learning about aphantasia I’m wondering if the lack of being able to visually store values has something to do with it.
Here's some anecdotal evidence. Until I was 12 or 13, I could do absurdly complex arithmetical calculations in my head. My memory of it was of visualizing intermediate calculations as if they were on a screen in my head. I'd close my eyes to minimize distracting external stimuli. I'd get pocket money because my dad would get his friends to bet on whether I could correctly multiply two 7-digit phone numbers, and when I won, which I always did, he'd give the money to me. He had an old-school electromechanical calculator he'd use to check the results.
I was able to use a similar visualization technique to memorize long passages of music and text. That stayed with me post-puberty, though again to a lesser extent.
Once puberty kicked in, my ability to visualize declined significantly, though I also learned some mental arithmetic tricks that I still use now. I was able to get an MS in mathematics without much effort, since that relied on higher-level reasoning and not all that much on powerful memory or visualization.
So I think your comment about aphantasia is at least directionally correct, as applied to people. But there's little reason to assume LLMs would do things the same way a human mind does, though both might operate under similar information-theoretic constraints.
-
The research paper looks well written, but I couldn't find any information on whether it is going to be peer reviewed and published in a reputable journal. I have little faith in private businesses who profit from AI providing an unbiased view of how AI works. I think the first question I'd like answered is: did Anthropic's marketing department review the paper, and did they offer any corrections or feedback? We've all heard the stories about the tobacco industry paying for papers to be written about the benefits of smoking and refuting health concerns.
-
Rote memorization should be minimized in the school curriculum.
Memory can improve with training, and it's useful in a large number of contexts. My major beef with rote memorization in schools is that it's usually made to be excruciatingly boring. I'd say that's the bigger problem.