Anthropic has developed an AI 'brain scanner' to understand how LLMs work and it turns out the reason why chatbots are terrible at simple math and hallucinate is weirder than you thought
-
The research paper looks well written, but I couldn't find any information on whether it's going to be peer reviewed and published in a reputable journal. I have little faith in private businesses that profit from AI providing an unbiased view of how AI works. The first question I'd like answered is whether Anthropic's marketing department reviewed the paper, and whether they offered any corrections or feedback. We've all heard the stories about the tobacco industry paying for papers touting the benefits of smoking and refuting health concerns.
-
Rote memorization should be minimized in school curricula
Memory can improve with training, and it's useful in a large number of contexts. My major beef with rote memorization in schools is that it's usually made to be excruciatingly boring. I'd say that's the bigger problem.
-
I do much the same in my head.
Know what's crazy? We sling bags of mulch, dirt and rocks onto customer vehicles every day. No one, neither coworkers nor customers, will do simple multiplication. Only the most advanced workers do it. No lie.
Customer wants 30 bags of mulch. I look at the given space:
"Let's do 6 stacks of 5."
Everyone proceeds to sling shit around in random piles and count as we go. And then someone loses track and has to shift shit around to check the count.
Yeah, one of my family members is a bricklayer and he can work out a bill of materials in his head based on the dimensions in an architectural plan: given these dimensions and this thickness of mortar joint, I'll need this many bricks, this many bags of mortar, this many bags of sand, this many hours of labor, etc. It's just addition and multiplication, but his colleagues regard him as a freak. And when he first started doing it, if you'd ask him to break down his reasoning, he'd find that difficult.
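The arithmetic behind such an estimate is easy to mechanize, which makes it all the more striking that it reads as a superpower on a job site. A rough sketch; the coverage constants below are placeholder assumptions I've made up for illustration, not real trade figures:
```python
# Rough bill-of-materials sketch in the bricklayer's spirit.
# All coverage constants are made-up placeholders, not trade figures;
# real numbers depend on brick size, joint thickness, and waste.
import math

BRICKS_PER_M2 = 50            # assumed coverage for a single-skin wall
BRICKS_PER_MORTAR_BAG = 250   # assumed
SAND_BAGS_PER_MORTAR_BAG = 3  # assumed

def bill_of_materials(length_m: float, height_m: float) -> dict:
    area = length_m * height_m
    bricks = math.ceil(area * BRICKS_PER_M2 * 1.05)  # 5% waste allowance
    mortar_bags = math.ceil(bricks / BRICKS_PER_MORTAR_BAG)
    sand_bags = mortar_bags * SAND_BAGS_PER_MORTAR_BAG
    return {"bricks": bricks, "mortar_bags": mortar_bags, "sand_bags": sand_bags}

print(bill_of_materials(6.0, 2.4))
# {'bricks': 756, 'mortar_bags': 4, 'sand_bags': 12}
```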
-
Wow, interesting.
Not unexpectedly, the LLM failed to explain its own thought process correctly.
-
But how is this different from your average redditor?
Redditor as "a person active on Reddit"? I don't see where I was talking about humans. Or am I misunderstanding the question?
-
But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
This is not surprising. LLMs are not designed to have any introspection capabilities.
Introspection could probably be tacked onto existing architectures in a few different ways, but as far as I know nobody's done it yet. It will be interesting to see how that might change LLM behavior.
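As a reference point, the carry procedure Claude recites in that quote is easy to write down explicitly. This is the textbook method it claims to use, not what the interpretability work says it actually does:
```python
# The schoolbook procedure from the quote, written out digit by digit:
# add the ones, carry, then add the tens.
def carry_add(a: int, b: int) -> int:
    ones = (a % 10) + (b % 10)            # 6 + 9 = 15
    carry, ones_digit = divmod(ones, 10)  # carry 1, keep 5
    tens = (a // 10) + (b // 10) + carry  # 3 + 5 + 1 = 9
    return tens * 10 + ones_digit

print(carry_add(36, 59))  # 95
```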
-
I think what's wild about it is that it really is surprisingly similar to how we actually think. It's very different from how a computer (calculator) would calculate it.
So it's not a strange method for humans but that's what makes it so fascinating, no?
I mean neural networks are modeled after biological neurons/brains after all. Kind of makes sense...
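If I've read the reporting right, the human-like part is that one pathway makes a rough estimate of the magnitude while another tracks the ones digit exactly. Here's a toy sketch of that idea, a loose illustration only and not Anthropic's actual mechanism:
```python
# Loose illustration only, not Anthropic's actual mechanism: one path makes
# a rough magnitude estimate, another nails the ones digit exactly, and the
# two are reconciled at the end.
def estimate_then_fix(a: int, b: int) -> int:
    ones = (a + b) % 10                     # exact ones-digit path: 6 + 9 ends in 5
    estimate = round(a, -1) + round(b, -1)  # rough magnitude path: 40 + 60 = 100
    # snap the estimate to the nearest number ending in the right digit
    low = estimate - ((estimate - ones) % 10)
    return low if estimate - low <= 5 else low + 10

print(estimate_then_fix(36, 59))  # 95
# Caveat: if the rough estimate is off by more than 5, the snap picks the
# wrong value; the model's real estimate path is presumably far richer.
```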
-
I think it's odd in the sense that it's supposed to be software, so it should already know what 36 plus 59 is in a picosecond instead of doing mental arithmetic like we do
At least that's my takeaway
This is what the ARC-AGI test by Chollet has also shown regarding current AI / LLMs. They have a tendency to approach problems with this trial-and-error method and can be extremely inefficient (in their current form) with anything involving abstract / deductive reasoning.
Most LLMs do terribly at the test, with the most recent breakthrough coming from reasoning models. But even the reasoning models struggle.
ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.
The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.
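For anyone who hasn't seen the format, here's a made-up, much-simplified task in the ARC style (the grids and rule are invented for illustration, not real ARC data):
```python
# A made-up, much-simplified task in the ARC spirit (not real ARC data).
# Grids are lists of ints; the hidden rule in this toy is "mirror left-right",
# which the test-taker must induce from the examples alone.
train_examples = [
    ([[1, 0, 0],
      [2, 0, 0]],
     [[0, 0, 1],
      [0, 0, 2]]),
    ([[0, 3, 0]],
     [[0, 3, 0]]),
]
test_input = [[4, 5, 0]]

def mirror(grid):
    return [list(reversed(row)) for row in grid]

# check the induced rule against every training pair...
assert all(mirror(x) == y for x, y in train_examples)
# ...then apply it to the held-out grid
print(mirror(test_input))  # [[0, 5, 4]]
```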
-
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
If the LLM already knows the full sentence it's going to output from the first word it "guesses", I wonder if you could short-circuit it and have it just give the full sentence instead of doing a cycle for each word of the sentence. That could maybe cut down on LLM energy costs.
I don't think it knows the full sentence; it just doesn't search for the words in the order they will appear in the sentence. It finds the end-words first to make the poem rhyme, then looks for the rest of the words. I do it this way as well, just like many other people trying to create any kind of rhyming text.
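Here's a minimal sketch of that end-first ordering, built around the carrot/rabbit couplet reportedly used in Anthropic's demo; the scaffolding around it is invented and is obviously not how Claude actually generates text:
```python
# Toy sketch of end-first generation. The couplet is the one reportedly
# shown in Anthropic's research; the scaffolding is invented.
def couplet() -> str:
    # step 1: commit to the rhyming line-endings before anything else
    endings = ("grab it", "rabbit")
    leads = ("He saw a carrot and had to",
             "His hunger was like a starving")
    # step 2: only now realize each full line, steering toward its ending
    return ",\n".join(f"{lead} {end}" for lead, end in zip(leads, endings))

print(couplet())
```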
-
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
If the LLM already knows the full sentence it's going to output from the first word it "guesses", I wonder if you could short-circuit it and have it just give the full sentence instead of doing a cycle for each word of the sentence. That could maybe cut down on LLM energy costs.
Interestingly, this is also a technique used when improvising songs; it's called Target Rhyming.
The most effective way is to do A / B^1 / C / B^2 rhymes. You pick the B^2 rhyme in advance, let's say "ibuprofen", and you get all of A and B^1 to think of a rhyme for it:
Oh it's Christmas time
And I was up on my roof when
I heard a jolly old voice
Ask me for ibuprofen

And the audience thinks you're fucking incredible for complex rhymes.
-
you can't trust its explanations as to what it has just done.
I might have had a lucky guess, but this was basically my assumption. You can't ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no 'internal' experience.
Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their 'output voice' as it is to us.
-
'is weirder than you thought'
I am about as likely to click a link with that line as one with 'this one weird trick' or 'side hustle'.
I would really like it if headlines treated us like adults and got rid of clickbaity lines.
They do it because it works on the whole. If straight titles were as effective they'd be used instead.
-
Don't tell me that my thoughts aren't weird enough.
-
They do it because it works on the whole. If straight titles were as effective they'd be used instead.
The one weird trick that makes clickbait work
-
'is weirder than you thought'
I am about as likely to click a link with that line as one with 'this one weird trick' or 'side hustle'.
I would really like it if headlines treated us like adults and got rid of clickbaity lines.
But then you wouldn't need to click on their ad-infested shite website, where 1-2 paragraphs' worth of actual information is stretched into a giant essay so that they can show you more ads the longer you scroll
-
This is pretty normal, in my opinion. Every time people complain about common core arithmetic there are dozens of us who come out of the woodwork to argue that the concepts being taught are important for deeper understanding of math, beyond just rote memorization of pencil and paper algorithms.
The problem with common core math isn’t that rounding is inherently bad, it’s that you don’t start with that as a framework.
-
I might. Then I can subtract 74 to get 74*14, and subtract 28 more to get 72*14.
I don't generally do that to 'weird' numbers, I usually get closer to multiples of 5, 9, 10, or 11.
But a computer stores information differently. Perhaps it moves closer to numbers with simpler binary addresses.
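Assuming the starting product under discussion was 74*15 (the parent comment isn't shown here), the adjustment chain works out like this:
```python
# Illustrative only: "start from an easy anchor and adjust", assuming the
# starting product was 74 * 15.
anchor = 74 * 15           # 1110, easy because 74 * 15 = 74 * 10 + 74 * 5
print(anchor - 74)         # one fewer 74  -> 74 * 14 = 1036
print(anchor - 74 - 28)    # two fewer 14s -> 72 * 14 = 1008
```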