Reasoning models don't always say what they think.
-
Have they considered that a chain of reasoning can actually change the output, since it's fed back into the input prompt? That's great for math and logic problems, but I don't think I'd trust the alignment checks.
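To make that concrete: in autoregressive decoding, every chain-of-thought token is appended to the context before the next token is sampled, so the final answer is literally conditioned on the reasoning text. A minimal sketch (Python; `toy_next_token` is a hypothetical stand-in that replays a canned trace, not any real model's API):

```python
# Minimal sketch of autoregressive chain-of-thought decoding.
# `toy_next_token` is a hypothetical stand-in for a model's sampling
# step; it just replays a canned trace so the loop actually runs.

CANNED_TRACE = ["2 + 2", " = 4.", " Answer:", " 4", "<end>"]

def toy_next_token(context: str, step: int) -> str:
    # A real model samples from P(token | context); the key point is
    # that it sees the whole context, including its prior reasoning.
    return CANNED_TRACE[step]

def answer_with_cot(question: str) -> str:
    context = question + "\nLet's think step by step.\n"
    for step in range(len(CANNED_TRACE)):
        token = toy_next_token(context, step)
        if token == "<end>":
            break
        # Each reasoning token is fed back into the context, so every
        # later token (including the final answer) is conditioned on
        # it. The chain of thought is part of the computation, not
        # just a report about what the model did internally.
        context += token
    return context

print(answer_with_cot("What is 2 + 2?"))
```

Which is also why the faithfulness worry is real: the text that steers the computation doesn't have to be an honest description of it.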
-
Because they do not think.
-
So chain of thought is an awful experiment that doesn't let you know how an AI reasons. Instead of admitting this, AI researchers anthropomorphize yet another test result and turn it into the model "hiding" its thought process from you. Whatever.
-
> Have they considered that a chain of reasoning can actually change the output, since it's fed back into the input prompt? That's great for math and logic problems, but I don't think I'd trust the alignment checks.
It’s basically using a reference point and they want to make it sound fancier.
-
> Because they do not think.

Even people don't always say what they think ... and that applies to the few who actually do think.
-
I like this part:

> There’s no specific reason why the reported Chain-of-Thought must accurately reflect the true reasoning process; there might even be circumstances where a model actively hides aspects of its thought process from the user.
-