I'm actually more medium on this!
-
Only 32K context without YaRN, and with YaRN Qwen 2.5 was kinda hit or miss.
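(For anyone who wants to try anyway, here's a rough sketch of stretching the native window with YaRN by overriding rope_scaling before loading. The checkpoint name and scaling factor are placeholders I picked for illustration, not a recommendation, and the exact config keys depend on your transformers version.)

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Sketch only: enable YaRN-style RoPE scaling on a Qwen-family checkpoint.
# Model name and factor below are assumptions for illustration.
model_name = "Qwen/Qwen3-30B-A3B"

config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "rope_type": "yarn",                        # older transformers releases use the key "type"
    "factor": 4.0,                              # 32K native window x4 ~= 128K effective context
    "original_max_position_embeddings": 32768,  # the pre-training window the model actually saw
}

model = AutoModelForCausalLM.from_pretrained(model_name, config=config)
```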
-
No 32B base model. Is that a middle finger to the DeepSeek distils?
-
It really feels like "more of Qwen 2.5/1.5" architecture-wise. I was hoping for better attention mechanisms, QAT, a BitNet test, logit distillation... something new other than some training-data optimizations and more scale.
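(By "logit distillation" I just mean the classic soft-target KD loss, roughly like the sketch below; purely illustrative, nothing Qwen-specific.)

```python
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Hinton-style knowledge distillation: KL divergence between temperature-softened
    # teacher and student token distributions. Generic sketch, not any lab's actual recipe.
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # scaling by t^2 keeps gradient magnitudes comparable across temperatures
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)
```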
-
There actually is a 32B dense.
-
Yeah, but only an Instruct version. They didn't leave any 32B base model like they did for the 30B MoE.
That could be intentional, to stop anyone from building on their 32B dense model.
-
Huh, I didn't realize that, thanks. Lame that they would hold back the base model at the biggest size most consumers would ever run.
-
It could be an oversight; no one has answered yet. Not many people asking either, heh.