DeepSeek might not be such good news for energy after all
-
And here I thought that the energy consumption was in the training.
-
The FUD is hilarious. Even an LLM would tell you the article compares apples and oranges... FFS.
-
Everyone is making way too much money off of this for a blanket ban to ever happen.
-
It's more like comparing them while they use the same fuel (the article directly compares them in joules): say the train also runs on gasoline. The car is far more "independent" and controllable, and "doesn't waste fuel driving to places you don't want to go", so it's seen as "better" and more appealing. But that wide appeal, and thus wide usage, creates far more demand for gasoline, dries up the planet, and clogs the streets with cars wasting fuel idling at traffic stops.
-
Longer != Detailed
Generally what they're calling out is that DeepSeek currently rambles more. With LLMs the challenge is getting the right answer as succinctly as possible, because each extra token costs real time/money.
That being said, I suspect that really it's all roughly the same. We've been seeing this back and forth with LLMs for a while and DeepSeek, while using a different approach, doesn't really break the mold.
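To make the "each extra token costs time/money" point concrete, here's a toy back-of-envelope in Python. All the numbers (per-token price, token counts) are made up for illustration, not measurements of any real model:

```python
# Toy sketch: how much extra verbosity costs at scale.
# The price and token counts are illustrative assumptions only.

def response_cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost for one response, billed per output token."""
    return tokens * price_per_million / 1_000_000

concise = response_cost(200, price_per_million=10.0)   # terse answer
rambling = response_cost(600, price_per_million=10.0)  # 3x the tokens

print(f"concise:  ${concise:.4f}")
print(f"rambling: ${rambling:.4f}")
# The per-response difference looks tiny, but multiply it out:
print(f"extra cost per 1M requests: ${(rambling - concise) * 1_000_000:,.0f}")
```

Same linear story for latency: triple the output tokens is roughly triple the generation time, which is why verbosity gets called out even when the final answer is identical.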
-
Yeah, I was thinking diesel powered trains
-
The AI models use the same fuel for energy.
-
Yes, sorry, where I live it's pretty normal for cars to be diesel powered. I agree with you!
-
This is more about the "reasoning" aspect of the model, where it outputs a bunch of "thinking" before the actual result. In a lot of cases that easily adds 2-3x to the number of tokens that need to be generated. This isn't really useful output; it's the model getting itself into a state where it can respond better.
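Rough arithmetic on what that overhead means for energy, assuming (as the article's joules comparison implies) that per-query energy scales roughly linearly with tokens generated. The joules-per-token figure below is a placeholder, not a real measurement:

```python
# Back-of-envelope: if "thinking" triples the tokens generated and
# energy is roughly linear in output tokens, per-query energy triples.
# joules_per_token here is an invented placeholder value.

def query_energy_j(output_tokens: int, joules_per_token: float) -> float:
    """Energy for one response, assuming linear cost per output token."""
    return output_tokens * joules_per_token

plain = query_energy_j(300, joules_per_token=2.0)               # direct answer
with_reasoning = query_energy_j(300 * 3, joules_per_token=2.0)  # 3x tokens

print(f"plain: {plain} J, with reasoning: {with_reasoning} J, "
      f"ratio: {with_reasoning / plain:.1f}x")
```

So even if the reasoning model is no less efficient per token, the hidden "thinking" alone can multiply the energy per answer.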
-
A bit flawed. What if the same prompts are used but both models are required to keep their responses equally brief?