DeepSeek's AI breakthrough bypasses industry-standard CUDA, uses assembly-like PTX programming instead
-
[email protected] replied to [email protected]
There seems to be some confusion here about what PTX is -- it does not bypass the CUDA platform at all, nor does this diminish NVIDIA's monopoly here. CUDA is a programming environment for NVIDIA GPUs, but many people say "CUDA" when they mean the C/C++ extension in CUDA (CUDA can be thought of as a C/C++ dialect here). PTX is NVIDIA-specific and sits at a similar level to LLVM's IR. If anything, DeepSeek is more dependent on NVIDIA than everyone else, since PTX is tightly coupled to their specific GPUs. Things like ZLUDA (an effort to run CUDA code on AMD GPUs) won't work. This is not a feel-good story here.
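To make the relationship concrete, here's a minimal sketch (the kernel and file names are mine, purely for illustration) of inline PTX embedded in an ordinary CUDA C++ kernel. nvcc compiles it like any other CUDA code, which is the point: PTX is a layer inside the CUDA toolchain, not a way around it.

```cuda
// inline_ptx.cu -- hypothetical example. Build with: nvcc inline_ptx.cu
#include <cstdio>

__global__ void add_kernel(const int* a, const int* b, int* c) {
    int i = threadIdx.x;
    int result;
    // One hand-written PTX instruction (32-bit integer add) in place of
    // the C expression a[i] + b[i]; nvcc compiles everything around it
    // down to PTX anyway.
    asm volatile("add.s32 %0, %1, %2;"
                 : "=r"(result)
                 : "r"(a[i]), "r"(b[i]));
    c[i] = result;
}

int main() {
    const int n = 4;
    int ha[n] = {1, 2, 3, 4}, hb[n] = {10, 20, 30, 40}, hc[n];
    int *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(int));
    cudaMalloc(&db, n * sizeof(int));
    cudaMalloc(&dc, n * sizeof(int));
    cudaMemcpy(da, ha, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, n * sizeof(int), cudaMemcpyHostToDevice);
    add_kernel<<<1, n>>>(da, db, dc);
    cudaMemcpy(hc, dc, n * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%d ", hc[i]);  // 11 22 33 44
    printf("\n");
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```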
-
[email protected] replied to [email protected]
Never forget, kids: the market can stay irrational much longer than you can stay solvent.
-
[email protected] replied to [email protected]
True.
That's why I tend to make small plays instead of being an absolute degenerate gambler.
-
[email protected] replied to [email protected]
I don't think anyone is saying CUDA as in the platform, but as in the API for higher-level languages like C and C++.
-
[email protected] replied to [email protected]
Some commenters on this post are clearly not aware that PTX is part of the CUDA environment. If you already know this, you aren't who I'm trying to inform.
-
[email protected] replied to [email protected]
Aah, I see them now.
-
[email protected] replied to [email protected]
I wish that were true, but this doesn't threaten any monopoly.
-
[email protected] replied to [email protected]
It certainly does.
Until last week, you absolutely NEEDED an NVidia GPU equipped with CUDA to run all AI models.
Today, that is simply not true (watch the video at the end of this comment).
I watched this video and my initial reaction to this news was validated and then some: it made me even more bearish on NVDA.
-
[email protected] replied to [email protected]
This specific tech is, yes, Nvidia-dependent. The game changer is that a team was able to beat the big players with less than $10 million. They did it by operating at a low level of Nvidia's stack, practically machine code. What this team has done, another could do. Building for AMD's GPU ISA would be tough, but not impossible.
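To give a feel for what "practically machine code" means here, below is a hypothetical sketch (the kernel, its name double_it, and the file layout are all made up for illustration) of hand-written PTX being JIT-loaded through the CUDA driver API -- roughly the workflow that writing PTX directly implies:

```cuda
// ptx_demo.cu -- illustrative only. Build with: nvcc -o ptx_demo ptx_demo.cu -lcuda
// Requires a GPU of compute capability sm_70 or newer for this PTX target.
#include <cuda.h>
#include <cstdio>

// Hand-written PTX for a kernel that doubles one integer in place.
const char* ptx = R"(
.version 7.0
.target sm_70
.address_size 64

.visible .entry double_it(.param .u64 p) {
    .reg .u64 %rd<2>;
    .reg .u32 %r<3>;
    ld.param.u64       %rd1, [p];
    cvta.to.global.u64 %rd1, %rd1;
    ld.global.u32      %r1, [%rd1];
    add.s32            %r2, %r1, %r1;
    st.global.u32      [%rd1], %r2;
    ret;
}
)";

int main() {
    cuInit(0);
    CUdevice dev;  cuDeviceGet(&dev, 0);
    CUcontext ctx; cuCtxCreate(&ctx, 0, dev);

    CUmodule mod;  cuModuleLoadData(&mod, ptx);           // JIT-compile the PTX
    CUfunction fn; cuModuleGetFunction(&fn, mod, "double_it");

    int host = 21;
    CUdeviceptr dptr; cuMemAlloc(&dptr, sizeof(int));
    cuMemcpyHtoD(dptr, &host, sizeof(int));

    void* args[] = { &dptr };
    cuLaunchKernel(fn, 1, 1, 1, 1, 1, 1, 0, 0, args, 0);  // 1 block, 1 thread

    cuMemcpyDtoH(&host, dptr, sizeof(int));
    printf("%d\n", host);                                  // prints 42
    cuMemFree(dptr); cuModuleUnload(mod); cuCtxDestroy(ctx);
    return 0;
}
```

Note how even this "direct" path still runs through NVIDIA's driver and its PTX JIT -- which is the other commenters' point about lock-in.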
-
[email protected] replied to [email protected]
Mate, that means they are using PTX directly. If anything, they are more dependent on NVIDIA and the CUDA platform than anyone else.
-
[email protected] replied to [email protected]
> you absolutely NEEDED an NVidia GPU equipped with CUDA
-
[email protected] replied to [email protected]
Ahh. Thanks for this insight.
-
[email protected] replied to [email protected]
Thanks for the corrections.
-
[email protected] replied to [email protected]
I thought everyone liked to hate on Metal.
-
[email protected] replied to [email protected]
It's written in Nvidia's PTX instruction set, which is part of the CUDA ecosystem.
Hardly going to affect Nvidia.
-
[email protected] replied to [email protected]
I thought CUDA was NVIDIA-specific too; for a general version you had to use OpenACC or something.
-
[email protected] replied to [email protected]
CUDA is NVIDIA-proprietary, but they may be open to licensing it? I think?
-
[email protected] replied to [email protected]
The big win I see here is the amount of optimisation they achieved by moving from high-level CUDA to lower-level PTX. This suggests that developing these models can be made a lot more energy-efficient going forward, something I hope can be extended to their execution as well. As it stands, "AI" (read: LLMs and image-generation models) consumes way too many resources to be sustainable.
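As a toy illustration of what that kind of PTX-level tuning can look like (my own example, not anything from DeepSeek's code): in a CUDA kernel you can swap a full-precision operation for a cheaper approximate PTX instruction wherever the accuracy loss is acceptable.

```cuda
// Illustrative micro-optimisation, not DeepSeek's actual code.
__device__ float fast_reciprocal(float x) {
    float r;
    // rcp.approx.f32 is a single approximate-reciprocal PTX instruction,
    // much cheaper than the IEEE-correct division that 1.0f / x compiles
    // to, at the cost of a few ULPs of accuracy.
    asm("rcp.approx.f32 %0, %1;" : "=f"(r) : "f"(x));
    return r;
}
```

(nvcc's -use_fast_math flag makes similar substitutions automatically across a whole file; dropping to PTX just gives instruction-level control over exactly where they happen.)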
-
[email protected] replied to [email protected]
It's already happening. This article takes a long look at many of the rising threats to Nvidia. Some highlights:
- Google has been running on their own homemade TPUs (tensor processing units) for years, and say they're on the 6th generation of those.
- Some AI researchers are building an entirely AMD-based stack from scratch, essentially writing their own drivers and utilities to make it happen.
- Cerebras.ai is creating their own AI chips using a unique whole-die system. They make an AI chip the size of an entire silicon wafer (30 cm square) with 900,000 micro-cores.

So yeah, it's not just "China AI bad" but that the entire market is catching up and innovating around Nvidia's monopoly.
-
[email protected] replied to [email protected]
Yeah, I'd like to see size comparisons too. The CUDA stack is massive.