Well, you still need the right kind of hardware to run it, and my money has been on AMD to deliver the solutions for that.
-
[email protected] replied to [email protected] last edited by
And you are basically a single consumer with a personal car relative to those data centers and cloud computing providers.
YOUR workload works well with an FPGA. Good for you, take advantage of that to the best degree you can.
People/companies who want to run newer models that haven't been optimized for/don't support FPGAs? You get back to the case of "Well... I can run a 25% cheaper node for twice as long?". That isn't to say that people shouldn't be running these numbers (most companies WOULD benefit from the cheaper nodes for 24/7 jobs and the like). But your use case is not everyone's use case.
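Just to spell out the math behind that quote (the 25%-cheaper / twice-as-long figures are from the hypothetical above, and the 10-hour job length is a made-up number):

```python
# Back-of-the-envelope cost comparison for the hypothetical above:
# a node that costs 25% less per hour but needs twice the wall-clock time.
# All numbers are illustrative, not benchmarks.

fast_node_rate = 1.00   # cost per hour on the fast node (normalized)
cheap_node_rate = 0.75  # 25% cheaper per hour

job_hours_fast = 10                    # assumed runtime on the fast node
job_hours_cheap = 2 * job_hours_fast   # twice as long on the cheap node

cost_fast = fast_node_rate * job_hours_fast      # 10.0
cost_cheap = cheap_node_rate * job_hours_cheap   # 15.0

print(f"fast node:  {cost_fast:.2f}")
print(f"cheap node: {cost_cheap:.2f}")
# The "25% cheaper" node ends up ~50% more expensive for the same job,
# before even counting the extra wall-clock time you waited for it.
```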
And it, once again, boils down to: if people are going to require the latest and greatest Nvidia, what incentive is there in spending significant amounts of money getting it to work on a five-year-old AMD? Which is where smaller businesses and researchers looking for a buyout come into play.
At the end of the day, every company will choose to do it faster and cheaper, and nothing about Nvidia hardware fits into either of those categories unless you’re talking about milliseconds of timing, which THEN only fits into a mold of OpenAI’s definition.
Faster is almost always cheaper. There have been decades of research into this and it almost always boils down to it being cheaper to just run at full speed (if you have the ability to) and then turn it off rather than run it longer but at a lower clock speed or with fewer transistors.
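A rough sketch of that "race to idle" idea, with made-up power numbers (the 50 W / 150 W figures and the 40% dynamic-power assumption at half clock are purely illustrative, not measurements from any real chip):

```python
# "Race to idle" sketch: energy = power * time. Dropping the clock lowers
# dynamic power, but the job takes longer and the static/baseline power is
# paid for every extra hour the machine stays on.

static_power = 50.0          # watts just for being powered on (assumed)
dynamic_full = 150.0         # extra dynamic power at full clock (assumed)
hours_at_full_speed = 1.0    # assumed job length at full clock

def energy_wh(dynamic_power, slowdown):
    """Total energy if the job runs `slowdown`x longer, then powers off."""
    hours = hours_at_full_speed * slowdown
    return (static_power + dynamic_power) * hours

race_to_idle = energy_wh(dynamic_full, slowdown=1.0)           # (50+150)*1 = 200 Wh
# Half the clock: assume dynamic power drops to ~40% of full (generous),
# but the job takes twice as long and the baseline power doubles with it.
slow_and_steady = energy_wh(0.4 * dynamic_full, slowdown=2.0)  # (50+60)*2  = 220 Wh

print(f"run fast, then power off: {race_to_idle:.0f} Wh")
print(f"run at half clock:        {slow_and_steady:.0f} Wh")
```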
And nVidia wouldn't even let the word "cheaper" see the glory that is Jensen's latest jacket that costs more than my car does. But if you are somehow claiming that "faster" doesn't apply to that company then... you know nothing (... Jon Snow).
unless you’re talking about milliseconds of timing
So... it's not faster unless you are talking about time?
Also, milliseconds really DO matter when you are trying to make something responsive and are already dealing with round-trip times to a client. And they add up quite a bit when you are trying to lower your overall footprint so that you only need 4 nodes instead of 5.
They don't ALWAYS add up, depending on your use case. But for the data centers that are selling compute by time? Yeah, time matters.
So I will just repeat this: Your use case is not everyone's use case.
-
[email protected] replied to [email protected] last edited by
I remember Xilinx from way back in the 90s when I was taking my EE degree, so they were hardly a fledgling in 2019.
Not disputing your overall point, just that one detail; it stood out because Xilinx is a name I remember well, mostly because it's unusual.
-
[email protected] replied to [email protected] last edited by
They were kind of pioneering the space, but about to collapse. AMD did good by scooping them up.
-
[email protected] replied to [email protected] last edited by
I mean... I can shut this down pretty simply. Nvidia makes GPUs that are currently used as a blunt-force tool, which is dumb. Now that the grift has been blown, OpenAI, Anthropic, Meta, and all the others trying to build a business around really simple tooling that is open source are about to come under so much scrutiny over cost that everyone will figure out there are cheaper ways to do this.
Pro AMD, con Nvidia. It's really simple.
-
[email protected] replied to [email protected] last edited by
Ah. Apologies for trying to have a technical conversation with you.
-
[email protected] replied to [email protected] last edited by
FPGAs have been a thing for ages.
If I remember correctly (I learned this stuff three decades ago), they were basically an improvement on logic circuits without clocks (think stuff like NAND and XOR gates: digital signals just go in and the result comes out on the other side with no delay beyond that caused by analog elements such as parasitic inductances and capacitances, so without waiting for a clock transition).
The thing is, back then clocking of digital circuits really took off, because it's WAY simpler to have things done one stage at a time, with a clock synchronizing when results are read from one stage and passed to the next. Different gates have different delays, so making sure results are only read after the slowest path has settled is complicated. As a result, all CPU and GPU architectures nowadays are based on having a clock, with clock transitions dictating things like when each step of processing a CPU/GPU instruction starts.
Circuits without clocks have the capability of being way faster than circuits with clocks, if you can manage the problem of different digital elements having different delays in producing results. I think what we're seeing here is a revival of circuits without clocks (or at least of blocks of logic done between clock transitions which are much longer and more complex than the processing of a single GPU instruction).
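A toy numeric sketch of that trade-off (the per-stage delays below are made-up numbers, not real gate characteristics):

```python
# In a clocked design, every stage has to wait for a clock period long
# enough for the SLOWEST stage; in a clockless (asynchronous/combinational)
# chain, each result simply ripples to the next stage as soon as it's ready.

stage_delays_ns = [0.4, 1.1, 0.6, 0.9, 0.5]   # assumed per-stage logic delays

# Clockless: total latency is just the sum of the actual delays.
async_latency = sum(stage_delays_ns)                   # 3.5 ns

# Clocked: the clock period must cover the slowest stage (margins ignored
# here), and every stage costs one full period.
clock_period = max(stage_delays_ns)                    # 1.1 ns
clocked_latency = clock_period * len(stage_delays_ns)  # 5.5 ns

print(f"clockless chain latency:  {async_latency:.1f} ns")
print(f"clocked pipeline latency: {clocked_latency:.1f} ns")
# The clocked version is far easier to design and verify (which is why it
# won out), but it pays the worst-case stage delay at every single step.
```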
-
[email protected] replied to [email protected] last edited by
Yes, but I'm not sure what your argument is here.
The path of least resistance to an outcome (in this case, whatever you program it to do) is faster.
For waterfall-style flows, an FPGA makes absolute sense for neural networks as they operate now.
I'm confused by your argument against this and why a GPU would be better. The benchmarks are out in the world; go look them up.
-
[email protected] replied to [email protected] last edited by
I gave you an explanation of how it is used and perceived. You can ignore that all day long, but the point is still valid.
-
[email protected] replied to [email protected] last edited by
What "point"?
Your "point" was "Well I don't need it" while ignoring that I was referring to the market as a whole. And then you went on some Team Red rant because apparently AMD is YOUR friend or whatever.
-
[email protected] replied to [email protected] last edited by
I'm not making an argument against it, just clarifying where it sits as a technology.
As I see it, it's like electric cars - a technology that was overtaken by something else in the early days of its domain, even though it was the first to come out (the first cars were electric and the ICE engine was invented later), and which now has a chance to be successful again because many other things have changed in the meanwhile and we're a lot closer to the limits of the tech that did get widely adopted back then.
It actually makes a lot of sense to improve the speed of what programming can do by making it capable of also working outside the step-by-step instruction execution straitjacket that is the CPU/GPU clock.