Apple reveals M3 Ultra, taking Apple silicon to a new extreme
-
You pick neither, and enforce correct usage of both in advertised products.
-
512 GiB is half a tebibyte. 512 GB is just under 477 GiB.
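A quick sketch in Python, for anyone who wants to check the conversion:

    # 512 GB (decimal SI prefix) expressed in GiB (binary IEC prefix)
    gb_bytes = 512 * 10**9   # what "512 GB" advertises
    print(gb_bytes / 2**30)  # ~476.84 GiB, i.e. just under 477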
-
Honestly, the base-level M1 mini is still one hell of a computer. I'm typing this on one right now, complete with only 8 GB of RAM, and it hasn't yet felt in any way underpowered.
Encoded some FLAC files to m4a with XLD this morning: 16 files totalling 450 MB; it took 10 seconds to complete. With my workflows I can't imagine needing much more power than that.
-
Yup.
- 512 GB > 1 TB/2 - what the article claims
- 512 GiB = 1 TiB/2 - what many assume
- don't mix GiB and GB
-
Agreed, I’d be entirely fine with legal enforcement of the ISO definitions in advertising; no need to air historical dirty laundry outside the profession.
-
M2 user here. It is wonderful. You cannot get it to even heat up.
-
Correct. But that means 512 GB is not half a tebibyte.
-
Ah, correct. RAM is sized in GiB, so I guess I implicitly made the switch.
-
Weird that my mind just read that as MKUltra.
Maybe appropriate for AI.
-
Unfortunately that market is already flooded with functionally useless 8 GB machines.
-
How is it a retcon? The use of giga- as a prefix for 10^9 has been part of the metric system since 1960. I don’t think anyone in the fledgling computer industry was talking about giga- or mega- anything at that time. The use of mega- as a prefix for 10^6 has been in use since 1873, over 60 years before Claude Shannon even came up with the concept of a digital computer.
If anything, the use of mega- and giga- to mean powers of 1024 is a retcon over previous usage.
-
No, the RAM is integrated into the CPU package.
-
The storage prices are insane.
It's over $9,000 to get the 512 GB model, and it still only has 1 TB of probably non-removable internal storage.
- 2 TB is +$400
- 4 TB is +$1,000
- 8 TB is +$2,200
- 16 TB is +$4,600
They're saying 8 TB is worth more than the entire base model Mac Studio at $2k.
For those prices I expect a RAID 5 or 6 system built in; god knows they have the processor for it.
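Back-of-the-envelope in Python, spreading each upgrade price over the capacity it adds beyond the base 1 TB (prices from above):

    # USD per added TB for each storage tier
    tiers = {2: 400, 4: 1000, 8: 2200, 16: 4600}  # total TB -> upgrade price
    for tb, usd in tiers.items():
        print(f"{tb:>2} TB: +${usd}, ${usd / (tb - 1):.0f} per added TB")

The per-TB rate barely drops as you go up: roughly $400, $333, $314, $307 per added TB.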
-
This type of thing is mostly used for inference with extremely large models, where a single GPU has far too little VRAM to even load the model into memory. I doubt people are expecting this to be particularly fast; they just want to get a model to run at all.
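For a sense of scale, a rough sketch (the 405B parameter count and precisions are illustrative assumptions, and this counts weights only, ignoring KV cache and activations):

    # GiB needed just to hold model weights: params * bytes per parameter
    def weights_gib(params_billions, bytes_per_param):
        return params_billions * 1e9 * bytes_per_param / 2**30

    for label, bpp in [("fp16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
        print(f"405B @ {label}: {weights_gib(405, bpp):.0f} GiB")

Even at 4-bit that's ~189 GiB for weights alone, which no single consumer GPU can hold but which fits easily in 512 GB of unified memory.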