How to run LLaMA (and other LLMs) on Android.
-
[email protected] replied to [email protected] last edited by
Not true. If you load a model that exceeds your phone's hardware capabilities, it simply won't open. Stop spreading FUD.
-
[email protected] replied to [email protected] last edited by
But unless you've got a server rack full of data center class GPUs, you'll probably set your house on fire before it generates a single token.
It's cold outside and I don't want to spend money on keeping my house warm, so I could... try.
I'll check them out! Thank you
-
[email protected] replied to [email protected] last edited by
Lol, there are smaller versions of DeepSeek-R1. These aren't the "real" DeepSeek model; they're other foundation models (Qwen2.5 and Llama3 in this case) distilled on DeepSeek-R1's output.
For the full 671b-parameter model, the medium-quality quant weighs in at 404 GB. That means you need 404 GB of RAM/VRAM just to load the thing, and then you want preferably ALL of that in VRAM (i.e. GPU memory) to get it to generate anything fast.
For comparison, I have 16 GB of VRAM and 64 GB of RAM on my desktop. If I run the 70b-parameter version of Llama3 at a Q4 quant (medium quality-ish), it's a 40 GB file. It'll run, but mostly on the CPU, and it generates ~0.85 tokens per second, so a good response takes 10-30 minutes. That's fine if you have time to wait, but not if you want an immediate response. If I had two beefy GPUs with 24 GB of VRAM each, that'd be 48 GB total, I could run the whole model in VRAM, and it'd be very fast.
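If you want to sanity-check those numbers yourself, the weights take roughly parameters × bits-per-weight ÷ 8 bytes, before KV cache and runtime overhead. Here's a minimal back-of-the-envelope sketch; the ~4.5 and ~4.8 bits-per-weight figures are my own assumptions picked to line up with the 40 GB and 404 GB files above, not official quant specs:
```kotlin
// Back-of-the-envelope weight-memory estimate: params (in billions) * bits per weight / 8 ≈ GB.
// KV cache and runtime overhead come on top of this.
fun approxWeightSizeGb(paramsBillions: Double, bitsPerWeight: Double): Double =
    paramsBillions * bitsPerWeight / 8.0

fun main() {
    // ~4.5 bits/weight assumed as an average for a Q4-ish quant of Llama3 70b.
    println("70b  @ ~Q4: %.0f GB".format(approxWeightSizeGb(70.0, 4.5)))   // ~39 GB
    // ~4.8 bits/weight assumed for the medium-quality 671b DeepSeek-R1 file.
    println("671b @ med: %.0f GB".format(approxWeightSizeGb(671.0, 4.8)))  // ~403 GB
}
```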
-
[email protected] replied to [email protected] last edited by
No house on fire
Thanks! I'll check it out
-
[email protected] replied to [email protected] last edited by
For me the biggest benefits are:
- Your queries don't ever leave your computer
- You don't have to trust a third party with your data
- You know exactly what you're running
- You can tweak most models to your liking
- You can upload sensitive information to it and not worry about it
- It works entirely offline
- You can run several models
-
[email protected] replied to [email protected] last edited by
That's not how it works. Your phone can easily overheat if you push it too hard, even if your device can handle the model. Smartphones don't have cooling like PCs and laptops (except some ROG phones and the like). If you don't want to fry your processor, only run LLMs on high-end gaming PCs with all-in-one water cooling.
-
@[email protected] Depends on the inference engine. Some of them will try to load the model anyway until it blows up and runs out of memory, which can cause its own problems, but it won't overheat the phone. If you DO use a model the phone can run, then, like any intense computation, it can make the phone heat up. Best not to run a long inference prompt while the phone is in your pocket, I think.
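To expand on that: an engine can avoid the "blows up" case by checking how much memory the OS says is free before it even tries to map the weights. Here's a minimal Android sketch of that idea; canLikelyLoadModel and the 1.2x headroom factor are my own placeholders, not the API of any particular inference app:
```kotlin
import android.app.ActivityManager
import android.content.Context

// Sketch of a pre-load sanity check: compare the model file's size against the memory
// Android reports as available, with some headroom for the KV cache and the rest of
// the app. The 1.2x headroom factor is an arbitrary assumption, not a hard rule.
fun canLikelyLoadModel(context: Context, modelSizeBytes: Long): Boolean {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memInfo = ActivityManager.MemoryInfo()
    am.getMemoryInfo(memInfo)
    return memInfo.availMem > (modelSizeBytes * 1.2).toLong()
}
```
An engine that skips a check like this just keeps allocating until the system kills it, which is the crash-instead-of-overheat behaviour described above.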
-
[email protected] replied to [email protected] last edited by
Of course that is something to be mindful of, but that's not what the person in the original comment said. It does run, but you need to be aware of the limitations and potential consequences. That goes without saying, though.
Don't overdo it and your phone will be just fine.
-
[email protected] replied to projectmoon last edited by [email protected]
Thanks for your comment. That's definitely something to look out for. It's really important to know what you're running and what its limitations might be.
-
[email protected] replied to [email protected] last edited by
The biggest problem:
- I don't have enough RAM/GPU to run it on a server
But it looks interesting