How to run LLaMA (and other LLMs) on Android.
-
cross-posted from: https://lemmy.dbzer0.com/post/36841328
Hello, everyone! I wanted to share my experience of successfully running LLaMA on an Android device. The model that performed the best for me was llama3.2:1b on a mid-range phone with around 8 GB of RAM. I was also able to get it up and running on a lower-end phone with 4 GB of RAM. However, I also tested several other models that worked quite well, including qwen2.5:0.5b, qwen2.5:1.5b, qwen2.5:3b, smallthinker, tinyllama, deepseek-r1:1.5b, and gemma2:2b. I hope this helps anyone looking to experiment with these models on mobile devices!
Step 1: Install Termux
- Download and install Termux from the Google Play Store or F-Droid
Step 2: Set Up proot-distro and Install Debian
-
Open Termux and update the package list:
pkg update && pkg upgrade
-
Install proot-distro:
pkg install proot-distro
-
Install Debian using proot-distro:
proot-distro install debian
-
Log in to the Debian environment:
proot-distro login debian
You will need to log in every time you want to run Ollama. Repeat this step and all the steps below each time you want to run a model (excluding step 3 and the first half of step 4); a condensed version of that sequence is sketched just below.
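Once everything is installed, a typical later session boils down to roughly this (a condensed sketch, assuming Ollama and the model are already set up as described in steps 4 and 5):
proot-distro login debian
ollama serve &
ollama run llama3.2:1b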
Step 3: Install Dependencies
-
Update the package list in Debian:
apt update && apt upgrade
-
Install curl:
apt install curl
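If you prefer a single line, the two commands above can be combined; the -y flag simply auto-accepts the confirmation prompts:
apt update && apt upgrade -y && apt install -y curl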
Step 4: Install Ollama
-
Run the following command to download and install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
-
Start the Ollama server:
ollama serve &
After you run this command, press Ctrl + C to get your prompt back; the server will continue running in the background.
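If you want to confirm the server is actually up before moving on, you can poke it with curl (a quick sanity check, assuming Ollama's default address of 127.0.0.1:11434; it should answer with a short "Ollama is running" message):
curl http://127.0.0.1:11434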
Step 5: Download and run the Llama3.2:1B Model
- Use the following command to download and run the Llama3.2:1B model:
ollama run llama3.2:1b
This step fetches and runs the lightweight 1-billion-parameter version of the Llama 3.2 model.
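The same command works for the other models mentioned at the top of this post; just swap in the tag you want. A few examples taken straight from that list (if a tag fails to pull, check the Ollama model library for the current name):
ollama run qwen2.5:0.5b
ollama run qwen2.5:1.5b
ollama run tinyllama
ollama run gemma2:2b
ollama run deepseek-r1:1.5b
To leave the interactive chat prompt, type /bye or press Ctrl + D.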
Running LLaMA and other similar models on Android devices is definitely achievable, even with mid-range hardware. The performance varies depending on the model size and your device's specifications, but with some experimentation, you can find a setup that works well for your needs. I’ll make sure to keep this post updated if there are any new developments or additional tips that could help improve the experience. If you have any questions or suggestions, feel free to share them below!
– llama
-
[email protected] replied to [email protected]
Llama is proprietary.
-
[email protected] replied to [email protected]
And what's the purpose of running it locally? Just curious. Is there anything really libre, or better?
Is there any difference between LLaMA (or any libre model) and ChatGPT (the first and most popular one I know of)?
-
[email protected] replied to [email protected]
Most open/local models require a fraction of the resources of ChatGPT, but they are usually not AS good in a general sense. They are often good enough, though, and can sometimes surpass ChatGPT in specific domains.
-
[email protected] replied to [email protected]
You'll only fry your phone with this. Very bad idea.
-
[email protected] replied to [email protected]
Do you know about anything libre? I'm curious to try something. Better if self-hosted (?)
-
[email protected] replied to [email protected]
They're probably referring to the 671b parameter version of deepseek. You can indeed self host it. But unless you've got a server rack full of data center class GPUs, you'll probably set your house on fire before it generates a single token.
If you want a fully open source model, I recommend Qwen 2.5 or maybe DeepSeek V2. There's also OLMo 2, but I haven't really tested it.
Mistral Small 24B also just came out and is Apache licensed. That is something I'm testing now.
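If you want to try those through Ollama as in the guide above, the pulls would look roughly like this (a sketch only; the exact tags change over time, so check the Ollama model library first):
ollama run qwen2.5:7b
ollama run olmo2:7b
ollama run mistral-small:24b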
-
[email protected] replied to [email protected]
Not true. If you try to load a model that's beyond your phone's hardware capabilities, it simply won't open. Stop spreading FUD.
-
[email protected] replied to [email protected]
But unless you've got a server rack full of data center class GPUs, you'll probably set your house on fire before it generates a single token.
It's cold outside and I don't want to spend money on keeping my house warm, so I could... try.
I'll check them out! Thank you
-
[email protected] replied to [email protected]
Lol, there are smaller versions of Deepseek-r1. These aren't the "real" Deepseek model, but they are distilled from other foundation models (Qwen2.5 and Llama3 in this case).
For the 671b parameter file, the medium-quality version weighs in at 404 GB. That means you need 404 GB of RAM/VRAM just to load the thing. Then you need preferably ALL of that in VRAM (i.e. GPU memory) to get it to generate anything fast.
For comparison, I have 16 GB of VRAM and 64 GB of RAM on my desktop. If I run the 70b parameter version of Llama3 at Q4 quant (medium quality-ish), it's a 40 GB file. It'll run, but mostly on the CPU. It generates ~0.85 tokens per second. So a good response will take 10-30 minutes. Which is fine if you have time to wait, but not if you want an immediate response. If I had two beefy GPUs with 24 GB VRAM each, that'd be 48 total GB and I could run the whole model in VRAM and it'd be very fast.
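As a rough back-of-the-envelope check on those numbers: file size is roughly parameter count times bits per weight, divided by 8. Using ~4.5 bits per weight as a stand-in for a Q4-ish quant (only a sketch; real GGUF files vary with the exact quant mix and overhead):
awk 'BEGIN { printf "70B at ~4.5 bits/weight: ~%.0f GB\n", 70e9 * 4.5 / 8 / 1e9 }'
awk 'BEGIN { printf "671B at ~4.5 bits/weight: ~%.0f GB\n", 671e9 * 4.5 / 8 / 1e9 }'
That works out to roughly 39 GB and 377 GB, in the same ballpark as the 40 GB and 404 GB files mentioned above.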
-
[email protected] replied to [email protected]
No house on fire
Thanks! I'll check it out
-
[email protected] replied to [email protected]
For me the biggest benefits are:
- Your queries don't ever leave your computer
- You don't have to trust a third party with your data
- You know exactly what you're running
- You can tweak most models to your liking
- You can upload sensitive information to it and not worry about it
- It works entirely offline
- You can run several models
-
[email protected] replied to [email protected]
That's not how it works. Your phone can easily overheat if you use it too much, even if your device can handle it. Smartphones don't have cooling like PCs and laptops (except some ROG phones and the like). If you don't want to fry your processor, only run LLMs on high-end gaming PCs with all-in-one water cooling.
-
@[email protected] Depends on the inference engine. Some of them will try to load the model until it blows up and runs out of memory, which can cause its own problems. But it won't overheat the phone, no. But if you DO use a model that the phone can run, like any intense computation, it can cause the phone to heat up. Best not to run a long inference prompt while the phone is in your pocket, I think.
-
[email protected] replied to [email protected]
Of course that is something to be mindful of, but that's not what the person in the original comment said. It does run, but you need to be aware of the limitations and potential consequences. That goes without saying, though.
Don't overdo it and your phone will be just fine.
-
[email protected] replied to projectmoon, last edited by [email protected]
Thanks for your comment. That for sure is something to look out for. It is really important to know what you're running and what possible limitations there could be.
-
[email protected] replied to [email protected]
The biggest problem:
- I don't have enough RAM/GPU to run it on a server
But it looks interesting
-
[email protected] replied to [email protected]
This is so horrifically wrong, I don't even know where to start.
The short version is that phone and computer makers aren't stupid, and they will kill things or shut down when overheating happens. If you were a phone maker, why tf would you allow someone to fry their own phone?
My laptop has shut itself off when I was trying to compile code while playing video games with Twitch running. My Android phone has killed apps when I try to do too much as well.
-
[email protected] replied to [email protected]
My phone was fried last week; it needed SoC reballing, just from watching videos and browsing the web at the same time. Most hardware developers don't pay attention to cooling and this stuff runs on hopes and dreams. Plus, auto shutoff is only a software solution, and software can have bugs.