Do they not know it works offline too?
-
I noticed ChatGPT being pretty slow today compared to the local DeepSeek I have running, which is pretty sad since my computer is about a bajillion times less powerful.
-
[email protected] replied to [email protected]
Is it possible to download it without first signing up to their website?
-
[email protected] replied to [email protected]
You can get it from Ollama
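Once Ollama itself is installed, pulling a model is a single CLI command. The tag below is just an example, pick a size that fits your RAM/VRAM:

```
# download a DeepSeek-R1 distill from the Ollama library, then chat with it
ollama pull deepseek-r1:7b   # example tag; smaller and larger variants exist
ollama run deepseek-r1:7b    # opens an interactive prompt in the terminal
```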
-
[email protected] replied to [email protected]
Thanks.
Frustratingly, their setup guide is terrible. I eventually managed to get it running. I downloaded a model, and only after the download finished did it tell me I didn't have enough RAM to run it, something it could have known before the slow download. Then I discovered my GPU isn't supported, and running on the CPU is painfully slow. I'm using an AMD 6700 XT and the minimum listed is the 6800: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon
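The linked doc does describe an environment-variable override for near-miss Radeon cards. I haven't verified this on the 6700 XT, but assuming it reports as gfx1031, the nearest supported target would be 10.3.0:

```
# force ROCm to treat the card as a supported gfx target (set before starting ollama)
export HSA_OVERRIDE_GFX_VERSION=10.3.0
ollama serve
```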
-
[email protected] replied to [email protected]
If you're setting up from scratch, I recommend using Open WebUI. You can install it with GPU support on Docker/Podman with a single command, then quickly add any of the Ollama models through its UI.
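The single command is roughly the one from the Open WebUI README (flags and image tag from memory, double-check the current docs); the :ollama image bundles Ollama, so you don't need a separate container:

```
# Open WebUI with bundled Ollama and NVIDIA GPU access; UI ends up on http://localhost:3000
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
```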
-
[email protected] replied to [email protected]
Thanks, I did get both set up with Docker; my frustration was that neither ollama nor open-webui included instructions on how to set the two up together.
In my opinion, setup instructions should guide you to a usable setup. It's a missed opportunity not to include a docker-compose.yml connecting the two, something like the sketch below. Is anyone really using ollama without a UI?
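Here's roughly what I mean, just a sketch with placeholder service and volume names, adjust images and ports to taste:

```
# minimal sketch: ollama backend plus open-webui frontend in one compose file
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                           # UI at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the ollama service
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```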
-
[email protected] replied to [email protected]
The link I posted has a command that sets them up together, though.