What's a good local and free LLM model for Windows?
-
This post did not contain any content.
-
The OS isn't as important as the hardware being used.
AMD, Nvidia or Intel GPU?
How much RAM & VRAM are you working with?
What's your CPU?
Generally speaking, I'd suggest koboldcpp with Gemma 3.
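If you want to poke at it from a script once it's running, here's a minimal sketch. It assumes koboldcpp is serving its default KoboldAI-style API on localhost:5001 with a Gemma 3 GGUF already loaded; the endpoint and field names can differ between versions, so check your build's API docs.

```python
import requests

# Assumes koboldcpp is already running locally with a Gemma 3 GGUF loaded,
# exposing its default KoboldAI-compatible API on port 5001.
resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "Explain the difference between RAM and VRAM in one sentence.",
        "max_length": 120,   # number of tokens to generate
        "temperature": 0.7,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```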
-
Use the app Jan; it's a good one, and it has a section that recommends models for your hardware.
-
What are the minimum requirements for running koboldcpp with Gemma 3?
-
Lots of RAM and a good CPU (it benefits from more cores), if you're comfortable with it being on the slow side.
There are other versions of that model optimized for lower-VRAM setups too.
But for better performance, 8GB of VRAM is the minimum.
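As a rough sanity check on that 8GB figure (back-of-the-envelope only; real file sizes and overhead vary): the weights of a quantized model take roughly parameter count times bits per weight, plus extra room for the context (KV cache). The 12B size and ~4.5 bits/weight below are illustrative assumptions.

```python
# Back-of-the-envelope estimate of quantized model size (weights only).
params = 12e9            # e.g. a 12B-parameter model (illustrative)
bits_per_weight = 4.5    # roughly what a Q4_K quant averages out to
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB of weights, before KV cache and runtime overhead")  # ~6.8 GB
```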
-
Do you have a recommendation for an Nvidia RTX 3070 Ti (8GB), a Ryzen 5 5600X, and 16GB of DDR4? Does it even make sense to use it? Last time I tried, the results were pretty underwhelming.
-
Try it with this model, using the Q4_K_S version.
https://huggingface.co/bartowski/mlabonne_gemma-3-12b-it-abliterated-GGUF
You'll probably need to play with the context window size until you get an acceptable level of performance (likely 4096).
Ideally you'd have more RAM, but I want to say this smaller model should work. Koboldcpp will try to use both your GPU and CPU to run the model.
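For reference, here's a launch sketch along those lines. The filename, layer count, and exact flag spellings are assumptions (check koboldcpp's --help for your version); the idea is just to cap the context at 4096 and offload as many layers to the 3070 Ti as its 8GB allows.

```python
import subprocess

# Hypothetical paths/filenames: point these at your actual koboldcpp build and
# the Q4_K_S GGUF downloaded from the Hugging Face link above.
subprocess.run([
    "koboldcpp.exe",
    "--model", "mlabonne_gemma-3-12b-it-abliterated-Q4_K_S.gguf",
    "--contextsize", "4096",  # start small; raise it if performance holds up
    "--gpulayers", "30",      # lower this if you run out of VRAM
    "--usecublas",            # Nvidia GPU acceleration
])
```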