Can I run ollama on an RTX 3060 and Intel iGPU to increase speed?
-
I'm on Arch Linux btw, and I have an RTX 3060 with 12 GB VRAM, which is cool because a 14b model fits into the VRAM. It works quite well, but I wonder if there is any way to squeeze out even more speed by utilizing the iGPU in my Intel 14600K. It always just sits there doing nothing.
But I don't know if it even makes sense to try. From what I've read in comments online, the bottleneck would be the RAM speed for the iGPU, since it uses my normal system RAM, which is an order of magnitude slower than the VRAM.
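A rough back-of-envelope, assuming ballpark bandwidth and model-size figures (not measurements), since token generation is mostly memory-bandwidth bound:

```python
# Decoding is memory-bandwidth bound: every generated token streams all
# model weights once, so tokens/s is roughly bandwidth / model size.
# All numbers below are ballpark assumptions, not benchmarks.
model_size_gb = 9.0      # ~14B parameters at 4-bit quantization
bandwidths_gbps = {
    "RTX 3060 (GDDR6 VRAM)": 360.0,
    "iGPU (dual-channel DDR5 system RAM)": 80.0,
}
for device, bw in bandwidths_gbps.items():
    print(f"{device}: ~{bw / model_size_gb:.0f} tokens/s upper bound")
```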
Does anyone have any experience with that?
-
-
[email protected] replied to [email protected]
Models are computed sequentially (the output of each layer is the input to the next layer), so more GPUs don't offer any kind of performance benefit.
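To illustrate the point (a purely schematic sketch, not how ollama actually implements it), splitting the layers between a fast and a slow device still runs them one after another, and adds a transfer at the boundary:

```python
import torch

# Schematic: a decoder forward pass walks the layers in order, so placing
# half of them on a slower device just inserts a transfer at the split and
# makes that half run at the slower device's speed.
fast = "cuda:0" if torch.cuda.is_available() else "cpu"
slow = "cpu"

layers = [torch.nn.Linear(64, 64).to(fast) for _ in range(16)] + \
         [torch.nn.Linear(64, 64).to(slow) for _ in range(16)]

x = torch.randn(1, 64, device=fast)
for layer in layers:
    x = x.to(next(layer.parameters()).device)  # device hop at the split point
    x = layer(x)                               # each layer waits for the previous one
```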
-
[email protected]replied to [email protected] last edited by
I see, that's a shame, thanks for explaining it.
-
[email protected] replied to [email protected]
More GPUs do improve performance:
https://medium.com/@geronimo7/llms-multi-gpu-inference-with-accelerate-5a8333e4c5db
All large AI systems are built of multiple "GPUs" (AI processors like Blackwell). Large AI clusters are built of individual servers connected by 800 GB/s network interfaces.
However, iGPUs are so slow that adding one wouldn't offer a significant performance improvement.
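For context, the linked article's approach boils down to letting Accelerate shard the model across whatever devices are visible. A minimal sketch (the model ID is a placeholder; ollama itself doesn't go through this path):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" lets Accelerate spread the layers across all visible
# GPUs and spill the rest to CPU RAM. With several fast GPUs this raises
# throughput; a dGPU + iGPU combo is still capped by system-RAM bandwidth.
model_id = "Qwen/Qwen2.5-14B-Instruct"  # placeholder, any causal LM works
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tok("Why is the sky blue?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```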
-
[email protected] replied to [email protected]
You can. But I don't think it will help.
https://medium.com/@mayvic/llm-multi-gpu-batch-inference-with-accelerate-edadbef3e239
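If you do want to try it, Accelerate can also be told to cap the VRAM budget so the overflow layers land in system RAM, which is bandwidth-wise about what routing them through the iGPU would mean. A sketch with made-up memory caps and a placeholder model ID:

```python
from transformers import AutoModelForCausalLM

# Capping GPU 0 below the model size forces Accelerate to spill the
# remaining layers to CPU/system RAM; generation speed then drops toward
# RAM bandwidth for those layers, which is the same wall an iGPU hits.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct",              # placeholder model ID
    device_map="auto",
    max_memory={0: "8GiB", "cpu": "24GiB"},   # made-up caps for illustration
)
print(model.hf_device_map)  # shows which layers landed on which device
```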