Jellyfin Buffering Slow Torrents
-
[email protected] replied to [email protected]
Are all your containers running in bridged mode or host mode?
-
[email protected] replied to [email protected]
I've accidentally turned on speed limits in Qbit before
-
[email protected] replied to [email protected]
What resources does your qbittorrent have access to? CPU cores / memory? I have tried running it on an RPi and it CHUGS, so I had to aggressively apply seeding limits and a general limit on the number of connections too.
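If you want to check or tighten those limits programmatically, here's a rough sketch against the qBittorrent Web API (v2 endpoints; the host, port, credentials, and limit values are placeholders for your setup):

```python
# Read the global speed limits and cap connection counts via the Web API.
# BASE, the credentials, and the limit values are placeholders.
import json
import requests

BASE = "http://localhost:8080"  # qBittorrent WebUI address (assumption)
session = requests.Session()

# Log in; qBittorrent returns an SID cookie the session reuses afterwards
session.post(f"{BASE}/api/v2/auth/login",
             data={"username": "admin", "password": "adminadmin"})

# Global limits are reported in bytes/sec; 0 means unlimited
dl = session.get(f"{BASE}/api/v2/transfer/downloadLimit").text
ul = session.get(f"{BASE}/api/v2/transfer/uploadLimit").text
print(f"global download limit: {dl} B/s, upload limit: {ul} B/s")

# Example of tightening connection limits on constrained hardware
session.post(f"{BASE}/api/v2/app/setPreferences",
             data={"json": json.dumps({"max_connec": 200,
                                        "max_connec_per_torrent": 50})})
```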
-
[email protected] replied to [email protected]
Confirmed all speed limits are off.
-
[email protected] replied to [email protected]
The Jellyfin LXC has 4 cores, and the Arr stack w/ qbittorrent LXC also has 4 cores. The containers are running in bridge mode.
-
[email protected] replied to [email protected]
The qbittorrent docker container runs on an LXC. The LXC has 4 cores and 8 GB memory.
-
[email protected] replied to [email protected]
You have more than enough cores for each then. Probably too many. As a test, try the qbit or Jellyfin container in host mode and see if the network performance changes. From there I'd either start going down the rabbit hole of tuning bridged-mode networking in LXC, or just keep them on host mode.
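For what it's worth, a rough equivalent of that host-mode test using the Docker SDK for Python (the image name, paths, and env vars are placeholders; the only part that matters is network_mode="host", i.e. the same as docker run --network host):

```python
# Spin up a throwaway qBittorrent container on host networking to compare
# against the bridged one. Image, name, env, and volume paths are placeholders.
import docker

client = docker.from_env()
client.containers.run(
    "lscr.io/linuxserver/qbittorrent",   # assumed image; use whichever you run today
    name="qbittorrent-hostnet-test",
    network_mode="host",                 # host networking instead of the bridge
    environment={"WEBUI_PORT": "8080"},
    volumes={"/path/to/downloads": {"bind": "/downloads", "mode": "rw"}},
    detach=True,
)
```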
-
[email protected] replied to [email protected]
One other thing I changed recently is the motherboard on the NAS. The new one is DDR5, and I didn't have another machine that takes DDR5 to run the new RAM through memtest, and I didn't want any downtime, so I didn't do it. I've just now powered down the NAS and started memtest. Do you think a bad stick of RAM could be the culprit? At this point I'm just trying to rule things out.
-
[email protected] replied to [email protected]
Bad RAM wouldn't present like this. You'd more than likely never get past boot, with a DDR5 board catching it in POST, or you'd have thrown a kernel exception by now.
I saw you mentioned that a new LXC container didn't have the traffic problem, so this is definitely something with config somehow.
-
[email protected] replied to [email protected]
Good points. I will finish the memtest that's running, if only to have something ruled out. After it finishes I will try attaching the NFS share to the new qbt LXC and see if I get the same slow download speeds.
-
[email protected] replied to [email protected]
What are the disks and how full is the pool?
-
[email protected] replied to [email protected]
The pool is a mirrored pair of 14TB drives. Pool is 56% full. SMART tests all pass, but the last scrub took over a week which was odd.
-
[email protected] replied to [email protected]
OK so I have done some additional testing:
- Memtest passed
- Added the NFS share to the new qbittorrent LXC, and the download speed dropped down to where my primary qbt is. So I believe this means it is related to the NFS share.
- Connected the NAS to a different switch. No change.
- Tried connecting to the NFS share through a different NIC in TrueNAS. No change.
- Migrated the qbt lxc to another proxmox node. No change.
- Created a new NFS share on a different pool on TrueNAS and made that the download directory for qbt. No change.
So I believe I have ruled out memory issues, NIC issues, datapool issues, and switch issues.
The problem is I don't know exactly when this started.
I did change out the motherboard on TrueNAS, and just installed the existing NVMe drives into the new motherboard and booted off of them. I did not install a new TrueNAS OS and restore a backup. Could this be an issue?
Shortly after the motherboard change, I upgraded to Electric Eel.
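One more way to take qBittorrent out of the equation entirely is to time a raw write to the NFS mount versus a local path. A minimal sketch, with both target paths as placeholders for your setup:

```python
# Compare sequential write throughput to a local path vs. the NFS mount.
# Both target paths below are placeholders.
import os
import time

def write_test(path, size_mb=512, chunk_mb=4):
    """Write size_mb of data to path and return throughput in MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the server
    elapsed = time.monotonic() - start
    os.remove(path)
    return size_mb / elapsed

print(f"local: {write_test('/tmp/nfs_bench.bin'):.1f} MB/s")
print(f"nfs  : {write_test('/mnt/downloads/nfs_bench.bin'):.1f} MB/s")
```

If the NFS number comes out dramatically lower than the local one, that points at the mount/share itself rather than anything in the torrent client.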
-
[email protected] replied to [email protected]
Try a test download without NFS and see what happens.
-
[email protected] replied to [email protected]
I tested that and I get full speeds: upwards of 40-60 Mbps, compared with the 1 Mbps I get when downloading to the NFS share.
-
[email protected] replied to [email protected]
Problem solved then. You know where the bottleneck is.
-
[email protected] replied to [email protected]
Yes, I'm pretty sure I've got it narrowed down to issues with NFS shares from TrueNAS. What I can't figure out is how to fix it. I may do a backup, reinstall TrueNAS, import the backup, and see if that fixes it. I'm thinking it's potentially an issue from reusing my old installation with the new motherboard, processor, and corresponding hardware.
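Before going the reinstall route, it might be worth dumping the mount options the NFS client actually negotiated, since a sync mount, an NFS version fallback, or a tiny rsize/wsize would explain exactly this kind of write slowdown. A minimal sketch that just reads /proc/mounts from inside the LXC:

```python
# Print the options of every NFS mount the client negotiated
# (look for vers=, rsize=, wsize=, sync/async, hard/soft).
with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype, opts = line.split()[:4]
        if fstype.startswith("nfs"):
            print(f"{device} on {mountpoint} ({fstype})")
            for opt in opts.split(","):
                print(f"  {opt}")
```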
-
[email protected] replied to [email protected]
Just don't use NFS for large files. It's not good for that.