How do you keep up?
-
[email protected] replied to [email protected] last edited by
I run a Fedora server.
All of my apps are in docker containers set to restart unless stopped by me.
Then I run a cron job scheduled at like 3 or 4am that runs docker pull for all of my containers and restarts them. Then it runs all system updates and restarts the server.
Every week or so I just spot check to make sure it is still working. This has been my process for like 6 months without issue.
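A routine like that can be sketched roughly as follows (the script path and compose directory layout are assumptions, not the poster's actual setup):

```shell
#!/bin/sh
# /usr/local/bin/nightly-update.sh -- hypothetical nightly update script
# crontab entry to run it at 03:00: 0 3 * * * /usr/local/bin/nightly-update.sh

# Pull fresh images and recreate containers for each compose project
for dir in /opt/stacks/*/; do
    (cd "$dir" && docker compose pull && docker compose up -d)
done

# Apply OS updates (Fedora) and reboot
dnf -y upgrade
systemctl reboot
```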
-
[email protected] replied to [email protected] last edited by
At least you get updates. I'm running TrueNAS Core, which isn't updated anymore, and I have some jails doing things, so I can't migrate to Scale easily.
The good news is this still works despite no updates; it does everything it used to. There is almost zero reason to update a working NAS if it is behind a firewall.
The bad news is those jails are doing useful things and because I'm out of date I can't update what is in them. Some of those services have new versions that add new features that I really really want.
I have ordered an N100 (should arrive tomorrow) which I'm going to manually migrate the useful services to, one at a time. Once that is done I'll probably switch to XigmaNAS so I can stick with FreeBSD (I've always preferred FreeBSD). That will leave my NAS as just file storage for a while, though depending on how I like XigmaNAS I might or might not run services on it.
-
[email protected] replied to [email protected] last edited by
Try watchtower instead of cron jobs
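For reference, a minimal Watchtower deployment looks roughly like this (the schedule value is an assumption; it does need the Docker socket mounted):

```shell
# run Watchtower, checking for new images daily at 03:00
# (Watchtower's --schedule takes a 6-field cron expression with seconds)
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --cleanup \
  --schedule "0 0 3 * * *"   # --cleanup removes the old images after updating
```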
-
[email protected] replied to [email protected] last edited by
I'll check it out! Thanks!
-
[email protected] replied to [email protected] last edited by
Core is still getting updates? I got one last week.
-
[email protected] replied to [email protected] last edited by
Only the most basic security updates. It is out of date according to the pkg system, so jails cannot be updated.
-
[email protected] replied to [email protected] last edited by
Super lame. BSD is very preferable for core systems like this.
-
[email protected] replied to [email protected] last edited by
Do you run Talos on bare metal or on something like Proxmox? Care to discuss your k8s stack?
-
[email protected] replied to [email protected] last edited by
Thanks for this. I've recently been recreating my home server on good hardware and have been thinking it's time to jump into selfhosting more stuff. I've used Docker a bit, so I guess I'll have to do it the right way. It's always good to know what choices now will avoid future issues.
-
[email protected] replied to [email protected] last edited by
I use Debian, so what's to keep up with? Apt upgrade is literally everything I need. My home server doesn't take a lot of my time except when I want to tweak something or introduce something new. I don't really follow all the trendy stuff at all and just have it do what I need.
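On Debian that routine can even be left to unattended-upgrades for the security side (a standard Debian setup, not necessarily the commenter's exact one):

```shell
# the manual route
apt update && apt upgrade -y

# or let security updates apply themselves automatically
apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```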
-
The good news is this still works despite no updates; it does everything it used to. There is almost zero reason to update a working NAS if it is behind a firewall.
If all users and devices on the network are well behaved and don't install every random app (even if from the Play Store), then yeah, it's less of a risk.
-
[email protected] replied to [email protected] last edited by
Automatically upgrading docker images sounds like a recipe for disaster because:
- it could pull down a change that requires manual intervention, so things "randomly" break
- docker holds on to everything, so you'd need to prune old images or you'll eventually run out of disk space; if a container is stopped, pruning can delete the image it needs, so it won't start again (good luck if the newer images are incompatible with the state from when it last ran)
That's why I refuse to automate updates. I sometimes go weeks or months between using a given service, so I'd rather use vulnerable containers than have to go fix it when I need it.
I run OS updates every month or two, and honestly I'd be okay automating those. I run docker pulls every few months, and there's no way I'd automate that.
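The disk-space point is real. A manual update pass, done when you can watch it, might look like this (a sketch; the stack path is hypothetical, and the prune should only run after you've confirmed everything is healthy):

```shell
cd /opt/stacks/myservice    # hypothetical compose project
docker compose pull         # fetch new images
docker compose up -d        # recreate containers on the new images

# only after verifying the service still works:
docker image prune -f       # removes dangling images; add -a to drop ALL unused
                            # images, including ones stopped containers need
```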
-
[email protected] replied to [email protected] last edited by
I've encountered that before with Watchtower updating parts of a service and breaking the whole stack. But automating a stack update, as opposed to a service update, should mitigate all of that.
Most of my stacks are stable, so aside from breaking changes I should be fine. If I hit a breaking change, I keep backups, so I'll rebuild and update manually. I think that'll be a net time saver overall.
I keep two Docker LXCs, one for the arrs and one for everything else. I might make a third LXC for things that currently require manual updates; Immich is my only one currently.
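Updating at the stack level rather than per container just means pulling and recreating the whole compose project in one pass, so dependent services come back up together (the stack directory name is an assumption):

```shell
# update an entire stack together instead of one container at a time
cd /opt/stacks/arrs                    # hypothetical stack directory
docker compose pull                    # fetch new images for every service in it
docker compose up -d --remove-orphans  # recreate anything whose image changed,
                                       # honoring depends_on ordering
```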
-
[email protected] replied to [email protected] last edited by
Watchtower
Glad it works for you.
Automatic updates of software with potential breaking changes scares me. I'm not familiar with Watchtower, since I don't use it or anything like it, but I have several services that I don't use very often, and it would suck if they silently stopped working properly.
When I think of a service, I think of something like Nextcloud, Immich, etc, even if they consist of multiple containers. For example, I have separate containers for LibreOffice Online and Nextcloud, but I upgrade them together. I don't want automated upgrades of either because I never know if future builds will be compatible. So I go update things when I remember, but I make sure everything works after.
That said, it seems watchtower can be used to merely notify, so maybe I'll use it for that. I certainly want to be around for any automatic updates though.
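Watchtower's notify-only behavior is its monitor-only mode, roughly like this (the notification URL here is a placeholder; check its docs for the exact settings for your notifier):

```shell
# check for new images and send notifications, but never update anything
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_MONITOR_ONLY=true \
  -e WATCHTOWER_NOTIFICATION_URL="smtp://user:pass@mailserver:587/?from=watchtower@example.com&to=me@example.com" \
  containrrr/watchtower
```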
-
[email protected] replied to [email protected] last edited by
This is why I'm still using a Synology ¯\\\_(ツ)\_/¯
I can install all the fun stuff I want in Docker, but for the major OS stuff, it's outsourced to Synology to maintain for me
-
[email protected] replied to [email protected] last edited by
Depends on your stance on risk, since Watchtower needs access to the Docker socket, which is effectively root on the host.
-
[email protected] replied to [email protected] last edited by
It's Watchtower that I had problems with because of what you described. Watchtower will drop a microservice, say a database, to update it and then not restart the things that are dependent on it. It can be great, just not in the ham-fisted way I used it.
Uptime Kuma can alert you when a service goes down. I'm also constantly on my Homarr homepage, which tells me if it can't ping a service; then I go investigating.
I get that it's scary, and after my Watchtower trauma I was hesitant to go automatic too. But I'm managing 5 machines now, and getting more, so I have to think about scale.
-
[email protected] replied to [email protected] last edited by
I run Proxmox on the host with Docker in a VM for 90% of my stuff. OS updates I do maybe every 6 months; I've done one major version upgrade on Proxmox with no issues at all.
The docker containers auto-update via Komodo, and nothing really ever breaks anymore other than the occasional container error that needs a simple fix.
Everything important is backed up nightly using both Proxmox Backup Server and Backblaze B2 with restic.
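A restic-to-B2 nightly job along those lines might look like this (the bucket name, paths, and retention policy are all assumptions for illustration):

```shell
#!/bin/sh
# hypothetical nightly restic backup to Backblaze B2
export B2_ACCOUNT_ID="..."              # set via your secrets mechanism
export B2_ACCOUNT_KEY="..."
export RESTIC_REPOSITORY="b2:my-backups:/homeserver"
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic backup /opt/stacks /etc          # snapshot the compose projects and config
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```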
-
[email protected] replied to [email protected] last edited by
This is a good point. Generally, if I can accomplish what I want with my own scripts, I will go that route. I'll probably avoid adding additional software to the mix since what I have works well enough.