I've written a series of blog posts about a "hands-off" self-hosting setup intended for relative beginners.
-
That's reasonable; however, my personal bias is towards security, and I feel like if I don't push people towards automated updates, they will leave vulnerable, un-updated containers exposed to the web. I think a better approach would be to push for backups with versioning. I forgot to add that I am planning a "backups with Syncthing" article as well; I will take this into consideration, add it to the article, and use it as a way to demonstrate recovery in the event of such an issue.
It'll still cause downtime, and they'll probably have a hard time restoring from backup the first few times it happens, if for no other reason than stress, especially when it updates at the wrong moment or on the wrong day.
they will leave vulnerable, un-updated containers exposed to the web
That's the point. Services shouldn't be exposed to the web unless the person really knows what they're doing, has taken the precautions, and applies updates soon after release.
Exposing them to the VPN and to the LAN should be plenty for most. There's still a risk, but it's much lower.
"backups with Syncthing"
Consider warning the reader that it will not be obvious if backups have stopped, or if a sync folder on the backup PC is in an inconsistent state because of it, as errors are only shown in the web interface or via third-party tools.
-
I don't disagree with any of that; I'm merely making a different value judgement, namely that a breach that could've been prevented by automatic updates is worse than an outage caused by the same.
I will, however, make this choice more explicit in the articles and outline the risks.
With properly limited access, a breach is much, much less likely, and an update bringing down an important service at a bad moment doesn't need to be a thing.
-
It'll still cause downtime, and they'll probably have a hard time restoring from backup the first few times it happens, if for no other reason than stress, especially when it updates at the wrong moment or on the wrong day.
they will leave vulnerable, un-updated containers exposed to the web
That's the point. Services shouldn't be exposed to the web unless the person really knows what they're doing, has taken the precautions, and applies updates soon after release.
Exposing them to the VPN and to the LAN should be plenty for most. There's still a risk, but it's much lower.
"backups with Syncthing"
Consider warning the reader that it will not be obvious if backups have stopped, or if a sync folder on the backup PC is in an inconsistent state because of it, as errors are only shown in the web interface or via third-party tools.
Yeah I agree with the warnings. One of the things I'm trying to ensure I get across accurately (which will be discussed later in the series) is how to do monitoring. Making sure backups are functioning properly would need to be a part of that.
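One concrete option for that piece, since Syncthing only surfaces errors in its web UI: the same information is available from its local REST API, so a small script run from cron (or a systemd timer) can flag a silently broken backup. A minimal sketch, assuming a default install on the backup machine, with the API key and address as placeholders:

```sh
#!/bin/sh
# Rough health check for a Syncthing-based backup target.
# The API key comes from the Syncthing web UI (Actions -> Settings -> API Key);
# the address assumes the default GUI/API listen port on the same host.
API_KEY="replace-with-your-api-key"
SYNCTHING="http://127.0.0.1:8384"

# /rest/system/ping answers {"ping":"pong"} whenever Syncthing is up at all.
curl -fsS -H "X-API-Key: $API_KEY" "$SYNCTHING/rest/system/ping" >/dev/null || {
  echo "Syncthing is not responding"
  exit 1
}

# /rest/system/error lists the errors that are otherwise only visible in the web UI.
ERRORS=$(curl -fsS -H "X-API-Key: $API_KEY" "$SYNCTHING/rest/system/error")
case "$ERRORS" in
  *'"errors":null'*|*'"errors":[]'*) exit 0 ;;  # nothing reported
  *) echo "Syncthing reported errors: $ERRORS"; exit 1 ;;
esac
```

Run it from cron with MAILTO set (or hook it into whatever monitoring the later article covers) and a non-zero exit becomes a notification instead of a surprise weeks later.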
-
Set up automatic updates
Immich
You like to live dangerously, right?
Yeah, a little xD. But FWIW, this article series is based on what I personally run (and have set up for several friends), and it's been doing pretty well for at least a year.
But I have backups that can be used to recover from issues with breaking updates.
-
That's horrible and funny at the same time.
I will assume they fixed that vuln later
That's not a vulnerability. That's intended and desired behavior. It was really useful in this case too.
I should mention that the WebDAV share is password protected, so only he has access to do that.
-
That’s not a vulnerability. That’s intended and desired behavior. It was really useful in this case too.
I should mention that the WebDAV share is password protected, so only he has access to do that.
OK, a backdoor then. Can they overwrite any file with it?
-
Set up automatic updates
Immich
You like to live dangerously, right?
Photoprism > Immich
-
Naturally, the same day that I publish this, I discover that Watchtower is semi-abandoned, so I'm gonna have to look into alternatives to that...
Podman has optional auto-updates.
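For anyone curious what that looks like in practice, roughly: the container gets an auto-update label, runs under a systemd unit, and the podman-auto-update timer pulls newer images and restarts whatever changed. A minimal sketch for a rootless setup, using MicroBin purely as a stand-in (image name and port are from memory, so double-check them):

```sh
# Create the container with the auto-update label; "registry" means
# "re-pull this tag and restart me if the image has changed".
podman create --name microbin \
  --label io.containers.autoupdate=registry \
  -p 8080:8080 \
  docker.io/danielszabo99/microbin:latest

# Auto-update only manages containers driven by systemd, so generate a unit for it...
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name microbin \
  > ~/.config/systemd/user/container-microbin.service
systemctl --user daemon-reload
systemctl --user enable --now container-microbin.service

# ...then enable the timer that periodically checks for and applies updates.
systemctl --user enable --now podman-auto-update.timer

# See what would be updated without touching anything:
podman auto-update --dry-run
```

Newer Podman releases steer you towards Quadlet files instead of `podman generate systemd`, but the label and the timer work the same way.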
-
Photoprism > Immich
I haven't tried Photoprism in a while, but when I tried it, it wasn't even close.
Photoprism seems more suited to photographers indexing their professional work, whereas Immich aims to be a Google Photos/iCloud alternative.
Immich has native mobile apps that do the syncing and provide a (great) interface for search; it has much better multi-user support, including album sharing, and many more features than I'm willing to type out here.
The only thing missing, for me at least, is better support for local files to eliminate the need for another gallery app/file picker.
-
That's reasonable; however, my personal bias is towards security, and I feel like if I don't push people towards automated updates, they will leave vulnerable, un-updated containers exposed to the web. I think a better approach would be to push for backups with versioning. I forgot to add that I am planning a "backups with Syncthing" article as well; I will take this into consideration, add it to the article, and use it as a way to demonstrate recovery in the event of such an issue.
I'm with you on this. It has to feel at least somewhat low-fuss/turnkey or people aren't going to stick with it. The people who don't get this are the same people who can't see why Plex is more popular than Jellyfin, despite the latter's overall superiority.
-
OK, a backdoor then. Can they overwrite any file with it?
It’s their machine. It’s a front door.
-
I don't disagree with any of that; I'm merely making a different value judgement, namely that a breach that could've been prevented by automatic updates is worse than an outage caused by the same.
I will, however, make this choice more explicit in the articles and outline the risks.
Don't expose anything outside of the tailnet and 99% of the potential problems are gone. Noobs should not expose services across a firewall. Period.
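One straightforward way to do that with plain Docker, if anyone wants a concrete example: publish container ports only on the host's Tailscale address rather than on 0.0.0.0, so nothing answers on the LAN-facing or public interfaces. A rough sketch, where the 100.x address is a placeholder for the machine's actual tailnet IP and the image is just an example:

```sh
# Find this machine's Tailscale address.
tailscale ip -4

# Publish the port only on that address (placeholder below), so the service is
# reachable over the tailnet but not bound on any other interface.
docker run -d --name microbin \
  -p 100.64.0.10:8080:8080 \
  danielszabo99/microbin:latest
```

One gotcha: Docker needs that address to exist when the container starts, so stacks that auto-start on boot can race the tailscale interface coming up.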
-
My experience after 35 years in IT: I've had 10x more outages caused by automatic updates than everything else combined.
Also, after 35 years of running my own stuff at home and practically never updating anything, I've never had an outage caused by a lack of updates.
Let's not act like auto-updates are without risk. Just look at how often Microsoft has to roll out a fix for something an update broke. Inexperienced users are going to be clueless when an update breaks something.
We should be teaching new people how to manage systems; this includes proper update checks on a cycle, with appropriate validation that everything works afterwards, and the ability to roll back if there's an issue.
This isn't an enterprise, where you simply can't manually manage updates across hundreds or thousands of servers and tens of thousands of workstations; this is a single-admin, small environment.
I do monthly update checks, update where I feel it's warranted, and verify systems afterwards.
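For a containerized setup, one way that cycle can look (not necessarily this commenter's exact process; just a sketch that assumes pinned image tags and a Dockge-style /opt/stacks layout, with the Immich tag purely as an example):

```sh
# Monthly, per stack: bump the pinned image tags (after a glance at the release
# notes), pull, recreate, and confirm it came back up before moving on.
cd /opt/stacks/immich            # example path; wherever the compose file lives
$EDITOR compose.yaml             # e.g. image: ghcr.io/immich-app/immich-server:v1.2.3 -> :v1.2.4
docker compose pull
docker compose up -d
docker compose ps                # everything "running"/"healthy"?
docker compose logs --tail=100   # any obvious errors since the restart?

# Rollback: put the previous tag back in compose.yaml (keep it in git, or just
# keep a copy) and run `docker compose pull && docker compose up -d` again.
```

Pinning to specific versions instead of `latest` is what makes both the validation step and the rollback meaningful.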
-
Well, you just saved me a bunch of time trying to figure out how to auto-update my humble little server. Granted, I only have Plex and a Samba share right now, but I like the principle. Hell, an update once blanked my smb config file for whatever reason.
Now, auto-backups are another thing, because I would like to use a .tar file, but that leads me down a rabbit hole: I don't know how to repair GRUB if needed for a restore, or what GRUB really even is vs the BIOS... I've just been learning as I go.
I'm a few weeks away from getting a couple of parts for an upgrade, and then it'll be some fun. I want to redo it from scratch, maybe set up Proxmox and change my file system to ZFS, then start looking at Docker, figure out Jellyfin, and look at some ARR stuff... maybe Tailscale or Headscale. Idk, it's just fun 'cause it's a hobby. I just haven't really had the storage or RAM, but soon.
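On the GRUB worry specifically: the usual recovery after restoring a root filesystem from a tarball is to boot a live USB, chroot into the restored system, and reinstall the bootloader (GRUB is just the program the firmware, BIOS or UEFI, hands control to so it can load Linux). A rough sketch, with device names as placeholders that depend on the actual disk layout:

```sh
# From an Ubuntu live USB, after the .tar backup has been unpacked onto the disk:
sudo mount /dev/sda2 /mnt                  # restored root partition (example device)
sudo mount /dev/sda1 /mnt/boot/efi         # EFI system partition, if booting UEFI
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done

sudo chroot /mnt
grub-install /dev/sda                      # reinstall the bootloader to the disk
update-grub                                # regenerate grub.cfg from the restored system
exit
```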
-
That's reasonable; however, my personal bias is towards security, and I feel like if I don't push people towards automated updates, they will leave vulnerable, un-updated containers exposed to the web. I think a better approach would be to push for backups with versioning. I forgot to add that I am planning a "backups with Syncthing" article as well; I will take this into consideration, add it to the article, and use it as a way to demonstrate recovery in the event of such an issue.
Been in it since the web was a thing. I agree wholeheartedly. If people don't run auto-updates (and newbies will not run manual updates), you're just teaching them how to create vulnerabilities.
Let them learn how to fix an automatic update failure rather than how to recover from ransomware. No contest here.
-
That's reasonable; however, my personal bias is towards security, and I feel like if I don't push people towards automated updates, they will leave vulnerable, un-updated containers exposed to the web. I think a better approach would be to push for backups with versioning. I forgot to add that I am planning a "backups with Syncthing" article as well; I will take this into consideration, add it to the article, and use it as a way to demonstrate recovery in the event of such an issue.
You say this as though security is naturally a consideration for most Docker images.
-
Recently, I've found myself walking several friends through what is essentially the same basic setup:
- Install Ubuntu server
- Install Docker
- Configure Tailscale
- Configure Dockge
- Set up automatic updates on Ubuntu/Apt and Dockge/Docker (see the sketch just after this list)
- Self-host a few web apps, some publicly available, some on the Tailnet.
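(For the Ubuntu/Apt half of that step, the stock route is unattended-upgrades; a minimal sketch, not necessarily the exact commands from the article:)

```sh
# Enable Ubuntu's built-in automatic security updates.
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # answer "Yes" to enable

# Behaviour and schedule can be tweaked later in:
#   /etc/apt/apt.conf.d/20auto-upgrades
#   /etc/apt/apt.conf.d/50unattended-upgrades
```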
After realizing that this setup is generally pretty good for relative newcomers to self-hosting and is pretty stable (in the sense that it runs for a while and remains up-to-date without much human interference), I decided that I should write a few blog posts about how it works so that other people can set it up for themselves.
As of right now, there's:
- An introduction (with Ubuntu basics)
- Tailscale setup
- Optional Docker Explainer
- Dockge setup with Watchtower for automatic updates
- MicroBin as a first self-hosted webapp
Coming soon:
- Immich
- Backups with Syncthing
- Jellyfin
- Elementary monitoring with Homepage
- Cloudflare Tunnels
Constructive feedback is always appreciated.
EDIT: Forgot to mention that I am planning a backups article
Did I miss the part where we set up the server?
-
Recently, I've found myself walking several friends through what is essentially the same basic setup:
- Install Ubuntu server
- Install Docker
- Configure Tailscale
- Configure Dockge
- Set up automatic updates on Ubuntu/Apt and Dockge/Docker
- Self-host a few web apps, some publicly available, some on the Tailnet.
After realizing that this setup is generally pretty good for relative newcomers to self-hosting and is pretty stable (in the sense that it runs for a while and remains up-to-date without much human interference), I decided that I should write a few blog posts about how it works so that other people can set it up for themselves.
As of right now, there's:
- An introduction (with Ubuntu basics)
- Tailscale setup
- Optional Docker Explainer
- Dockge setup with Watchtower for automatic updates
- MicroBin as a first self-hosted webapp
Coming soon:
- Immich
- Backups with Syncthing
- Jellyfin
- Elementary monitoring with Homepage
- Cloudflare Tunnels
Constructive feedback is always appreciated.
EDIT: Forgot to mention that I am planning a backups article
Try Pangolin instead of Cloudflare, though it requires a VPS (e.g. Oracle free tier, or pay €1/month to IONOS).
-
Try Pangolin instead of Cloudflare, though it requires a VPS (e.g. Oracle free tier, or pay €1/month to IONOS).
I'm hesitant to ask because I'm running Pangolin also, but why are there downvotes here? Did I miss something about Pangolin?
-
I'm hesitant to ask because I'm running Pangolin also, but why are there downvotes here? Did I miss something about Pangolin?
It could also be because I mentioned Oracle.
-
It could also be because I mentioned Oracle.
I guess I hope so!