Prioritizing de-clouding efforts
-
[email protected] replied to [email protected] last edited by
That's very relevant. Thanks for the heads-up. I will look into that.
-
[email protected] replied to [email protected] last edited by
Good insights, thank you for the perspective. I will look into that more closely before committing.
-
[email protected] replied to [email protected] last edited by
I do have a backup plan. I will use the on-board SSD for the main system and an additional 1 TB HDD for an incremental backup of the entire system with ZFS, all to guard against garden-variety disk corruption. I also make full system copies to keep in a fire safe.
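Roughly, the incremental part would look something like the sketch below; the dataset names are placeholders for whatever the real pool layout ends up being, and this is just one way to wire up the snapshot/send plumbing:

```python
#!/usr/bin/env python3
"""Sketch of a ZFS snapshot + incremental send to the backup HDD.
Dataset names are placeholders -- adjust to the real pool layout."""
import subprocess
from datetime import datetime, timezone

SOURCE = "rpool/ROOT"      # assumed dataset on the on-board SSD
TARGET = "backup/system"   # assumed dataset on the 1 TB HDD pool

def zfs(*args: str) -> str:
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout

def latest_snapshot(dataset: str) -> str | None:
    out = zfs("list", "-t", "snapshot", "-H", "-o", "name",
              "-s", "creation", "-r", dataset)
    snaps = [line for line in out.splitlines() if line.startswith(dataset + "@")]
    return snaps[-1] if snaps else None

def backup() -> None:
    previous = latest_snapshot(SOURCE)
    new_snap = f"{SOURCE}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
    zfs("snapshot", "-r", new_snap)

    # Incremental send if an earlier snapshot exists, full send otherwise.
    send_cmd = ["zfs", "send", "-R"]
    if previous:
        send_cmd += ["-I", previous]
    send_cmd.append(new_snap)

    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", TARGET],
                   stdin=send.stdout, check=True)
    send.wait()

if __name__ == "__main__":
    backup()
```

A cron job or systemd timer running this daily would cover the garden-variety corruption case; the fire-safe copies stay as the offline layer.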
-
[email protected] replied to [email protected] last edited by
A three-node cluster gives you much more flexibility and potentially better uptime (assuming you set up some sort of HA).
If your server has a major issue, it is really nice to be able to offload to different hardware. I learned this the hard way.
-
[email protected] replied to [email protected] last edited by
I have a fine backup strategy and I don't really want to go into it here. I am considering my ecosystem of services at this point.
I am skeptical that this will overload my I/O if I build it out slowly and allocate resources properly. It may be the rate-limiting factor in some occasional situations, but never a real overload. Most of these services just sit and listen on their respective ports most of the time. Only a few do intense processing, and even then only on upload of new files or when streaming.
I really resist throwing a lot of excess power at a single-user system. It goes against my whole ethos of appropriate and proportional tech.
-
[email protected] replied to [email protected] last edited by
Forgejo became a hard fork about a year ago: https://forgejo.org/2024-02-forking-forward/
And it seems that migration from Gitea is only possible up to Gitea version 1.22: https://forgejo.org/2024-12-gitea-compatibility/
-
[email protected] replied to [email protected] last edited by
Thanks, a solid suggestion.
I have explored that direction and would 100% agree for most home setups. I specifically need HA running in an unsupervised environment, so Add-ons are not on the table anyway. The containerized version works well for me so far and it's consistent with my overall services scheme. I am developing an integration and there's a whole other story to my setup that includes different networks and test servers for customer simulations using fresh installs of HASS OS and the like.
-
[email protected] replied to [email protected] last edited by
Proxmox Backup Server is my jam: great first-party deduplicated incremental backups. You can also spin up more than one and sync between them.
-
[email protected] replied to [email protected] last edited by
Looks good, I use a lot of the stuff you plan to host.
Don't forget about enabling infrastructure. Nearly everything needs a database, so get that figured out early on. An LDAP server is also helpful, even though you can just use Authelia's file backend. Decide whether you want to allow access from outside, and choose a suitable reverse proxy with a solution for certificates, if you haven't already.
If you plan to monitor the host itself, keep in mind that hosting Grafana on the same machine as everything else gives you no benefit once that host goes offline.
I'd get the LDAP server, the database and the reverse proxy running first. Afterwards, configure Authelia and try to implement authentication for a first project. Gitea/Forgejo is a good one to start with; you can set up OIDC or Remote-User authentication with it. Once you've got that down, the other projects are a breeze to set up.
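To make the Remote-User part concrete: the reverse proxy checks each request with Authelia and, once satisfied, forwards the verified username in a header that the application behind it trusts. Here's a minimal conceptual sketch of that trust relationship in plain Python (not Authelia's or Gitea's actual configuration, just the shape of it):

```python
#!/usr/bin/env python3
"""Conceptual sketch of Remote-User header authentication.
In a real deployment the reverse proxy (after checking with Authelia)
sets this header; the app must only be reachable through that proxy."""
from http.server import BaseHTTPRequestHandler, HTTPServer

class App(BaseHTTPRequestHandler):
    def do_GET(self):
        # The proxy injects the verified identity. Never trust this header
        # if the service can be reached directly.
        user = self.headers.get("Remote-User")
        if not user:
            self.send_response(401)
            self.end_headers()
            self.wfile.write(b"Not authenticated\n")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"Hello, {user}\n".encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), App).serve_forever()
```

The whole scheme only holds if the app is reachable exclusively through the proxy; Gitea/Forgejo's reverse proxy authentication relies on the same assumption.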
Best of luck with your migration.
-
[email protected] replied to [email protected] last edited by
Something I recently added that I've enjoyed is Uptime Kuma. It's simple but versatile for monitoring and notifications.
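For things that can't be probed from the outside, its push monitors are handy: the machine being watched periodically hits a URL that Kuma generates for the monitor, and the monitor flips to down when the heartbeats stop. Roughly like this, with the URL and token as placeholders for whatever your instance gives you:

```python
#!/usr/bin/env python3
"""Heartbeat for an Uptime Kuma "push" monitor (sketch).
The URL below is a placeholder -- Kuma shows the real one when you
create the push monitor."""
import urllib.request

PUSH_URL = "https://status.example.lan/api/push/abc123?status=up&msg=OK"

def heartbeat() -> None:
    # Run this from cron or a systemd timer on the machine being watched;
    # if the pings stop arriving, Kuma flags the monitor as down.
    with urllib.request.urlopen(PUSH_URL, timeout=10) as resp:
        resp.read()

if __name__ == "__main__":
    heartbeat()
```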
-
[email protected] replied to [email protected] last edited by
Oh boy, can of worms just opened. Awesome insight. I do have an ecosystem of servers already, and I have a Pi Zero 2 set aside to develop as a dedicated system watchdog for the whole shebang. I have multiple wifi networks segregated for testing and personal use. It uses the built-in wifi for the network connection and a wifi adapter to scan my sub-networks.
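The core of the watchdog could start as simple as the sketch below; the host names and addresses are made up, and the alert is just a print until I wire up real notifications.

```python
#!/usr/bin/env python3
"""Sketch of the Pi Zero 2 watchdog idea: ping each box on the segregated
networks and report whatever stops answering. Hosts are placeholders."""
import subprocess
import time

HOSTS = {
    "proxmox": "192.168.10.2",   # placeholder addresses
    "nas":     "192.168.10.3",
    "test-ap": "192.168.20.1",
}

def alive(ip: str) -> bool:
    # One ICMP echo with a two-second timeout; ping exits non-zero on failure.
    return subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                          stdout=subprocess.DEVNULL).returncode == 0

def watch(interval: int = 60) -> None:
    while True:
        for name, ip in HOSTS.items():
            if not alive(ip):
                # Swap the print for an ntfy/email/MQTT notification later.
                print(f"ALERT: {name} ({ip}) is not responding")
        time.sleep(interval)

if __name__ == "__main__":
    watch()
```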
So great insight and it helps some things click into place.
-
[email protected] replied to [email protected] last edited by
For Home Assistant, I use the installation script from here; it works flawlessly:
https://community-scripts.github.io/ProxmoxVE/scripts
This group took over the project after the main developer passed away. The scripts are quite easy to install and just need you to be in the Proxmox host shell (once you install it, you will know where it is).
-
[email protected] replied to [email protected] last edited by
I also started with a Docker host in Proxmox, but have since switched to k3s, as I find it reduces maintenance (mainly through FluxCD). But this is only an option if you want to learn k8s or already have experience with it.
If Proxmox runs on a consumer SSD, I would keep an eye on the SMART values, as Proxmox wore out the disk quickly in my case. I then bought second-hand enterprise SSDs and have had no problems since. You could also move the write-intensive workloads elsewhere or use an HDD for the root disk if possible.
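A small script run from cron is enough to keep an eye on the wear; the device path and the NVMe percentage_used field are assumptions on my part, and SATA SSDs report vendor-specific wear attributes instead:

```python
#!/usr/bin/env python3
"""Quick wear check via smartctl's JSON output (smartmontools >= 7).
Device path and the NVMe field are assumptions -- SATA SSDs expose
vendor-specific attributes instead."""
import json
import subprocess

DEVICE = "/dev/nvme0"   # placeholder
THRESHOLD = 80          # warn when 80 % of rated endurance is used

def wear_percentage(device: str) -> int | None:
    out = subprocess.run(["smartctl", "-a", "-j", device],
                         capture_output=True, text=True).stdout
    data = json.loads(out)
    nvme = data.get("nvme_smart_health_information_log", {})
    return nvme.get("percentage_used")

if __name__ == "__main__":
    used = wear_percentage(DEVICE)
    if used is None:
        print("Could not read wear value; check the attribute names for this drive")
    elif used >= THRESHOLD:
        print(f"WARNING: {DEVICE} has used {used} % of its rated endurance")
    else:
        print(f"{DEVICE}: {used} % endurance used")
```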
I pass my storage controller directly into the VM via PCI passthrough, as it makes backups via ZFS easier and let me avoid a speed bottleneck. However, the bottleneck was mainly caused by the (virtualized) firewall and the internal traffic routed through it. As a result, the internal services could only communicate at a little more than 1 Gbit/s, even though they were running on SSDs and NVMe RAIDs.
I use SQLite databases where I can, because the backups are much easier and the speed feels better in most cases. Ideally, though, the database file should be local to the VM.
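The nice part is that a live SQLite file can be copied consistently through the online backup API, which Python ships in the standard library; the paths here are just placeholders:

```python
#!/usr/bin/env python3
"""Consistent backup of a live SQLite database using the online backup API.
Paths are placeholders for whichever service's database you're copying."""
import sqlite3

SRC = "/srv/app/data.sqlite"          # assumed live database
DST = "/backup/app/data.sqlite.bak"   # assumed backup target

def backup_sqlite(src: str, dst: str) -> None:
    source = sqlite3.connect(src)
    target = sqlite3.connect(dst)
    try:
        # Copies pages in a transaction-safe way, so the service can keep
        # using the file while the backup runs.
        source.backup(target)
    finally:
        source.close()
        target.close()

if __name__ == "__main__":
    backup_sqlite(SRC, DST)
```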
Otherwise I would prioritize logging and alerts, but as an experienced server admin you have probably thought of that.
-
[email protected] replied to [email protected] last edited by
I also use Home Assistant as an appliance. I won't bother squeezing that thing into Docker.
-
[email protected] replied to [email protected] last edited by
Why clustering? What do you need HA for in a home environment?
I couldn't care less if my Jellyfin server went down for a few hours due to some config change.
Will some people be unhappy because my stuff isn't available? Maybe. Do I care? Depends on who it is. Anyway: way overkill outside of homelabbing and gaining experience for the lols.
-
[email protected] replied to [email protected] last edited by
reusing passwords on internal
Please implement a password manager.
Bitwarden can do almost anything on the free tier, and the few paid perks cost $10 per year and aren't even mandatory for actual usage.