Prioritizing de-clouding efforts
-
[email protected] replied to [email protected]
A three-node cluster gives you much more flexibility and potentially better uptime (assuming you set up some sort of HA).
If your server has a major issue, it is really nice to be able to offload to different hardware. I learned this the hard way.
-
[email protected] replied to [email protected]
I have a fine backup strategy and I don't really want to go into it here. I am considering my ecosystem of services at this point.
I am skeptical that this will overload my I/O if I build it slowly and allocate the resources properly. It may be the rate-limiting factor in some very occasional situations, but never a real overload. Most of these services just sit and listen on their respective ports most of the time; only a few do intense processing, and even then only on upload of new files or when streaming.
I really resist throwing a lot of excess power at a single-user system. It goes against my whole ethos of appropriate and proportional tech.
-
[email protected] replied to [email protected]
Forgejo became a hard fork about a year ago: https://forgejo.org/2024-02-forking-forward/
And it seems that migration from Gitea is only possible up to Gitea version 1.22: https://forgejo.org/2024-12-gitea-compatibility/
-
[email protected] replied to [email protected]
Thanks, a solid suggestion.
I have explored that direction and would 100% agree for most home setups. I specifically need Home Assistant running in an unsupervised environment, so Add-ons are not on the table anyway. The containerized version works well for me so far, and it's consistent with my overall services scheme. I am developing an integration, and there's a whole other story to my setup that includes different networks and test servers for customer simulations using fresh installs of HASS OS and the like.
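For reference, the containerized install is a one-liner, close to the official docker run command (the config path and timezone are placeholders):
```
docker run -d \
  --name homeassistant \
  --privileged \
  --restart=unless-stopped \
  -e TZ=Etc/UTC \
  -v /PATH_TO_YOUR_CONFIG:/config \
  -v /run/dbus:/run/dbus:ro \   # optional, needed for Bluetooth
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable
```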
-
[email protected] replied to [email protected]
Proxmox Backup Server is my jam: great first-party deduplicated incremental backups. You can also spin up more than one and sync between them.
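In case it helps, a rough sketch of wiring up a pull sync between two instances with proxmox-backup-manager (the remote name, host, datastore names and schedule are placeholders, and you'll also need the remote's fingerprint and credentials):
```
# run on the second PBS instance: register the first one as a remote,
# then pull its datastore on a schedule
proxmox-backup-manager remote create pbs-primary \
  --host 192.0.2.10 --auth-id sync@pbs   # add --fingerprint/--password too
proxmox-backup-manager sync-job create pull-from-primary \
  --remote pbs-primary --remote-store datastore1 \
  --store datastore1 --schedule daily
```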
-
[email protected] replied to [email protected]
Looks good; I use a lot of the stuff you plan to host.
Don't forget about enabling infrastructure. Nearly everything needs a database, so get that figured out early on. An LDAP server is also helpful, even though you can just use Authelia's file backend. Decide whether you want to enable access from outside and choose a suitable reverse proxy with a solution for certificates, if you haven't already.
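To make that layer concrete, here's a minimal sketch as a compose file, with Postgres as the shared database and Caddy as a reverse proxy that handles certificates automatically (image tags, paths and the password are illustrative):
```
cat > docker-compose.yml <<'EOF'
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder, use a real secret
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    restart: unless-stopped
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile   # site definitions live here
      - ./caddy-data:/data                 # certificates are stored here
    restart: unless-stopped
EOF
docker compose up -d
```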
Hosting Grafana on the same host as all the other services will give you no benefit if the host goes offline, assuming you plan to monitor that host too.
I'd get the LDAP server, the database and the reverse proxy running first. Afterwards, configure Authelia and try to implement authentication for a first project. Gitea/Forgejo is a good first one; you can set up OIDC or Remote-User authentication with it. Once you've got this down, the other projects are a breeze to set up.
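As a hypothetical starting point, the Authelia side of an OIDC client for Gitea/Forgejo looks roughly like this in configuration.yml (key names vary between Authelia releases, so check the docs for your version; the hostname and secret are placeholders):
```
cat >> configuration.yml <<'EOF'
identity_providers:
  oidc:
    clients:
      - id: forgejo
        description: Forgejo
        secret: 'replace-with-a-hashed-client-secret'
        redirect_uris:
          # the path segment must match the auth source name you create in Forgejo
          - 'https://git.example.com/user/oauth2/authelia/callback'
        scopes: ['openid', 'email', 'profile']
EOF
```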
Best of luck with your migration.
-
[email protected] replied to [email protected]
Something I recently added that I've enjoyed is Uptime Kuma. It's simple but versatile for monitoring and notifications.
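It's a one-container deploy; something like the documented run command gets you a web UI on port 3001:
```
docker run -d --restart=always -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma louislam/uptime-kuma:1
```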
-
[email protected] replied to [email protected]
Oh boy, can of worms just opened. Awesome insight. I do have an ecosystem of servers already, and I have a Pi Zero 2 set aside to develop as a dedicated system watchdog for the whole shebang. I have multiple WiFi networks segregated for testing and personal use. It uses both the built-in WiFi for the network connection and a WiFi adapter to scan my subnetworks.
So great insight and it helps some things click into place.
-
[email protected] replied to [email protected]
For Home Assistant, I use the installation script from here; it works flawlessly:
https://community-scripts.github.io/ProxmoxVE/scripts
This group took over the project after the main developer passed on. The scripts are quite easy to install and just need you to be in the Proxmox host shell (once you install it, you will know where it is).
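The commands on that page follow a one-liner pattern along these lines; copy the exact command for the Home Assistant script from the site, since the path below is only illustrative:
```
# paste into the Proxmox host shell; the script URL is an assumption,
# take the real one from the linked page
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/homeassistant.sh)"
```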
-
[email protected] replied to [email protected]
I also started with a Docker host in Proxmox but have since switched to k3s, as I think it reduces maintenance (mainly through FluxCD). But this is only an option if you want to learn k8s or already have experience.
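For the curious: with the flux CLI installed and a GitHub token in the environment, bootstrapping looks roughly like this (owner, repo and path are placeholders):
```
flux bootstrap github \
  --owner=my-gh-user \
  --repository=homelab-fleet \
  --branch=main \
  --path=clusters/homelab \
  --personal   # target a personal repo instead of an org repo
```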
If Proxmox runs on a consumer SSD, I would keep an eye on the SMART values, as it wore out the disk quickly in my case. I then bought second-hand enterprise SSDs and have had no problems since. You could also move the write-intensive workloads elsewhere or use an HDD for root if possible.
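Spot-checking wear is quick with smartctl (device names are examples):
```
sudo smartctl -a /dev/nvme0   # NVMe: watch the "Percentage Used" field
sudo smartctl -A /dev/sda     # SATA: look for wear attributes such as
                              # Wear_Leveling_Count or Media_Wearout_Indicator
```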
I passed my storage controller directly into the VM via PCI, as it makes backups via ZFS easier and let me avoid a speed bottleneck. However, the bottleneck was mainly caused by the (virtualized) firewall and the internal communication routed through it. As a result, the internal services could only communicate at a little more than 1 Gbit/s, even though they were running on SSDs and NVMe RAIDs.
I use SQLite databases when I can, because the backups are much easier and the speed feels faster in most cases. However, the file should ideally be local to the VM.
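A nice side effect: you get a consistent online backup with a one-liner (paths are illustrative):
```
# .backup takes a consistent snapshot even while the app has the DB open
sqlite3 /srv/app/app.db ".backup '/backups/app-$(date +%F).db'"
```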
Otherwise I would prioritize logging and alerts, but as an experienced server admin you have probably thought of that.
-
[email protected] replied to [email protected]
I also use Home Assistant as an appliance. I won't bother squeezing that thing into Docker.
-
[email protected] replied to [email protected]
Why clustering? What do you need HA for in a home environment?
I couldn't care less if my Jellyfin server went down for a few hours due to some config change.
Will some people be unhappy because my stuff isn't available? Maybe. Do I care? Depends on who it is. Anyway: way overkill outside of homelabbing and gaining experience for the lols.
-
[email protected] replied to [email protected]
"reusing passwords on internal"
Please implement a password manager.
Bitwarden can do almost anything on the free tier, and the few paid perks cost $10 per year and aren't even mandatory for actual usage.
-
[email protected] replied to [email protected]
May I ask why you'd want to self-host Bitwarden if the free hosted version is almost as good, aside from the few unimportant paid perks?
-
[email protected] replied to [email protected]
Regarding mini PCs: beware of RAM overheating!
I bought some Minisforum HM90s for Proxmox self-hosting, installed 64 GB of RAM (2×32 GB DDR4-3200 sticks), and ran memtest first to ensure the RAM was good. All three mini PCs failed to various degrees.
The "best" one would run for a couple of days and tens of passes before throwing multiple errors (tens of errors), then run for another few days without errors.
It turns out the RAM overheated: 85-95 °C surface temperature. (There's almost no space or openings for air circulation on that side of the PC.) With the lid off, two of the three computers ran memtest for a week with no errors, but one still gave occasional error bursts. RAM surface temperature with the lid off was still 80-85 °C.
Adding a small fan creating a slight draft dropped the temperature to 55-60 °C.
I then left the computer running memtest for a few weeks while I was away, then another few weeks while busy with other stuff. It has now been 6 weeks of continuous memtest, so I'm fairly confident in the integrity of the RAM, as long as it stays cool.
It also turns out that some, but not all, RAM sticks have onboard temperature sensors. lm-sensors can read the RAM temperature if the sticks have the sensor. So I'm building an Arduino solution to monitor the temperature with an IR sensor and also control an extra fan.
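If you want to try the lm-sensors route, the Arch wiki recipe boils down to loading the jc42 driver and, if nothing shows up, instantiating the device by hand (the bus number and address are system-specific; 0x18-0x1f are the usual DIMM sensor addresses, and you may also need your platform's i2c bus driver, e.g. i2c-i801 on Intel):
```
sudo modprobe jc42                      # driver for JEDEC JC-42.4 DIMM sensors
# if `sensors` shows nothing new, tell the i2c bus about the chip manually:
echo jc42 0x18 | sudo tee /sys/bus/i2c/devices/i2c-0/new_device
sensors                                 # DIMM temps should now be listed
```
-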
[email protected] replied to [email protected]
Thanks for the tip on measuring the temp of the RAM, too. I will incorporate that into my monitoring scheme.
The mini PC I have has a good case design, with a fan that blows across the RAM, CPU and SSD. So I think it has good cooling, but I will definitely confirm with some monitoring.
-
[email protected] replied to [email protected]
Good call-out on the SMART values. That's on the priority list for my monitoring scheme now too.
-
[email protected] replied to [email protected]
I don't want to spend a bunch of time troubleshooting something. Having a way to move my stuff to a different host when the host crashes is very nice.
-
[email protected] replied to [email protected]
Nice. My HM90s have a really great cooling solution for the CPU (big silent fan, fine-finned heat sink), but no cooling on the bottom side of the main board, which houses the RAM, an NVMe drive and two 2.5" SATA SSDs.
As usual, the Arch wiki is super helpful, even for non-Arch distros:
https://wiki.archlinux.org/title/Lm_sensors#Adding_DIMM_temperature_sensors
-
[email protected] replied to [email protected]
I don't?