Prioritizing de-clouding efforts
-
[email protected] replied to [email protected] last edited by
I would recommend running Home Assistant OS in a VM instead of using the docker container.
-
[email protected] replied to [email protected] last edited by
Why is that?
-
[email protected] replied to [email protected] last edited by
You get easy access to their add-ons with a VM (aka HAOS). You can do the same thing yourself, but you have to do it all (creating the containers, configuring them, figuring out how to connect them to HA/your network/etc., updating them as needed), whereas with HAOS it generally just works. If you want that control, great, but go in with that understanding.
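Just to make the trade-off concrete, here is a rough, hypothetical sketch of the "do it all yourself" route driven from the Docker SDK for Python; the image tags, volume paths, and the companion service are my own assumptions, and a plain docker-compose file achieves the same thing.

```python
# Hypothetical sketch: running the Home Assistant container install plus one
# companion service (Mosquitto) yourself instead of letting HAOS manage add-ons.
# Requires the Docker SDK for Python (pip install docker); paths and tags are placeholders.
import docker

client = docker.from_env()

# Home Assistant itself -- host networking is commonly recommended for device discovery
client.containers.run(
    "ghcr.io/home-assistant/home-assistant:stable",
    name="homeassistant",
    detach=True,
    network_mode="host",
    restart_policy={"Name": "always"},
    volumes={"/srv/homeassistant/config": {"bind": "/config", "mode": "rw"}},
)

# An "add-on" you now own end to end: you create, configure, network, and update it yourself
client.containers.run(
    "eclipse-mosquitto:2",
    name="mosquitto",
    detach=True,
    ports={"1883/tcp": 1883},
    restart_policy={"Name": "always"},
    volumes={"/srv/mosquitto/config": {"bind": "/mosquitto/config", "mode": "rw"}},
)
```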
-
[email protected] replied to [email protected] last edited by
This looks exciting. I hope the transition goes well.
I would say to get automated backups running on the first few services before you do them all. That's a luxury we get "for free" with cloud services.
Note on Firefly III. I use it because I've been using it, but after roughly four years I don't really recommend it. The way I use it, anyway, I think inserting data could be easier (I do it manually on purpose), and I wish the graphs/visualizations were better. The search functionality has also been subpar in my experience. I would look at other alternatives as well, but I think it's still better than not tracking finances at all. I do wonder whether using a database client to insert data and Python scripts or Grafana to analyze it would work better for me... YMMV.
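As a rough idea of what the "database client / Python scripts" route could look like, here is a hypothetical sketch against Firefly III's v1 transactions API; the host, token, and account names are placeholders, and the exact field names may differ between versions.

```python
# Hypothetical sketch: inserting a withdrawal into Firefly III via its REST API.
# Assumes a personal access token and the /api/v1/transactions endpoint;
# host, token, and account names below are placeholders.
import requests

FIREFLY_URL = "https://firefly.example.lan"   # placeholder
TOKEN = "your-personal-access-token"          # placeholder

payload = {
    "transactions": [{
        "type": "withdrawal",
        "date": "2025-01-15",
        "amount": "42.50",
        "description": "Groceries",
        "source_name": "Checking",         # asset account
        "destination_name": "Supermarket", # expense account
    }]
}

resp = requests.post(
    f"{FIREFLY_URL}/api/v1/transactions",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
print("created, HTTP", resp.status_code)
```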
Good luck!
-
[email protected] replied to [email protected] last edited by
Everything @[email protected] said, plus backups in Home Assistant OS also include the add-ons, which is just very convenient.
My Proxmox setup has 3 VMs:
- Home Assistant OS with all the add-ons (containers) specific to Home Assistant
- TrueNAS with an HBA card using PCIe passthrough
- VM for all other services
Also, if you ever plan to switch from a virtualized environment to bare metal servers, this layout makes switching over dead easy.
-
[email protected] replied to [email protected] last edited by
The main difference, I would say, is the development and licensing model.
PhotoPrism forces people who want to contribute to sign a CLA and give away their rights. The community is also not very active; it is mainly one dev, who can change the code's license at any time. Immich has no such agreement and has a huge, active contributor community around it. Immich is also backed by FUTO, which has its pros and cons.
IMHO the biggest pain in self-hosting is when a FOSS product turns evil towards its community and starts adopting business practices hostile to consumers and free self-hosters.
Immich is far less likely to turn evil.
-
[email protected] replied to [email protected] last edited by
This might be the last chance to migrate from Gitea to Forgejo and avoid whatever trainwreck Gitea is heading for. It's going to become a hard fork soon.
-
[email protected] replied to [email protected] last edited by
What hardware are you looking at?
I would do a three-node cluster (maybe even five nodes).
-
[email protected] replied to [email protected] last edited by
That's surely overkill for my use level. Most of these services are only really listening to the web port most of the time. Yes, some like Immich or Paperless-ngx do some brief intense processing, but I am skeptical that I need nearly that much separation. I am using an AMD Ryzen 7 5825U. I am open to ideas, but I also press hard against over-investing in hardware for a single-person home setup.
-
[email protected] replied to [email protected] last edited by
That's very relevant. Thanks for the heads-up. I will look into that.
-
[email protected] replied to [email protected] last edited by
Good insights, thank you for the perspective. I will look into that more closely before committing.
-
[email protected] replied to [email protected] last edited by
I do have a backup plan. I will use the on-board SSD for the main system and an additional 1 TB HDD for an incremental backup of the entire system with ZFS, all to guard against garden-variety disk corruption. I also take total system copies to keep in a fire safe.
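For the incremental part, that boils down to ZFS snapshots plus an incremental send to the pool on the backup HDD. A hypothetical wrapper might look like the sketch below; the pool/dataset names are made up, and tools like sanoid/syncoid do the same job more robustly.

```python
# Hypothetical sketch: snapshot the dataset on the SSD pool and send only the
# delta to the pool on the backup HDD. Pool/dataset names are placeholders.
import subprocess
from datetime import datetime, timezone

SRC = "rpool/data"    # dataset on the onboard SSD (placeholder)
DST = "backup/data"   # dataset on the backup HDD pool (placeholder)

# Take a new snapshot named after the current UTC time
stamp = datetime.now(timezone.utc).strftime("auto-%Y%m%d-%H%M%S")
subprocess.run(["zfs", "snapshot", f"{SRC}@{stamp}"], check=True)

# Find the previous snapshot so we can send only the increment
snaps = subprocess.run(
    ["zfs", "list", "-t", "snapshot", "-o", "name", "-s", "creation", "-H", SRC],
    check=True, capture_output=True, text=True,
).stdout.split()
prev = snaps[-2] if len(snaps) >= 2 else None

# zfs send [-i previous] current | zfs receive -F backup/data
send_cmd = ["zfs", "send"] + (["-i", prev] if prev else []) + [f"{SRC}@{stamp}"]
send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```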
-
[email protected] replied to [email protected] last edited by
A three-node cluster gives you much more flexibility and potentially more uptime (assuming you set up some sort of high availability).
If your server has a major issue, it is really nice to be able to offload to different hardware. I learned this the hard way.
-
[email protected] replied to [email protected] last edited by
I have a fine backup strategy and I don't really want to go into it here. I am considering my ecosystem of services at this point.
I am skeptical that this will overload my I/O if I build it slowly and allocate the resources properly. It may be the rate-limiting factor in some occasional situations, but never a real overload. Most of these services just sit and listen on their respective ports most of the time. Only a few do intense processing, and even then only on upload of new files or when streaming.
I really resist throwing a lot of excess power at a single-user system. It goes against my whole ethos of appropriate and proportional tech.
-
[email protected] replied to [email protected] last edited by
Forgejo became a hard fork about a year ago: https://forgejo.org/2024-02-forking-forward/
And it seems that migration from Gitea is only possible up to Gitea version 1.22: https://forgejo.org/2024-12-gitea-compatibility/
-
[email protected] replied to [email protected] last edited by
Thanks, a solid suggestion.
I have explored that direction and would 100% agree for most home setups. I specifically need HA running in an unsupervised environment, so Add-ons are not on the table anyway. The containerized version works well for me so far and it's consistent with my overall services scheme. I am developing an integration and there's a whole other story to my setup that includes different networks and test servers for customer simulations using fresh installs of HASS OS and the like.
-
[email protected] replied to [email protected] last edited by
Proxmox Backup Server is my jam; great first-party deduplicated, incremental backups. You can also spin up more than one instance and sync between them.
-
[email protected] replied to [email protected] last edited by
Looks good, I use a lot of the stuff you plan to host.
Don't forget about enabling infrastructure. Nearly everything needs a database, so get that figured out early on. An LDAP server is also helpful, even though you can just use the file backend of Authelia. Decide if you want to enable access from outside and choose a suitable reverse proxy with a solution for certificates, if you did not already do that.
If you plan to monitor the host itself, keep in mind that hosting Grafana on the same host as all the other services gives you no benefit once that host goes offline.
I'd get the LDAP server, the database and the reverse proxy running first. Afterwards configure Authelia and try to implement authentication for the first project. Gitea/Forgejo is a good first one; you can set up OIDC or Remote-User authentication with it. Once you've got this down, the other projects are a breeze to set up.
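Once the OIDC provider is up, the endpoints Gitea/Forgejo asks for can all be read from the standard discovery document; a quick hypothetical check (the hostname is a placeholder) looks like this.

```python
# Hypothetical sketch: fetch the OIDC discovery document from the auth server
# and print the endpoints an OIDC client such as Gitea/Forgejo needs.
# The hostname is a placeholder; the /.well-known path is standard OIDC.
import requests

AUTH_HOST = "https://auth.example.lan"  # placeholder

resp = requests.get(f"{AUTH_HOST}/.well-known/openid-configuration", timeout=10)
resp.raise_for_status()
conf = resp.json()

for key in ("issuer", "authorization_endpoint", "token_endpoint", "userinfo_endpoint"):
    print(f"{key}: {conf.get(key)}")
```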
Best of luck with your migration.
-
[email protected] replied to [email protected] last edited by
Something I recently added that I've enjoyed is Uptime Kuma. It's simple but versatile for monitoring and notifications.
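For jobs that can't be probed from the outside (backups, cron tasks), its push monitors are handy; a hypothetical heartbeat could be as small as the sketch below, where the URL and token are placeholders from the monitor you create in the UI and the query parameters may vary by version.

```python
# Hypothetical sketch: hit an Uptime Kuma "push" monitor at the end of a cron job,
# so the monitor flips to down if the job ever stops checking in.
# URL/token are placeholders generated when the push monitor is created.
import requests

PUSH_URL = "https://status.example.lan/api/push/abc123"  # placeholder

requests.get(PUSH_URL, params={"status": "up", "msg": "backup finished"}, timeout=10)
```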
-
[email protected] replied to [email protected] last edited by
Oh boy, can of worms just opened. Awesome insight. I do have an ecosystem of servers already, and I have a Pi Zero 2 set aside to develop as a dedicated system watchdog for the whole shebang (rough sketch of that idea below). I have multiple wifi networks segregated for testing and personal use, and I use both the built-in wifi for the network connection and a wifi adapter to scan my sub-networks.
So great insight and it helps some things click into place.
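The watchdog will probably start as something very small, like this hypothetical first pass that just pings each host and reports anything unreachable; the hostnames and the notification step are placeholders.

```python
# Hypothetical sketch for the Pi Zero watchdog: ping each host periodically and
# report the ones that stop answering. Hostnames are placeholders, and the
# "notify" step is just a print here (it would become ntfy/email/etc. in practice).
import subprocess
import time

HOSTS = ["proxmox.lan", "truenas.lan", "services.lan"]  # placeholders

def is_up(host: str) -> bool:
    # One ICMP echo with a 2-second timeout; relies on the system ping binary.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

while True:
    down = [h for h in HOSTS if not is_up(h)]
    if down:
        print(f"hosts down: {', '.join(down)}")  # placeholder notification
    time.sleep(60)
```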