Prioritizing de-clouding efforts
-
[email protected]replied to [email protected] last edited by
No, but Vaultwarden is the one thing I don't even try to connect to authentik so a breach of the auth password won't give away everything else
-
[email protected]replied to [email protected] last edited by
Photoprism is less "resource intensive" because it's offloading face detection to a cloud service. There are also many who don't like the arbitrary nature of which features photoprism paywalls behind its premium version.
If you can get past immich's initial face recognition and metadata extraction jobs, it's a much more polished experience, but more importantly it aligns with your goal of getting out of the cloud.
-
[email protected]replied to [email protected] last edited by
The biggest thing I'm seeing here is the creation of a bottleneck for your network services, and potential for catastrophic failure. Here's where I forsee problems:
- Running everything from a single HDD(?) is going to throw your entire home and network into disarray if it fails. Consider at least adding a second drive for RAID1 if you can.
- You're going to run into I/O issues with the imbalance of the services you're cramming all together.
- You don't mention backups. I'd definitely work that out first. Some of these services can take their own, but what about the bulk data volumes?
- You don't mention the specs of the host, but I'd make sure you have swap equal to RAM here if youre not worried about disk space. This will just prevent hard kernel I/O issues or OOMkills if it comes to that.
- Move network services first, storage second, n2h last.
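As a hypothetical quick check on that swap point (uses the psutil package; nothing here is specific to your setup):

```python
# Warn if swap is smaller than RAM, per the swap-equal-to-RAM advice above.
import psutil

ram = psutil.virtual_memory().total
swap = psutil.swap_memory().total

print(f"RAM:  {ram / 2**30:.1f} GiB")
print(f"Swap: {swap / 2**30:.1f} GiB")

if swap < ram:
    print("Swap is smaller than RAM; consider adding a swap file so memory "
          "pressure degrades gracefully instead of triggering OOM kills.")
```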
-
[email protected]replied to [email protected] last edited by
#1 items should be backups. (Well maybe #2 so that you have something to back up, but don't delete the source data until the backups are running.)
You need offsite backups, and ideally multiple locations.
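A rough sketch of what "multiple locations" can look like in practice (assuming restic; the paths, repo URLs, and password file below are just placeholders):

```python
# Push the same data to two restic repositories: one local, one offsite,
# so a single failure can't take out both copies.
import subprocess

SOURCES = ["/srv/photos", "/srv/documents"]
REPOS = [
    "/mnt/backup-disk/restic",               # local copy
    "sftp:[email protected]:/restic",   # offsite copy
]

for repo in REPOS:
    subprocess.run(
        ["restic", "-r", repo, "--password-file", "/root/.restic-pass",
         "backup", *SOURCES],
        check=True,
    )
```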
-
[email protected]replied to [email protected] last edited by
I would recommend running Home Assistant OS in a VM instead of using the docker container.
-
[email protected]replied to [email protected] last edited by
Why is that?
-
[email protected]replied to [email protected] last edited by
You get easy access to their addons with a VM (aka HAOS). You can do the same thing yourself but you have to do it all (creating the containers, configuring them, figuring out how to connect them to HA/your network/etc., updating them as needed) - whereas with HAOS it generally just works. If you want that control great but go in with that understanding.
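For a sense of what "doing it all yourself" means, here's a hypothetical sketch of running an MQTT broker as a plain container instead of installing the Mosquitto add-on (Docker SDK for Python; the names, ports, and paths are placeholders):

```python
import docker

client = docker.from_env()

# Roughly the equivalent of clicking "install Mosquitto add-on" in HAOS.
client.containers.run(
    "eclipse-mosquitto:2",
    name="mosquitto",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    ports={"1883/tcp": 1883},
    volumes={"/srv/mosquitto/config": {"bind": "/mosquitto/config", "mode": "rw"}},
)
# ...and you still have to write mosquitto.conf, point Home Assistant's MQTT
# integration at the broker, and handle updates yourself.
```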
-
[email protected]replied to [email protected] last edited by
This looks exciting. I hope the transition goes well.
I would say to get automated backups running on the first few before you do them all. Thats a luxury we get "for free" with cloud services.
Note on firefly iii. I use it because I've been using it, but after using it for 4ish years, I dont really recommend it. The way I use it anyway, I think inserting data could be easier (I do it manually on purpose) and the graphs/visualizations I also wish were better. My experience with search functionality is also sub par. I would look at other alternatives as well, but I think its still better than not tracking finances at all. But I wonder if using a database client to insert data and python scripts or grafana to analyze the data would be better for me....YMMV
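Something like this is what I have in mind (a hypothetical sketch with pandas/matplotlib; the table and column names are made up and don't match Firefly III's actual schema):

```python
# Pull transactions straight out of a database and chart monthly spend.
import sqlite3

import matplotlib.pyplot as plt
import pandas as pd

conn = sqlite3.connect("finances.db")
df = pd.read_sql_query(
    "SELECT date, amount, category FROM transactions", conn, parse_dates=["date"]
)

monthly = (
    df.groupby([df["date"].dt.to_period("M"), "category"])["amount"].sum().unstack()
)
monthly.plot(kind="bar", stacked=True, figsize=(10, 5), title="Monthly spend by category")
plt.tight_layout()
plt.show()
```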
Good luck!
-
[email protected]replied to [email protected] last edited by
Everything @[email protected] said and because backups to Home Assistant OS also include addons, which is just very convenient.
My Proxmox setup has 3 VMs:
- Home Assistant OS with all the add-ons (containers) specific to Home Assistant
- TrueNAS with an HBA card using PCIe passthrough
- VM for all other services
Also, if you ever plan to switch from a virtualized environment to bare metal servers, this layout makes switching over dead easy.
-
[email protected]replied to [email protected] last edited by
The main difference i would say is the development and licensing model.
Photo prism is forcing ppl who want to commit to sign a CLA to.give away their rights. Also the community is not really active it is mainly one dev that can change the code license on any given time.Immich does not have such an agreement and has a huge active contributor community around it. Also Immich is backed by Futo which has its pros and cons.
Imho the biggest pain in self hosting is when a foss product turns evil towards its community and start to practice anti consumer/free selfhosters business practices.
Immich is far less likely to turn evil.
-
[email protected]replied to [email protected] last edited by
This might be the last chance to migrate from Gitea to Forgejo and avoid whatever trainwreck Gitea is heading for. It's going to a hardfork soon.
-
[email protected]replied to [email protected] last edited by
What hardware are you looking at?
I would do a three node cluster (maybe even 5 node)
-
[email protected]replied to [email protected] last edited by
That's surely overkill for my use level. Most of these services are only really listening to the web port most of the time. Yes, some like Immich or Paperless-ngx do some brief intense processing, but I am skeptical that I need nearly that much separation. I am using an AMD Ryzen 7 5825U. I am open to ideas, but I also press hard against over-investing in hardware for a single-person home setup.
-
[email protected]replied to [email protected] last edited by
That's very relevant. Thanks for the heads-up. I will look into that.
-
[email protected]replied to [email protected] last edited by
Good insights, thank you for the perspective. I will look into that more closely before committing.
-
[email protected]replied to [email protected] last edited by
I do have a backup plan. I will use the on-board SSD for the main system and an additional 1Tb HDD for an incremental backup of the entire system with ZFS, all to guard against garden-variety disk corruption. I also take total system copies to keep in a fire safe.
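For what it's worth, a hypothetical sketch of that incremental-backup idea (snapshot the main pool, then zfs-send the delta to the backup HDD; the pool, dataset, and snapshot names are placeholders):

```python
# Take a new snapshot and send only the changes since the previous one.
import subprocess
from datetime import datetime, timezone

SRC = "tank/services"
DST = "backup/services"
prev = "tank/services@2025-01-01"  # last snapshot already on the backup HDD
new = f"{SRC}@{datetime.now(timezone.utc):%Y-%m-%d}"

subprocess.run(["zfs", "snapshot", new], check=True)

send = subprocess.Popen(["zfs", "send", "-i", prev, new], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```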
-
[email protected]replied to [email protected] last edited by
A three node cluster gives you much more flexibility and potentially uptime (assuming you do some sort of HA)
If you server has a major issue it is really nice to be able to offload to different hardware. I learned this the hard way.
-
[email protected]replied to [email protected] last edited by
I have a fine backup strategy and I don't really want to go into it here. I am considering my ecosystem of services at this point.
I am skeptical that this will overload my i/o if I build it slowly and allocate the resources properly. It may be the rate-limiting factor in some very occasional situations, but never a real over-load situation. Most of these services only sit and listen on their respective ports most of the time. Only a few do intense processing and even then only on upload of new files or when streaming.
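By "allocate the resources properly" I mean something like capping the occasional heavy hitters so one job can't starve everything else. A hypothetical sketch (Docker SDK for Python; the image, name, and limits are placeholders):

```python
import docker

client = docker.from_env()

# Hard ceilings on memory and CPU for one of the heavier services.
client.containers.run(
    "ghcr.io/immich-app/immich-server:release",
    name="immich-server",
    detach=True,
    mem_limit="3g",           # cap memory at 3 GiB
    nano_cpus=2_000_000_000,  # roughly 2 CPUs
)
```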
I really resist throwing a lot of excess power at a single-user system. It goes against my whole ethos of appropriate and proportional tech.
-
[email protected]replied to [email protected] last edited by
Forgejo becase a hard fork about a year ago: https://forgejo.org/2024-02-forking-forward/
And it seems that migration from Gitea is only possible up to Gitea version 1.22: https://forgejo.org/2024-12-gitea-compatibility/ -
[email protected]replied to [email protected] last edited by
Thanks, a solid suggestion.
I have explored that direction and would 100% agree for most home setups. I specifically need HA running in an unsupervised environment, so Add-ons are not on the table anyway. The containerized version works well for me so far and it's consistent with my overall services scheme. I am developing an integration and there's a whole other story to my setup that includes different networks and test servers for customer simulations using fresh installs of HASS OS and the like.