PSA: If the first Smart Search in Immich takes a while
-
So why would you not write out the full path?
The other day my Raspberry Pi decided it didn't want to boot up (I guess it didn't like being hosted on an SD card anymore), so I backed up my compose folder and reinstalled Raspberry Pi OS under a different username than my last install. If I specified the full path on every container, it would be annoying to have to redo them all if I decided to move to another directory/drive or change my username.
I'd just do it with a simple search and replace. Have done. I feel like relative paths leave too much room for human error.
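To illustrate the trade-off both comments are describing, here is a minimal compose sketch; the service name and host paths are just examples, not taken from the thread:

services:
  immich-server:
    volumes:
      # relative: resolved against the compose project directory,
      # so it survives a move to another drive or a username change
      - ./library:/usr/src/app/upload
      # absolute: has to be edited (or search-and-replaced) whenever the home directory changes
      # - /home/olduser/compose/immich/library:/usr/src/app/upload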
-
./ will be the directory you run your compose from

I'm almost sure that ./ is the directory of the compose.yaml. Normally I just run docker compose up -d in the project directory, but I could run docker compose -f /somewhere/compose.yaml up -d from another directory, and then ./ would be /somewhere, and not the directory where I started the command.
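As I understand the resolution rules, a quick way to picture it (directory names are made up): Compose resolves relative paths against the project directory, which defaults to the directory containing the compose file and can be overridden with --project-directory.

# ./library in /somewhere/compose.yaml resolves to /somewhere/library,
# no matter which directory this command is run from
docker compose -f /somewhere/compose.yaml up -d

# explicitly override the project directory: ./library now resolves to /elsewhere/library
docker compose -f /somewhere/compose.yaml --project-directory /elsewhere up -d

-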
Running now.
Let me know how inference goes. I might recommend that to a friend with a similar CPU.
-
Because I clean everything up that's not explicitly on disk on restart:
[Unit]
Description=Immich in Docker
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
WorkingDirectory=/opt/immich-docker
ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
ExecStartPre=-/usr/bin/docker compose down --remove-orphans
ExecStartPre=-/usr/bin/docker compose rm -f -s -v
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
Wow, you pull new images every time you boot up? Coming from a mindset of having rock-solid stability, this scares me. You're living your life on the edge, my friend. I wish I could do that.
-
Wow, you pull new images every time you boot up? Coming from a mindset of having rock-solid stability, this scares me. You're living your life on the edge, my friend. I wish I could do that.
I use a fixed tag.
It's more a simple way to update. Change the tag in SaltStack, apply config, service is restarted, new tag is pulled. If the tag doesn't change, the pull is a noop.
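As a sketch of that workflow in compose terms (the tag value below is only an example, not a recommendation):

services:
  immich-server:
    # pinned tag: "docker compose pull" is a no-op until this line is changed
    image: ghcr.io/immich-app/immich-server:v1.119.0
    # a moving tag like "release" would instead pull whatever is newest on every boot
    # image: ghcr.io/immich-app/immich-server:release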
-
I use a fixed tag.
It's more a simple way to update. Change the tag in SaltStack, apply config, service is restarted, new tag is pulled. If the tag doesn't change, the pull is a noop.
Ahh, that calmed me down. Never thought of doing anything like what you're doing here, but I do like it.
-
Let me know how inference goes. I might recommend that to a friend with a similar CPU.
I decided on the ViT-B-16-SigLIP2__webli model, so I switched to that last night. I also needed to update my server to the latest version of Immich, so a new smart search job was run late last night.

Out of 140,000+ photos/videos, it's down to 104,000, and I have it set to 6 concurrent tasks.

I don't mind it processing for 24h. I believe when I first set Immich up, the smart search took many days. I'm still able to use the app and website to navigate and search without any delays.
-
I decided on the ViT-B-16-SigLIP2__webli model, so I switched to that last night. I also needed to update my server to the latest version of Immich, so a new smart search job was run late last night.

Out of 140,000+ photos/videos, it's down to 104,000, and I have it set to 6 concurrent tasks.

I don't mind it processing for 24h. I believe when I first set Immich up, the smart search took many days. I'm still able to use the app and website to navigate and search without any delays.
Let me know how the search performs once it's done. Speed of search, subjective quality, etc.
-
I switched to the same model. It's absolutely spectacular. The only extra things I did were to increase the concurrent job count for Smart Search and to give the model access to my GPU, which sped up the initial scan by at least an order of magnitude.
Seems to work really well. I can do obscure searches like Outer Wilds and it will pull up pictures I took from my phone of random gameplay moments, so it's not doing any filename or metadata cheating there.
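For anyone wanting to try the GPU part, a hedged sketch of one way to hand an NVIDIA GPU to the machine-learning container using plain Compose device reservations. Immich also ships hardware-acceleration override files, so treat the image tag and service name here as assumptions to check against the current docs:

services:
  immich-machine-learning:
    # CUDA build of the ML image (assumed tag; pick the variant matching your hardware)
    image: ghcr.io/immich-app/immich-machine-learning:release-cuda
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]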
-
Let me know how the search performs once it's done. Speed of search, subjective quality, etc.
Search speed was never an issue before, and neither was quality. My biggest gripe is not being able to sort search results by date! If I had that, it would be perfect.
But I'll update you once it's done (at 97,000 to go... )
-
Because I clean everything up that's not explicitly on disk on restart:
[Unit]
Description=Immich in Docker
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
WorkingDirectory=/opt/immich-docker
ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
ExecStartPre=-/usr/bin/docker compose down --remove-orphans
ExecStartPre=-/usr/bin/docker compose rm -f -s -v
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
That's wild! What advantage do you get from it, or is it just because you can for fun?
Also, I've never seen a service created for each Docker stack like that before.
-
That's wild! What advantage do you get from it, or is it just because you can for fun?
Also, I've never seen a service created for each Docker stack like that before.
Well, you gotta start it somehow. You could rely on Compose's built-in service management, which will restart containers upon system reboot if they were started with -d and have the right restart policy. But you still have to start those at least once. How would you do that? Unless you plan to start it manually, you have to use some service startup mechanism, which leads us to a systemd unit: I'd have to write a unit that does docker compose up -d.

But then I'm splitting the service lifecycle management across two systems. If I want to stop the service, I can no longer do that via systemd; I have to go find where the compose file is and issue docker compose down. Not great. Instead I'd write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that's kinda what I'm doing already, isn't it? Except if I start it with docker compose up without -d, I don't need a separate stop line and systemd can directly monitor the process. As a result I get the logs in journald too, and I can use systemd's restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. It's way more powerful than Compose's restart policy.

Finally, I like to clean up any data I haven't explicitly intended to persist across service restarts, so that I don't end up debugging an issue that only manifests because of some persisted piece of data I'm completely unaware of.
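For contrast, a minimal sketch of the detached variant argued against above, where Compose rather than systemd ends up supervising the containers (reusing the working directory from the unit earlier in the thread):

[Unit]
Description=Immich in Docker (detached variant)
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/immich-docker
# systemd only sees the short-lived "up -d" command, not the containers themselves,
# so there is no journald log stream and no systemd restart policy for the containers
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target

-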
Well, you gotta start it somehow. You could rely on Compose's built-in service management, which will restart containers upon system reboot if they were started with -d and have the right restart policy. But you still have to start those at least once. How would you do that? Unless you plan to start it manually, you have to use some service startup mechanism, which leads us to a systemd unit: I'd have to write a unit that does docker compose up -d.

But then I'm splitting the service lifecycle management across two systems. If I want to stop the service, I can no longer do that via systemd; I have to go find where the compose file is and issue docker compose down. Not great. Instead I'd write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that's kinda what I'm doing already, isn't it? Except if I start it with docker compose up without -d, I don't need a separate stop line and systemd can directly monitor the process. As a result I get the logs in journald too, and I can use systemd's restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. It's way more powerful than Compose's restart policy.

Finally, I like to clean up any data I haven't explicitly intended to persist across service restarts, so that I don't end up debugging an issue that only manifests because of some persisted piece of data I'm completely unaware of.

Interesting, waiting on network mounts could be useful!
I deploy everything through Komodo, so it handles the initial start of the stack, updates, logs, etc.