agnos.is Forums / Selfhosted

PSA: If the first Smart Search in Immich takes a while

Quoting [email protected]:

    Your ML model cache volume is getting blown up during restart and the model is being re-downloaded during the first search post-restart. Either set it to a path somewhere on your storage, or ensure you're not blowing up the dynamic volume upon restart.

    In my case I changed this:

      immich-machine-learning:
        ...
        volumes:
          - model-cache:/cache
    

    To that:

      immich-machine-learning:
        ...
        volumes:
          - ./cache:/cache
    

    I no longer have to wait uncomfortably long when I'm trying to show off Smart Search to a friend, or just need a meme pronto.

    That'll be all.

[email protected] wrote (#11):

    It's not normal for - model-cache:/cache to be deleted on restart or even upgrade. You shouldn't need to do this.

    • I [email protected]

      It's not normal for - model-cache:/cache to be deleted on restart or even upgrade. You shouldn't need to do this.

[email protected] wrote (#12):

      Yes, it depends on how you're managing the service. If you're using one of the common patterns via systemd, you may be cleaning up everything, including old volumes, like I do.

Edit: Also, if you have any sort of lazy prune op running on a timer, it could blow the volume away at some point.
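
For anyone wondering what such a "lazy prune op" could look like, here is a minimal sketch of a systemd timer pair (not from this thread; the unit names, schedule and paths are made up). If it fires while the Immich containers are down, a named volume like model-cache is unreferenced and gets removed:

    # /etc/systemd/system/docker-prune.service (hypothetical)
    [Unit]
    Description=Prune unused Docker data

    [Service]
    Type=oneshot
    # --all removes named volumes too, not just anonymous ones (Docker 23+);
    # any volume with no container referencing it at that moment is deleted
    ExecStart=/usr/bin/docker volume prune --all --force

    # /etc/systemd/system/docker-prune.timer (hypothetical)
    [Unit]
    Description=Run docker-prune weekly

    [Timer]
    OnCalendar=weekly
    Persistent=true

    [Install]
    WantedBy=timers.target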

      • S [email protected]

        Is this something that would be recommended if self-hosting off a Synology 920+ NAS?

        My NAS does have extra ram to spare because I upgraded it, and has NVME cache 🤗

[email protected] wrote (#13):

That's a Celeron right? I'd try a better AI model. Check this page for the list. You could try the heaviest one. It'll take a long time to process your library but inference is faster. I don't know how much faster it is. Maybe it would be fast enough to be usable. If not usable, choose a lighter model. There are execution times in the table that I assume tell us how heavy the models are. Once you change a model, you have to let it rescan the library.

Quoting [email protected]:

That's a Celeron right? I'd try a better AI model. Check this page for the list. You could try the heaviest one. It'll take a long time to process your library but inference is faster. I don't know how much faster it is. Maybe it would be fast enough to be usable. If not usable, choose a lighter model. There are execution times in the table that I assume tell us how heavy the models are. Once you change a model, you have to let it rescan the library.

[email protected] wrote (#14):

          That’s a Celeron right?

          Yup, the Intel J4125 Celeron 4-Core CPU, 2.0-2.7Ghz.

          I switched to the ViT-SO400M-16-SigLIP2-384__webli model, same as what you use. I don't worry about processing time, but it looks like a more capable model, and I really only use immich for contextual search anyway, so that might be a nice upgrade.

Quoting [email protected]:

            Your ML model cache volume is getting blown up during restart and the model is being re-downloaded during the first search post-restart. Either set it to a path somewhere on your storage, or ensure you're not blowing up the dynamic volume upon restart.

            In my case I changed this:

              immich-machine-learning:
                ...
                volumes:
                  - model-cache:/cache
            

            To that:

              immich-machine-learning:
                ...
                volumes:
                  - ./cache:/cache
            

            I no longer have to wait uncomfortably long when I'm trying to show off Smart Search to a friend, or just need a meme pronto.

            That'll be all.

[email protected] wrote (#15):

            Doing a volume like the default Immich docker-compose uses should work fine, even through restarts. I'm not sure why your setup is blowing up the volume.

Normally, volumes are only removed if there is no container associated with them and you manually run docker volume prune.
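
A quick way to check whether the named volume is actually surviving restarts (a sketch; the exact volume name depends on the Compose project name, so adjust it to whatever docker volume ls shows):

    # find the model cache volume (name is usually <project>_model-cache)
    docker volume ls | grep model-cache

    # if CreatedAt changes after every restart, the volume is being recreated
    # and the model will be re-downloaded on the next Smart Search
    docker volume inspect immich_model-cache --format '{{ .CreatedAt }}'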

            • S [email protected]

              That’s a Celeron right?

              Yup, the Intel J4125 Celeron 4-Core CPU, 2.0-2.7Ghz.

              I switched to the ViT-SO400M-16-SigLIP2-384__webli model, same as what you use. I don't worry about processing time, but it looks like a more capable model, and I really only use immich for contextual search anyway, so that might be a nice upgrade.

[email protected] wrote (#16):

              Did you run the Smart Search job?

              • S [email protected]

                That seems like a bad idea

[email protected] wrote (#17):

                It's not.

Quoting [email protected]:

                  Doing a volume like the default Immich docker-compose uses should work fine, even through restarts. I'm not sure why your setup is blowing up the volume.

Normally, volumes are only removed if there is no container associated with them and you manually run docker volume prune.

[email protected] wrote (#18):

                  Because I clean everything up that's not explicitly on disk on restart:

                  [Unit]
                  Description=Immich in Docker
                  After=docker.service 
                  Requires=docker.service
                  
                  [Service]
                  TimeoutStartSec=0
                  
                  WorkingDirectory=/opt/immich-docker
                  
                  ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
                  ExecStartPre=-/usr/bin/docker compose down --remove-orphans
                  ExecStartPre=-/usr/bin/docker compose rm -f -s -v
                  ExecStartPre=-/usr/bin/docker compose pull
                  ExecStart=/usr/bin/docker compose up
                  
                  Restart=always
                  RestartSec=30
                  
                  [Install]
                  WantedBy=multi-user.target
                  
                  • M [email protected]

It's convenient because your data is stored in the same folder as your docker-compose.yaml file, making backups or migrations simpler.

[email protected] wrote (#19):

Yup. Everything is in one place and there are no hardcoded paths outside of the work dir, making it trivial to move across storage or even machines.
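
As a rough sketch of what that buys you (hypothetical host name; assumes every volume is a bind mount under the working directory), a migration is little more than:

    docker compose down
    rsync -a /opt/immich-docker/ newhost:/opt/immich-docker/
    ssh newhost 'cd /opt/immich-docker && docker compose up -d'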

Quoting [email protected]:

                      Because I clean everything up that's not explicitly on disk on restart:

                      [Unit]
                      Description=Immich in Docker
                      After=docker.service 
                      Requires=docker.service
                      
                      [Service]
                      TimeoutStartSec=0
                      
                      WorkingDirectory=/opt/immich-docker
                      
                      ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
                      ExecStartPre=-/usr/bin/docker compose down --remove-orphans
                      ExecStartPre=-/usr/bin/docker compose rm -f -s -v
                      ExecStartPre=-/usr/bin/docker compose pull
                      ExecStart=/usr/bin/docker compose up
                      
                      Restart=always
                      RestartSec=30
                      
                      [Install]
                      WantedBy=multi-user.target
                      
[email protected] wrote (#20):

                      But why?

Why not just down/up normally and have a cleanup job on a schedule to get rid of any orphans?
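
For comparison, that suggestion might look roughly like this (a sketch only, reusing the same working directory; the scheduled cleanup would live in a separate timer or cron job):

    [Unit]
    Description=Immich in Docker
    After=docker.service
    Requires=docker.service

    [Service]
    WorkingDirectory=/opt/immich-docker
    # no pre-start teardown: named volumes such as model-cache are left alone
    ExecStartPre=-/usr/bin/docker compose pull
    ExecStart=/usr/bin/docker compose up
    ExecStop=/usr/bin/docker compose down
    Restart=always
    RestartSec=30

    [Install]
    WantedBy=multi-user.target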

                      • S [email protected]

                        That seems like a bad idea

[email protected] wrote (#21):

As others stated, it's not a bad way of managing volumes. In my scenario I store all volumes in a ./config folder.

For example, on my SearXNG instance I have a volume like this:

                        services:
                          searxng:
                            …
                            volumes:
                              - ./config/searx:/etc/searxng:rw
                        

This makes the files for SearXNG two folders away. I also store these in the /home/YourUser directory so Docker avoids needing sudo access wherever possible.

                        • W [email protected]

                          But why?

Why not just down/up normally and have a cleanup job on a schedule to get rid of any orphans?

[email protected] wrote (#22):

                          But why?

In a world where we can't really be sure what's in an upgrade, a super-clean start that burns any ephemeral data is about the best way to ensure a consistent start.

                          And consistency gives reliability, as much as we can get without validation (validation is "compare to what's correct", but consistency is "try to repeat whatever it was").

                          • S [email protected]

                            That’s a Celeron right?

                            Yup, the Intel J4125 Celeron 4-Core CPU, 2.0-2.7Ghz.

                            I switched to the ViT-SO400M-16-SigLIP2-384__webli model, same as what you use. I don't worry about processing time, but it looks like a more capable model, and I really only use immich for contextual search anyway, so that might be a nice upgrade.

[email protected] wrote (#23):

What's your consideration for choosing this one? I would have thought ViT-B-16-SigLIP2__webli to be slightly more accurate, with a faster response, while using slightly less RAM (about 1.4 GB less, I think).

Quoting [email protected]:

As others stated, it's not a bad way of managing volumes. In my scenario I store all volumes in a ./config folder.

For example, on my SearXNG instance I have a volume like this:

                              services:
                                searxng:
                                  …
                                  volumes:
                                    - ./config/searx:/etc/searxng:rw
                              

This makes the files for SearXNG two folders away. I also store these in the /home/YourUser directory so Docker avoids needing sudo access wherever possible.

[email protected] wrote (#24):

                              So why would you not write out the full path? I frequently rerun compose commands from various places, if I'm troubleshooting an issue.

Quoting [email protected]:

                                Did you run the Smart Search job?

[email protected] wrote (#25):

                                Running now.

                                • I [email protected]

What's your consideration for choosing this one? I would have thought ViT-B-16-SigLIP2__webli to be slightly more accurate, with a faster response, while using slightly less RAM (about 1.4 GB less, I think).

[email protected] wrote (#26):

It seemed to be the most popular, LOL. The Smart Search job hasn't been running for long, so I'll check that other one out and see how it compares. If it looks better, I can easily use that.

                                  • S [email protected]

                                    So why would you not write out the full path? I frequently rerun compose commands from various places, if I'm troubleshooting an issue.

[email protected] wrote (#27):

                                    So why would you not write out the full path?

The other day my Raspberry Pi decided it didn't want to boot up (I guess it didn't like living on an SD card anymore), so I backed up my compose folder and reinstalled Raspberry Pi OS under a different username than my last install.

If I specified the full path on every container, it would be annoying to have to redo them all if I decided to move to another directory/drive or change my username.

Quoting [email protected]:

                                      So why would you not write out the full path?

The other day my Raspberry Pi decided it didn't want to boot up (I guess it didn't like living on an SD card anymore), so I backed up my compose folder and reinstalled Raspberry Pi OS under a different username than my last install.

If I specified the full path on every container, it would be annoying to have to redo them all if I decided to move to another directory/drive or change my username.

[email protected] wrote (#28):

                                      I'd just do it with a simple search and replace. Have done. I feel like relative paths leave too much room for human error.
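
A middle ground between the two styles (a sketch; DATA_ROOT is a made-up variable) is Compose variable interpolation, so the absolute base path is written once in a .env file next to the compose file and everything else stays relative to it:

    # .env
    DATA_ROOT=/opt/immich-docker

    # docker-compose.yml (fragment)
    services:
      immich-machine-learning:
        volumes:
          - ${DATA_ROOT}/cache:/cache

Moving the stack then means editing one line rather than search-and-replacing every service.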

Quoting [email protected]:

                                        ./ will be the directory you run your compose from

[email protected] wrote (#29):

                                        I'm almost sure that ./ is the directory of the compose.yaml.

Normally I just run docker compose up -d in the project directory, but I could run docker compose -f /somewhere/compose.yaml up -d from another directory, and then ./ would be /somewhere, not the directory where I started the command.
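
An easy way to see how ./ gets resolved (a sketch; /somewhere is a placeholder):

    cd /tmp
    docker compose -f /somewhere/compose.yaml config
    # the rendered config shows the bind source as /somewhere/cache rather than
    # /tmp/cache: relative paths are resolved against the compose file's
    # directory (the project directory), not the shell's current directory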

                                        • S [email protected]

                                          Running now.

[email protected] wrote (#30):

                                          Let me know how inference goes. I might recommend that to a friend with a similar CPU.
