Why use Named volume vs Anonymous volume in Docker?
-
[email protected] replied to [email protected] last edited by
I choose depending on whether I'll ever have to touch the files in the volume (e.g. for configuration), except for debugging where I spawn a shell. If I don't need to touch them, I don't want to see them in my config folder where the compose file is in. I usually check my compose folders into git, and this way I don't have to put the volumes into gitignore.
-
[email protected] replied to [email protected] last edited by
On a simpler level, it's just an organizational thing. There are lots of other ways data from docker is consumed, and looking through a bunch of random hashes and trying to figure out what is what is insane.
-
[email protected] replied to [email protected] last edited by
Good question, I'm interested too.
Personally I use this kind of mapping:

volumes:
  - /var/docker/container_name/data:/data

because it helps me with backups, while I keep all the docker-compose.yaml files in
/home/user/docker-compose/container_name
so I can mess with the compose folder without worrying too much about what's inside it.
-
[email protected] replied to [email protected] last edited by
Or just something as simple as using an SMB/CIFS share for your data. Instead of mounting the share before running your container, you can make Docker do it by specifying it like this:

services:
  my-service:
    ...
    volumes:
      - my-smb-share:/data:rw

volumes:
  my-smb-share:
    driver_opts:
      type: "smb3"
      device: "//mynas/share"
      o: "rw,vers=3.1.1,addr=192.168.1.20,username=mbirth,password=supersecret,cache=loose,iocharset=utf8,noperm,hard"
For type you can use anything you have a mount.<type> tool available for, e.g. on my Raspberry this would be:

$ ls /usr/sbin/mount.*
/usr/sbin/mount.cifs*   /usr/sbin/mount.fuse@        /usr/sbin/mount.fuse3*   /usr/sbin/mount.lowntfs-3g@
/usr/sbin/mount.nilfs2* /usr/sbin/mount.ntfs@        /usr/sbin/mount.ntfs-3g@ /usr/sbin/mount.smb3@
/usr/sbin/mount.ubifs*

And the o parameter is everything you would put as options to the mount command (e.g. in the 4th column in /etc/fstab). In the case of smb3, you can run mount.smb3 --help to see a list of available options.

Doing it this way, Docker will make sure the share is mounted before running the container. Also, if you move the compose file to a different host, it'll just work if the share is reachable from that new location.
-
[email protected] replied to [email protected] last edited by
I like named volumes, externally created, because they are less likely to be cleaned up without explicit deletion. There's also a few occasions I need to jump into a volume to edit files but the regular container doesn't have the tools I need so it's easier to mount by name rather than hash value.
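That workflow can be sketched like this (the volume name and the alpine image are just placeholders for whatever you use):

```shell
# Create the volume explicitly, so compose treats it as external and it
# won't be removed by `docker compose down -v` or a careless prune:
docker volume create my-app-data

# Spawn a throwaway container that has the tools you need, with the
# volume mounted by name - no hunting through hash-named directories:
docker run --rm -it -v my-app-data:/mnt alpine sh
# ...edit files under /mnt, exit, and the changes persist in the volume.
```

In the compose file you would then declare the volume with `external: true` so Docker uses the pre-created one instead of managing its own copy.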
-
[email protected] replied to [email protected] last edited by
Ok I did not know about this at all. I've been just mounting it on the host which has been a bit of a pain at times.
I just did a massive refactor of my stacks, but now I might have to revisit them to do this.
-
[email protected] replied to [email protected] last edited by
What?? I'm definitely using this, thanks for making me aware of it.
-
[email protected] replied to [email protected] last edited by
I use NFS shares for all of my volumes so they're more portable for future expansion and easier to back up. It uses additional disk space for the cache of course, but i have plenty.
When I add a second server or add a dedicated storage device as I expand, it has made it easier to move with almost no effort.
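For reference, a named volume backed by an NFS export can be declared straight in the compose file the same way as the SMB example earlier; the server address and export path below are placeholders:

```yaml
volumes:
  media:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.1.50,rw,nfsvers=4.1"
      device: ":/export/media"
```

Moving the stack to another host then only requires that the new host can reach the NFS server.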
-
[email protected] replied to [email protected] last edited by
Dumb question… first time I'm even hearing the volume mount being referred to as "anonymous"… but is there a way to use a named volume with an explicit local directory mount? I use named volumes for NFS mounts when I have multiple machines, but I still have some "island" machines that are isolated in their own region/availability zone, so they're mounting just local directories and I use the "anonymous" volumes on those. So if I can consistently mount local directories the same way, I think it'd be a huge win for consistency.
-
[email protected] replied to [email protected] last edited by
There's also an NFSv4 driver which is great when you're running TrueNAS
-
[email protected] replied to [email protected] last edited by
I don't really have a technical reason, but I use only named volumes to keep things clear and tidy, especially in compose files with databases.
When I do a backup I run a script that saves each volume/database/compose file, well organized in directories and archived with tar.
I have this structure in my home directory:
/home/user/docker/application_name/docker-compose.yaml
and it only contains the docker-compose.yml file (sometimes a .env/Dockerfile). I dunno if this is the most efficient way or even the best way to do things, but it also helps me keep everything separate between the necessary config files and the actual files (like for Jellyfin), and it seems easier to switch over if I only need one part and not the other (uhh, sorry for my badly worded English, I hope it makes sense).
Other than that I also like to tinker around and learn things. Adding complexity gives me some kind of challenge? XD
-
[email protected] replied to [email protected] last edited by
I like having everything to do with a container in one folder, so I use ./ bind mounts. Then I don't have to go hunting all over hell's half acre for the various mounts that Docker makes. If I backup/restore a folder, I know I have everything to do with that stack right there.
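A minimal sketch of that layout (the service name, image, and paths are illustrative):

```yaml
services:
  app:
    image: example/app:latest
    volumes:
      - ./config:/config
      - ./data:/data
```

Everything lives next to the docker-compose.yml, so archiving that one folder captures both the stack definition and its data.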
-
[email protected] replied to [email protected] last edited by
I like named volumes, because all my data is in one place. Makes backups easy.
-
[email protected] replied to [email protected] last edited by
Supposedly Docker volumes are faster than plain bind mounts, but I've not really noticed a difference.
They also allow you to use docker commands to backup and restore volumes.
Finally you can specify storage drivers, which let you do things like mount a network share (ssh, samba, nfs, etc) or a cloud storage solution directly to the container.
Personally I just use bind mounts for pretty much every bit of persistent data. I prefer to keep my compose files alongside the container data organized to my standards in an easy to find folder. I also like being able to navigate those files without having to use docker commands, and regularly back them up with borg.
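As a sketch of the backup/restore idea above (volume and file names are placeholders), the usual pattern is a throwaway container that mounts the volume alongside a host directory:

```shell
# Back up a named volume into a tarball in the current directory:
docker run --rm -v my-volume:/data:ro -v "$PWD":/backup alpine \
    tar czf /backup/my-volume.tar.gz -C /data .

# Restore it into a (new or empty) volume:
docker run --rm -v my-volume:/data -v "$PWD":/backup alpine \
    tar xzf /backup/my-volume.tar.gz -C /data
```

The `:ro` on the backup side keeps the throwaway container from accidentally modifying the data it's archiving.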
-
[email protected] replied to [email protected] last edited by
I don't have to mess around with the permissions hellhole when using named volumes. It's beautifully seamless, with guaranteed persistence. No more messing around with PUID and PGID.
-
[email protected] replied to [email protected] last edited by
That makes sense. I've only ever used local storage on the Docker VM, but it can certainly make sense when using external storage.
-
[email protected] replied to [email protected] last edited by
Wow thanks for this! Reading the official docker documentation I somehow missed this. Using regular well documented linux mount.<type> tools and options will be so much better than looking for docker-specific documentation for every single type.
And knowing the docker container won't start unless the mount is available solves so much.
Does the container stop or freeze if the mount becomes unavailable? For example, if the SMB share host goes offline?
-
[email protected] replied to [email protected] last edited by
Yeah that's fair, permission issues can be a pain to deal with. Guess I've been lucky I haven't had any significant issues with permissions and docker-containers specifically yet.
-
[email protected] replied to [email protected] last edited by
This has been my thinking too.
Though after reading mbirth's comment I realised it's possible to use named volumes and explicitly tell it where on disk to store the volume:
volumes:
  - my-named-volume:/data/

volumes:
  my-named-volume:
    driver: local
    driver_opts:
      type: none
      device: "./folder-next-to-compose-yml"
      # device: "/path/to/well/known/folder"
      o: bind
It's a bit verbose, but at least I know which folder and partition holds the data, while keeping the benefits of named volumes.
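To double-check where such a volume actually ends up on disk, `docker volume inspect` prints the resolved mountpoint and the driver options the volume was created with (volume name as in the example above):

```shell
docker volume inspect my-named-volume
# Check the "Mountpoint", "Driver" and "Options" fields in the JSON output.
```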
-
[email protected] replied to [email protected] last edited by
How does this work? Where is additional space used for cache, server or client?
Or are you saying everything is on one host at the moment, and you use NFS from the host to the docker container (on the same host)?