Docker in LXC vs VM
-
Honestly you can do either.
LXC
- shares host kernel (theoretically lighter weight)
- less isolation from host (less secure)
- devices are passed via device files
- less flexible due to dependence on host
- no live migration
- filesystem shared with host
virtualization
- has own kernel and filesystem
- supports live migration
- hardware passthrough is done at the device level
- more flexible due to independent kernel
- more overhead
-
Personally I just mount file shares within the VM.
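(For example, an NFS export from the host or NAS; the host address and paths below are placeholders.)

```bash
# one-off mount of an NFS share inside the VM; add an /etc/fstab entry to make it permanent
mount -t nfs 192.168.1.10:/export/media /mnt/media
```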
-
People are probably looking for tools like cloud init, butane and Ansible
-
Also, LXC shares the host filesystem, so there is less concern about corruption due to power loss.
-
I'm guessing people are largely using the wrong terminology for things that make more sense, like backing up/snapshotting config and data that containers use. Maybe they're also backing up images (which a lot of people call "containers"), just in case it gets yanked from wherever they got it from.
That said, yeah, someone should write a primer on how to use Docker properly and link it in the sidebar. Something like:
- docker-compose or podman for managing containers (a lot easier than docker run)
- how to use bind mounts and set permissions, as well as sharing volumes between containers (esp. useful if your TLS cert renewal is a separate container from your TLS server)
- docker networks - how to get containers to talk w/o exposing their ports system-wide (I only expose two ports, Caddy for TLS, and Jellyfin because my old smart TV can't seem to handle TLS)
- how tags work - i.e. when to use latest, the difference between <image>:<major>.<minor>.<patch> and <image>:<major>, etc, and updating images (i.e. what happens when you "pull")
I've been using docker for years, but I'm sure there are some best practices I'm missing since I'm more of a developer than a sysadmin.
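For what it's worth, here is a rough sketch of what most of those points look like in a single compose file. It's not anyone's real setup: the service names, paths, UID and image tags are made up for illustration. It shows pinned tags instead of latest, bind mounts with an explicit user, a named volume the proxy keeps its certs in (a separate cert-renewal container could mount the same volume), and a private network where only the proxy publishes ports.

```bash
# hypothetical example - adjust names, paths and versions to your own setup
cat > docker-compose.yml <<'EOF'
services:
  caddy:
    image: caddy:2.8                   # pinned tag instead of :latest
    ports:
      - "80:80"                        # the proxy is the only service publishing ports
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - certs:/data                    # named volume; a cert-renewal container could mount this too
    networks:
      - backend

  app:
    image: ghcr.io/example/app:1.4.2   # made-up image; pin major.minor.patch rather than latest
    user: "1000:1000"                  # run as the host user that owns the bind mounts
    volumes:
      - ./app/config:/config           # bind mounts keep config and data on the host
      - ./app/data:/data
    networks:
      - backend                        # no ports: section - reachable only through caddy

networks:
  backend: {}                          # user-defined bridge; containers resolve each other by service name

volumes:
  certs: {}
EOF

docker compose up -d
```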
-
I think they mean a VM running docker
-
You don't have to revert 8 services, you can stop/start them independently: docker compose stop <service name>. This is actually how I update my services, I just stop the ones I want to update, pull, and restart them. I do them one or two at a time, mostly to mitigate issues. The same is true for pulling down new versions, my process is:
1. edit the docker-compose file to update the image version(s) (e.g. from 1.0 -> 1.1, or 1.1 -> 2.0); I check changelog/release notes for any manual upgrade notices
2. pull new images (doesn't impact running services)
3. docker compose up -d brings up any stopped services using new image(s)
4. test
5. go back to 1 until all services are done
I do this whenever I remember, and it works pretty well.
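Concretely, assuming a service named "app" in the compose file (the name is just an example), each pass of that loop looks something like:

```bash
# hypothetical service name "app"; repeat per service, one or two at a time
docker compose stop app        # stop just this service, everything else keeps running
# ...bump the image tag in docker-compose.yml, read the changelog...
docker compose pull app        # fetch the new image; doesn't touch running services
docker compose up -d app       # recreate the stopped service from the new image
docker compose logs -f app     # quick sanity check before moving to the next one
```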
-
I don't use proxmox, but it works absolutely fine for me on my regular Linux system, which has a firewall, some background services, etc. Could you be more specific on the issues you're running into?
Also, I only really expose two services on my host:
- Caddy - handles all TLS and proxies to all other services in the internal docker network
- Jellyfin - my crappy smart TV doesn't seem to be able to handle Jellyfin + TLS for some reason, it causes the app to lock up
Everything else just connects through an internal-only docker network.
If you're getting conflicts, I'm guessing you've configured things oddly, because by default, docker creates its own virtual interface to explicitly not interfere with anything else on the host.
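If it helps to see that setup without compose, the same idea with plain docker commands (the names and image tags are placeholders) is roughly:

```bash
# names and image tags are placeholders
docker network create backend                       # user-defined bridge network

# no -p flags: nothing is published on the host
docker run -d --name app --network backend ghcr.io/example/app:1.4.2

# only the reverse proxy publishes ports; it reaches the app by container name ("app")
docker run -d --name caddy --network backend -p 80:80 -p 443:443 caddy:2.8
```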
-
These should have absolutely no place in the mix with containers at all. Very confused how you've made these work if that's what you're suggesting.
-
If you're familiar with Linux, just read the Dockerfile of any given project. It's literally just a script for running a thing. You can take that info and install how you'd like if needed.
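As a made-up example: if a project's Dockerfile is essentially "install a package, copy a config, run the binary", the bare-metal equivalent is the same handful of commands ("someapp" here is entirely hypothetical):

```bash
# rough bare-metal equivalent of a hypothetical three-line Dockerfile
apt-get update && apt-get install -y someapp             # RUN apt-get install -y someapp
install -D -m 0644 someapp.conf /etc/someapp/config      # COPY someapp.conf /etc/someapp/config
systemctl enable --now someapp                           # instead of CMD ["someapp"]
```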
-
A couple of posts down explains it: docker completely steamrolls networking when you install it. https://forum.proxmox.com/threads/running-docker-on-the-proxmox-host-not-in-vm-ct.147580/
The other reason is that if it's on the host, you can't back it up using Proxmox Backup Server with the rest of the VMs/CTs.
-
How do you handle backups? Install restic or whatever in every container and set it up? What about updates for the OS and docker images, watchtower on them I imagine?
It sounds like a ton of admin overhead for no real benefit to me.
-
I don't use proxmox, so I guess I don't understand the appeal. I don't see any reason to back up a container or a VM, I just back up configs and data. Backing up a VM makes sense if you have a bunch of customizations, but that's pretty much the entire point of docker: you quarantine your customizations to your configs, so it's completely reproducible if you have the configs and data.
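In practice "back up configs and data" can be as small as archiving the compose files and the bind-mounted directories. The paths below are placeholders, and restic (or borg, etc.) slots in the same way if you want snapshots and deduplication:

```bash
# paths are placeholders - compose files and bind-mounted data all live under /srv/stacks
tar -czf /backups/stacks-$(date +%F).tar.gz /srv/stacks

# or the same thing with restic (repo location and password handling are up to you)
restic -r /backups/restic-repo backup /srv/stacks
```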
-
No, I mean they should set up VMs and LXC containers in an automated way. I get the impression that some people here are trying to use a Dockerfile instead of something like Ansible, where the changes apply to an end system instead of creating a template for temporary deployments.
-
I don't think the internet gave particularly good advice here. Sure, there are use cases for both, and that's why we have both approaches available. But you can't say VMs are better than containers. They're a different thing. They might even be worse in your case. But I mean, in the end, all "simple truths" are wrong.
-
Ease of use, mostly: one click to restore everything including the OS is nice. You can also easily move them to other hosts for HA or maintenance.
Not everything runs in docker either, so it's extra useful for those VMs.
-
I just snapshot the parent LXC. The data itself isn't part of the container at any level, so if I bung up the compose yml or env, I can just flip it back. The only real benefit is that all my backups are in the same place in the same format.
Like I'm not actually opposed to managing docker as one unit, I just haven't got there yet and this has worked so far.
If I were to move to a single platform for several docker services, what would you suggest? For admin and backups?
-
Oh, nice. Thanks!
This is me showing my docker ignorance, I suppose.
-
That's fair.
That said, I can't think of anything I'd want to run that doesn't work in docker, except maybe pf? But I'd probably put that on a dedicated machine anyway. Pretty much everything else runs on Linux or has a completely viable Linux alternative, so I could easily build a docker image for it.
-
- I’m backing up LXCs, like I’d back up a VM. I don’t back up Docker containers, just their config and volumes.
- I don’t think anyone is doing that. We’re talking about installing Docker in LXC. One of the Proxmox rules you can live by is to not install software on the host. I don’t see the problem with installing Docker in an LXC for that reason.
- I’ll snapshot an LXC before running things like a dist-upgrade, or testing something that might break things. It’s very easy, so why not?
- I back up my LXC that has Docker installed because that way it’s easy to restore everything, including local volumes, to various points in time.
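For anyone who hasn't tried it, the snapshot-before-upgrade flow really is just a couple of commands. The CT ID (101) and storage name are examples, and snapshots need underlying storage that supports them (ZFS, LVM-thin, etc.):

```bash
# CT ID 101 and the storage name are examples
pct snapshot 101 pre_upgrade                 # snapshot the LXC before touching it
pct exec 101 -- apt-get dist-upgrade         # do the risky thing inside the container
pct rollback 101 pre_upgrade                 # if it goes sideways, roll back

# full backup of the container (local volumes included), restorable later
vzdump 101 --mode snapshot --storage local
```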