Docker in LXC vs VM
-
I used to use LXC, and switched to a VM since the internet said it was better.
I kinda miss the LXC setup. Day to day I don't notice any difference, but increasing storage space in the VM was a small pain compared to LXC. In the VM I increased the disk size through Proxmox, but then I had to grow the partition inside the VM as well.
In LXC you can just increase the disk size and it's immediately available to the container.
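For reference, the in-VM part usually looks something like this (a sketch assuming a Debian-ish guest with an ext4 root on `/dev/sda1`; your device names and filesystem will differ):

```shell
# After growing the virtual disk in Proxmox, the guest still has to
# grow the partition and then the filesystem on top of it.
sudo growpart /dev/sda 1    # extend partition 1 to fill the disk (cloud-guest-utils)
sudo resize2fs /dev/sda1    # grow the ext4 filesystem online
df -h /                     # verify the new size
```

With LVM inside the guest there are extra `pvresize`/`lvextend` steps, which is exactly the kind of friction LXC avoids.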
-
This thread has raised so many questions I'd like answered:
- Why are people backing up containers?
- Why are people running docker-in-docker?
- I saw someone mention snapshotting containers...what's the purpose of this?
- Why are people backing up docker installs?
Seriously, I thought I was going crazy reading some of these, and now I'm convinced the majority of people posting suggestions in here do not understand how to use containers at all.
Flat file configs, volumes, layers, versioning...it's like people don't know what these are or how to use them, and that is incredibly disconcerting.
-
Because a lot of people don't learn Docker; they install Docker because some software they want to use is distributed that way.
-
Don't listen to them! The main issue with containers vs VMs is security: an LXC container runs on the host's kernel, while a VM runs its own kernel on top of the host.
Use what you are familiar with, and remember that LXC and Docker are both containers, but the ways they are used are vastly different.
-
You don't need or want Docker on your VM host. But a bare-metal Docker host can solve many people's needs.
-
Yup, this is me exactly. I've been planning on going more in depth but haven't found the time. I understand Linux and how to use LXCs; Docker, less so.
-
What in the world are you talking about? It's literally the entire point of container orchestration systems, and the reason why you don't run containers inside containers. It makes zero sense.
-
It’s a dedicated server (a small Dell micro-PC). Thanks for the comment; I understand the logic. I was approaching it more from an end-user perspective of what’s easier to work with, which, given my skill set, is LXC containers. I have a VM on top of Proxmox specifically for Docker.
-
Honestly, I never really thought of installing Docker directly on Proxmox. I guess that might be a simpler solution, running Docker containers directly, but I kind of like to keep the hypervisor more stripped down.
-
Follow-up question: do you have any good resources to start with for a simple overview on how we should be using containers? I’m not a developer, and from my experiences most documentation on the topic I’ve come across targets developers and devops people. As someone else mentioned, I use docker because it’s the way lots of things happen to be packaged - I’m more used to the Debian APT way of doing things.
-
I don't have anything handy, but I see your point, and maybe I'd shame lazy devs for not properly packaging things.
You mentioned you use Proxmox, which is already an abstraction on bare metal, so that's about as easy an interface as I can imagine for a hosted machine without using something like Docker Desktop to manage a machine remotely (not a good idea).
As a developer, I guess I was slightly confused by some of the suggested ways of using things being posted in this sub, but some of the responses clarify that. There isn't enough simplicity in explaining the "what" of containers, so people just use them in the simplest way they understand, which also happens to be the "wrong way". It's kind of hard to grasp that when you live with these things 24/7 for years. It's a similar deal with networking solutions like Tailscale, where I see people installing it everywhere and not understanding why that's a bad idea.
To save you a lot of learning, I'll just say: don't go down a rabbit hole if you just want something to work well. Ping back here if you get into a spot of trouble, and I'll definitely hop in to give a more detailed explanation of a workflow that's more effective than what it seems most people in here are using.
In fact, I may have just been inspired to do a write up on it.
-
Fair enough, would love to read something like this
Yeah, I’ve been into Linux for 20 years, sometimes a bit on/off, as an all-around sysadmin in mainly Windows shops. And I learned just enough Docker to use it instead of apt - which I’d prefer, but as you said, many newer services don’t exist in Debian repos or as .deb packages, only as Docker images or similar.
-
Honestly you can do either.
LXC
- shares host kernel (theoretically lighter weight)
- less isolation from host (less secure)
- devices are passed via device files
- less flexible due to dependence on host
- no live transfers
- filesystem shared with host
Virtualization
- has its own kernel and filesystem
- supports live transfers
- hardware passthrough is done at the device level
- more flexible due to independent kernel
- more overhead
-
Personally I just mount file shares within the VM.
-
People are probably looking for tools like cloud-init, Butane, and Ansible.
-
Also, LXC shares the host filesystem, so there is less concern about corruption due to power loss.
-
I'm guessing people are largely using the wrong terminology for things that make more sense, like backing up/snapshotting config and data that containers use. Maybe they're also backing up images (which a lot of people call "containers"), just in case it gets yanked from wherever they got it from.
That said, yeah, someone should write a primer on how to use Docker properly and link it in the sidebar. Something like:
- docker-compose or podman for managing containers (a lot easier than `docker run`)
- how to use bind mounts and set permissions, as well as sharing volumes between containers (esp. useful if your TLS cert renewal is a separate container from your TLS server)
- docker networks - how to get containers to talk w/o exposing their ports system-wide (I only expose two ports, Caddy for TLS, and Jellyfin because my old smart TV can't seem to handle TLS)
- how tags work - i.e. when to use latest, the difference between `<image>:<major>.<minor>.<patch>` and `<image>:<major>`, etc, and updating images (i.e. what happens when you "pull")

I've been using docker for years, but I'm sure there are some best practices I'm missing since I'm more of a developer than a sysadmin.
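The docker-networks point can be sketched with the plain docker CLI (container and network names here are made up for illustration):

```shell
# Create a user-defined network; containers on it can reach each other
# by container name, without publishing any ports on the host.
docker network create internal_net

# App container: no -p flag, so nothing is exposed system-wide.
docker run -d --name app --network internal_net nginx:alpine

# Reverse proxy: the only container that publishes ports on the host.
docker run -d --name proxy --network internal_net -p 80:80 -p 443:443 caddy

# From inside internal_net, the proxy reaches the app at http://app:80.
```

In compose, declaring the services on a shared network and only giving the proxy a `ports:` section achieves the same thing.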
-
I think they mean a VM running docker
-
You don't have to revert 8 services; you can stop/start them independently: `docker compose stop <service name>`. This is actually how I update my services: I just stop the ones I want to update, pull, and restart them. I do them one or two at a time, mostly to mitigate issues. The same is true for pulling down new versions; my process is:
1. edit the docker-compose file to update the image version(s) (e.g. from 1.0 -> 1.1, or 1.1 -> 2.0); I check changelog/release notes for any manual upgrade notices
2. pull new images (doesn't impact running services)
3. `docker compose up -d` brings up any stopped services using the new image(s)
4. test
5. go back to 1 until all services are done

I do this whenever I remember, and it works pretty well.
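That loop, as a per-service shell sketch (`myapp` is a placeholder service name from a compose project):

```shell
# Run from the directory containing docker-compose.yml.
# Step 1 is manual: bump the image tag in the compose file,
# e.g. myapp:1.0 -> myapp:1.1, after reading the release notes.
docker compose pull myapp       # fetch the new image; running container unaffected
docker compose up -d myapp      # recreate only this service on the new image
docker compose logs -f myapp    # watch it come up before moving to the next service
```

Passing the service name to `pull` and `up` is what keeps the other services untouched while you work through them one at a time.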
-
I don't use proxmox, but it works absolutely fine for me on my regular Linux system, which has a firewall, some background services, etc. Could you be more specific on the issues you're running into?
Also, I only really expose two services on my host:
- Caddy - handles all TLS and proxies to all other services in the internal docker network
- Jellyfin - my crappy smart TV doesn't seem to be able to handle Jellyfin + TLS for some reason, it causes the app to lock up
Everything else just connects through an internal-only docker network.
If you're getting conflicts, I'm guessing you've configured things oddly, because by default, docker creates its own virtual interface to explicitly not interfere with anything else on the host.