Docker in LXC vs VM
-
You could create a fresh container, install Docker, and create a new template image from it. That way the overhead of installing disappears. The resource overhead of each Docker installation would remain the same as before.
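Roughly sketched with Proxmox's `pct` tool — the CT ID, storage, and template names below are placeholders, adjust for your setup:

```shell
# Build a Docker-ready CT once, then clone it (IDs/names are examples)
pct create 9000 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname docker-base --memory 1024 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 9000
pct exec 9000 -- apt-get update
pct exec 9000 -- apt-get install -y docker.io
pct stop 9000
pct template 9000                            # convert the CT into a template
pct clone 9000 101 --hostname docker-app1    # Docker pre-installed, no wait
```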
As mentioned in another reply, you could run several containers in one LXC, for example with Docker Compose or Podman. I have no experience with Podman, but Docker Compose is pretty simple.
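For anyone curious, a bare-bones Compose stack really is just a YAML file and one command (the images here are arbitrary examples):

```shell
# Write a minimal two-service stack and bring it up
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: changeme
EOF
docker compose up -d   # start both services; `docker compose down` stops them
```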
But all in all, I prefer to install everything "bare metal" in LXC containers. The main reason is that I don't want to mess around with the extra layer of configuring ports etc.
-
Is your server a dedicated server, or a VPS? Because if it's a VPS, you're probably already running in a VM.
Adding a VM might provide more security, especially if you aren't an expert in LXC security configuration. It will add overhead. Running Docker inside Docker provides nothing but more overhead and unnecessary complexity to your setup.
Also, because it isn't clear to me from your post: LXC and Docker are two ways of doing the same thing, using the same kernel capabilities. Docker was, in fact, originally built on top of LXC. The only real difference is the container format. Saying "running Docker on LXC" is like saying "running Docker on Docker," or "running Docker on Podman," or "running LXC on Docker". All you're doing is nesting container implementations. As opposed to VMs, which don't use Linux namespace capabilities at all, and which emulate an entirely different computer.
LXC, Podman, and Docker use the underlying OS kernel and resources. VMs create new, virtual hardware (necessarily sharing the same hardware architecture, but nothing else from the host) and run their own kernels.
Saying "Docker VM" is therefore confusing. Containers - LXC, Podman, or Docker - don't create VMs. They partition and segregate off resources from the host, but they do not provide a virtual machine. You can not run OpenBSD in a Docker container on Linux; you can run OpenBSD in a VM on Linux.
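You can see the shared-kernel point for yourself; assuming Docker is installed, both of these report the same kernel version, because the container never boots one of its own:

```shell
uname -r                         # host kernel
docker run --rm alpine uname -r  # same kernel, different userland
# A VM would report whatever kernel it booted, independent of the host.
```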
-
Docker's 'take-over-the-system' style of network management will interfere with Proxmox networking.
-
Regardless of VM or LXC, I would only install Docker once. There's generally no need to create multiple Docker VMs/LXCs on the same host, unless you have a specific reason, like isolating outside traffic by creating a Docker setup for only the public services.
Backups are the same with VM or LXC on Proxmox.
The main advantages of LXC that I can think of:
- Slightly less resource overhead, but not much (debian minimal or alpine VM is pretty lightweight already).
- Ability to pass-through directories from the host.
- Ability to pass-through hardware acceleration from a GPU, without passing through the entire GPU.
- Ability to change CPU cores or RAM while it's running.
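The last two points map to one-liners with `pct` (CT ID 101 and the paths here are just examples):

```shell
pct set 101 -mp0 /tank/media,mp=/mnt/media   # bind-mount a host directory
pct set 101 -memory 4096                     # resize RAM while it's running
pct set 101 -cores 4                         # same for CPU cores
```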
-
Docker doesn't need to be portable because containers are...
I don't even understand this logic.
-
Run Docker at the host level. Every level down from there is not only a hit to performance across the board, it just makes a mess of networking. Anyone in here saying "it's easy to back up in a VM" has completely missed the point of containers, and apparently does not understand how to work with them.
You shouldn't ever need to backup containers, and if you're expecting data loss if one goes away, yerdewinitwrawng.
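The "no data loss" part comes down to keeping state outside the container, e.g. in a bind mount (the paths and names below are illustrative):

```shell
# State lives on the host; the container itself is disposable
docker run -d --name pg -v /srv/pg-data:/var/lib/postgresql/data postgres:16
docker rm -f pg   # throw the container away...
docker run -d --name pg -v /srv/pg-data:/var/lib/postgresql/data postgres:16
# ...recreate it, and the database is untouched. Back up /srv/pg-data,
# not the container.
```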
-
I use an individual LXC for each Docker Compose stack so I don't have to revert 8 services at once if I need to restore.
I would also argue that an Alpine LXC runs in 22 MB of RAM by itself... a significantly smaller footprint on disk and in memory. But most importantly, LXC can actually share memory space with the host effectively; you don't need to reserve blocks of RAM.
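That per-stack restore is a couple of commands on Proxmox (the CT ID and dump filename are illustrative):

```shell
vzdump 101 --mode snapshot --storage local   # back up just this stack's CT
pct restore 101 \
    /var/lib/vz/dump/vzdump-lxc-101-2024_01_01-00_00_00.tar.zst --force
```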
-
LXC and Docker are not equivalent. They are system and application containers respectively.
-
If you use Live Migrate, realize that it doesn't work on LXCs, only VMs. Your Docker containers will be restarted along with the LXC on the new node.
-
I used to use LXC, and switched to VMs since the internet said it was better.
I kinda miss the LXC setup. Day to day I don't notice any difference, but increasing storage space in a VM was a small pain compared to LXC. In the VM I increased the disk size through Proxmox, but then I had to grow the partition inside the VM.
In LXC you can just increase the disk size and it's immediately available to the container.
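For comparison, roughly what the two resize paths look like (the VMID/CTID and device names are examples):

```shell
# LXC: one step, the container sees the space immediately
pct resize 101 rootfs +10G

# VM: grow the virtual disk, then grow partition + filesystem in the guest
qm resize 100 scsi0 +10G
# ...then inside the VM, something like:
#   growpart /dev/sda 1 && resize2fs /dev/sda1
```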
-
This thread has raised so many questions I'd like answered:
- Why are people backing up containers?
- Why are people running docker-in-docker?
- I saw someone mention snapshotting containers...what's the purpose of this?
- Why are people backing up docker installs?
I seriously thought I was going crazy reading some of these, and now I'm convinced the majority of people posting suggestions in here do not understand how to use containers at all.
Flat file configs, volumes, layers, versioning... it's like people don't know what these are or how to use them, and that is incredibly disconcerting.
-
Because a lot of people don’t learn docker, they install docker because some software they want to use is distributed that way.
-
Don't listen to them! The main issue with containers vs VMs is security: an LXC runs on the host's kernel, while a VM runs its own kernel on top of the host.
Use what you are familiar with, and remember that LXCs are containers and Docker containers are containers, but the ways they're used are vastly different.
-
You don't need or want Docker on your VM host. But a bare-metal Docker host can solve many people's needs.
-
Yup, this is me exactly. I've been planning on going more in-depth but haven't found the time. I understand Linux and how to use LXCs; Docker, less so.
-
What in the world are you talking about? It's literally the entire point of container orchestration systems, and the reason why you don't run containers inside containers. It makes zero sense.
-
It’s a dedicated server (a small Dell micro-PC). Thanks for the comment, I understand the logic; I was approaching it more from an end-user perspective of what’s easier to work with, which given my skill set is LXC containers. I have a VM on top of Proxmox specifically for Docker.
-
Honestly, I never really thought of installing Docker directly on Proxmox. I guess that might be a simpler solution, running the Docker containers directly, but I kind of like to keep the hypervisor more stripped down.
-
Follow-up question: do you have any good resources to start with for a simple overview on how we should be using containers? I’m not a developer, and from my experiences most documentation on the topic I’ve come across targets developers and devops people. As someone else mentioned, I use docker because it’s the way lots of things happen to be packaged - I’m more used to the Debian APT way of doing things.
-
I don't have anything handy, but I see your point, and I'd shame lazy devs for not properly packaging things maybe
You mentioned you use Proxmox, which is already an abstraction on bare metal, so that's about as easy an interface as I can imagine for a hosted machine, short of something like Docker Desktop managing a machine remotely (not a good idea).
As a developer, I guess I was slightly confused by some of the suggestions on ways to use things being posted in this sub, but some of the responses clarify that. There isn't enough simplicity in explaining the "what" of containers, so people just use them the simplest way they understand, which also happens to be the "wrong way". It's kind of hard to grasp that when you live with these things 24/7 for years. Kind of a similar deal with networking solutions like Tailscale, where I see people installing it everywhere without understanding why that's a bad idea.
To save you a lot of learning, I'll just say: don't go down a rabbit hole if you just want something that works well. Ping back here if you get into a spot of trouble, and I'll definitely hop in to give a more detailed explanation of a workflow that's more effective than what it seems most people in here are using.
In fact, I may have just been inspired to do a write up on it.