Context: Docker bypasses all UFW firewall rules
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
Docker does not play fair and does not play nice. It's a bulldozer that plows through everything, built for devops teams that YOLO their way to production.
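For anyone hitting this: Docker's own documented escape hatch is the DOCKER-USER chain, which its forwarding rules consult first and never overwrite, so rules you put there survive. A sketch, assuming eth0 is your external interface and 192.168.1.0/24 is your trusted LAN (both are placeholders):

```shell
# Drop anything reaching containers from the outside interface
# unless it comes from the trusted LAN...
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP

# ...but still allow replies to connections the containers opened
# themselves (inserted above the DROP, so it is evaluated first).
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Inspect the chain.
iptables -L DOCKER-USER -n
```

Note the use of `-I` (insert at top) rather than `-A`: Docker ends DOCKER-USER with a RETURN rule, so appended rules are never reached.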
-
My problem with podman is the incompatibility with portainer
Any recommendations?
CLI and Quadlet? /s but seriously, that's what I use lol
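For the curious, a Quadlet definition is just an ini file that systemd turns into a container service. A minimal rootless sketch (the image, port, and filename are placeholders):

```ini
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example web container managed by Quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=127.0.0.1:8080:80

[Install]
WantedBy=default.target
```

After saving, `systemctl --user daemon-reload` followed by `systemctl --user start whoami` brings it up like any other unit.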
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
If I had a nickel for every database I've lost because I let docker broadcast its port on 0.0.0.0, I'd have about 35¢
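One way to avoid that: publish on loopback explicitly, so the mapping never binds 0.0.0.0 (the Postgres image and port here are just examples):

```shell
# Publish only on loopback so the mapping never faces the network.
docker run -d -p 127.0.0.1:5432:5432 postgres:16

# docker-compose equivalent:
#   ports:
#     - "127.0.0.1:5432:5432"

# Verify: should show 127.0.0.1:5432, not 0.0.0.0:5432.
ss -tlnp | grep 5432
```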
-
My problem with podman is the incompatibility with portainer
Any recommendations?
cockpit has a podman/container extension you might like.
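If anyone wants to try it, the extension ships as a distro package (package names may vary slightly between distros):

```shell
sudo apt install cockpit cockpit-podman     # Debian/Ubuntu
sudo dnf install cockpit cockpit-podman     # Fedora/RHEL
sudo systemctl enable --now cockpit.socket  # UI at https://localhost:9090
```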
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
I've been playing with systemd-nspawn for my containers recently, and I've been enjoying it!
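For reference, the usual nspawn bootstrap looks roughly like this (needs root and debootstrap; "mymachine" is a placeholder name):

```shell
# Bootstrap a minimal Debian tree into the machines directory.
debootstrap stable /var/lib/machines/mymachine

# Get an interactive shell inside the container...
systemd-nspawn -D /var/lib/machines/mymachine

# ...or boot it as a managed machine and list running ones.
machinectl start mymachine
machinectl list
```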
-
My problem with podman is the incompatibility with portainer
Any recommendations?
I assume portainer communicates via the docker socket? If so, couldn’t you just point portainer to the podman socket?
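Podman does ship a Docker-compatible API socket, so this is worth a try. A rootless sketch (paths assume a systemd user session):

```shell
# Expose Podman's Docker-compatible API socket.
systemctl --user enable --now podman.socket

# Point Docker-socket clients at it.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# A container like Portainer can mount the socket directly, e.g.:
#   -v $XDG_RUNTIME_DIR/podman/podman.sock:/var/run/docker.sock
```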
-
It’s my understanding that docker uses a lot of fuckery and hackery to do what it does. And IME they don’t seem to care if it breaks things.
I don’t know how much hackery and fuckery there is with docker specifically. The majority of what docker does was already present in the Linux kernel (namespaces, cgroups, etc.). Docker just made it easier to build and ship isolated environments between systems.
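You can poke at those kernel primitives directly with `unshare`. A sketch (the `-r` flag maps root inside a new user namespace, so no real root is needed on most distros):

```shell
# New PID namespace: the shell believes it is PID 1
# and ps only sees processes inside the namespace.
unshare -r --fork --pid --mount-proc sh -c 'echo "my pid: $$"; ps ax'

# New UTS namespace: hostname changes stay invisible to the host.
unshare -r --uts sh -c 'hostname sandbox; hostname'
```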
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
Wait, that's illegal
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
I mean if you're hosting anything publicly, you really should have a dedicated firewall
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
Somehow I think that's on ufw not docker. A firewall shouldn't depend on applications playing by their rules.
-
Somehow I think that's on ufw not docker. A firewall shouldn't depend on applications playing by their rules.
ufw just manages iptables rules, if docker overrides those it's on them IMO
-
Somehow I think that's on ufw not docker. A firewall shouldn't depend on applications playing by their rules.
Docker specifically creates rules for itself which are by default open to everyone. UFW (and the underlying nftables/iptables) just does as it's told by the system root (via docker). I can't really blame the system for doing what it's told to do; it's been the administrator's job to manage that in a reasonable way since forever.
And (not related to linux or docker in any way) there's still big commercial software where the very first thing the highly paid consultants who install it do is turn the firewall off....
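For what it's worth, the daemon can at least be told to default published ports to loopback instead of 0.0.0.0. A sketch of /etc/docker/daemon.json (the "ip" key sets the default binding address for published ports; restart dockerd after editing):

```json
{
  "ip": "127.0.0.1"
}
```

Individual `-p` flags can still override this per container, so it's a saner default rather than a lockdown.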
-
ufw just manages iptables rules, if docker overrides those it's on them IMO
Feels weird that an application is allowed to override iptables, though. I get that when it's installed as root everything's off the table, but still....
-
cockpit has a podman/container extension you might like.
It's okay for simple things, but too simple for anything beyond that, IMO. One important issue is that unlike with Portainer, you can't edit a container in any way without deleting it and configuring it again, which is quite annoying if you just want to change one environment variable (GH Issue). Perhaps they will add a quadlet config tool to cockpit sometime in the future.
-
Feels weird that an application is allowed to override iptables, though. I get that when it's installed as root everything's off the table, but still....
It is decidedly weird, and it's something docker handles very poorly.
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
You’re right. As an old-timey linux user I find it more confusing than running the services directly, too. It’s another abstraction layer that you need to manage and which has its own pitfalls.
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
The only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed
I have spent so many fucking hours trying to coordinate the correct Node version to a given OS version, fucked around with all sorts of Node management tools, ran into so many glibc compat problems, and regularly found myself blowing away the package cache before Yarn fixed their shit, and even then there's still a serious problem a few times a year.
No. Fuck no, you can pry Docker out of my cold dead hands, I'm not wasting literal man-weeks of time every year on that shit again.
(Sorry, that was an aggressive response and none of it was actually aimed at you, I just fucking hate managing Node.js manually at scale.)
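For context, this is exactly the kind of thing a pinned image makes go away. A Dockerfile sketch (the version tag, paths, and entrypoint are placeholders):

```dockerfile
# Pin the exact Node toolchain per project instead of juggling
# host installs; everyone builds against the same glibc and Node.
FROM node:20-bookworm-slim
WORKDIR /app

# Install dependencies from the lockfile first so this layer caches.
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Copy the source and build.
COPY . .
RUN yarn build

CMD ["node", "dist/main.js"]
```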
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
This is less of an issue with JS, but say you're developing a C++ application. It relies on several dynamically linked libraries, so to run it you need to install all of those libraries and make sure the versions are compatible and don't cause weird issues that didn't happen with the versions on the dev's machine. The libraries aren't available in your distro's package manager (only as RPMs), so you have to clone them from git and install each one manually. That quickly turns into a hassle, and it's much easier to just prepare one image and ship it, knowing the entire environment is the same as when it was tested.
However, the primary reason I use it is that I want to isolate software from the host system. It prevents clutter and lets me keep all the data in designated, structured folders. It also isolates the services if they get infected with malware.
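A sketch of what that looks like in practice (image, paths, and port are all placeholders):

```shell
# All of the service's state lives under one designated host folder;
# removing the container leaves the host otherwise untouched.
docker run -d \
  --name webapp \
  -v /srv/containers/webapp/data:/data \
  -p 127.0.0.1:8080:80 \
  nginx:stable
```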