Context: Docker bypasses all UFW firewall rules
-
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
No, it's popular because it allows people and companies to run things without needing to deal with updates and dependencies manually
-
Feels weird that an application is allowed to override iptables though. I get that when it's installed with root all bets are off, but still...
Linux lets you do whatever you want and that's a side effect of it, there's nothing preventing an app from messing with things it shouldn't.
-
If I had a nickel for every database I've lost because I let docker broadcast its port on 0.0.0.0 I'd have about 35¢
How though? A database in Docker generally doesn't need any exposed ports, which means no ports open in UFW either.
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
This only happens if you essentially tell docker "I want this app to listen on 0.0.0.0:80"
If you don't do that, then it doesn't punch a hole through UFW either.
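To make the distinction concrete, here's a minimal docker-compose sketch (the image and port are just placeholders): publishing a bare port binds to 0.0.0.0 and bypasses UFW, while prefixing the loopback address keeps it local-only.

```yaml
services:
  db:
    image: postgres:16
    ports:
      # - "5432:5432"            # binds 0.0.0.0:5432 — reachable from outside, UFW never consulted
      - "127.0.0.1:5432:5432"    # loopback only — no hole punched through the firewall
```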
-
How though? A database in Docker generally doesn't need any exposed ports, which means no ports open in UFW either.
I exposed them because I used the container for local development too. I just kept reseeding every time it got hacked before I figured I should actually look into security.
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the Angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
Think of it more like pre-canned build scripts. I can just write a script (a Dockerfile), which tells docker how to prepare the environment for my app. Usually, this is just pulling the pre-canned image for the app, maybe with some extra dependencies pulled in.
This builds an image (a non-running snapshot of your environment), which can be used to run a container (the actual running app).
Then, I can write a config file (docker-compose.yaml) which tells docker how to configure everything about how the container talks to the host:
- shared folders (volumes)
- other containers it needs to talk to
- network isolation and exposed ports
The benefit of this is that I don't have to configure the host in any way to build / host the app (other than installing docker). Just push the project files and docker files, and docker takes care of everything else
This makes for a more reliable and dependable deploy
You can even develop the app locally without having any of the devtools installed on the host
As well, this makes your app platform agnostic. As long as it has docker, you don't need to touch your build scripts to deploy to a new host, regardless of OS
A second benefit is process isolation. Should your app rely on an insecure library, or should your app get compromised, you have a buffer between the compromised process and the host (like a lightweight VM)
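The flow described above can be sketched in a few lines (the base image, file names, and port here are illustrative, not a real project):

```dockerfile
# Dockerfile — the "pre-canned build script" idea from above
FROM node:20-alpine            # pull the pre-canned base image
WORKDIR /app
COPY package*.json ./
RUN npm ci                     # extra dependencies pulled in
COPY . .
CMD ["node", "server.js"]      # what runs inside the container

# build the image (non-running snapshot):  docker build -t myapp .
# run a container (the actual app):        docker run -p 127.0.0.1:8080:8080 myapp
```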
-
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
That's only a side effect. It mainly got popular because it is very easy for developers to ship a single image that just works instead of packaging for various different operating systems with users reporting issues that cannot be reproduced.
-
I exposed them because I used the container for local development too. I just kept reseeding every time it got hacked before I figured I should actually look into security.
For local access you can use 127.0.0.1:80:80 and it won't put a hole in your firewall.
Or if your database is accessed by another docker container, just put them on the same docker network and access it via the container name, and you don't need any port mapping at all.
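A compose sketch of that second setup (service names and the DB_HOST variable are placeholders): both services land on the same default network, the app reaches the database via Docker's internal DNS, and the db publishes nothing.

```yaml
services:
  app:
    image: myapp
    environment:
      DB_HOST: db    # "db" resolves via Docker's internal DNS
  db:
    image: postgres:16
    # no "ports:" section — unreachable from outside, nothing for UFW to miss
```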
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the Angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
I put off docker for a long time for similar reasons but what won me over is docker volumes and how easy they make it to migrate services to another machine without having to deal with all the different config/data paths.
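For example, a named volume can be tarred up and moved to another machine with a throwaway container — this is the backup pattern the Docker docs describe (the volume and archive names here are placeholders):

```shell
# archive the volume's contents using a temporary alpine container
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine \
    tar czf /backup/mydata.tgz -C /data .

# on the new machine, restore into a fresh volume of the same name
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine \
    tar xzf /backup/mydata.tgz -C /data
```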
-
ufw just manages iptables rules, if docker overrides those it's on them IMO
Not really.
Both docker and ufw edit iptables rules.
If you instruct docker to expose a port, it will do so.
If you instruct ufw to block a port, it will only do so if you haven't explicitly exposed that port in docker.
It's a common gotcha, but it's not really a shortcoming of docker.
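If you do want host-level filtering that Docker respects, the documented hook is the DOCKER-USER iptables chain, which is evaluated before Docker's own forwarding rules (the interface name and port here are examples):

```shell
# Docker consults DOCKER-USER before its own rules. Packets have
# already been DNAT'ed by the time they reach this chain, so match
# the original destination port with conntrack instead of --dport.
iptables -I DOCKER-USER -i eth0 -p tcp \
    -m conntrack --ctorigdstport 5432 --ctdir ORIGINAL -j DROP
```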
-
network: host gives the container basically full access to any port it wants. But even with other network modes you need to be careful, as any -p <external port>:<container port> mapping creates the appropriate firewall rule automatically.
I just use caddy and don't use any port rules on my containers. But maybe that's also problematic.
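That caddy setup might look roughly like this (image tags and the app name are assumptions): only the reverse proxy publishes ports, and everything behind it is reached over the internal compose network.

```yaml
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"                          # caddy is the only exposed service
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
  app:
    image: myapp
    # no ports published — caddy proxies to app internally by service name
```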
-
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
I don't really understand the problem with that?
Everyone is a script kiddy outside of their specific domain.
I may know loads about python but nothing about database management or proxies or Linux. If docker can abstract a lot of the complexities away and present a unified way to configure and manage them, where's the bad?
-
Try podman and quadlets
Quadlets are so good.
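For reference, a quadlet is just a systemd-style unit file dropped into ~/.config/containers/systemd/ (the image and port below are made up); podman generates a systemd service from it:

```ini
# myapp.container — podman turns this into a systemd service
[Container]
Image=docker.io/library/nginx:alpine
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```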
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the Angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
Sure but thats an angular app, and you already know how to manage its environment.
People self host all sorts of things, with dozens of services in their home server.
They dont need to know how to manage the environment for these services because docker "makes everything so easy".
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
On Windows (coughing)
-
Explicitly binding certain ports to the container has a similar effect, no?
I still need to allow the ports in my firewall when using podman, even when I bind to 0.0.0.0.
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the Angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
I pretty much share the same experience. I avoid using docker or any other containerizing thing due to the amount of bloat and complexity that this shit brings. I always go out of my way to get software running w/o docker, even if there is no documented way. If that fails, then the software just sucks.
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
-
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
That is definitely one of the crowds, but there are also people like me who are just sick and tired of dealing with python, node, and ruby dependencies. The install process for services has only become more convoluted over the years. And then you show me an option where I can literally just slap down a compose.yml and hit "docker compose up -d" and be done? Fuck yeah I'm using that
-
It's okay for simple things, but too simple for anything beyond that, IMO. One important issue is that unlike with Portainer you can't edit the container in any way without deleting it and configuring it again, which is quite annoying if you just want to change 1 environment variable (GH Issue). Perhaps they will add a quadlet config tool to cockpit sometime in the future.
I mean, you can just redeploy the container with the updated variable. That's kinda how they work.