Context: Docker bypasses all UFW firewall rules
-
Explicitly binding certain ports to the container has a similar effect, no?
It's better than nothing, but I hate the additional logs that come from it constantly fighting firewalld.
-
have to make sure of is that the deployment environment has node and the angular CLI installed
I have spent so many fucking hours trying to coordinate the correct Node version to a given OS version, fucked around with all sorts of Node management tools, ran into so many glibc compat problems, and regularly found myself blowing away the package cache before Yarn fixed their shit, and even then there's still a serious problem a few times a year.
No. Fuck no, you can pry Docker out of my cold dead hands, I'm not wasting literal man-weeks of time every year on that shit again.
(Sorry, that was an aggressive response and none of it was actually aimed at you, I just fucking hate managing Node.js manually at scale.)
Well, I guess that's a good reason. A Node version manager seems to handle most of that for me, though. I haven't worked on an OS without support for one.
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
Well yeah, of course it works like that: the services are not on the same network, so the packets need to be sent out on another adapter. That means either the nat or forward tables.
Now whether that was a good design choice on Docker's part is another question.
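For the curious, the rules behind that look roughly like this on a host where a container port was published with -p 8080:80 (addresses and output here are illustrative, check your own box for the real thing):

```
$ sudo iptables -t nat -L DOCKER -n
Chain DOCKER (2 references)
target  prot opt source       destination
RETURN  all  --  0.0.0.0/0    0.0.0.0/0
DNAT    tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:8080 to:172.17.0.2:80
```

The DNAT happens in PREROUTING, so the packet gets forwarded straight to the container and never hits the INPUT chain where ufw's rules live.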
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
Even when it seems like an app runs identically on every platform, you can easily run into issues down the road. If you have a well-configured docker image, that issue is just solved ahead of time. Hell, I find it worth the trouble even just for moving a node.js app between Linux boxes, which is about the least problematic case I can think of.
-
This is less of an issue with JS, but say you're deploying a C++ application. It relies on several dynamically linked libraries, so to run it you need to install all of those libraries and make sure the versions are compatible and don't cause weird issues that didn't happen with the versions on the dev's machine. These libraries aren't available in your distro's package manager (only as RPMs), so you have to clone them from git and build and install all of them manually. This quickly turns into a hassle, and it's much easier to just prepare one image and ship it, knowing the entire environment is the same as when it was tested.
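A rough sketch of what that "one image" might look like (base image, package names, and paths are placeholders, not a recipe):

```dockerfile
# Hypothetical packaging of a C++ app with its runtime libraries baked in
FROM debian:bookworm-slim
# Install the exact shared libraries the binary was tested against
RUN apt-get update \
    && apt-get install -y --no-install-recommends libstdc++6 libssl3 \
    && rm -rf /var/lib/apt/lists/*
# Copy in the prebuilt binary and any libraries you had to build from source
COPY build/myapp /usr/local/bin/myapp
COPY build/lib/ /usr/local/lib/
RUN ldconfig
CMD ["myapp"]
```

Whoever runs the image gets those exact library versions, no matter what their distro ships.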
However, the primary reason I use it is because I want to isolate software from the host system. It prevents clutter and allows me to just put all the data in designated structured folders. It also isolates the services when they get infected with malware.
Ok, see, the sandboxing makes sense, and for a language like C++ it makes sense. But every other language I've used it with is already portable to every OS I have access to, so it feels like it defeats the benefit of using a language that's portable.
-
have to make sure of is that the deployment environment has node and the angular CLI installed
I have spent so many fucking hours trying to coordinate the correct Node version to a given OS version, fucked around with all sorts of Node management tools, ran into so many glibc compat problems, and regularly found myself blowing away the package cache before Yarn fixed their shit, and even then there's still a serious problem a few times a year.
No. Fuck no, you can pry Docker out of my cold dead hands, I'm not wasting literal man-weeks of time every year on that shit again.
(Sorry, that was an aggressive response and none of it was actually aimed at you, I just fucking hate managing Node.js manually at scale.)
I agree; for any context where it makes sense, docker is so worth it.
-
I've been playing with systemd-nspawn for my containers recently, and I've been enjoying it!
Try podman and quadlets
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
The only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed
That's why Docker is popular. Making sure every single system running your app has the correct versions of node and angular installed is a royal pain in the butt.
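Which is exactly the kind of thing a Dockerfile pins down once, instead of on every machine. A minimal sketch (the base image tag and CLI version are placeholders):

```dockerfile
# Hypothetical Angular build image with Node and the CLI version pinned
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci && npm install -g @angular/cli@17
COPY . .
RUN ng build --configuration production
```

Every machine that builds from this gets the same node and angular CLI, because they come from the image rather than from whatever happens to be installed on the host.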
-
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
No, it's popular because it allows people/companies to run things without needing to deal with updates and dependencies manually.
-
Feels weird that an application is allowed to override iptables, though. I get that when it's installed as root all bets are off, but still...
Linux lets you do whatever you want, and that's a side effect of it: there's nothing preventing an app from messing with things it shouldn't.
-
If I had a nickel for every database I've lost because I let docker broadcast its port on 0.0.0.0 I'd have about 35¢
How though? A database in Docker generally doesn't need any exposed ports, which means no ports open in UFW either.
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
This only happens if you essentially tell docker "I want this app to listen on 0.0.0.0:80"
If you don't do that, then it doesn't punch a hole through UFW either.
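In docker run terms (port numbers are just an example):

```
# Publishes on all interfaces (0.0.0.0): this is the hole-punching case
docker run -d -p 8080:80 nginx

# Binds only to loopback: reachable from the host itself, not from outside
docker run -d -p 127.0.0.1:8080:80 nginx
```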
-
How though? A database in Docker generally doesn't need any exposed ports, which means no ports open in UFW either.
I exposed them because I used the container for local development too. I just kept reseeding every time it got hacked before I figured I should actually look into security.
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
Think of it more like pre-canned build scripts. I can just write a script (a Dockerfile), which tells docker how to prepare the environment for my app. Usually this is just pulling the pre-canned base image for the app, maybe with some extra dependencies pulled in. This builds an image (a non-running snapshot of your environment), which can be used to run a container (the actual running app).
Then I can write a config file (docker-compose.yaml) which tells docker how to configure everything about how the container talks to the host:
- shared folders (volumes)
- other containers it needs to talk to
- network isolation and exposed ports
(A rough sketch of both files is at the end of this comment.)
The benefit of this is that I don't have to configure the host in any way to build / host the app (other than installing docker). Just push the project files and docker files, and docker takes care of everything else.
This makes for a more reliable and dependable deploy
You can even develop the app locally without having any of the devtools installed on the host
As well, this makes your app platform agnostic. As long as it has docker, you don't need to touch your build scripts to deploy to a new host, regardless of OS
A second benefit is process isolation. Should your app rely on an insecure library, or should your app get compromised, you have a buffer between the compromised process and the host (like a lightweight VM).
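To make that concrete, here's roughly what those two files could look like for a small web app (image, ports, and paths are placeholders):

```dockerfile
# Dockerfile: how to build the image for the app
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]
```

```yaml
# docker-compose.yaml: how the container talks to the host
services:
  web:
    build: .
    ports:
      - "127.0.0.1:8080:3000"   # exposed port, bound to loopback only
    volumes:
      - ./data:/app/data        # shared folder with the host
```

A single docker compose up on any box with docker installed builds and runs the whole thing the same way.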
-
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
That's only a side effect. It mainly got popular because it is very easy for developers to ship a single image that just works, instead of packaging for various different operating systems and dealing with users reporting issues that cannot be reproduced.
-
I exposed them because I used the container for local development too. I just kept reseeding every time it got hacked before I figured I should actually look into security.
For local access you can use 127.0.0.1:80:80 and it won't put a hole in your firewall.
Or if your database is accessed by another docker container, just put them on the same docker network and access it via container name, and you don't need any port mapping at all.
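The second option in compose form might look like this (names and images are placeholders):

```yaml
services:
  app:
    image: myapp:latest          # hypothetical app image
    environment:
      DB_HOST: db                # reach the database by its service name
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    # no "ports:" section, so nothing gets published to the host at all
```

Compose puts both services on the same default network, so the app can resolve the database as "db" while the outside world can't reach it.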
-
Ok
So, confession time.
I don't understand docker at all. Everyone at work says "but it makes things so easy." But it doesn't make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it's all spaghetti in the end anyway.
If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?
I put off docker for a long time for similar reasons but what won me over is docker volumes and how easy they make it to migrate services to another machine without having to deal with all the different config/data paths.
-
ufw just manages iptables rules; if docker overrides those, it's on them IMO
Not really.
Both docker and ufw edit iptables rules.
If you instruct docker to expose a port, it will do so.
If you instruct ufw to block a port, it will only do so if you haven't explicitly exposed that port in docker.
It's a common gotcha, but it's not really a shortcoming of docker.
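If you do want to filter published container ports at the host, Docker's iptables docs point at the DOCKER-USER chain rather than ufw; a rough sketch (interface and subnet are placeholders):

```
# DOCKER-USER is evaluated before Docker's own forwarding rules,
# so this drops forwarded container traffic from everything outside one subnet
sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```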
-
network: host gives the container basically full access to any port it wants. But even with other network modes you need to be careful, as any -p <external port>:<container port> creates the appropriate firewall rule automatically.
I just use caddy and don't use any port rules on my containers. But maybe that's also problematic.
-
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
I don't really understand the problem with that?
Everyone is a script kiddie outside of their specific domain.
I may know loads about python but nothing about database management or proxies or Linux. If docker can abstract a lot of the complexities away and present a unified way to configure and manage them, where's the bad?