Context: Docker bypasses all UFW firewall rules
-
Also when using a rootful Podman socket?
When running as root, I did not need to add the firewall rule.
-
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like "you can follow these thirty steps to make Docker secure, or just run Podman instead." https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
Another take: Why should I care about dependency hell if I can just spin up the same service on the same machine without needing an additional VM and with minimal configuration changes?
-
Docker docs:
Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
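If you do want to lock those published ports down anyway, the hook the docs point at is the DOCKER-USER chain, which is evaluated before the rules Docker adds for itself. A rough sketch, assuming eth0 is the external interface and 203.0.113.10 is a placeholder for whatever address you trust:
sudo iptables -L DOCKER-USER -n -v    # see what Docker has installed in its user chain
sudo iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.10 -j DROP    # drop container-bound traffic from anywhere else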
NAT is not security.
Keep that in mind.
It's just a crutch IPv4 has to use because it's not as powerful as the almighty IPv6.
-
Try podman and quadlets
What advantage does it have over nspawn?
-
When running as root, I did not need to add the firewall rule.
Thanks for checking
-
The VPS I'm using unfortunately doesn't offer an external firewall
Well, if you have the option you could set up a virtual network through the VPS and have a box with pfsense or something to route all traffic through. Take this with a grain of salt - I've seen this done but never done it fully myself.
-
Well, if you have the option you could set up a virtual network through the VPS and have a box with pfsense or something to route all traffic through. Take this with a grain of salt - I've seen this done but never done it fully myself.
I've just disabled all incoming connections (including SSH etc.) and access everything through WireGuard
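Roughly what that looks like with ufw, assuming the usual WireGuard port and interface names (yours may differ):
sudo ufw default deny incoming
sudo ufw allow 51820/udp                              # WireGuard itself
sudo ufw allow in on wg0 to any port 22 proto tcp     # SSH only over the tunnel
sudo ufw enable
Keep in mind (per the rest of the thread) that ports Docker publishes will still sidestep these rules unless you bind them to 127.0.0.1 or handle them in DOCKER-USER.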
-
Or maybe it should be easy to configure correctly?
instructions unclear, now it's hard to use and to configure
-
I mean if you're hosting anything publicly, you really should have a dedicated firewall
have a dedicated firewall
I mean, don’t router firewalls count in this regard? Isn’t that kinda part of their job?
-
I think Linux does fstrim out of the box.
edit: I meant to say Linux distros are set up to do that automatically.
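For anyone who wants to verify that inside the VM, the piece that normally handles it on systemd distros is fstrim.timer; a quick check might look like:
systemctl status fstrim.timer                # is the periodic trim scheduled?
sudo systemctl enable --now fstrim.timer     # enable it if it isn't
If the thin pool on the host still doesn't shrink, the VM's virtual disk usually also needs the discard option enabled on the hypervisor side so the trims actually reach the pool.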
It's been about a day since this issue and I've been keeping a close eye on my local-lvm. It fills fast, like, ridiculously fast, and I've been having to run
sudo fstrim /
inside the VM just to keep it maintained. I'm finding it weird that I'm only noticing this now, as this server has been running for months! For now I edited my
/etc/bash.bashrc
so whenever I SSH in it'll automatically run
sudo fstrim /
, there's something I'm likely missing, but this works as a temporary solution.
-
I exposed them because I used the container for local development too. I just kept reseeding every time it got hacked before I figured I should actually look into security.
Where are you working that your local machine is regularly exposed to malicious traffic?
-
Where are you working that your local machine is regularly exposed to malicious traffic?
My use case was running a MongoDB container on my local machine, while I ran my FE+BE with fast live-reloading outside of a container. Then I'd package it all up in services for docker compose on the remote.
-
For local access you can use
127.0.0.1:80:80
and it won't put a hole in your firewall. Or if your database is accessed by another docker container, just put them on the same docker network and access it via container name, and you don't need any port mapping at all.
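A minimal sketch of both options, with placeholder names for the app image and network:
# option 1: publish only on loopback, reachable from the host but not from outside
docker run -d --name mongo -p 127.0.0.1:27017:27017 mongo:latest
# option 2: no published ports at all, containers talk over a shared network
docker network create backend
docker run -d --name mongo-internal --network backend mongo:latest
docker run -d --name api --network backend my-api-image    # connects to mongodb://mongo-internal:27017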
Yeah, I know that now lol, but good idea to spell it out. So what Docker does, which is so confusing when you first discover the behaviour, is it will bind your ports automatically to
0.0.0.0
if all you specify is
27017:27017
as your port (without an IP address prefixing it). AKA what the meme is about.
-
My use case was running a MongoDB container on my local machine, while I ran my FE+BE with fast live-reloading outside of a container. Then I'd package it all up in services for docker compose on the remote.
Ok… but that doesn’t answer my question. Where are you physically when you’re working on this that people are attacking exposed ports? I’m either at home or in the office, and in either case there’s an external firewall between me and any assholes who want to exploit exposed ports. Are your roommates or coworkers those kinds of assholes? Or are you sitting in a coffee shop or something?
-
Ok… but that doesn’t answer my question. Where are you physically when you’re working on this that people are attacking exposed ports? I’m either at home or in the office, and in either case there’s an external firewall between me and any assholes who want to exploit exposed ports. Are your roommates or coworkers those kinds of assholes? Or are you sitting in a coffee shop or something?
This was on a VPS (remote) where I didn't realise Docker was even capable of punching through UFW. I assumed (incorrectly) that if a port wasn't reverse proxied in my nginx config, then it would remain on localhost only.
Just run
docker run -p 27017:27017 mongo:latest
on a VPS and check the default collections after a few hours and you'll likely find they're replaced with a ransom message.
-
This was on a VPS (remote) where I didn't realise Docker was even capable of punching through UFW. I assumed (incorrectly) that if a port wasn't reverse proxied in my nginx config, then it would remain on localhost only.
Just run
docker run -p 27017:27017 mongo:latest
on a VPS and check the default collections after a few hours and you'll likely find they're replaced with a ransom message.
Ah, when you said local I assumed you meant your physical device.