Hacker News

I'd argue that's a backwards approach, and actually what led to this hack: the system was built around the database servers not being publicly exposed, everyone assumed they weren't, and then when you accidentally expose them (and you will, sooner or later; a network is too big a boundary to protect all of it), it's a disaster.

It's better to build every server for public exposure from day 1 and treat all connections as potentially hostile, even if they're coming from the internal network.
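As a concrete sketch of that posture, assuming PostgreSQL: instead of whitelisting internal subnets with `trust`, pg_hba.conf can require TLS and strong authentication from every client, including ones on the internal network.

```
# pg_hba.conf: treat every network, including the internal one, as hostile.
# TYPE     DATABASE  USER  ADDRESS      METHOD
hostssl    all       all   0.0.0.0/0    scram-sha-256   # TLS + password auth for everyone
hostssl    all       all   ::/0         scram-sha-256   # same for IPv6
hostnossl  all       all   0.0.0.0/0    reject          # refuse all non-TLS connections
```

With rules like these, a connection from "inside" the network gets no special treatment; an accidental exposure still leaves an attacker facing the same auth wall as everyone else.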



Defense in depth / zero trust is definitely the way to go, but it's also trivial to prevent a system from being directly reachable from the internet: for this hack to occur, the system had to be deployed with a public IP address directly assigned. Outbound-only access through a NAT gateway, or a private VPC subnet with no internet gateway (IGW in AWS) and no public IP on the instance, is borderline standard in production cloud deployments these days.
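A minimal Terraform sketch of the private-subnet variant, assuming the VPC and NAT gateway are defined elsewhere (all resource names here are illustrative):

```terraform
# A subnet whose instances never get public IPs, and whose route table
# has no route to an internet gateway -- only outbound egress via NAT.
resource "aws_subnet" "private" {
  vpc_id                  = aws_vpc.main.id   # assumed to exist elsewhere
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = false             # no public IP, even by accident
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.egress.id  # outbound only; no IGW route
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}
```

Nothing in that subnet is addressable from the internet, regardless of what the instance's own firewall or database config gets wrong.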

Re: "you will sooner or later": it's super easy to test for stuff like this with a sentinel. I do this and scan dev/stage in my CI pipelines with Rapid7, which will SCREAM about stuff like a DB with no password.
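A trivial version of that kind of check can also live in the pipeline itself. A sketch (the function name and URLs are made up, and this only catches credentials missing from connection strings, not live-host exposure the way a real scanner does):

```python
from urllib.parse import urlparse

# Hypothetical CI helper: flag database connection strings that carry
# no password at all -- the kind of thing a scanner should SCREAM about.
def find_passwordless_db_urls(urls):
    flagged = []
    for url in urls:
        parsed = urlparse(url)
        if parsed.scheme in ("postgres", "postgresql", "mysql") and not parsed.password:
            flagged.append(url)
    return flagged

urls = [
    "postgres://app@db.internal:5432/prod",          # no password: flagged
    "postgres://app:s3cret@db.internal:5432/prod",   # has a password: fine
]
print(find_passwordless_db_urls(urls))  # prints ['postgres://app@db.internal:5432/prod']
```

Failing the build on a non-empty result turns "we think the DB has a password" into something the pipeline actually verifies on every run.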


Every step is easy once you think about it; the hard part is spending any attention on it in the first place.

I would definitely say it's more effective to test your existing layers before adding more, and I think the "defence in depth" concept leads people astray there. Multiple porous layers work on a battlefield, where each attack is costly; they don't work on the internet, where once one attack gets through an outer layer, every other attack can immediately follow the same path and start hitting the inner layer.



