
As far as I'm aware, we're still getting away with it. Your kernel and every program on your computer and every server is still doing it...

Setting a memory limit on Docker containers comes from automation. We built new-fangled automation that can schedule multiple copies of an application across N nodes. Then we tell people they can run more and more copies of the app: for CI/CD, for test/stage/prod, for Joe's Test App, etc. All these copies use up memory. At some point, somebody tries to deploy to prod, which needs to start new copies of the new version. But now they can't, because the paltry few nodes have run out of memory from all these extra copies of the apps. So now we have to impose limits or we can't deploy to prod.
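For context, the limit in question is just a few lines of scheduler config. A hypothetical Kubernetes pod spec fragment (app name, image, and sizes are all made up) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: joes-test-app        # hypothetical app from the example above
spec:
  containers:
  - name: app
    image: myapp:latest      # placeholder image
    resources:
      requests:
        memory: "256Mi"      # what the scheduler reserves when bin-packing pods onto nodes
      limits:
        memory: "512Mi"      # past this, the kernel OOM-kills the container
```

The `requests` value is what lets the scheduler say "this node is full" before it actually is, which is exactly the accounting that plain un-limited processes never had.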

(you might ask: "why don't we just automate expanding the nodes to accommodate more memory?" and they did. and it doesn't work well, for reasons that give me a headache)

(you might also ask: "why don't they just make an OOM killer for the automated cluster?" and they did. and it doesn't work very well either, for other headachy reasons)

"Software is a gas: it always expands to fit whatever container it is stored in." - Nathan Myhrvold (https://web.archive.org/web/19990202072225/http://research.m...)


