Hacker News

The idea is that you don't have to write a DB - it is already there, usually even deployed and monitored by your ops team. Even if it provides absolutely no other value, that's a big plus.

In terms of randomly distributed requests: I'm surprised the topic of "working set" is not part of the question. Even with huge amounts of data, if the working set can be kept in RAM and data only rarely needs to be swapped in and out, it's a much different problem than 5000 disk reads a second. Lots of databases are optimized to efficiently maintain cached data in memory and swap as little, and as efficiently, as possible.
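The working-set point can be made concrete with a back-of-the-envelope model. The sketch below (hypothetical numbers, not from the thread) assumes uniformly random requests, so the cache hit rate is roughly cache size divided by dataset size, and disk reads scale with the uncached fraction:

```python
def expected_disk_reads(requests_per_sec, dataset_gb, cache_gb):
    """Estimate disk reads/sec assuming uniformly distributed requests.

    Under uniform access, the probability a request hits the cache is
    roughly the fraction of the dataset that fits in the cache.
    """
    hit_rate = min(1.0, cache_gb / dataset_gb)
    return requests_per_sec * (1.0 - hit_rate)

# If the working set fits entirely in RAM, almost nothing hits disk:
print(expected_disk_reads(5000, dataset_gb=100, cache_gb=100))  # 0.0
# With uniform access and a cache covering 10% of the data,
# 90% of requests go to disk:
print(expected_disk_reads(5000, dataset_gb=100, cache_gb=10))   # 4500.0
```

This also shows why the reply below matters: with uniform access there is no small "hot" working set, so only a cache approaching the full dataset size changes the picture.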



Requests are evenly distributed, so caching will only be "effective" if the cache approaches the size of the whole dataset.



