Hacker News

If scale is measured in reads/writes, it's very large. In fact, with relatively modest (virtual) hardware it's not unusual to see a cluster doing around 1M writes/second.


I was talking more about large-file storage like HDFS, and the MapReduce model of bringing computation to the data. HBase does the latter, and it's strongly consistent like FoundationDB, though FoundationDB provides stronger guarantees. As a K/V store, I understand what you and the OP are saying.
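For readers unfamiliar with the model being referenced: MapReduce splits work into a map phase that emits key/value pairs and a reduce phase that aggregates them by key. A toy, in-memory word-count sketch (hypothetical function names, no Hadoop or HDFS involved):

```python
from collections import defaultdict

def map_phase(doc):
    # Map: emit a (word, 1) pair for each word in the document.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick fox", "the lazy dog"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(pairs))
```

In a real deployment the map tasks run on the nodes that already hold the data blocks, which is the "bringing computation to data" point above.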



