
Counts become incredibly slow on some databases like Postgres. It needs to do a sequential scan, which results in slower performance even at tens of thousands of rows. When operating at scale, this is very, very slow.


From what I understand it's not slow if it can use an index-only scan, but the criteria for that are rather hard for a normal user to understand. If you vacuum often enough and the visibility map is current, it doesn't need a full table scan.
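A toy sketch of that mechanism (Python, with invented data structures; this is an illustration, not real Postgres internals): an index-only scan can answer COUNT(*) from the index alone only for heap pages the visibility map marks all-visible. For any other page it still has to fetch the heap tuple to check visibility, which is why a stale visibility map makes the "index-only" scan slow.

```python
# Toy model of an index-only scan for COUNT(*).
# All structures here are invented for illustration.

def count_with_index_only_scan(index_entries, visibility_map, heap):
    """index_entries: list of (page, tuple_id) pairs from the index.
    visibility_map: set of pages known to be all-visible (kept current by VACUUM).
    heap: dict (page, tuple_id) -> True if the tuple is visible to our snapshot."""
    count = 0
    heap_fetches = 0
    for page, tid in index_entries:
        if page in visibility_map:
            count += 1            # the index alone suffices: no heap access
        else:
            heap_fetches += 1     # must visit the heap to check tuple visibility
            if heap[(page, tid)]:
                count += 1
    return count, heap_fetches

# Three indexed tuples on two pages; page 1 is all-visible, page 2 is not.
index_entries = [(1, 1), (1, 2), (2, 1)]
visibility_map = {1}
heap = {(1, 1): True, (1, 2): True, (2, 1): False}  # (2, 1) was deleted
print(count_with_index_only_scan(index_entries, visibility_map, heap))  # (2, 1)
```

The fewer pages the visibility map covers, the more heap fetches the scan incurs, degrading toward a plain index + heap scan.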


Isn't count slow on most databases? It either ends up as a table scan or an index scan, and if the index is huge, that will also take a lot of time.

I wonder how the big companies solve this counting problem.
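One common approach (a hedged sketch; the class and names here are invented, not any particular company's system) is to stop computing the count and instead maintain it as data: a counter sharded across several rows or keys, so concurrent writers rarely contend on one hot row, with reads summing the shards.

```python
import random

# Sketch of a sharded counter: writers increment one of N shards at
# random; reads sum all shards. In a real database each shard would be
# its own row/key, updated transactionally alongside the insert.
class ShardedCounter:
    def __init__(self, num_shards=16):
        self.shards = [0] * num_shards

    def increment(self, amount=1):
        # Random shard choice spreads lock contention across rows.
        shard = random.randrange(len(self.shards))
        self.shards[shard] += amount

    def value(self):
        # The read pays for N shard lookups instead of a full scan.
        return sum(self.shards)

counter = ShardedCounter()
for _ in range(1000):
    counter.increment()
print(counter.value())  # 1000
```

The trade-off is an exact but slightly heavier write path; when only a rough number is needed, another common trick is serving an estimate (for Postgres, `pg_class.reltuples`) instead of an exact count.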


This seems unlikely to depend on the database software. It could easily depend on what indexes exist for the query.




