Hacker News

So what? Qualitatively, PVS-Studio's false positive rate feels higher than Coverity's, and far higher than a modern free compiler's -Wall.

To describe the 27,000 detections as errors is a little misleading, and I think that's what the grandparent post is discussing.



It doesn't work the way you think, and it's not nice to mislead people. It all depends on the project.

I've heard people say that Coverity gives many false positives and Cppcheck gives few. I've heard the opposite: that Cppcheck is impossible to use because of the huge number of false positives, while Coverity does great. I've heard that Coverity gives more false positives than PVS-Studio, and vice versa. And so on and so forth. What is the reason for such differences?

Every project has its own coding style and its own set of macros, and it is these macros and stylistic peculiarities that become the main source of false positives. That is why the first impression of a code analyzer depends on luck, not on how good the analyzer is. If the analyzer doesn't understand a home-grown my_assert(), it will issue 10,000 false positives.

So there is no point in talking abstractly about the number of false positives. Yes, you can be unlucky and get a lot of them. However, static code analyzers can be configured. In my articles I have shown many times that even the simplest configuration can greatly reduce the number of false positives. Example: https://www.viva64.com/en/b/0496/#ID0ENNAC




