I haven't seen any research that supports either. I wouldn't be surprised to learn that code review does nothing other than share knowledge, or to learn that it reduces bugs by 50%. I honestly have no idea; I do code reviews because we do code reviews.
In fact, I have seen alarmingly little research about code management in general. Code review, standups, agile, and so on: does any of it do anything useful? I have only come across anecdotal evidence, which is easy to dismiss.
Perhaps unlikely. Or really surprising. But not absurd.
Imagine a reviewer who rejects any code review that doesn't contain some recognizable pattern from the Gang of Four book. Fully complete and working code gets hacked up at the last minute to accommodate this person, who has more seniority than sense.
Or ... maybe someone who always wants a single return in every function. Or heck, someone who demands that you always do early returns. Either way, they demand that carefully crafted code be swapped over to its equivalent dual, and during that transformation a mistake is made that nobody notices.
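A hypothetical sketch of how that swap can go wrong. The function names and the discount logic here are invented purely for illustration; the point is only that collapsing early returns into a single return turns mutually exclusive branches into sequential ones:

```python
def discount_early_return(price, is_member, coupon):
    """Original style: early returns make each case mutually exclusive."""
    if price <= 0:
        return 0.0
    if coupon == "HALF":
        return price * 0.5
    if is_member:
        return price * 0.9
    return price


def discount_single_return(price, is_member, coupon):
    """Reviewer-mandated rewrite with one return at the end.

    Subtle bug: the branches are no longer mutually exclusive, so the
    member discount now stacks on top of the coupon discount. The fix
    would be an if/elif chain mirroring the original's early exits.
    """
    result = price
    if price <= 0:
        result = 0.0
    if coupon == "HALF":
        result = price * 0.5
    if is_member:  # runs even when the coupon branch already fired
        result = result * 0.9
    return result
```

For a member using the coupon, `discount_early_return(100, True, "HALF")` gives 50.0 while `discount_single_return(100, True, "HALF")` gives 45.0: the "equivalent" rewrite quietly changed behavior, and a reviewer skimming the diff could easily miss it.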
I think the point is that the scientific way to show that code reviews work is to actually run the experiment, instead of just saying, "Why, it's absurd that a code review could cause more defects."
I mean, isn't that how hand washing got introduced into medicine? "Hey everyone, let's try washing our hands!" "Why, the hands of a gentleman are always clean. It's absurd to suggest that we should wash our hands."
I was on a team of competent people working on a dense code base before and after introduction of gatekeeping code reviews. The code base was large so no two people would be working in the same area. Before code reviews:
- Team members working on a section of code would find and fix bugs in it.
- Team members would watch the repository history to monitor for any mistakes.
When gatekeeping reviews were introduced, they were not particularly effective at finding bugs because bugs tended to be subtle and required time working on the particular code in question to identify. But the introduction of gatekeeping code review caused bugs because:
- Once reviewed, code was assumed to be good enough because hey, two people agreed on it.
- Any bugs identified would require further code reviews to fix, which would be an uphill battle and probably involve project management.
- Nobody's looking that closely at history because who wants to deal with finding problems and pushing through fixes? And hey, it's already reviewed.
- Knowledge of code that is under review goes stale as you start working on other features. This undermines pre-commit testing when you make changes to mollify the reviewer, and can thus lead to bugs.
- Lowering velocity lowers the rate of fixing bugs.
- A bug could be fixed but stuck in review, so it's still in the codebase for other developers to run across. You could argue: well, it's documented in JIRA! But bugs can manifest in different ways and affect multiple systems.
Why would it be absurd? If there is a process to reduce risks, then people take more risks. I have been guilty of submitting a code review when I'm not 100% sure the code is perfect, just because I know there is a review process to catch it. So this effect definitely exists; I'm just not sure how common it is.
Also, the trick to code reviews is to leave in a few low-hanging obvious bike sheds. The reviewers will tell you what color to paint them. Done.
I'm being slightly sarcastic. But not sarcastic to the point that I haven't done just that. Just as the nail that sticks out gets hammered, the code that is too perfect gets increased scrutiny.
If it catches ANY bugs, it is objectively better than no PR/code review from a bug standpoint. Unless you're arguing that PR/code review creates more bugs than it solves?
Again, I think the question is value. Is a dev's time best used on code review vs. something else? It may well be that the time is better spent elsewhere, depending on the developer and the needs of the organization.
I think the question is the same one as "do bike helmets reduce bike injuries?" I.e. without code reviews, is everyone more careful? I can't think of any proper academic research that answers the question.
In my anecdotal experience, there is at least one class of bugs that code reviews are good at catching that a large expenditure of self-review effort often doesn't catch: security bugs.
> If it catches ANY bugs it is objectively better than no PR/code-review from a bug standpoint.
No, that's false. That's only true if you also assume that people write the same quality code whether there is a code review process or not. Which might or might not be true, no idea.
If anything, I would think that person would write the same code, but without a PR they would lose the opportunity for better quality. People are blind to their own mistakes. I can't count how many times I've stared at code without seeing a problem that another person spotted almost immediately.
I don't have a measure of how many bugs it prevents. But I've caught a few in code reviews and people caught a few of mine too. So, it does prevent some.