Today, I watched a YouTube video* in which someone argued that the quality of your code review process should be measured by PR rejections.
Because, without rejections, how would we know that people actually reviewed the code?
So, according to him, what rejection rate signals a good code review process?
Well, he isn’t completely sure, but definitely not 5%. In a professional team, he thinks, it’s more like 20-30%.
This shows you have a good code review process, he says. He goes on to explain that if you rejected 5% of PRs last month and he rejected 30%, then obviously, he is a more effective reviewer than you.
Well, this metric will definitely change your team’s code review practices. But not in a good way.
Rejections as a quality metric destroy team culture
Let me be very clear about my perspective: after working with hundreds of product teams to improve their code review practices, my advice is to never, ever introduce such a metric in your team.
Even rubber-stamping every code change, or abandoning code reviews altogether, would leave you WAY better off.
Code reviews are not about rejecting the changes of others. Code reviews are not there to find flaws in the code of your peers.
The real power of code reviews comes from learning from each other, and mentoring each other.
Introducing such metrics, or even just promoting this mindset, destroys your team culture beyond repair. It’s competitive, unproductive, and misses the point – even if your only motivation for code reviews is to find bugs.
A high rejection rate in your team is a troublesome sign. If you see one, then instead of celebrating, it’s time to act. NOW! Find out what is holding people back from submitting acceptable PRs.
Why aren’t they getting help from their peers? Should user stories be clearer?
If you incentivize uncooperative behavior, you get uncooperative behavior
Well, maybe people aren’t able to submit acceptable PRs because you incentivize a competitive culture, one in which a person profits from the mistakes of others, by counting rejections as a sign of code review quality.
I’ve seen many crooked ideas about measuring quality, productivity, and effectiveness, and this one is definitely in my top 10! If you have no good metric at hand, the worst thing you can do is adopt ANY metric just because you want to measure something and feel in control.
Well, that’s my perspective. What do you think about it, and (how) did you attempt to measure code review effectiveness?
I prepared an exclusive code review e-book for my e-mail subscribers to help you remember the code review best practices, along with other insights and summaries about code reviews. Get the 20-page guide to code reviews now. Not a subscriber yet? Just sign up.
* I deliberately did not link the YouTube video or mention their name.