Measuring Code Review Quality through PR Rejections

Today, I watched a new YouTube video* in which someone argues that you should measure code review quality, i.e., how good your code reviews are, by the number of PR rejections.

The person argues that without rejections, we cannot know whether people actually reviewed the code.

According to them, many rejections signal a good code review process. But, of course, you should not reject every code change.

So, how many rejections are a sign that your code review process is thorough? He thinks that a 5% rejection rate is too low. In a professional team, he considers a rejection rate of 20-30% a sign of a thorough code review process.

He continues to explain that if you rejected 5% of PRs last month and I rejected 30% of PRs, then obviously, I am a more effective reviewer than you are.
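To make the proposed metric concrete, here is a minimal sketch of what such a per-reviewer rejection rate would compute. The data and the function name are hypothetical, invented purely for illustration; the point is that the raw percentage says nothing about why PRs were rejected:

```python
# Hypothetical PR records: (reviewer, was_rejected) — illustrative data only.
prs = [
    ("alice", False), ("alice", False), ("alice", True),  # pairs with authors before review
    ("bob", True), ("bob", True), ("bob", False),         # rejects on superficial grounds
]

def rejection_rate(prs, reviewer):
    """Share of a reviewer's reviewed PRs that were rejected."""
    reviewed = [rejected for who, rejected in prs if who == reviewer]
    return sum(reviewed) / len(reviewed)

# Bob "wins" on this metric, yet the numbers reveal nothing about
# whether the code, or the review, was actually any better.
print(f"alice: {rejection_rate(prs, 'alice'):.0%}")  # 33%
print(f"bob:   {rejection_rate(prs, 'bob'):.0%}")    # 67%
```

Note that the metric rewards Bob's drive-by rejections over Alice's up-front collaboration, which is exactly the perverse incentive the rest of this article describes.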

Well, measuring code review quality with a metric like that will definitely change your team's code review practices. But not in a good way.

Using rejections to measure code review quality destroys team culture

So, let me be very clear about my perspective: after working with hundreds of product teams to improve their code review practices, my advice is to never, ever introduce such a code review metric in your team.

Even rubber-stamping all code changes, or abandoning code reviews entirely, would leave you WAY better off.

Code reviews are not about rejecting the changes of others. Yes, we sometimes find problems or flaws in the code of our peers.

But the actual power of code reviews comes from learning from each other and mentoring each other.

Introducing such code review metrics, or even just promoting this mindset, destroys your team culture beyond repair. It’s competitive, unproductive, and misses the point – even if your only motivation for code reviews is to find bugs.

If you see a high rejection rate in your team, then instead of celebrating, it’s time to act. And you should act fast: find out what is holding people back from submitting acceptable PRs. A high rejection rate is a troublesome sign.

Why aren’t they getting help from their peers? Why aren’t developers able to submit acceptable code changes? Should user stories be clearer?

If you reward uncooperative behavior, you get uncooperative behavior

Well, maybe people aren’t able to submit acceptable PRs because, by counting rejections as a sign of code review quality, you reward a competitive culture in which one person profits from the mistakes of others.

I’ve seen many crooked ideas about measuring quality, productivity, and effectiveness in code reviews. This one is definitely in my top 10! If you have no good code review metric at hand, the worst thing you can do is come up with ANY metric just because you want to measure something and feel in control.

So much for my perspective. What do you think about it, and have you ever tried to measure code review effectiveness?

I prepared an exclusive Code Review e-Book for my e-mail subscribers to help you remember code review best practices. I also added other great insights and summaries about code reviews. Get the 20-page guide to code reviews now. Not a subscriber yet? Just sign up.

* I deliberately did not link the YouTube video or mention their name.

Dr. Michaela Greiler

I make code reviews your superpower.

2 thoughts on “Measuring Code Review Quality through PR Rejections”

  • November 30, 2020 at 16:58

    Great perspective. An underrated factor in getting a team to work together efficiently is that the environment is a pleasant one that individuals WANT to work together within. Love this article!

  • December 15, 2020 at 18:29

    This reminds me of a story—and I apologize that I don’t remember where I heard about it, and I probably also am remembering it completely wrong. In order to reduce its bug count, a software organization rewarded QA people for finding bugs and rewarded developers for fixing them. Very quickly, devs started to inject bugs into the code and then tell QA where to find them. Everyone went home happy.

    What you measure, that’s what you’re likely to get.

    I would be afraid that if we reward developers for rejecting PR’s, then the culture will start providing PRs that can be rejected on superfluous grounds, to make it look like everyone is becoming a much better code reviewer. But the end product won’t be any better, and business value will suffer.

