Principles for Code Reviews and Reviewers

Last month I was blogging about how code review is dead. Today I want to share some principles and practices to make things better. In the previous post I shared some actions; now let's understand the principles behind many of those actions.

First of all, can we save code reviews and make them better? For sure we can. It's not that hard, but for code reviews to improve we need a couple of things in place. Engineers usually have the same complaints: it takes a long time to do code reviews, there are too many files, I don't know this code or this service, and so on. The fact is, there are multiple mindset problems here.

Principles behind code improvements

Code will not get better just by addressing style concerns. Most code reviews focus on the wrong things. What we need to keep in mind is that we want code that is easier to understand and maintain, and that allows us to change and evolve the system over time.

Stylistic changes alone don't add much value. The basic principle behind everything must be better design. A great design improves maintainability and allows us to work in a sustainable way. Improving the design should be the most important goal of a review.

Tests are very important. Testing gives us confidence and drives fear out. The biggest fear engineers have is breaking somebody else's code, generating a critical bug that makes the system stop working in mysterious ways. This fear kills the system slowly, and we need to fight it.

Refactoring is perhaps one of the most effective ways to fight that fear. However, you won't feel confident refactoring if there are no tests, if the tests are poorly written, or if we don't have decent coverage. So test coverage needs to increase over time, and this needs to be measured; XP teams track tests with big visible charts. Besides the charts, a killer tool to improve tests is the code review itself: we need to make sure tests are improving while we review. Do not merge PRs without tests, no matter the pressure you think you have.
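To make this concrete, here is a minimal sketch (my own example, not something prescribed above) of the kind of test that buys you the confidence to refactor; the DiscountCalculator class and its rule are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical example: a tiny pricing rule pinned down by tests so it can be refactored safely.
class DiscountCalculator {
    double discountFor(int loyaltyYears) {
        // Current behavior we want to preserve while refactoring.
        return loyaltyYears >= 5 ? 0.10 : 0.0;
    }
}

class DiscountCalculatorTest {
    private final DiscountCalculator calc = new DiscountCalculator();

    @Test
    void newCustomerGetsNoDiscount() {
        assertEquals(0.0, calc.discountFor(0), 0.0001);
    }

    @Test
    void loyalCustomerGetsTenPercent() {
        assertEquals(0.10, calc.discountFor(5), 0.0001);
    }
}
```

With the current behavior pinned down like this, a reviewer can ask for a refactoring without fearing a silent regression.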

Better software design, better tests, and refactoring all work together; however, that covers only one state of the system. I believe systems have two states: development and production. Everything we have covered so far is about development, but we need to make sure we improve things in production as well. To improve things in production we need to know what's going on with our systems, and that leads us to observability.

Observability is a form of testing. When we improve observability, we better understand what's going on with our systems in normal scenarios, failure scenarios, gray failure scenarios, and many other chaotic circumstances. By doing so we can operate our systems better, deliver a better user experience, and be happier maintaining our systems.

Code reviews need to look into this dimension as well. During a review, we can identify key metrics that should be exposed but are not, missing alerts, alerts without owners, missing dashboards, missing automation, and so on. All of these are good things to check during code reviews.
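As an illustration of the kind of instrumentation a reviewer might ask for, here is a sketch assuming Micrometer as the metrics library (the post doesn't prescribe one); the service and meter names are made up:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

// Hypothetical payment service: exposes a counter and a timer so failures
// and latency are visible on dashboards and can drive alerts.
public class PaymentService {

    private final Counter failedPayments;
    private final Timer paymentLatency;

    public PaymentService(MeterRegistry registry) {
        this.failedPayments = Counter.builder("payments.failed")
                .description("Payments that did not complete")
                .register(registry);
        this.paymentLatency = Timer.builder("payments.latency")
                .description("End-to-end payment processing time")
                .register(registry);
    }

    public void process(Runnable payment) {
        try {
            paymentLatency.record(payment); // times the whole payment call
        } catch (RuntimeException e) {
            failedPayments.increment();
            throw e;
        }
    }
}
```

Spotting a critical path with no counter or timer like these is exactly the observability gap worth flagging in a review.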

At the end of the day, it's all about excellence. In my pyramid you can see there is some space left with "...", perhaps one of the learnings from watching Ted Lasso. We need to leave space for improvement; if I gave you a pyramid with all the answers, it would be wrong for a couple of reasons:

  • No person has all the answers
  • We do not want to create a static culture; things need to be dynamic and always improving
  • It invites and sparks curiosity, so you think of other key principles we might be missing here
  • Maybe your team has created its own pyramid and thinks something is missing

Principles behind improving the reviewer

To have better reviews, we need better reviewers. How can we do a better job reviewing code without becoming better engineers? The two things go together. One reviewer might do a much better job than another; you need to ask yourself why. Often one person is more senior than the other and has different experiences, so how can we spread that knowledge around?

We need to teach people what to look for when they are doing code reviews. IMHO one of the best tools for this is a checklist. I'm a big fan of big checklists with hundreds of items. Right away someone will say: oh, it's too big, it takes a lot of time, why don't we automate it?

It's too big and takes a lot of time: are we reviewing it or not? It takes the time that it takes. This is an investment, not a waste of time. After all, poor code generates bugs that disrupt the user experience, and then you will regret not spending that time doing a proper review.

Why don't we automate it all? Sometimes it's not so easy to automate. The question you need to ask yourself is: who has the knowledge? You, the automation script, or ChatGPT? We need to improve the reviewer; the reviewer needs to be trained to look at things they were not looking at before.

All engineers who open PRs and do code reviews should use the same checklists and post the checklists and comments in the PRs. By doing so we can see what's going on. In order to improve anything, we need discipline; nothing good comes without hard work and discipline.

Posting checklists and comments on PRs allows us to share the PR link with other engineers so they can draw their own conclusions. It's a great way to debug the process and see whether it is improving over time or not.
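For illustration only, here is a small sketch of how a checklist could be rendered as a GitHub task list to paste into a PR comment; the ReviewChecklist helper and its items are hypothetical, not part of any tool mentioned here:

```java
import java.util.List;

// Hypothetical helper: renders review checklist items as a GitHub task list
// that a reviewer can paste as a PR comment, making the review visible.
public class ReviewChecklist {

    record Item(String category, String text) {}

    static String toPrComment(List<Item> items) {
        StringBuilder sb = new StringBuilder("### Code Review Checklist\n");
        for (Item item : items) {
            sb.append("- [ ] ").append(item.category())
              .append(": ").append(item.text()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Item> items = List.of(
                new Item("Design", "Does the change keep responsibilities well separated?"),
                new Item("Tests", "Are new behaviors covered by tests?"),
                new Item("Observability", "Are key metrics, alerts, and dashboards in place?"));
        System.out.println(toPrComment(items)); // paste the output into the PR
    }
}
```

The rendered output shows up as checkboxes in the PR, so anyone following the link can see what was actually reviewed.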

Visibility is a key principle. We can't improve problems that are hidden; evil can only be effective if it is hidden. So the first step to fix something is to make it visible. Thanks to checklists and comments on PRs, we can see what reviewers are doing and give them feedback.

The next principle is collaboration: checklists should be public and have collective ownership (another XP principle). You should consider keeping the checklists in a GitHub repository so all developers can open PRs and discuss the items; over time, items should come and go.

All of this leads to excellence and better reviewers. As you can see, Ted Lasso again: I'm not giving you the full pyramid, because your team should be discussing it and keeping it alive, making it better and better as time passes.

Continuous Improvement

We should never stop; one thing leads to another. We have a system whose elements feed back into each other in a never-ending continuous improvement cycle. Code reviews make the code better. Checklists allow us to apply discipline and therefore improve both the code review and the reviewer. During regular retrospectives, we assess and re-evaluate what we are doing and improve the checklists and the reviewers again. It's a never-ending cycle that results in better code and better digital products.

As time passes, teams will learn and improve; it could be that a checklist is not needed anymore. But one thing we need to keep in mind is that teams keep changing. People come and go, so team maturity is never constant unless you can keep the same people on the team for a long time. Eventually, things change, so this should and needs to be a never-ending process.

Cheers,

Diego Pacheco
