Valio.fi deep dive #3: Review tooling

After my previous post on review policy, let’s have a look at the tools we used for reviewing code.

I have found there to be two approaches to reviewing code. Let’s do a quick comparison first and discuss the tools of the trade next.

Patch-based review

The key concept here is that a developer prepares a set of changes and publishes it for review. The reviewer focuses on the changes (i.e. “the patch”).

Reviews typically happen through a system, anything from a unidiff file attached to a work item to a full web-based system.

Comments on the review are often formalized: documents, email messages or, if a specialized system is used, its comment entries.

Social review

The developer presents the code to the reviewer. Some do this in a meeting room where multiple people review simultaneously; most of the time it happens pairwise at the dev’s workstation.

Reviews tend to focus on code understanding as a whole, and discussion about the code happens in-review. Written documents are sometimes produced, but most organizations rely on verbal communication and the developer’s personal notes.

Mind you, this is not review theory, but a very important split when looking at the tools. Most projects can easily benefit from the social approach because it requires less tooling and technical sophistication. My experience is that neither catches everything, and the ideal approach might involve both – but reviewing everything twice is often too expensive in terms of time.

We did both, and the social one was easy

Social reviews happen naturally when a team communicates well. The rigidity of the team organization usually determines whether social reviews are specifically scheduled, whether comments and their responses are tracked, what kind of audit trail is required to accept a social review from a project management perspective, and so on.

We used a reasonably liberal system. We required reviews (see the previous post), but accepted social reviews just like the more formal, patch-based ones. Either form of review was sufficient, as long as the reviewer and the developer were comfortable with it. And we had no tools for social reviews: no document templates, no scheduling, no way to track the comments.

It worked for a reasonably well-bonded team of nine, and it might work for you. Don’t use lack of tooling as an excuse not to review socially; if tools turn out to be necessary for you, you’ll notice the need as you go.

Patch-based is a different beast

When you adopt patch-based reviewing, pay attention to the tools. I’ve done this for years with the pure unix diff/patch approach, and would not recommend it for development today. There are tools available, and looking at them is a good idea™. We used Crucible, but let me be crystal clear: We are tool pragmatists. We used Crucible because it was easily available, relatively cheap and worked for us. We don’t claim it was the best solution even for us, much less you. If you have the liberty of choosing your tools, do check it out – but also look at the competition.

But before you go on a tool comparison spree, let me remind you that your review abilities, processes and social capacity far outweigh your selection of tools in determining your success with code reviews. If you’re setting up a review system from scratch, you might spend a day finding the appropriate tooling – but spend four more thinking about what, how and why you are reviewing.

Enough with the intro, go Crucible

The rest of the post discusses our patch-based reviews and the use of Crucible (with which the repository browser FishEye is tightly integrated). First, I’ll walk you through the typical elements of a review, then discuss some of our experiences with the tooling.

As discussed before, we were in the post-checkin camp. Therefore, our reviews started with the developer committing the code to our version control (Subversion). We had Crucible and FishEye set up to poll the repository at regular intervals. Thus, on our Crucible web site, we continuously saw the stream of checkins listed.

[Screenshot: the stream of checkins on the Crucible site]
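For the curious, here’s a minimal Python sketch of the polling concept. Everything in it is made up for illustration – the repository URL, the interval, the bookkeeping – and it is certainly not how FishEye is actually implemented; it just shows the idea of turning svn log output into an activity stream.

```python
import subprocess
import time

REPO_URL = "https://svn.example.com/repo/trunk"  # hypothetical URL
POLL_INTERVAL = 60                               # seconds; made up

def fetch_log_since(revision: int) -> str:
    """Ask Subversion for the log of everything after `revision`."""
    result = subprocess.run(
        ["svn", "log", "--xml", "-r", f"{revision + 1}:HEAD", REPO_URL],
        capture_output=True, text=True,
    )
    return result.stdout  # empty if there are no new revisions

last_seen = 1234  # in real code, persist this between runs
while True:
    log_xml = fetch_log_since(last_seen)
    # ... parse the XML, bump last_seen and push the new checkins
    # into the review tool's activity stream ...
    time.sleep(POLL_INTERVAL)
```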

For any of the checkins, one could create a review and pick the desired reviewers. There are multiple ways to configure and use Crucible; we used an approach that forced the author to pick the reviewers, but the reviewers could ask more people to join. Although we encouraged an open review culture where anyone could review anything, we felt it necessary to identify the reviewers responsible – this gave everybody a clear idea of what’s on their plate (“Oh, I have these four things to review today”).

[Screenshot: creating a review and picking the reviewers]
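In code terms, the reviewer-picking policy we used boils down to something like the following hypothetical sketch. The names and structure are mine, not Crucible’s API:

```python
from dataclasses import dataclass

@dataclass
class ReviewRequest:
    """Hypothetical model of the policy -- not Crucible's actual API."""
    author: str
    changeset: str        # e.g. an SVN revision label such as "r4711"
    reviewers: set[str]

    def __post_init__(self):
        # Our configuration forced the author to pick at least
        # one reviewer when creating the review.
        if not self.reviewers:
            raise ValueError("the author must pick at least one reviewer")

    def join(self, invited_by: str, newcomer: str) -> None:
        # ...but the picked reviewers could ask more people to join.
        if invited_by not in self.reviewers:
            raise PermissionError(f"{invited_by} is not on this review")
        self.reviewers.add(newcomer)

# Usage: ReviewRequest("jouni", "r4711", set()) raises ValueError;
# ReviewRequest("jouni", "r4711", {"anna"}).join("anna", "ben") works.
```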

In Crucible, each review request contains the files to be reviewed. Since Crucible is integrated with the version control, these files are picked right from the source control tree, and can be either diffs (changes between two versions) or whole files. In either case, the files are shown in the browser, with the diff elements colored.

[Screenshot: a file diff with review coloring]
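Under the hood these are plain unified diffs. For illustration, here’s how you could produce one with Python’s standard difflib; the file contents and revision labels are made up:

```python
import difflib

old = ["def price(amount):\n",
       "    return amount * 1.22\n"]
new = ["VAT = 1.23  # VAT rate changed in 2011\n",
       "\n",
       "def price(amount):\n",
       "    return amount * VAT\n"]

# The changes between two versions of the same file, in unified
# diff format -- the "patch" a reviewer would be looking at.
for line in difflib.unified_diff(old, new,
                                 fromfile="pricing.py@r100",
                                 tofile="pricing.py@r101"):
    print(line, end="")
```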

The whole point of reviewing is to get feedback on the code. Crucible’s feedback system is based on two concepts: comments attached to a specific code block (one or more lines) and comments on the whole review. The former enables very targeted feedback. Crucible also allows replying to comments, generating nested comment/discussion trees.

[Screenshot: threaded review comments]
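To make the two comment concepts concrete, here’s a hypothetical model of such a comment tree. Again, this is my sketch, not Crucible’s actual data structure:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Comment:
    """A review comment; line_range is None for a comment on the
    whole review, or (first, last) for a code-block comment."""
    author: str
    text: str
    line_range: Optional[tuple[int, int]] = None
    replies: list["Comment"] = field(default_factory=list)

    def reply(self, author: str, text: str) -> "Comment":
        # Replies inherit the anchor of their parent, forming a
        # nested discussion tree under the original comment.
        child = Comment(author, text, self.line_range)
        self.replies.append(child)
        return child

# Usage: a comment on lines 42-48 and a reply under it.
root = Comment("anna", "This method needs error handling", (42, 48))
root.reply("jouni", "Fixed, see r4712")
```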

Side note: As you can see from the screenshots, we had English as the code language but Finnish as the working language, and reviews were conducted in Finnish. We didn’t find this to be a problem. For us, a code comment is documentation for the future developer (who potentially doesn’t know Finnish); a review comment is discussion within the team right now (which is totally Finnish).

We make no recommendation on this – apart from keeping the commenting/reviewing bar as low as possible. Finns generally comment more eagerly in Finnish, but your mileage may vary.

The review process is fairly simple: every requested reviewer goes through all the files, adds his comments and marks himself as complete. Once every reviewer has completed the review, the author goes through the comments, replies to them, fixes the code and finally closes the review. Rounds of discussion can ensue in between, but are often better handled outside Crucible (see below).
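The gating rule at the heart of that process fits in a few lines; a sketch with hypothetical names:

```python
def author_may_close(requested: set[str], completed: set[str]) -> bool:
    """The author may close the review only once every requested
    reviewer has marked their pass complete (a subset check)."""
    return requested <= completed

print(author_may_close({"anna", "ben"}, {"anna"}))          # False: still open
print(author_may_close({"anna", "ben"}, {"anna", "ben"}))   # True: may close
```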

Experiences and best practices

Keep your reviews small. In fact, keep your checkins small. If your checkins are small, your reviews are too. Fix one aspect of code in a single checkin, and then review that. Even though Crucible encourages the model of picking a source control checkin and clicking “Create Review”, you can and should split checkins for review. I’ve written about this in detail back in 2004, so why repeat myself?
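Crucible lets you pick individual files when creating a review, so splitting mostly happens by pointing and clicking. If you’re ever juggling raw patches instead, a unified diff also splits per file with a few lines of code – a rough sketch that assumes standard unified diff headers and ignores edge cases:

```python
def split_patch_per_file(patch_text: str) -> dict[str, str]:
    """Split a unified diff into per-file chunks, keyed by the
    file name in each '--- ' header, for separate review."""
    chunks: dict[str, list[str]] = {}
    current = None
    for line in patch_text.splitlines(keepends=True):
        if line.startswith("--- "):
            # '--- path/to/file<TAB>metadata' starts a new file section
            current = line[4:].split("\t")[0].strip()
            chunks[current] = [line]
        elif current is not None:
            chunks[current].append(line)
    return {name: "".join(body) for name, body in chunks.items()}
```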

Review layers of trust or reuse separately. For example, if you develop a feature that needs new general use tools you implemented in the tools library, consider reviewing the tools separately. Separate review of the tools forces more focus on the general usability of the tools as the time-pressed reviewer doesn’t have the inclination to just look at how they function at the specific call site. However, you don’t necessarily want to take this too far; reviewing user interface and related data access in one go is probably a good idea.

Review unit tests. If your code has unit tests, include them in the review – and review them. In fact, quite a lot of attention should be paid to the tests, as they are a good indicator of various code issues. The reviewer should understand the tests thoroughly and then analyze their coverage – are all the error paths reasonably tested? This is an excellent way to spot bugs that you would easily skip when just reading the actual implementation code.

Don’t make reviews your discussion board. Prefer short comments, debate face-to-face and then summarize the result in a short comment. Chatting through Crucible is inefficient, and while reviews can have considerable short-term documentary value, this value is not improved by including all discussion. Including the relevant results of a discussion provides the best balance.

Combine review methodologies to reduce pain. An author about to check in a batch of changes should consider ways to improve the review experience. Since diff-based reviewing is based on line-by-line change tracking, certain types of changes make diffs very hard to read. For example, renaming a member in project-wide use causes scattered diffs for hundreds, maybe thousands of code lines. Such a diff is very cumbersome to review properly (i.e. verify that nothing else has changed); it’s usually much better to do the rename together to eliminate the need for after-the-fact reviewing.

Don’t be afraid to take another spin. Every now and then, a review results in considerable code rewriting. This doesn’t mean the original code sucked; sometimes the review just reveals things that were not accounted for (a similar construct elsewhere in the project, a new requirement or whatever). If the code must change significantly because of the review, reviewers should be comfortable asking for another round (“make these changes and hit me with another review request”). Some may consider this unnecessary supervision and a lack of trust in the code author, but they are the ones who misunderstand the key reasons behind reviews. Review until you have confidence in the code being fixed properly.


Up next: A dive into the backend technology, for a change.

May 4, 2011 · Jouni Heikniemi · 2 Comments
Posted in: Misc. programming

2 Responses

  1. Lauri Peltonen - May 5, 2011

    Thanks for the great post, Jouni!

    I look at all this from the "junior's" point of view and see, above all else, a great method to support learning. I think I've learnt most when talking with the more senior guys at work. But without a process like review it happens rarely, and often only when I see there's a problem and ask for their opinion (and not when there's a problem that I don't see!).

    And in general, there aren't too many practices that promote talking about code. We are (I am, at least) too often tied to our keyboards solving our own problems, while growing as a programmer best happens when we discuss and get feedback.

    I really hope next time I'm thrown in at the deep end there's something like that to help me swim!

  2. Jouni Heikniemi - May 20, 2011

    Well, Lauri, you make a good point.

    While we didn't have a distinct senior/junior setting in the project, we definitely did benefit from the learning perspective as well: One thing even seniors cannot generally know about is the set of conventions and tools available. Either reviewing code using those tools – or getting yours commented for lack of tool use – is a great way to learn them very quickly.

    As promised, I'll post later on our most common review findings, and this'll be very obvious then.
