Yet Another Form of Code Review

Mark Dalrymple

A couple of weeks ago Jay Hayes penned Code Review in Four Steps, about the way many teams at the Ranch approach code review.

One of the great things about working here (and we're hiring) is that teams are free to experiment, to do things in different ways and see how they work. I spend a lot of my time working with the Ranch's Products team helping to create end-user apps like eClicker and Roominant. Our team does ubiquitous code review using a pull-request-and-merge model for our day-to-day work.

How we do it

Each of our products has its own repository on GitHub. The master branch is the current version of the software, the idea being that we should be able to ship a release (or at least give a demo) at any time from the top of the tree. We use Pivotal Tracker to drive our development efforts. The workflow is:

  • Pick a story from Pivotal.

  • Create a branch off of master. We put the Pivotal story ID in the branch name for easier correlation later.

  • Work work work. (whistling optional)

  • Clean up the commit.

  • Submit a pull request (PR) back to master.

  • A teammate examines the PR, making comments if necessary, with some back and forth on what they found. This usually involves a couple of extra commits to address any problems.

  • When they deem it worthy, the teammate merges the PR.

  • The branch is deleted.
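In git terms, the steps above look something like this. (The story ID `12345678` and the branch name are placeholders; the PR itself is opened and merged through GitHub's web interface.)

```shell
# Start from an up-to-date master.
git checkout master
git pull origin master

# Branch off of master, with the Pivotal story ID in the branch name.
git checkout -b 12345678-add-room-filter

# Work work work, then commit and tidy the history before review.
git add -A
git commit -m "Add room filtering to the schedule view"
git rebase -i master          # squash fixup commits into a clean change

# Push the branch and open a pull request against master on GitHub.
git push -u origin 12345678-add-room-filter

# After the teammate merges the PR, delete the branch locally and remotely.
git checkout master
git pull origin master
git branch -d 12345678-add-room-filter
git push origin --delete 12345678-add-room-filter
```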

It's a pretty basic git workflow. Rather than get into the details of what to type and what to click, I want to talk about why I like it so much.

Ubiquitous Code Review

I used to write Mac software for a large search company. We had a culture of ubiquitous code review, where we could not "land" (commit) any code without someone else's eyes on it. There were a couple of small exceptions, such as emergency fixes, and an experimental directory you could use for tests and throwaway code. An emergency fix was basically "I'm landing this now because systems are melting, and it will be reviewed sooner rather than later." There were automated tools that made sure that the change got reviewed, lest it get escalated higher up in the management chain.

I was really nervous when I started there, thinking, "How did I get hired? My code will suck. Day-to-day work will be nothing but horror. What have I gotten myself into?"

After my first dozen or two reviews, I came to love it. Real problems in my code surfaced. I learned how to do things better. I even found legitimate problems in the code of my more experienced and famous coworkers.

The nice thing about everything going through review is that there's nothing special about it anymore. It's just part of the workflow. Everybody does it (and has to do it), so your workflow takes review into account.

Obviously there could be delays in your code getting reviewed because other people are involved. If you're making structural changes to a subsystem, you will want the owner of that system to review your changes so you don't do something stupid. They may be busy right now. People take vacations, or perhaps live on the other side of the planet or up on the moon base (Oops, sorry. NDA.)

To avoid thumb-twiddling downtime, I would usually have two or three different features (or bugs) in flight. One feature would be nearing completion and about to go into review, while another had some work done on it but wasn't nearly as far along. I'd send out the review request once the first feature was ready and then concentrate on the second feature. Once my first feature was landed, I'd pick up a new one. This is essentially a pipeline of work. As with any pipelined architecture, you plan for the inevitable pipeline stalls. If I was totally blocked (which rarely actually happened), I'd go work on a side project, read some docs, or volunteer to take on some reviews for anyone who was falling behind.

Review Etiquette

In retrospect, my fear was caused by some bad experiences I had with nitpicking code reviews on a contract gig. That organization got a group of folks together in a stuffy meeting room, put the code up on a projector, and everyone there tore into it for an hour or two. That was demoralizing. Sometimes things turned nasty and personal, especially when internal power politics got involved. Not a good scene.

Code review in a healthy environment was a joy. Everyone involved realized that we all make mistakes. Because we were all simultaneously reviewees and reviewers, we tended to not get petty or hostile. It was a culture of mutual respect. Over time you'd learn the strengths and weaknesses of your coworkers. I was very good at finding race conditions in concurrent code, but I was generally really bad (that is, sloppy) with my memory and resource management. Concurrency-related reviews came my direction, and I sent reviews that might have resource issues to someone I knew was awesome at finding my problems. Through this review process I picked up a lot of great little tactical techniques, as well as having my understanding of the language and environment challenged and expanded.

One incredibly cool thing is it allowed for collaboration and education over time and space. I worked on a Python side-project with a fellow Mac head. We wanted to do something real in Python, and we made an opportunity to fill a hole in our product line. We found a Python expert in Australia who wanted to use what we were making. The two of us locals would implement some features, write our tests, and review each other's code in terms of correctness for what the product needed. Then we'd toss the review over the ocean to the expert, who would take the time to wade through our C accents and tell us the idiomatic and efficient way to do things. A couple of rounds later we would have updated the code to make it more Pythonesque, and then we'd get the go-ahead to land it. Learning abounds.

Product Reviews

Most programming shops don't have an existing cultural commitment to code review, which can make it hard to adopt in an existing organization. Having soaked in a code review culture for nearly five years, I really missed it when I left and started solo projects. It makes me happy that the Ranch Products team has embraced it.

One nice side effect of our pull-request-and-review workflow is that the pull request history provides a good overview of the mutations to your product. Pull requests are coarser-grained than raw commits, so you're not lost in the details of many small, targeted changes. For example, here's the last week or so's history of Roominant:
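You can get a similar coarse-grained view from the command line, assuming each PR was merged with a merge commit (as GitHub's merge button creates):

```shell
# Follow only master's own first-parent history, so each line is
# roughly one merged pull request rather than its individual commits.
git log --first-parent --merges --oneline master
```

Dropping `--merges` also shows any changes committed directly to master, while `--first-parent` still keeps the PR-level granularity.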

[Screenshot: Roominant pull request history]

Just your usual mix of bug fixes, new features and Easter eggs that you see with a maturing code base. The Product team has a number of engineers, and we switch gears on an as-needed basis to whatever product needs the attention. Having a granular history of changes is great for getting up to speed on what's happened in a code base since you last saw it two or three weeks ago. But that's just a happy side effect of the whole code review process.

I believe that we (as an industry) are still figuring out how to do this whole software development thing. Code review allows multiple people with different perspectives to look at the new legacy code we're creating, and hopefully make it better.
