Red, Green, Refactor – The Tools For Success
It’s easy to say “We’re agile” and “We use Behavior/Test Driven Development” and thus “we use the right tools to empower our developers!” But what are those tools? For me that discussion is entirely about the tool stack you choose, and how that stack empowers you as a developer to do things right the first time. Luckily, thanks to the Ruby community as a whole, we have a large number of high-quality options to choose between.
Generally when we talk about TDD and being agile, beyond the process of choosing features to deliver the maximum customer-facing value, we refer back to the simplest possible workflow of “Red/Green/Refactor.” Within that workflow you write tests that fail, make them pass (with the focus on getting that done without polishing), then refactor, polishing where necessary. Testing is a huge subject with a large body of prose already written about it; Rails Test Prescriptions and The RSpec Book are fine examples, along with a huge menagerie of blog posts, podcasts, and screencasts to choose from.
I propose a slightly different day-to-day workflow that I see used to great success.
- Hack around and choose your course
- Red – write some tests that fail but will pass when your code meets your needs
- Green – the code is written, the tests pass, life is good.
- Refactor – Don’t Repeat Yourself (DRY your code out). Or don’t, if Ya Ain’t Gonna Need It (YAGNI).
- Continuous Integration
- PANIC OVER SECURITY
- Ride Bikes
This isn’t the “pure” TDD process; it just adds a couple of facets that are already widely used but don’t fall neatly into the pithy “Red/Green/Refactor” mantra.
Ruby has a functional-programming feel with its composition of methods, its convention of distinguishing methods with side effects from those without (via the ! suffix), and the REPLs it ships with: IRB, or the new awesomeness that is Pry. This lets us as developers spend some time touching code and experimenting before we decide exactly how we might like a method to operate, and thus before getting into the main development workflow for a given feature.
Tools to support step 0:
- IRB – you already know about this one, but it’s easy to overlook.
- Pry – the new slickness that is one part IRB replacement, one part debugger, and a pinch (or three) of awesome. Watch the screencast to get a feel for it!
- good-ole-pencil-and-paper, a whiteboard, some origami paper
- whatever tool gives you a feel for the right way to proceed without getting trapped in analysis paralysis.
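The step-0 experiment is usually just a few throwaway lines in the REPL. A hypothetical session (the data and method chain are invented for illustration) might look like this — poke at the standard library until the shape of the code you want becomes clear:

```ruby
# A throwaway IRB/Pry-style experiment: try a couple of approaches
# before committing to one in real code.
steps = ["red", "green", "refactor"]

# Approach 1: capitalize and join
steps.map(&:capitalize).join("/")   # => "Red/Green/Refactor"

# Approach 2: number the steps instead
steps.each_with_index.map { |step, i| "#{i + 1}. #{step}" }
# => ["1. red", "2. green", "3. refactor"]
```

Nothing here survives into the codebase; the point is cheap exploration before you commit to writing the failing test.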
Once you’re ready to write some tests that will fail, you likely already have a test framework of choice, and even a workflow for running those tests. You may run them from within your editor, have a terminal window ready, willing, and able to run them with a single command (and a shortcut into your history like ctrl-R), or you may let something else automagically run them for you and report back.
At Highgroove many of us find a combination of running tests manually and automatic test running and reporting to be most useful. To accomplish this I use the testing framework selected for the project along with Spork and Guard. Spork is a test server that keeps your Rails code loaded and ready to be tested, rather than reloading it on every rake task run; it should be your first step to speeding up slow-running test suites. Guard is Spork’s right-hand man. It watches your code for changes on disk and automagically runs the correct associated tests. Guard will even notify you via Growl, Libnotify, or your other notification framework of choice that you’ve broken things, or that everything is still hunky-dory and you should keep hacking away and/or ship!
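As a sketch of how the two fit together, a Guardfile along these lines (assuming the guard-spork and guard-rspec gems; the exact watch patterns are illustrative and should be adjusted for your project) keeps Spork loaded and funnels changed files to RSpec over DRb:

```ruby
# Guardfile (sketch)

guard 'spork' do
  # Restart the Spork server when the test environment itself changes
  watch('config/application.rb')
  watch('Gemfile.lock')
  watch('spec/spec_helper.rb')
end

guard 'rspec', :cli => '--drb' do
  # Run a spec when it changes...
  watch(%r{^spec/.+_spec\.rb$})
  # ...and run the matching spec when the corresponding app code changes
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
end
```

The `:cli => '--drb'` option is what tells RSpec to send its runs to the already-loaded Spork server instead of booting Rails from scratch each time.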
When I run tests manually I often want to run a single test or a small set of tests. Using RSpec 2 you can accomplish this with tags, or by calling a specific spec by line number via
bundle exec rspec <path to spec>:<line no. of spec to run>
Testing is not only a matter of making your tests all turn green. Poorly written tests can certainly be green and still not indicate that your code is working as intended. Perhaps you simply forgot to include any assertions (I’ve certainly never done that), or you’re testing your factories rather than your models (never ever again). Code quality tools can help with this too, though it’s not entirely clear whether they fit in the “Red/Green” phase of development or in the “Refactor” phase. I like to split the playing field: code-coverage tools belong in the red/green phase, since they check the quality of your tests, which live firmly in “red/green,” while other code quality tools (such as static analysis of cyclomatic complexity) belong in the “Refactor” phase. We’ve discussed code coverage in Ruby before, so have a look there for a refresher; if you’re not using one of these tools it may be time to roll one into your workflow. cover_me even produces a nice post-test HTML coverage report to show you how well you’ve done. This allows a very tight development loop to make sure the tests are covering the right things the first time.
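For a sense of how little wiring this takes, coverage setup is typically a couple of lines at the very top of your test helper. This sketch assumes SimpleCov on Ruby 1.9; cover_me and rcov have analogous setups:

```ruby
# spec/spec_helper.rb — coverage must start before any application code is
# loaded, or those files won't be instrumented.
require 'simplecov'
SimpleCov.start 'rails' do
  add_filter '/spec/'   # don't count the specs themselves as covered code
end

# ...followed by the usual requires, e.g.:
# require File.expand_path('../../config/environment', __FILE__)
# require 'rspec/rails'
```

After a test run the tool writes an HTML report you can open in a browser to see exactly which lines your green suite never exercised.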
Tools to support steps 1 and 2:
- Testing frameworks: MiniTest, Test::Unit, RSpec, Cucumber, etc.
- Spork and Guard
- Coverage tools: cover_me, SimpleCov, rcov (for Ruby 1.8 apps)
So you’ve got good tests, good test coverage, everything is green, and you’re ready to refactor. If you’re struggling to start the refactoring process, then either the code is good enough as it is or you need some inspiration. Sometimes your code just doesn’t need to be refactored: it’s easy to fall into premature optimization and abstract/refactor/clean your way into a corner that you’ll undoubtedly have to undo later when the customer asks for more or different flexibility. So delay refactoring if there isn’t an obvious need. When you do need hints as to where to refactor, the Rails Best Practices site has you covered.
There is no replacement for a good set of eyes on the code and intuition as to where you can refactor. To this end Highgroove’s internal code reviews go a long way to getting our code refactored quickly and elegantly.
Since we are not constantly reviewing or pairing, there are some tools to help us along. These include the excellent Flog and Flay, a pair of tools that find complex code and duplicated code respectively (with an interestingly macabre theme), and Excellent, which produces warnings about “smelly” code via static analysis. Finally we have the big kahuna: Rails Code QA, which wraps up other code quality packages (including Flog and Flay) into a single rake task that does it all.
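As a hypothetical illustration of the kind of code these tools flag, here’s a small method whose nested conditionals would score badly on a complexity metric like Flog’s, refactored into a flat lookup. The method names and prices are invented for the example:

```ruby
# Before: nested conditionals drive the complexity score up.
def shipping_cost_before(weight, express)
  if express
    weight > 10 ? 25 : 15
  else
    weight > 10 ? 10 : 5
  end
end

# After: the same decision expressed as a flat lookup table,
# keyed by [express?, heavy?].
SHIPPING_COST = {
  [true,  true]  => 25,  # express, heavy
  [true,  false] => 15,  # express, light
  [false, true]  => 10,  # standard, heavy
  [false, false] => 5    # standard, light
}.freeze

def shipping_cost(weight, express)
  SHIPPING_COST[[express, weight > 10]]
end
```

The behavior is identical, but the pricing rules are now data rather than control flow, which is both easier to read and cheaper on the complexity score.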
Tools to support step 3:
- the Rails Best Practices site
- Flog and Flay
- Excellent
- Rails Code QA
At this point you have your local code automatically being tested, you’ve run the tests you need to focus on, you’ve refactored with the help of tools, the rest of your development team, and the community’s collective knowledge. If you are working on a feature branch, that branch needs to be merged into a staging or production branch, and hopefully tested before being pushed to client or customer facing environments. This is where Continuous Integration (CI) comes in.
A CI environment is constantly running all of your tests on whatever branches it needs to in order to make sure that any merge artifacts don’t make their way into production. You may choose to run your own CI environment via Jenkins or Goldberg, both of which are lightweight and well supported, or you may choose to “outsource” your CI to a service. Travis-CI has gained a large following in the Ruby community thanks to its super-simple setup, integration with GitHub, and quality service. I was recently introduced to Tddium by a co-worker while struggling to set up an internal CI system for a customer. It looks very well backed and supports pretty much all of the services we employ day to day. If you don’t want to do operations work to keep your CI up and running, then looking to one of the latter services is a great route.
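To give a sense of how simple the hosted route can be, a Travis-CI configuration for a Ruby project can be as small as a few lines of .travis.yml. The Ruby version and rake task here are assumptions; adjust them to your project:

```yaml
# .travis.yml
language: ruby
rvm:
  - 1.9.2
script: bundle exec rake spec
```

Push that file to your repository, enable the project on the service, and every push and pull request gets a full test run without you maintaining any CI infrastructure.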
Tools to support step 4:
- Jenkins
- Goldberg
- Travis-CI
- Tddium
Great! Your code is tested locally and in CI, everything looks solid in terms of functionality, and Rails automagically does everything right for security. Time to go ride bikes, right? Well, maybe not. There is a comprehensive and excellent Rails Security Guide precisely because it’s not so simple. This isn’t to say that Rails security is as hard as it could be, but as with everything, getting it right requires knowing and following the best practices developed by the community.
Luckily for us the Brakeman Scanner project integrates beautifully with the tight testing workflow described above. Brakeman is a static-analysis-based, security-focused scanner aimed at Rails applications. It is under active development and the progress has been phenomenal. We did a brief tech talk about it recently, and to say we are enamored of it may be an understatement. Even a couple of the bugs described in the tech talk have been fixed since then! Combining Brakeman’s fantastic scanner with our automated testing tools works out of the box in two ways. First, Brakeman’s excellent reporting could certainly be wrapped in a rake task to produce a cover_me-style report after each test run. Second, guard-brakeman provides local scanning and testing, plus notifications via your system notifier when new security issues are introduced, just as if a test had failed. For CI environments, Brakeman works with Jenkins out of the box, and adding support to other CI environments should be a fairly simple operation.
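Hooking this into the Guard workflow from earlier is just a few more lines in the Guardfile. This is a sketch assuming the guard-brakeman gem is installed; the watch patterns are illustrative:

```ruby
# Guardfile addition: re-run the security scan whenever
# application or configuration code changes.
guard 'brakeman' do
  watch(%r{^app/.+\.rb$})
  watch(%r{^config/.+\.rb$})
end
```

With this in place, a newly introduced SQL injection or mass-assignment hole surfaces in your notifier seconds after you save the file, in the same feedback loop as a failing spec.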
Tools to support step 5:
- Brakeman Scanner
- guard-brakeman
Making your development and testing loop as tight as possible allows you to have confidence that your code has been written in a process that promotes testing, coverage, and automated quality tests so that you can focus on what’s important: shipping code and riding bikes.