The Unreasonable Effectiveness of TDD

Bree Stanwyck

In 1960, Eugene Wigner published a paper titled "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." In it, Wigner discusses one of the thorniest and most fundamental questions in physics: Why does much of (apparently totally abstract) mathematics later end up applying so well to physics?

He states, "mathematical concepts turn up in entirely unexpected connections. Moreover, they often permit an unexpectedly close and accurate description of the phenomena in these connections." The paper drew a huge number of responses, up to and including the claim that this is the case because the universe itself is a Platonic mathematical object whose properties we are discovering over time.

I was reminded of all this by a talk shown at our weekly Lunch and Learn, "The Deep Synergy Between Testability and Good Design" from Michael Feathers (it even has a similar title!). Michael gives a few great, concrete examples of how "hard to test" implies "poorly designed." For example, he describes the common pain of "I wish I could test this private method" as a hint to extract another class from an "iceberg class" full of private logic.
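As a hypothetical sketch of that refactoring (the class and method names here are my own, not from the talk), the extraction might look like this in Python: the private logic moves into its own small class, where it becomes public and directly testable.

```python
# Before: an "iceberg class" -- one public method above the waterline,
# a mass of untestable private helpers below it.
class InvoiceProcessor:
    def process(self, invoice):
        total = self._sum_line_items(invoice)
        return self._apply_discount(total, invoice["customer_tier"])

    def _sum_line_items(self, invoice):
        return sum(item["price"] * item["qty"] for item in invoice["items"])

    def _apply_discount(self, total, tier):
        return total * (0.9 if tier == "gold" else 1.0)


# After: the hidden logic is extracted into a collaborator whose methods
# are public, so each piece can be tested on its own.
class PriceCalculator:
    def sum_line_items(self, items):
        return sum(item["price"] * item["qty"] for item in items)

    def apply_discount(self, total, tier):
        return total * (0.9 if tier == "gold" else 1.0)


class SlimInvoiceProcessor:
    def __init__(self, calculator=None):
        self.calculator = calculator or PriceCalculator()

    def process(self, invoice):
        total = self.calculator.sum_line_items(invoice["items"])
        return self.calculator.apply_discount(total, invoice["customer_tier"])
```

The urge to test `_apply_discount` directly was the design hint; after the extraction, the urge is satisfied without touching anything private.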

Why TDD is unreasonably effective

Michael left partially open the question of why testing pains so frequently indicate design problems, or why TDD leads to better design, which got me thinking about "Unreasonable Effectiveness." I don't have any grand unified theories of code to propose, but the question is an interesting one. It seems obvious that tests prevent regression and help ensure the correctness of your software, but why should testing improve design?

My answer is something like: writing tests forces you to use your code as though you were already maintaining it. It surfaces, before the code is even pushed, design pains that might otherwise take weeks or months to appear.

Testing also forces the programmer to act as a client of their own code, rather than as someone with intimate knowledge of its interior workings. It's easy, for example, for a class to accrue more and more direct dependencies on other classes over time, slowly becoming a god object that's a mess to maintain. Using the object when the setup has already been done by previous code can hide the problem. But unit testing a class with an enormous number of dependencies requires setting up (or at least stubbing out) every dependency, which quickly becomes a pain. So, writing tests in this case divorces the class from the context hidden by other code and brings those dependency problems to light.
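To make that concrete, here's a hypothetical Python class of my own invention (not an example from the talk) that has quietly accrued five collaborators. The test can't even construct it without stubbing all of them, and that setup boilerplate is itself the design feedback.

```python
from unittest.mock import Mock

# A class that has accrued direct dependencies over time.
class OrderHandler:
    def __init__(self, db, mailer, payments, inventory, audit_log):
        self.db = db
        self.mailer = mailer
        self.payments = payments
        self.inventory = inventory
        self.audit_log = audit_log

    def place(self, order):
        self.inventory.reserve(order)
        charge = self.payments.charge(order)
        self.db.save(order)
        self.mailer.send_receipt(order)
        self.audit_log.record("order_placed", order)
        return charge

# Just getting an instance under test means faking every collaborator,
# even though this test only cares about the payment.
def test_place_charges_the_order():
    payments = Mock()
    payments.charge.return_value = "charged"
    handler = OrderHandler(
        db=Mock(), mailer=Mock(), payments=payments,
        inventory=Mock(), audit_log=Mock(),
    )
    assert handler.place({"id": 1}) == "charged"
```

In production code, framework setup or a shared constructor might do all this wiring once and hide the sprawl; the unit test has no such cover, so the dependency count stares you in the face.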

Of course, you can "cheat" your way out of testing pains and still end up with badly-designed code. Michael Feathers demonstrates the cheat-y solution to his "iceberg class" example: just take the private methods you wish you could test and make them public. Even then, testing can act as a barometer of code quality. In the case of the iceberg class, the unit spec will grow large and unwieldy from all the private logic that needs testing being stuffed into it.

All this seems (to me) to imply that "real" TDD, writing tests first, isn't strictly necessary for reaping those design benefits. Test-first coding just forces the client perspective. With no written code, the programmer is free to focus on the way code will ideally be used and maintained, rather than considering it guts-first.
