Best and Worst Practices in Unit Testing
Ok, so I had been in the process of writing a blog post to summarize my opinions on TDD etc., when this article popped up in my feed reader – it covers a good deal of what I had wanted to discuss, and is summarized in a very well constructed way. So rather than repeating everything, I’ll focus on a couple of the issues discussed that I feel are important.
The point I’d like to emphasize from the article, the one that truly strikes a chord with me, is:
a suite of bad unit tests is immensely painful: it doesn’t prove anything clearly, and can severely inhibit your ability to refactor or alter your code in any way
In fact, I can’t emphasize this point enough. A suite of bad tests inhibits change. Brittle unit tests, like brittle code, need to be refactored or replaced. I think the points made under the section “Name your unit tests clearly and consistently” are crucial here. Just today, for instance, I came across the following test methods in a class named DutyAssignmentChecks:
I was only looking at the test code to get a feel for the current behavior of the system; after all, tests should be executable specifications that define the behavior of our code – what better way to document the system! Looking at this “unit test” code, however, did not help one bit. Maintenance is hard when you don’t know what you’re maintaining. If I break these tests when modifying the original code base, what value do these tests provide? If I’m changing the expected behavior of the system, then breaking a test may be valid, but how am I to know this is the case if I can’t deduce what the expected behavior under test is! The article suggests using the subject, the scenario, and the result in the test name to clarify the behavior under test. This pattern of specification is the basis of what has evolved into the Behavior Driven Development technique coined by Dan North.
The style of test naming I prefer is as follows:
public class when_initializing_core_module : concerns
{
    SkynetCoreModule _core;
    ISkynetMasterController _skynetController;

    public void context()
    {
        // we’ll stub it…you know…just in case
        _skynetController = stub&lt;ISkynetMasterController&gt;();
        _core = new SkynetCoreModule(_skynetController);
    }

    public void it_should_not_become_self_aware()
    {
        _skynetController.should_not_have_received_the_call(x => x.InitializeAutonomousExecutionMode());
    }

    public void it_should_default_to_human_friendly_mode()
    {
        // more specifications under this same context
    }
}
I think this style of testing greatly improves the clarity and purpose of both the test and, therefore, of the subject under test.
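For readers outside the .NET world, the same context/specification shape can be sketched in plain Python using only the standard library’s unittest.mock. To be clear, this is my own illustrative translation, not the author’s actual tooling: SkynetCoreModule and its controller are hypothetical stand-ins mirroring the fictional names in the C# example above.

```python
# A rough sketch of the context/specification style in Python.
# SkynetCoreModule is a hypothetical subject under test; the collaborator
# is stubbed with unittest.mock rather than a dedicated BDD framework.
from unittest.mock import Mock


class SkynetCoreModule:
    """Hypothetical subject under test."""

    def __init__(self, controller):
        self._controller = controller
        self.mode = "human_friendly"  # a sensible default


class TestWhenInitializingCoreModule:
    def setup_method(self, method):
        # the "context": stub the collaborator, build the subject
        self.controller = Mock()
        self.core = SkynetCoreModule(self.controller)

    def test_it_should_not_become_self_aware(self):
        # equivalent of should_not_have_received_the_call(...)
        self.controller.initialize_autonomous_execution_mode.assert_not_called()

    def test_it_should_default_to_human_friendly_mode(self):
        assert self.core.mode == "human_friendly"
```

Run under pytest, the class and method names read as a specification: "when initializing core module, it should not become self aware" – the behavior is legible straight from the test report.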
The only point the article makes that I’d take with a small pinch of salt is in the section “Unit testing is not about finding bugs”. I think the author slightly underplays the effectiveness of unit tests at detecting regression bugs when you’re making any changes to existing code. He does mention that unit testing can be effective in finding bugs when refactoring, which I completely agree with – it’s this factor that makes refactoring confidently possible. I’d also argue, however, that it helps find bugs introduced when a developer extends the code base without following the Open/Closed Principle. In this scenario, the developer is modifying an existing code base to add a new feature or function. Although this scenario is undesirable (see the link to O/C P for details), and I would rather see the developers trained in good OO practice, in my experience this scenario is extremely common in teams without much experience in good OO development. Under these conditions, unit tests are still quite valuable in finding regression bugs.
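To make that scenario concrete, here is a minimal sketch of a regression test pinning existing behavior. The duty calculation and its rules are entirely hypothetical (loosely inspired by the DutyAssignmentChecks class mentioned earlier, not taken from it): the point is that if a developer later modifies calculate_duty in place to bolt on a new rate, rather than extending it, and accidentally breaks the exemption, these tests fail immediately.

```python
# Hypothetical example: tests that pin current behavior, so that an
# in-place modification (an Open/Closed violation) which breaks an
# existing rule is caught as a regression.

def calculate_duty(value, category):
    """Existing behavior: flat 10% duty; food is exempt."""
    if category == "food":
        return 0.0
    return value * 0.10


def test_food_is_exempt_from_duty():
    assert calculate_duty(100.0, "food") == 0.0


def test_standard_goods_pay_ten_percent():
    assert calculate_duty(100.0, "goods") == 10.0
```

Note that the tests say nothing about how the duty is computed internally – they only pin the observable behavior, which is exactly what makes them useful when someone reaches into the function to change it.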
What do you think?