
The death of mocks?

There’s been a lot of healthy discussion happening on the interwebs recently (actually, a couple of weeks ago – it’s taken far too long for me to finish this post!) regarding the transition away from using mocks within unit tests. The principal cause for this transition, and the primary concern, is that when mocking dependencies to supply indirect inputs to our subject under test, or to observe its indirect outputs, we may inadvertently leak the implementation detail of our subject under test into our unit tests. Leaking implementation details is a bad thing, as it not only detracts from the interesting behaviour under test, but also raises the cost of design changes.

Where is the leak?

Jimmy Bogard highlighted the aforementioned issues with his example test for an OrderProcessor:

[Test]
public void Should_send_an_email_when_the_order_spec_matches()
{
    // Arrange
    var client = MockRepository.GenerateMock<ISmtpClient>();
    var spec = MockRepository.GenerateMock<IOrderSpec>();
    var order = new Order {Status = OrderStatus.New, Total = 500m};

    spec.Stub(x => x.IsMatch(order)).Return(true);

    var orderProcessor = new OrderProcessor(client, spec);

    // Act
    orderProcessor.PlaceOrder(order);

    // Assert
    client.AssertWasCalled(x => x.Send(null), opt => opt.IgnoreArguments());
    order.Status.ShouldEqual(OrderStatus.Submitted);
}
 

The example test exercises an OrderProcessor and asserts that when a large order is placed, a sales person is notified of the large order. In this example, the OrderProcessor takes dependencies on implementations of IOrderSpec and ISmtpClient. Mocks for these interfaces are set up in such a way that they provide indirect inputs to the subject under test (canned responses) and verify indirect outputs (asserting that the methods were invoked).

Since the behaviour (notification of large orders) can be satisfied by means other than consuming the IOrderSpec and ISmtpClient dependencies, coupling our unit test to these details creates additional friction when altering the design of the order processor. The bottom line is that refactoring our subject under test shouldn’t break our tests because of implementation details that are unimportant to the behaviour under test.
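
To make that friction concrete, here’s a hypothetical refactoring (the INotificationService name is mine, not from Jimmy’s post): the observable behaviour is untouched, yet the mock-based test above no longer compiles, because it constructs the old dependencies.

// Hypothetical refactoring: a single notification policy replaces the
// IOrderSpec + ISmtpClient pair. Sales are still notified of large orders,
// but the original test breaks because it stubs the old collaborators.
public interface INotificationService
{
    void NotifyIfLarge(Order order);
}

public class OrderProcessor
{
    private readonly INotificationService notifications;

    public OrderProcessor(INotificationService notifications)
    {
        this.notifications = notifications;
    }

    public void PlaceOrder(Order order)
    {
        notifications.NotifyIfLarge(order);
        order.Status = OrderStatus.Submitted;
    }
}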

One step forward – test focus

To avoid leaking implementation detail within our unit tests, tests should be focused on one thing: verifying the observable behaviour. Context-Specification style testing, BDD, and driving tests top-down can be applied to focus our tests on the interesting system behaviour. Taking this approach into account for the OrderProcessor example, the core behaviour may be defined as “Notifying sales when large orders are placed”. Reflecting this in the name of the test may give us:

public void Should_notify_sales_when_a_large_order_is_placed()

This technique provides a test that is more focused towards the desired behaviour of the system; however, the test name is just the first step – this focus must also be applied to the implementation of the test, such that it isn’t coupled to less interesting implementation details.

So how do we execute the subject under test while keeping the test free of this extraneous detail?

Two steps back? Containers, Object Mother, Fixtures and Builders

Subsequent to his previously mentioned post, Jimmy alludes to using an IoC container to provide “real implementations” of components (where these components sit within the same level of abstraction and do not cross a process boundary or seam). Jimmy also mentions using patterns such as an object mother, fixtures or builders, along with “real implementations”, to simplify the creation of indirect inputs.
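
For context, a builder in this style usually looks something like the sketch below (the OrderBuilder is my own illustration, not code from Jimmy’s post); it hides the uninteresting construction of an Order behind sensible defaults, so a test only states the values it actually cares about.

// Hypothetical test data builder: sensible defaults plus a fluent API,
// so tests only mention the values relevant to the behaviour under test.
public class OrderBuilder
{
    private decimal total = 100m;
    private OrderStatus status = OrderStatus.New;

    public OrderBuilder WithTotal(decimal value)
    {
        total = value;
        return this;
    }

    public OrderBuilder WithStatus(OrderStatus value)
    {
        status = value;
        return this;
    }

    public Order Build()
    {
        return new Order { Total = total, Status = status };
    }
}

// Usage within a test:
// var largeOrder = new OrderBuilder().WithTotal(500m).Build();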

At this point, my code-spidey-sense started tingling, and I know I’m not the only person who had this reaction.

My concern here is that we’re pinning the blame on mocks for stifling the agility of our design, but I believe there are other factors at play. Replacing 20 lines of mock setup for an uninteresting concern with 20 lines of data setup for the same uninteresting concern is obviously not the way forward, as it doesn’t address the underlying problem.

For example, substituting a “real” large-order specification in place of the mock specification does not overcome the fact that we may want to readdress the design such that we no longer require a specification at all.

I do agree, however, with the premise that using components from the same level of abstraction (especially stable constructs such as value objects) is a valid approach for unit testing, and does not go against the definition of a unit test.
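
As a sketch of what that might look like here (my own illustration; the class name and threshold are assumptions), a concrete specification is a small, stable object that can simply be constructed in the test – no mock required.

// Hypothetical concrete specification: a stable, side-effect-free object
// from the same level of abstraction, usable directly in a unit test.
public class LargeOrderSpec : IOrderSpec
{
    private readonly decimal threshold;

    public LargeOrderSpec(decimal threshold)
    {
        this.threshold = threshold;
    }

    public bool IsMatch(Order order)
    {
        return order.Total >= threshold;
    }
}

// In the earlier test, the stubbed spec could then be replaced with:
// var orderProcessor = new OrderProcessor(client, new LargeOrderSpec(250m));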

A different direction

Experience has taught me to treat verbose test setup as a smell that my subject under test has too many responsibilities. In these scenarios, breaking the steps of the SUT down into cohesive pieces often introduces better abstractions and a cleaner design. Let’s take another look at the OrderProcessor example and review its concerns.

We defined the core behaviour as “Notifying sales when large orders are placed”. Our concerns (as reflected by the current dependencies) are identifying large orders and notification. Interestingly enough, these concerns have two very different reasons for change: the definition of a large order is very domain-centred and may change for reasons like customer status or rates of inflation, whereas notification is flatly a reporting concern and may change alongside our UI requirements. It appears, then, that our concerns may exist at different levels of abstraction.

Let’s assume, then, that the OrderProcessor is a domain concept and is the aggregate root for the sales process. We’ll remove the notification concern from the equation for now, and handle it outside of the domain layer.

If we treated the identification of a large order as an interesting domain event (using an approach like the one discussed by Udi Dahan here or as discussed by Greg Young here), we may end up with a test like this:

[Test]
public void Should_identify_when_a_large_order_is_submitted()
{
    // Arrange
    var order = new Order {Total = 500m};
    var orderProcessor = new OrderProcessor();

    // Act
    orderProcessor.PlaceOrder(order);

    // Assert
    orderProcessor.UncommittedEvents
        .First()
        .ShouldBe<LargeOrderSubmitted>();
}

Interestingly enough, as a side effect of this design, there is no mocking necessary at this level, and each component we interact with exists within the same level of abstraction (the domain layer). Since there is no additional logic in the order processor other than identifying large orders, we can remove the concept of an OrderSpecification completely.
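
For reference, the order processor driving that test might look roughly like the sketch below; the threshold, the event shape, and the UncommittedEvents collection are my assumptions rather than code from the referenced posts.

using System.Collections.Generic;

// A minimal sketch of an event-raising OrderProcessor. Identifying a large
// order simply records a domain event; nothing here needs to be mocked.
public class LargeOrderSubmitted
{
    public int OrderNumber { get; set; }
}

public class OrderProcessor
{
    private const decimal LargeOrderThreshold = 250m;
    private readonly List<object> uncommittedEvents = new List<object>();

    public IEnumerable<object> UncommittedEvents
    {
        get { return uncommittedEvents; }
    }

    public void PlaceOrder(Order order)
    {
        order.Status = OrderStatus.Submitted;

        if (order.Total >= LargeOrderThreshold)
        {
            // Order.Number is assumed here for illustration.
            uncommittedEvents.Add(new LargeOrderSubmitted { OrderNumber = order.Number });
        }
    }
}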

To maintain the same behaviour in our system, we must ensure Sales are still notified of large orders. Since we have identified this to be a reporting concern, we perform this task outside of the domain layer, in response to the committed domain events:

[Test]
public void Should_email_sales_when_large_order_is_submitted()
{
    // Arrange
    var client = MockRepository.GenerateMock<ISmtpClient>();
    var salesNotifier = new SalesNotifier(client);

    // Act
    salesNotifier.Handle(new LargeOrderSubmitted {OrderNumber = 123});

    // Assert
    client.AssertWasCalled(x => x.Send(null), opt => opt.IgnoreArguments());
}

Since sending an email crosses a process boundary, it’s still beneficial to use a mock here (we don’t really want to spam sales with fake orders). Although the SalesNotifier test is still coupled to the implementation detail of consuming an ISmtpClient, the cost of that coupling is lower because the SalesNotifier has a single responsibility, and it is offset by the benefit of not crossing a process boundary in our test; arguably a price worth paying.
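
For completeness, the handler that the test drives could look roughly like this; it’s a sketch on my part, and the ISmtpClient.Send(MailMessage) signature is assumed from the way the earlier tests stub it.

using System.Net.Mail;

// A rough sketch of the reporting-side handler: a single responsibility
// (turn a LargeOrderSubmitted event into an email), with the process
// boundary hidden behind the mockable ISmtpClient.
public class SalesNotifier
{
    private readonly ISmtpClient client;

    public SalesNotifier(ISmtpClient client)
    {
        this.client = client;
    }

    public void Handle(LargeOrderSubmitted largeOrderSubmitted)
    {
        var message = new MailMessage(
            "orders@example.com",
            "sales@example.com",
            "Large order submitted",
            "Order " + largeOrderSubmitted.OrderNumber + " requires your attention.");

        client.Send(message);
    }
}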

One last example

Interestingly enough, Jimmy Bogard has an excellent example of separating concerns across levels of abstraction. By identifying a common pattern in his ASP.NET MVC controller implementations, and separating the responsibility of WHAT they were doing from HOW they were doing it, his resulting controller design is much simpler. Notice as well how the tests for these controller actions no longer require dependencies to be mocked, as they are working within a single level of abstraction: directing traffic within an MVC pipeline…

Conclusion

Tests are great indicators of design problems within our codebase. Complex test setup often points to missing abstractions and too many responsibilities in the components of our codebase. When confronted with excessive test setup, take a few steps back and look at the bigger design picture. Try to identify whether your test is exercising code at different levels of abstraction. BDD-style testing can help focus your tests back on the core behaviour of your system, and following the SOLID principles can help alleviate testing friction. Mocks can be extremely useful tools for isolating behaviour, but they aren’t always necessary, particularly when our design is loosely coupled.
