Monthly Archives: December 2009

Magical Code

It’s becoming increasingly apparent to me that some of the code I’ve been writing (perhaps even the majority?!) may look “magical” at first glance, or to the uninitiated.

I briefly touched upon this in my last blog post; however, I don’t feel I’ve explored the extent of the problem deeply enough, or possible solutions for reducing the “maintainability” cost of my code.

This post is written in part to explore this problem further, and to get some feedback on ways to solve it.

Causes of Magical Code

Loosely coupled systems can be a pain to track down where the coupling occurs

Although I don’t fully agree with the statement itself, I can certainly appreciate its intent in the context in which it was made. Let’s have a look at some of the factors that I think may have formed the basis of this opinion.

  • Application of advanced .NET features that many developers are unfamiliar with, including, but not limited to:
    • Lambda syntax
    • Anonymous Types
    • Reflection
    • Delegates
    • ASP.NET MVC
      • Specialized controller factory, specialized model binders, and template helpers (input and display builders)
  • Concoction of OSS tools, including, but not limited to:
    • NHibernate, AutoMapper, StructureMap, MSpec
  • Usage of design patterns/techniques that some developers may be unfamiliar with, and that perhaps even more experienced developers may have trouble recognising, including, but not limited to:
    • MVC
    • Inversion of Control
    • Template Method, Decorator, Nested Closure, Memento, Specification, Event Sourcing, Command, etc
  • Certain scenarios where IDE tools (we use ReSharper) suggest that classes and methods are never instantiated or used, or that their sole usage is in the unit tests. In actuality, the classes/methods are used through some kind of reflection or created by an inversion of control container (see the sketch after this list). This causes some confusion when identifying points of coupling and locating “dead code”.
  • Application of conventions for eliminating repetitive code
  • Context/Specification style unit tests using MSpec syntax
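
To make the ReSharper point above a little more concrete, here is a rough sketch of the kind of convention-based container setup I have in mind. The class names are invented for illustration, and the scanning calls are written from memory of StructureMap's 2.x API, so treat them as approximate rather than definitive:

```csharp
using StructureMap;

// Nothing in the application code ever news up an OrderService directly, so
// ReSharper reports IOrderService/OrderService as "unused" (or as used only
// from the unit tests)...
public interface IOrderService
{
    void Place(int orderId);
}

public class OrderService : IOrderService
{
    public void Place(int orderId) { /* persist, raise events, etc. */ }
}

// ...because the wiring actually happens here, through assembly scanning and
// the default "IFoo -> Foo" naming convention. Note the nested closures used
// to configure the container: lambdas, conventions and IoC all in one place.
public static class ContainerBootstrapper
{
    public static void Configure()
    {
        ObjectFactory.Initialize(init =>
            init.Scan(scan =>
            {
                scan.TheCallingAssembly();
                scan.WithDefaultConventions(); // maps IOrderService to OrderService
            }));
    }
}
```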

Looking for Feedback

I’ve listed some issues that I am aware of that I think may make my code base more difficult to understand. If I get some time soon, I would like to post again with some potential approaches to tackling these issues. In the meantime, have you come across similar problems with your code base, or felt similar friction when pairing? Have you any items to add to this list of issues? Perhaps you have suggestions on how you’ve tackled these issues in the past, or an idea to add?

Let me know your thoughts!

The cost of applying Convention over Configuration

This post is in response to an email from a colleague of mine who (quite appropriately) has some reservations about applying Convention over Configuration. For those of you unfamiliar with this approach, or wanting to learn more, I would invite you to read Jeremy Miller’s article on this topic on MSDN. Throughout this post I will frequently paraphrase and quote content from Jimmy Bogard’s blog, since many of my opinions have already been expressed by him, in a more coherent way than I ever could!

Reservations

Quite rightly, my colleague Chris has pointed out a cost associated with Convention over Configuration:

My trouble with conventions is that they give you performance enhancements [in regards to efficiency] in the long-term, but can confuse the be-jesus out of people when they first look at them.  E.g. Loosely coupled systems can be a pain to track down where the coupling occurs, unless people are adhering to some convention regarding their setup.

At first glance (unaware of the conventions), it is easy to be put off by the inherent “magic” of the convention over configuration approach. Additionally, embedding “opinions” into your code or framework may detract from potential reuse in new scenarios where those opinions aren’t as favourable. There is also the complexity involved in enforcing conventions, although static code analysis (as part of continuous integration) can go some way towards relieving this.
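
As an aside on enforcing conventions, one lightweight option is a reflection-based test that runs with the rest of the build. The convention, assembly name and namespace below are made up purely for illustration; the idea is simply that a broken convention fails the continuous integration build:

```csharp
using System.Linq;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class ControllerConventionTests
{
    // Hypothetical convention: every concrete type whose name ends in
    // "Controller" must live in the MyApp.Web.Controllers namespace, so the
    // specialized controller factory can discover it.
    [Test]
    public void Controllers_live_in_the_controllers_namespace()
    {
        Assembly webAssembly = Assembly.Load("MyApp.Web"); // hypothetical assembly

        var offenders = webAssembly.GetTypes()
            .Where(t => !t.IsAbstract && t.Name.EndsWith("Controller"))
            .Where(t => t.Namespace != "MyApp.Web.Controllers")
            .Select(t => t.FullName)
            .ToList();

        Assert.IsEmpty(offenders,
            "Controllers found outside the conventional namespace: "
            + string.Join(", ", offenders.ToArray()));
    }
}
```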

Multiple Versions of the Truth

Ignoring conventions, iterating endlessly and never retrofitting opinions, results in having many versions of the “truth” in our system, all of them correct at one point in time, none standing out above the others.

Opinionated (convention-driven) software development tends, however, to be more efficient: it removes unnecessary decisions from developers and promotes consistency. The result is that we often need to write less code to achieve the desired outcome. Of course, writing less code is fine by me!
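
To give a feel for “less code”, here is roughly what a property-name convention buys with a tool like AutoMapper. The types are invented for the example, and the call uses the older static configuration API; the point is that there are no per-property assignments to write or maintain:

```csharp
using AutoMapper;

public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public class CustomerDto
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public static class MappingBootstrapper
{
    public static void Configure()
    {
        // One line per source/destination pair; the name-matching convention
        // fills in the property mappings that would otherwise be hand-written.
        Mapper.CreateMap<Customer, CustomerDto>();
    }
}

// Usage: var dto = Mapper.Map<Customer, CustomerDto>(someCustomer);
```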

Application of Conventions

It is important to remember that conventions should be applied by turning already-implicit concepts into explicit ones. Trying to form opinions in the absence of any context is very likely to lead to awkward, friction-inducing development. In this case, YAGNI should prevail.

Opinions formed in one application are frequently appropriate only to that specific application. It is therefore appropriate that opinions are formed within the team involved, with new members introduced to these concepts when they are brought on board.

Final Thoughts

In summary, I’d like to refer back to this comment from Jimmy Bogard, stated in regard to forming conventions:

The middle ground here is one where we become finely attuned to the pain induced by our design, [do] not try to invent problems where they don’t exist, iterate our design, and retrofit after each breakthrough.  Opinionated software is a fantastic concept, but we can’t confuse opinion formation with misguided attempts to make all design decisions upfront in the absence of agreeing upon the principles that led to the opinions.