Processing ModelState errors returned in Json format using Knockout.js

As part of my quest to become a better JavaScript developer I’ve been experimenting with a really neat little JavaScript library called Knockout. For those unfamiliar with Knockout, here’s a very quick overview:

Knockout is a JavaScript library that helps you to create rich, responsive display and editor user interfaces with a clean underlying data model

The rest of this post will pretty much be a brain dump of some of my recent experiments with the Knockout JavaScript library. We’ll take a little look at some typical procedural-style usage of JavaScript, and then show how this can be cleaned up with a more declarative style using Knockout.

Returning Model State as Json

To set the scene, I have a small form in the application I’m working on where some complex server-side validation takes place. When this validation runs, any broken validation rules are transformed into JSON and returned to the client.
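The server-side action itself isn’t shown in this post, but as a rough sketch it might look something like the following – flattening the ModelState errors into the { Errors: [...] } shape the client consumes (the action and view model names here are just placeholders):

    // inside an MVC controller (requires System.Linq)
    [HttpPost]
    public ActionResult Save(EditModel model) // EditModel is an illustrative view model
    {
        if (!ModelState.IsValid)
        {
            // Flatten the ModelState dictionary into a simple list of error messages.
            var errors = ModelState.Values
                                   .SelectMany(v => v.Errors)
                                   .Select(e => e.ErrorMessage)
                                   .ToArray();

            return Json(new { Errors = errors });
        }

        // ... perform the actual work, then return a response without an Errors property ...
        return Json(new { Success = true });
    }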

The implementation that processed the resulting JSON (based on this post) looked a little like this:

Source Code: View

<div id="operationMessage"><ul></ul></div>

Source Code: Scripts

<script type="text/javascript">
    function ProcessResult(result) {
        $("#operationMessage > ul").empty();

        if (!result.Errors) return true;

        // add the "error" css class since there are messages to display
        $("#operationMessage").addClass("error");

        for (var err in result.Errors) {
            var errorMessage = result.Errors[err];
            var message = errorMessage;
            $("#operationMessage > ul").append("<li>" + message + "</li>");
        }

        return false;
    }
</script>

The function ProcessResult is called after receiving the response from a server-side call made with $.ajax(). The result object contains our list of model state errors in JSON format.

As you can see, the code takes responsibility for:

  1. Ensuring the UL element of #operationMessage is empty (to ensure only the messages from the current result are displayed)
  2. Adding a css class “error” if there are error messages to display
  3. Pulling out each part of the message, and appending it to the DOM

There are a couple of things worth noting here:

  • The function needs to know about the mark-up of the page, and the mark-up to use to display messages
  • The mark-up in the page needs “hooks” in the form of the id, such that the function can locate where to add messages

In essence, the “what” and the “how” of displaying error messages are all contained in one function; if either of these is to change, we risk impacting both.

Let’s see if we can do better.

View Models

The first step in separating out the concerns of our function is to define the “what” part – what are we trying to display? Let’s make that explicit using a view model. Actually, this is pretty simple for this example; we want to display a list of messages. In traditional JS, this is just an array – but since we’ll be hooking into a little Knockout goodness, we’ll use an observableArray:

var viewModel = {
    errors: ko.observableArray([])
};
The observable part is a Knockout feature that lets the UI observe and respond to changes. To populate this model, we’ll simply call viewModel.errors(result.Errors) in place of calling ProcessResult.

jQuery Templates

Now we’ve defined our model, we’ll bind to it using a jQuery template. This will form the view, or the “how” part – defining how we want to display our model. First, let’s define an element for our template:

<div data-bind='template: "validationSummaryTemplate", css: { error: errors().length > 0 }'></div>

And then the template itself:

<script type="text/html" id="validationSummaryTemplate">
    <ul>
        {{each(i, error) errors}}
            <li>${error}</li>
        {{/each}}
    </ul>
</script>
Pretty concise – we’re simply defining the structure of our page in terms of our view model, i.e. for each error in the view model, we’ll render an LI tag containing the error message.

Worth noting is the css binding – remember that we want to add the error class if there are messages to display; that’s how we do it. We could push this logic onto a property of our view model (e.g. “hasErrors”), but since this isn’t complex, or re-used elsewhere in the view, let’s keep it here for now.

To apply the binding, we need one final thing – we need to tell Knockout to take effect:

ko.applyBindings(viewModel);

Quick Roundup

What have we gained? The “what” and the “how” are now cleanly separated, and the procedural “processing” JavaScript is completely removed. We also no longer need to dig into a JavaScript method if we wish to change the mark-up for our error items (say, to render a table instead of a list).

When writing JavaScript (much like any other language), it’s all too easy to end up with large swathes of procedural code if we aren’t careful in keeping responsibilities focused and separate. The primary problem that faces us with procedural code comes when we try to scale out complexity.

I’ve found Knockout to be a really good enabler for applying patterns such as MVVM, which in turn helps us to keep complexity at bay.

The death of mocks?

There’s been a lot of healthy discussion happening on the interwebs recently (actually, a couple of weeks ago – it’s taken far too long for me to finish this post!) regarding the transition away from using mocks within unit tests. The principal cause for this transition, and the primary concern, is that when mocking dependencies to supply indirect inputs to our subject under test, or to observe its indirect outputs, we may inadvertently leak the implementation detail of our subject under test into our unit tests. Leaking implementation details is a bad thing, as it not only detracts from the interesting behaviour under test, but also raises the cost of design changes.

Where is the leak?

Jimmy Bogard highlighted the aforementioned issues with his example test for an OrderProcessor:

public void Should_send_an_email_when_the_order_spec_matches()
{
    // Arrange
    var client = MockRepository.GenerateMock<ISmtpClient>();
    var spec = MockRepository.GenerateMock<IOrderSpec>();
    var order = new Order {Status = OrderStatus.New, Total = 500m};

    spec.Stub(x => x.IsMatch(order)).Return(true);

    var orderProcessor = new OrderProcessor(client, spec);

    // Act
    orderProcessor.PlaceOrder(order); // the exercised method was lost from the original snippet; name assumed

    // Assert
    client.AssertWasCalled(x => x.Send(null), opt => opt.IgnoreArguments());
}
The example test exercises an OrderProcessor and asserts that when a large order is placed, a sales person is notified of the large order. In this example, the OrderProcessor takes dependencies on implementations of IOrderSpec and ISmtpClient. Mocks for these interfaces are set up in such a way that they provide indirect inputs to the subject under test (canned responses), and verify indirect outputs (asserting the methods were invoked).

Since the behaviour (notification of large orders) can be satisfied by means other than consuming the IOrderSpec and ISmtpClient dependencies, coupling our unit test to these details creates additional friction when altering the design of the order processor. The bottom line is that refactoring our subject under test shouldn’t break our tests because of implementation details that are unimportant to the behaviour under test.

One step forward – test focus

To avoid leaking implementation detail within our unit tests, tests should be focused towards one thing: verifying the observable behaviour. Context-Specification style testing, BDD, and driving tests top-down can all be applied to focus our tests on the interesting system behaviour. Taking this approach into account for the OrderProcessor example, the core behaviour may be defined as “Notifying sales when large orders are placed”. Reflecting this in the name of the test may give us:

public void Should_notify_sales_when_a_large_order_is_placed()

This technique provides a test that is more focused towards the desired behaviour of the system; however, the test name is just the first step – this focus must also be applied to the implementation of the test such that it isn’t coupled to less interesting implementation details.

So how do we execute the subject under test while keeping the test free of this extraneous detail?

Two steps back? Containers, Object Mother, Fixtures and Builders

Subsequent to his previously mentioned post, Jimmy alludes to using an IoC container to provide “real implementations” of components (where these components sit within the same level of abstraction and do not cross a process boundary or seam). Jimmy also mentions using patterns such as Object Mother, fixtures or builders, along with “real implementations”, to simplify the creation of indirect inputs.

At this point, my code-spidey-sense started tingling, and I know I’m not the only person who had this reaction.


My concern here is that we’re pinning blame on mocks for stifling the agility of our design, but I believe there are other factors in play. Replacing 20 lines of mocks for an uninteresting concern with 20 lines of data setup for an uninteresting concern is obviously not the way forward, as it doesn’t address the problem.

For example, substituting a “real” order specification for a larger order in place of the mock specification does not overcome the fact that we may want to readdress the design such that we no longer require a specification at all.

I do agree, however, with the premise that using components from the same level of abstraction (especially stable constructs such as value objects) is a valid approach for unit testing, and does not go against the definition of a unit test.

A different direction

Experience has taught me to treat verbose test setup as a smell that my subject under test has too many responsibilities. In these scenarios, breaking down the steps of the SUT into cohesive pieces often introduces better abstractions, and a cleaner design. Let’s take another look at the OrderProcessor example, and review its concerns.

We defined the core behaviour as “Notifying sales when large orders are placed”. Our concerns (as reflected by the current dependencies) are identifying large orders and notification. Interestingly enough, these concerns have two very different reasons for change; the definition of a large order is very domain-centred and may change for reasons like customer status or rates of inflation, whereas notification is flatly a reporting concern and may change alongside our UI requirements. It appears, then, that our concerns may exist at different levels of abstraction.

Let’s assume then that the OrderProcessor is a domain concept and is the aggregate root for the sales process. We’ll remove the notification concern from the equation for now, and handle that outside of the domain layer.

If we treated the identification of a large order as an interesting domain event (using an approach like the one discussed by Udi Dahan here or as discussed by Greg Young here), we may end up with a test like this:


public void Should_identify_when_a_large_order_is_submitted()
{
    // Arrange
    LargeOrderSubmitted raisedEvent = null;
    DomainEvents.Register<LargeOrderSubmitted>(e => raisedEvent = e); // assumes Udi Dahan-style DomainEvents

    var order = new Order {Total = 500m};
    var orderProcessor = new OrderProcessor();

    // Act
    orderProcessor.Submit(order); // method name assumed; the original act/assert steps were lost

    // Assert
    Assert.IsNotNull(raisedEvent);
}
Interestingly enough, as a side effect of this design, there is no mocking necessary at this level, and each component we interact with exists within the same level of abstraction (the domain layer). Since there is no additional logic in the order processor other than identifying large orders, we can remove the concept of an OrderSpecification completely.
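This isn’t from the original post, but to make the shape of that design concrete, the order processor might raise the event along these lines – assuming a static DomainEvents helper in the style Udi Dahan describes (the Submit method, threshold value and Order.Number property are all illustrative):

public class OrderProcessor
{
    private const decimal LargeOrderThreshold = 500m; // illustrative threshold

    public void Submit(Order order)
    {
        // The interesting domain behaviour: identifying a large order.
        // Notification is handled elsewhere, in response to this event.
        if (order.Total >= LargeOrderThreshold)
        {
            DomainEvents.Raise(new LargeOrderSubmitted { OrderNumber = order.Number });
        }
    }
}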

To maintain the same behaviour in our system, we must ensure Sales are still notified of large orders. Since we have identified this to be a reporting concern, we perform this task outside of the domain layer, in response to the committed domain events:

public void Should_email_sales_when_large_order_is_submitted()
{
    // Arrange
    var client = MockRepository.GenerateMock<ISmtpClient>();
    var salesNotifier = new SalesNotifier(client);

    // Act
    salesNotifier.Handle(new LargeOrderSubmitted {OrderNumber = 123});

    // Assert
    client.AssertWasCalled(x => x.Send(null), opt => opt.IgnoreArguments());
}


Since we are crossing a process boundary by sending an email, it’s still beneficial to use a mock here (we don’t really want to spam sales with fake orders). Although the SalesNotifier test is coupled to the implementation detail of consuming an ISmtpClient, the cost of this decision is lower since the SalesNotifier has a single responsibility. This cost is offset by the benefit that we do not cross a process boundary in our test implementation; arguably a price worth paying.
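For completeness (again, not shown in the original post), the handler under test might look something like this – assuming ISmtpClient mirrors SmtpClient.Send(MailMessage), with the addresses and wording purely illustrative:

public class SalesNotifier
{
    private readonly ISmtpClient client;

    public SalesNotifier(ISmtpClient client)
    {
        this.client = client;
    }

    public void Handle(LargeOrderSubmitted largeOrderSubmitted)
    {
        // Purely a reporting concern: compose and send the notification email.
        var message = new MailMessage(
            "orders@example.com",
            "sales@example.com",
            "Large order submitted",
            "Order " + largeOrderSubmitted.OrderNumber + " has been flagged as a large order.");

        client.Send(message);
    }
}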

One last example

Interestingly enough, Jimmy Bogard has an excellent example of separating concerns across levels of abstraction. By identifying a common pattern in his Asp.Net MVC controller implementations, and separating the responsibilities of WHAT they were doing from HOW they were doing it, the resultant controller design is much simpler. Notice as well how the tests for these controller actions no longer require dependencies to be mocked, as they are working within a single level of abstraction; directing traffic within an MVC pipeline…


Tests are great indicators of design problems within our codebase. Complex test setup often indicates missing abstractions and too many responsibilities in components of our codebase. When confronted with excessive test setup, try to take a few steps back and identify the bigger design picture. Try to identify whether your test is exercising code at different abstraction levels. BDD-style testing can help focus your tests back on the core behaviour of your system, and following SOLID principles can help alleviate testing friction. Mocks can be extremely useful tools for isolating behaviour, but they aren’t always necessary, particularly when our design is loosely coupled.

Asynchronous MVC using the Task Parallel Library

I’m not going to go into any detail in this post as to why asynchronous actions may be beneficial to your application; other people have covered that in more detail than I’d care to go into. This post will, however, try to show how implementing async controllers can be made simpler, using the Task Parallel Library in .NET 4.0 and a little bit of MVC trickery.

Before I go too far, I’d like to point you to an introduction to using the TPL with MVC from Mike Hadlow. Mike’s post shows how you can use the TPL to simplify the consumption of asynchronous services from an Asp.Net Mvc AsyncController.

In Mike’s example, he ends up with the following AsyncController implementation:

public class HomeController : AsyncController
{
    readonly UserService userService = new UserService();

    public void IndexAsync()
    {
        // Let the AsyncManager know an operation is in flight so the request
        // waits for the continuations to complete.
        AsyncManager.OutstandingOperations.Increment();

        userService.GetCurrentUser().ContinueWith(t1 =>
        {
            var user = t1.Result;
            userService.SendUserAMessage(user, "Hi From the MVC TPL experiment").ContinueWith(t2 =>
            {
                AsyncManager.Parameters["user"] = user;
                AsyncManager.OutstandingOperations.Decrement();
            });
        });
    }

    public ViewResult IndexCompleted(User user)
    {
        return View(user);
    }
}

As you can see, we unfortunately have to abandon the succinctness of the TPL syntax, and revert to the Asynchronous Programming Model pattern of having a start method with a completion callback.

Mike points out that it would be rather nice if we could instead write a controller action that returns a Task result – the resulting action would be nice and simple:

public Task<ViewResult> Index()
{
    return from user in userService.GetCurrentUser()
           from _ in userService.SendUserAMessage(user, "Hi From the MVC TPL experiment")
           select View(user);
}

Even though this will compile (when using the TPL extension extras library), unfortunately, this will not run, as the default controller action invoker for running async actions (AsyncControllerActionInvoker) does not know how to handle Tasks…

Asp.Net Mvc Futures

The MVC Futures project currently includes some more flexibility in its support for async patterns. Rather than being limited to supporting the “Async –> Completed” action pairs, the futures project contains the following options:


  • The Async Pattern (BeginFoo/EndFoo)
  • The event pattern (Foo/FooCompleted)
  • The Delegate Pattern (returning a Func<> that represents a continuation).

The most interesting of these techniques (to this example) is the Delegate Pattern:

public Func<int> Foo(int id)
{
    return () => id * 2;
}

Using this approach, a controller can specify a delegate to provide a completion callback. It’s not too much of a leap to see the similarities between this and the desired technique using Tasks.

So how does the Asp.Net Futures project add these additional features? Perhaps we can extend it to support the TPL?

Hooking in to the Action Invoker

To support the additional async patterns using the futures project, a Controller must inherit the AsyncController from the futures assembly. This controller overrides a property from the base controller class in order to specify a new Action Invoker that can correctly identify actions that represent asynchronous methods. To identify these actions, the AsyncActionInvoker delegates to an AsyncActionMethodSelector that reflects over the methods on a controller, and picks out asynchronous actions based on naming conventions, or on the return type in the case of the delegate pattern. This seems like a good place to start looking to add our new feature.
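As a rough illustration (this is not the futures source), the selector ultimately needs a check along these lines to recognise Task-based actions alongside the existing conventions:

// Treat any action whose return type is Task (or Task<T>, which derives from Task) as asynchronous.
static bool IsTaskAction(MethodInfo actionMethod)
{
    return typeof(Task).IsAssignableFrom(actionMethod.ReturnType);
}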

Unfortunately for us, the AsyncActionMethodSelector does not delegate out to individual objects to identify and handle each type of asynchronous pattern, so supporting an additional pattern will involve some changes to this class. Following the Open/Closed principle could really have cleared up the design here…

Anyway, once we have extended the AsyncActionMethodSelector to support our new pattern, we need to hook this back into the AsyncActionInvoker (which again requires some code changes to this class – seriously, this code could’ve been much simpler if people followed SOLID principles!), and then we can use this new invoker from our async controller.

Using this new invoker, our controller can now support the use of Tasks for Asynchronous Actions!
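To sketch how the pieces hang together (TaskActionInvoker is my name for the extended invoker described above, not a type from the futures assembly), a controller simply swaps in the new invoker and can then expose Task-returning actions like the Index example shown earlier:

public abstract class TaskAsyncController : AsyncController
{
    protected TaskAsyncController()
    {
        // Replace the default invoker so actions returning Task / Task<ActionResult>
        // are recognised and completed before the view is rendered.
        ActionInvoker = new TaskActionInvoker();
    }
}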


As with other areas of the MVC framework, extending the framework to add new features like supporting Tasks to represent asynchronous controller actions was a little more painful than it could have been, but in the end it’s worth it to clean up the code required to support them.

To check out the implementation of all this, you can get the full sample source from GitHub:

Script Management in ASP.Net MVC

In all but the simplest of modern web applications, it’s not uncommon to find that our application depends on many web assets such as CSS and JavaScript files. While it’s generally good practice for maintainability to keep these files separated into small logical chunks, unfortunately this practice has implications for the overall performance of our application when referencing these resources; an additional HTTP request will be made for each asset, in turn adding to the latency of the initial load of our application.

Whilst many developers are probably aware of various optimization techniques such as those proposed in the YSlow recommendations (like script combining, minification, caching and compression), implementing these techniques can be a bit of a burden.

Further still, when we utilize the Master and Content page features of our view engine, we will often want to place core JavaScript (such as the JQuery library) and common component initialization in the Master Page, whilst the Content pages may have their own file sets and initialization scripts.

This set-up adds additional complexity to managing our scripts; we need to ensure the script tags are all rendered first, regardless of where they are located in the page (Master Page/Content Page/User Control), or our application will start throwing JavaScript exceptions all over the place!

Help is at hand!

Several projects exist that attempt to address some, or all, of the issues I’ve mentioned (page request optimization and asset management) – let’s take a look at some of our options:

Telerik ScriptRegistrar

Probably one of the closest matches to our requirements, the Telerik ScriptRegistrar and its partner in crime the StylesheetRegistrar provide a simple mechanism for applying a range of optimizations to improve the performance of our applications. The ScriptRegistrar API is composed of a fluent interface allowing groups of scripts to be rendered at the bottom of our master page, whilst additional script dependencies and initialization can be registered in content pages. One small gotcha to look out for here is that partial views returned from Ajax requests will not have their scripts registered.


  • Resource combination (served by a IHttpHandler)
  • Grouping
  • Compression
  • Caching
  • Support for Content Delivery Networks (CDN)
  • Nice Fluent Interface for script registration and initialization
  • A Commercial License is required if you are building closed-source commercial products for redistribution (GPLv2 license).
  • Extra care is required for scripts registered in partial views rendered by Ajax calls (since these are not included by the ScriptRegistrar).
  • Minification of your scripts is not provided (although some support for selecting between pre-minified scripts and un-minified scripts in debug mode is provided)
  • Cache headers and ETags are not generated

Include Combiner

The Include Combiner project, which is included in the MvcContrib* solution as MvcContrib.IncludeHandling, tackles many of the aforementioned issues surrounding the optimization of asset management. It is not as complete as the Telerik implementation, and suffers from the same complexity in registering scripts in partial views returned by Ajax; however, this tool is still worth a look for Asp.Net Mvc applications, especially for projects already consuming MvcContrib.


  • Resource combination (served by a custom controller)
  • Grouping
    • Resources grouped by page usage (rather than explicit sets) – scripts shared across pages must be re-downloaded for each page.
  • Compression
  • Minifies scripts
  • Generates cache headers and ETags
  • Simple interface for including/outputting resources
  • Included as part of MvcContrib
  • No server side caching – content is fetched every request
  • Treats JS and CSS separately, and therefore causes a minimum of 2 additional requests per page.
  • Extra care is required for scripts registered in partial views rendered by Ajax calls (since these are not picked up by the include handler).


Combres

Possibly the most complete of these libraries in terms of features, Combres is a strong choice for managing assets in your application. Combres requires that you define named resource sets which can then be combined, minified, compressed and sent to the browser as a single request. Combres also has an interesting extensibility model allowing developers to provide additional features, such as applying .less rules and replacing relative urls with absolute urls in css files. On the downside, the configuration model for Combres is very XML heavy, and not as nice to consume as the fluent API provided by other libraries. Unfortunately, the underlying implementation seems to wrap around the XML configuration too, so extending the library for easier consumption (applying your own conventions etc.) might be tricky, especially when considered in conjunction with the fact that there are NO unit tests included in the source code (available on CodePlex).


  • Resource combination (served by a IHttpHandler and RouteHandler)
  • Grouping (Resource sets)
  • Compression (gzip or deflate depending on browser)
  • Minifies scripts (YUI compressor, MS Ajax or Google Closure)
  • Generates cache headers and ETags
  • Server side caching
  • Integrated with Asp.Net routing engine, and therefore also supports webforms development in addition to MVC
  • Extensible filtering support
  • XML heavy asset configuration model
  • Requires resource configuration upfront – as far as I can tell, there isn’t an easy way for a view component to quietly declare “I need this JS/CSS” and for it to be included if it isn’t already there


Optimizing our web applications and conforming to good practices like caching, combining and compressing our resources needn’t be a burden for developers. There are some great tools available to help us keep our applications nimble, whilst ensuring we aren’t distracted from delivering value to our customers by technical implementation concerns. I’ve listed some of the tools I’ve come across that bring us closer to achieving this goal.

Is there anything I’ve missed? Do you have any tools you use to optimize asset management? If so, let me know so I can check them out too!


*MvcContrib is a project designed to add functionality to, and ease the application of, Microsoft’s Asp.Net Mvc framework, and is really useful for developers looking to develop and test UI elements on top of MS Asp.Net Mvc. Check it out here.

Uniqueness validation in CQRS Architecture

Note: This post is copied almost verbatim from a comment I left on Jérémie Chassaing’s blog, my apologies if you’ve seen it there already!

I’ve really enjoyed reading a series of posts on CQRS written by Jérémie Chassaing. One idea I particularly like is that there is no such thing as global scope:

Even when we say, “Employee should have different user names”, there is a implicit scope, the Company.

What this gives us is the ability to identify potential Aggregate Roots in a domain – in the above relationship, there is potentially a Company Aggregate Root in play.

Another observation in Jérémie’s post really got me thinking:

Instead of having a UserName property on the Employee entity, why not have a UserNames key/value collection on the Company that will give the Employee for a given user name ?

If I’ve understood Udi’s posts on CQRS, I think he’d probably advocate the collection of Usernames being part of the Query-side, rather than the Command side. I’ve heard him mention before that the query side is often used to facilitate the process of choosing a unique username – the query store may check the username as the user is filling in the "new user" form, identifying that a username already exists and suggesting alternatives.
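As a sketch of that idea (the names here are hypothetical and not from either post), the “new user” form could consult a read model while it is being filled in, suggesting alternatives when a name is already taken; this remains a courtesy check, with the constraint enforced elsewhere:

public interface IUserNameReadModel
{
    bool IsTaken(string userName);
}

public class UserNameAvailabilityChecker
{
    private readonly IUserNameReadModel readModel;

    public UserNameAvailabilityChecker(IUserNameReadModel readModel)
    {
        this.readModel = readModel;
    }

    // Returns alternative suggestions when the requested name already exists,
    // or an empty sequence when it appears to be available.
    public IEnumerable<string> SuggestAlternativesIfTaken(string requestedUserName)
    {
        if (!readModel.IsTaken(requestedUserName))
            return Enumerable.Empty<string>();

        return Enumerable.Range(1, 3)
                         .Select(i => requestedUserName + i)
                         .Where(candidate => !readModel.IsTaken(candidate));
    }
}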

Of course this approach isn’t bullet-proof, and it will still remain the responsibility of another component to handle the enforcing of the constraint.

The choice of WHERE to put this logic is a question that is commonly debated.

Some argue that since uniqueness of usernames is required for technical reasons (identifying a specific user) rather than for business reasons, this logic falls outside of the domain.

Others may argue that this logic should fall in the domain – perhaps under a bounded context responsible for managing user accounts.

In either case, since we have a technical problem (concurrency conflicts) and several possible solutions, the decision of whether or not they are suitable should probably be made in conjunction with the expected frequency of the problem occurring. This sounds to me like the kind of thing that would appear in an SLA.

The solution chosen to enforce the uniqueness constraint will then depend on the agreed SLA. Perhaps it is acceptable that a command may fail (perhaps due to the RDBMS rejecting the change) in the few cases of concurrency conflicts – it might only occur in 0.0001% of cases.

Alternatively, we may decide that the frequency of these conflicts makes it unacceptable to allow them to occur. We could choose to maintain the list of usernames in the Company aggregate, but scale out our system such that all "new user" requests in the username range A-D are handled by a specific server. If we decide to enforce this constraint outside of our domain, we can offload this work to the command handlers.

What do you think?

Domain Modelling and CQRS

Note: This post originated from an email discussion I had with a colleague. I’ve removed/replaced the specifics of our core domain (hopefully this hasn’t diluted the points I’m trying to make too much!)

While I consider a focus on capturing intent to be an exciting part of CQRS from a core software design perspective, I believe it is achieving the distillation part of DDD – separating our core domain from supporting domains – that allows us to maximise our potential ROI from the application of CQRS (with event sourcing).

From a business stance, we have chosen as a company to focus on a core domain to differentiate us from our competitors. Our management/marketing team have decided that this area of business provides our advantage over competitor products. From that perspective, it makes sense that we channel our efforts into ensuring that our software model is optimised for this purpose. As a result, we need to spend less effort working on our supporting domains and more effort on our core domain.

I would therefore argue that we should not apply the same level of analysis and design on our supporting domains, as we do on our core model – these areas provide little ROI by comparison.

Whilst I agree that moving away from the CRUD mentality is vital in our core domain, it is not so essential in supporting domains. The level of complexity in our supporting domains is insufficient to justify the costs of applying complex modelling techniques to these areas. Supporting domains could potentially be created using RAD tools, bought off the shelf where possible, or even outsourced. In any of these cases, it is the distillation process that allows us to identify a clean separation between sub-domains – a separation we need to maintain in our code base.

A really interesting article on this can be found here; the concepts originate in Eric Evans’ DDD book.

Domain Driven Design, CQRS and Event Sourcing

It’s taken quite a while, but I think I’ve had a bit of a revelation in really grokking the application of CQRS, Event Sourcing and DDD.

I’ve been considering the application of CQRS to a multi-user collaborative application (actually a suite of applications) at the company I work for. For some parts of the application, it is really easy to visualise how the application of CQRS would provide great benefits, but for others, I couldn’t quite figure out how the mechanics of such a system could be put into place and yet maintain a decent user experience.

Let me try to elaborate with a couple of examples:

In one application I work on, a user may make a request for a reservation. I can see this working well under CQRS. The command can be issued expecting to succeed, and the response needn’t be instant; a message informing the user that their request is being processed, and that they will be notified of the outcome, should suffice. The application can then take responsibility for checking the request against its business rules, and raising relevant events accordingly (reservation accepted, reservation denied, etc.). Supporting services could also notify users of the system when other events they might be interested in become available.

For another scenario in the same application, a user may wish to update their address details. The application must store this information, however it does not use this information in any way, shape or form; it is there for other users to reference. When applying CQRS to this area, we start to see some oddities. A user receiving a notification that their request to update an address is being processed seems ridiculous; there is no processing required here. In addition to this, the canonical example of “capturing intent” doesn’t really apply to our domain; no one cares why the user is updating their address, be it because of a typo or because of a change of address. This information isn’t interesting to any of the users of the system.

Then it hit me.

CRUD actions – like modifying the contact address of an employee and other ancillary tasks – provide only supporting value in our domain. For all intents and purposes, the contact address of an employee is just reference data; it is there to support our actual domain. Arguably then, there is no benefit in modelling this interaction within our domain model. It’s quite the contrary in fact; diluting our core domain model with uninteresting concerns blurs the focus from what’s important. Paraphrasing Eric Evans’ blue book: anything extraneous makes the Core Domain harder to discern and understand.

Taking this idea further, there can be significant benefit in separating this kind of functionality from actions that belong to our core domain. In code terms, this means that our domain model will not have an “Employee” entity with a collection of type “ContactAddress”. This association isn’t interesting in our core domain. It is likely that it is part of supporting model which could be implemented quickly and effectively using any one of Microsoft’s (or any other manufacturer’s) RAD tools. 

In the big blue DDD book, I think Evans describes this separation as a generic sub-domain. In generic/supporting sub-domains there may be little or no business value in applying complex modelling techniques even though the function they provide is necessary to support our core domain. Alternatively, the core-domain of one application may become a supporting domain of another. In either case, the models should be developed, and packaged separately.

Our product, in its various forms, contains enough complexity in its problem domain itself, without complicating things further by tangling up the core domain with supporting concerns. I do not wish to be in the situation (again) where one application  needs to know the ins and outs of what is supposed to be another discrete application. If understanding the strengths and limitations of modelling techniques such as CQRS, Event Sourcing and DDD can help me achieve this, then I’m making small steps in the right direction!


NB: this post originated from an email discussion I had with a colleague. I’ve removed/replaced the specifics of our core domain (hopefully this hasn’t diluted the points I’m trying to make too much!)

Becoming a better JavaScript developer

So in my quest to become better with JavaScript, I’ve been reading a variety of books, articles, and blogs, and I happened across the following site:

The blog itself has a lot of good advice to offer in regards to both structuring your JavaScript into testable and reusable modules, as well as advice on how to apply BDD techniques in testing JavaScript. What really struck me, however, were the nice little touches on the website itself – a welcome message containing the usual “about” information that only appears when you view the site for the first time; a tweet update side bar integrating with tweetboard; and a live chat window.

The next cool thing I found on the same site can be seen here:

Basically, the author identifies a “code smell” and applies a pattern to aid maintainability. The cool thing here is the link to “view, run, & edit code” for each example, which integrates with a site allowing the sample to be modified and run within the browser.

Very cool stuff indeed.

Object design in JavaScript

Found this article today, and after subsequently checking out a few of the author’s other posts (in particular, his excellent series on CQRS), I proceeded to add his blog to my Google Reader.

I’ve not yet figured out an approach I’m happy with for writing maintainable JavaScript. Much of the JavaScript I’ve written quickly spirals out of control with any non-trivial requirements – so I find articles like this invaluable in assisting my learning. Go check it out:

Pagination in MvcContrib

Over the next month or so, I’m hoping to write a couple of posts on some work I’ve undertaken in extending the original pager implementation that complements the MvcContrib grid component.

The work I’ve undertaken so far has focused on the following areas:

Customisable Pager Rendering

The existing pager is rather limited in the HTML rendered out of the box. Currently the only means of gaining more control over the output would be to inherit from the existing Pager, override a few methods (you’d probably need to dig into the source to identify which ones), and implement a new HTML helper. Support for this approach is limited by the current unit tests making expectations based on the complete mark-up.

The new pager will support complete customisation of rendering through custom templates.

Support for Numeric Pagination (in addition to Next/Previous)

The existing pager component outputs as a “Next/Previous” style pager, with links for next/previous/first/last.

The new pager implementation will allow a choice of pager, with Next/Previous and Numeric pagers provided OOTB.

Separate Pagination Summary

Although the existing pager provides a summary with localization support, its implementation is embedded within the current pager. I would like to break this component out and separate its unit tests from the pager implementation, such that this component is optional and can change independently from the pager.

I’m looking for feedback on the work I’ve undertaken so far, and would welcome additional ideas and suggestions. The fork containing my work so far can be found here:

