Author Archives: craigcav

Edge Conference 5: London

On Saturday, I attended the 5th installment of the fantastic Edge Conference at the Facebook London offices in Euston. We had in-depth discussions about complex problems and set about creating solutions and providing feedback to browser vendors to keep the web moving forwards. This was my third appearance at Edge Conference; my first was as a panelist at Edge 2 in NYC, and I’m fortunate enough to say that LiquidFrameworks has been very supportive, enabling me to continue to attend.

 

Front End Data Panel at Edge 5

Edge has no conventional talks – the focus of the conference is on productive discussion and debate, rather than presenting the experiences of a single presenter for the audience to consume. Two types of sessions are run: highly structured panel debates with pre-curated questions, and intimate breakout sessions where small groups work through the finer details of a topic in depth. Every person present is given the opportunity to present an opinion, and to ask or answer questions raised during the event. To enable such a rich environment for discussion, several tools are used to surface the most relevant opinions in real time: Slack, Google Moderator, Twitter and, most fun of all, throwable microphones!

The conference ran sessions on Security, Front End Data, Components and Modules, and Progressive Enhancement. The breakout sessions also covered related themes: ServiceWorkers, ES6 Patterns and Installable Web Applications.

Security

Yan Zhu’s opening panel introduction revisited this critical topic for the web. She demonstrated how even simple functionality like emoji support can surface XSS vulnerabilities, and enumerated several techniques developers can use to promote private and secure communications on the web. The panel discussed opportunities for promoting HTTPS usage and the blockers to HTTPS adoption: 3rd-party content (such as ads) forcing mixed content, CDNs and, of course, the tedious process of setting up SSL.

Front End Data

I was blown away by Nolan Lawson’s introduction to the topic of front end data. Storage in the browser is gaining a lot of traction, and the number of complex use cases for front end data is steadily increasing as the lines between native applications and web applications become more blurred. The number of potential storage options in the browser is quite overwhelming: ServiceWorker, ApplicationCache, IndexedDB, WebSQL, File System API, FileReader, LocalStorage, SessionStorage, cookies… and even notifications (as mentioned by Jake Archibald – notifications contain data fragments). I was particularly impressed by the “npm in your browser” demo, which can store over a gigabyte of data for offline browsing of npm. My biggest concern from this session was that even today there is no way for a developer to provision data storage for an application that is safe from browser eviction when the system is under space constraints. Hopefully this will be rectified as the Quota Management API evolves.

Nolan’s slides from this session can be found here.

Components and Modules

The components and modules session took a dual focus: web components and React components. Even with this dual focus, many common themes were discussed, such as performance, optimization, bundling and portability. Former best practice in this space (bundling components together for optimized delivery) was questioned; with the adoption of HTTP/2, unbundled modules provide granular cache invalidation without the overhead of managing multiple connections, as is the case with HTTP/1.x. Through intelligent servers, resources can be pushed to browsers based on previous usage patterns, optimizing for resources that are commonly accessed together. It was discussed that tooling in this space should allow for migration to HTTP/2 without the additional overhead of maintaining two ways to deliver the application. This conversation led well into the next topic: progressive enhancement.

Progressive Enhancement

Progressive Enhancement, it seems, is a topic always introduced with the question “who would turn off JavaScript?”, and Edge Conference was no exception. Fortunately, the discussion went much deeper into this nuanced topic. The panel discussed the complexities of supporting ES6 and ES7 syntax changes without breaking browsers limited to ES5 (and below). It was interesting to see that the “baseline” of support (below which everything breaks) is a very loose concept, directly linked to return on investment.

Breakout Sessions

One of the things I love about Edge is the breakout sessions. These sessions provide an opportunity to explore many of the concepts raised in the panel discussions in more depth. With access to such a wealth of knowledge in the room, breakout sessions are a fantastic opportunity to seek insight into complex problems and work with vendors and standards bodies to smooth out rough edges. I attended sessions on installable web apps, components and front end data. I’d love to see these sessions recorded in the future, as there were parallel sessions I’d have loved to attend but missed out on; the format of these sessions makes recording quite difficult, however.

Wrap up!

As always, Edge Conference was a blast this year. If you missed out this time, all the videos of the conference will be available on the website, professionally captioned and with content search.

All in all, I really enjoyed attending Edge Conference and am looking forward to future installments!

ko.datasource–Enable simple binding to remote data sources

When using Knockout, we often want to bind to data retrieved from an AJAX request. In addition, it is not always efficient to populate the entire view model when the page loads. When paging, sorting and filtering of our data are added into the mix, if we’re not careful we can end up with a lot of accidental complexity and code duplication on our hands. To avoid that, I’ve created a small plugin for Knockout that makes interacting with remote data sources simple. Let’s take a look.

The Bindings

I want my knockout bindings to be as simple as possible. Regardless of where I get my data, my view should be written as if it were completely unaware of where the data came from. In that vein, our markup should be the same as if we were binding to a standard Knockout observable or observable array.

Let’s assume we want to display some data in a table. Our markup might look like this:

<table>
    <thead>
        <tr>
            <th>Id</th>
            <th>Name</th>
            <th>Sales</th>
            <th>Price</th>
        </tr>
    </thead>
    <tbody data-bind="foreach: items">
        <tr>
            <td data-bind="text: id"></td>
            <td data-bind="text: name"></td>
            <td data-bind="text: sales"></td>
            <td data-bind="text: price"></td>
        </tr>
    </tbody>
</table>

Nothing too complicated, or unexpected. Just the basic foreach and text bindings.

Of course, since we might expect a fair few results, we want to paginate our data. Let’s add some simple bindings for this too.

<span id="pager">
    <button data-bind="click: items.pager.first">First</button>
    <button data-bind="click: items.pager.previous">Prev</button>
    <span class="summary">Page 
        <span data-bind="text: items.pager.page"></span> of 
        <span data-bind="text: items.pager.totalPages"></span>
    </span>
    <button data-bind="click: items.pager.next">Next</button>
    <button data-bind="click: items.pager.last">Last</button>
</span>

Again, nothing too drastic – some simple buttons for pagination and an indication of what page we’re on. Do note however that we’re binding to a pager attached to the items. More on that in a moment.

The view model

The view model is where the magic happens; it’s where any rich interactions are specified. To keep our models DRY, however, we probably don’t want to specify things like pagination in every view model we create. With the ko.datasource plugin, we can keep things simple:

var viewModel = {
    items: ko.observableArray([]).extend({
        //getAnimals is a data service to populate the viewmodel
        datasource: getAnimals,
        pager: {
            limit: 3
        }
    })
};

We have an observableArray of items, just as if we were working with data in memory, but two extenders have been applied to the array – a datasource and a pager.

The datasource extender takes a single parameter: a function that will call into our remote data source. This function could, for example, use jQuery’s AJAX API to call a web service. We’ll take a deeper look at this in a second.

The pager extender also takes a single parameter: an object indicating how many items we would like to see per page. It also attaches itself to our observable array to expose additional pagination properties and methods. This is what the pager in our view is bound to.
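The plugin’s actual pager implementation lives in the GitHub repository linked below; purely to illustrate the shape of the API the view binds to, a rough sketch of such an extender might look like this (illustrative code of my own, not the plugin’s source; assumes Knockout is loaded, e.g. via npm in Node):

var ko = require('knockout'); // in the browser, ko is a global

// Illustrative sketch only – see the GitHub repo for the real extender.
ko.extenders.pager = function (target, options) {
    var pager = target.pager = {
        limit: ko.observable(options.limit),
        page: ko.observable(1),
        totalCount: ko.observable(0)
    };

    // derived page count, recalculated whenever the total or limit changes
    pager.totalPages = ko.computed(function () {
        return Math.max(1, Math.ceil(pager.totalCount() / pager.limit()));
    });

    // navigation methods the view's pager buttons bind to
    pager.first = function () { pager.page(1); };
    pager.previous = function () { pager.page(Math.max(1, pager.page() - 1)); };
    pager.next = function () { pager.page(Math.min(pager.totalPages(), pager.page() + 1)); };
    pager.last = function () { pager.page(pager.totalPages()); };

    return target;
};

Because every piece of pager state is an observable, the “Page x of y” summary in the view updates itself whenever the page or total count changes.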

The remote call

As previously mentioned, we can use our favorite library to call into whatever remote data source we want. Let’s say we’re using jQuery’s AJAX API:

function getAnimals() {
    $.ajax({
        type: 'GET',
        url: '/my/remote/endpoint/',
        data: {
            limit: this.pager.limit(),
            startIndex: this.pager.limit() * ( this.pager.page() - 1 )
        },
        context: this,
        success: function( data ) {
            this( data.rows );
            this.pager.totalCount( data.count );
        },
        dataType: 'json'
    });
}

The datasource (the extended observable array) is set as the context for the this keyword, which means we have access to the pager options. When data is successfully retrieved from our AJAX call, we can replace the data in our datasource by writing to the observable:

var observableArray = this;
observableArray(data.rows);
//or just
this(data.rows);

Additionally, since we’re using a pager, we should tell the pager how many results the server has in total so that it can figure out how many pages there are:

this.pager.totalCount( data.count );

An important thing to note is that the function we use here behaves like a computed observable; that is, it runs automatically whenever any of its dependencies change. This means that when our pager changes page, or when the row limit (records per page) changes, the remote call is re-evaluated to fetch the new data to show. It also means that if we pass dependencies from our view model as parameters to the remote call, changing them will update the datasource too. This is very handy, as we can use this feature for additional server-side filtering if needed.
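To make the dependency tracking concrete, here is a minimal, self-contained sketch using a plain ko.computed and a stubbed-out fetch in my own place-holder code (no jQuery or plugin involved; assumes Knockout is available, e.g. via npm in Node):

var ko = require('knockout'); // in the browser, ko is a global

var filter = ko.observable('');     // an extra dependency from the view model
var items = ko.observableArray([]);

// stand-in for the remote call; the real plugin wraps this in a computed
function fakeFetch(term) {
    var all = ['cat', 'dog', 'catfish'];
    return all.filter(function (name) { return name.indexOf(term) !== -1; });
}

// reading filter() inside the computed registers it as a dependency,
// so changing the filter automatically re-runs the "remote" call
ko.computed(function () {
    items(fakeFetch(filter()));
});

console.log(items().length); // 3
filter('cat');               // triggers re-evaluation automatically
console.log(items().length); // 2 ('cat' and 'catfish')

The same mechanism is what re-runs getAnimals when the pager’s page or limit changes.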

That’s it!

Seriously, that’s all you need. Our datasource will evaluate when the view is first bound to it, and will then be re-evaluated whenever our remote call’s dependencies change.

Check out a live example here:  http://jsfiddle.net/craigcav/UzUBm/

And download the source here: https://github.com/CraigCav/ko.datasource

Enjoy.

Acknowledgements

None of this would’ve been possible without the inspiring work Ryan Niemeyer put into documenting KnockoutJS on his blog. In particular, without the following two posts this plugin probably wouldn’t exist.

http://www.knockmeout.net/2011/04/pausing-notifications-in-knockoutjs.html

http://www.knockmeout.net/2011/06/lazy-loading-observable-in-knockoutjs.html

Binding multiple event handlers to JqGrid

[UPDATE: As of version 4.3.2 (April 2012) JqGrid now uses jQuery events as described below]

If you’ve ever used JqGrid for anything more than its simple out-of-the-box defaults, chances are you’ve come across problems related to how JqGrid handles events. As it stands (version 4.3.0 at the time of writing), the JqGrid API allows one and only one event handler per user event. In particular, this can be a significant hurdle when constructing plugins to interpret user interactions with the grid. This post demonstrates an alternative approach using jQuery events, allowing multiple handlers to be bound to the grid.

A full jQuery event plugin for JqGrid using this approach can be found on my github page: https://github.com/CraigCav/jqGrid.events

The JqGrid Event API

To handle events, JqGrid provides an API for specifying a callback to be executed when a named event occurs:

jQuery("#grid").jqGrid({
    ...
    onSelectRow: function(id) {
        //do something interesting
    },
    ...
});

This callback can be provided as an option to the grid widget on initialization (as shown), or it can be specified after the grid has been initialized using the setGridParam method. Additionally, it can be set globally by extending the grid defaults:

jQuery.extend(jQuery.jgrid.defaults, {
    ...
    onSelectRow: function (id, selected) {
        //do something interesting 
    },
    ...
});

Unfortunately, this API limits consumers to a single callback for each event. As a developer consuming this API, I may wish to provide default settings for handling an event (say, grid load) and also provide instance-specific handlers for the same event; the API is too restrictive to achieve this.

Let’s explore an alternative approach to handling user interactions.

jQuery Events

jQuery provides a standard suite of functionality specifically for registering behaviors that take effect when a user interacts with the browser. Of particular interest is the bind method; it allows for an event handler to be attached directly to an element.

$(element).bind('click', function(){
    //handle click event 
});

A key point to note about the “bind” method is that multiple handlers can be bound to an element. Each handler will be called when the event is triggered for that element.

Applying this mechanism could give us what we cannot easily achieve using the JqGrid API alone: multiple handlers. Unfortunately, however, JqGrid does not currently execute handlers attached in this manner, so our work isn’t over yet.

Triggering Events

We can use jQuery’s trigger or triggerHandler alongside our bind calls to ensure our events get triggered. Perhaps in some later release these methods will be invoked within JqGrid itself (I might submit a patch if I get around to it). Until then, we can wire up the triggers for each interesting event by setting the JqGrid options globally:

jQuery.extend(jQuery.jgrid.defaults, {
    ...
    onSelectAll: function (ids, selected) {
        $(this).triggerHandler("selectAll.jqGrid", [ids, selected]);
    },
    onSelectRow: function (id, selected) {
        $(this).triggerHandler("selectRow.jqGrid", [id, selected]);
    },
    ...
    //etc
    ...
});

Each of the available JqGrid event callbacks is now used to trigger the appropriate jQuery event handlers. Instead of a single extension point for handling events, we can now register as many handlers for each event as we like using bind:

$(element).bind('selectRow.jqGrid', function(event, id, selected){
    //do something interesting 
});

$(element).bind('selectRow.jqGrid', function(event, id, selected){
    //and something awesome
});

The full source of my jQuery event plugin for JqGrid can be found on my github page here: https://github.com/CraigCav/jqGrid.events

Simple inline editing with knockoutjs

A little while back, I stumbled upon a neat little trick for writing pages with simple inline editing. I’m pretty sure I picked up this technique from Ryan Niemeyer’s excellent blog (although I can’t find a direct link), and I’ve seen it crop up in a few other places too. The approach has come in so handy that I’ve pieced together a little Knockout binding handler to make wiring it up even simpler. Let’s take a look.

Inline Editing

Here’s what I mean when I talk about “Inline Editing”:


Example from http://addyosmani.github.com/todomvc/

The idea is that rather than having to navigate to a separate form for editing on-screen data, the user triggers an inline editor to appear instead (usually by single/double clicking), saving them time and overall improving their experience.

The Approach

The trick is that there are actually two versions of the editable element – one for viewing, and one for editing:

<div class="editor">
    <div class="view"><a href="#"></a>Click to add</div>
    <input class="edit" type="text" />
</div>

Either the “view” element or the “edit” element will be displayed; the other will be hidden using CSS ({ display: none }). We can switch between the two by adding an additional CSS class to the “editor” element, depending on the editing state of the view model property:

.edit {
    display: none;
}

.editing .edit {
    display: block;
}

.editing .view {
    display: none;
}

Knockout Model

We need triggers to toggle the editing state – one on double clicking, and one when we’ve finished editing. Let’s add these triggers to the view model:

var viewModel = {
    //the item we're editing
    item: ko.observable(),

    //track whether we are editing
    editing: ko.observable(),

    // edit an item
    editItem: function () {
        this.editing( true );
    },

    // stop editing an item.
    stopEditing: function () {
        this.editing( false );
    }
};

Knockout Bindings

Since the editing state on the view model is an observable, we can use the CSS binding from knockout to apply the CSS class:

<div class="editor" data-bind="css: { editing: editing }">

We can use the text binding for the “view” element and the value binding for the “edit” input. We then need to trigger “editItem” on double click:

<div class="view" data-bind="
        event: { dblclick: editItem }, 
        text: item() || 'Double click to edit'">

…and we need to trigger “stopEditing” when we’re done editing. For simplicity, let’s use the blur binding here*:

<input class="edit" type="text" 
       data-bind="value: item, event: { blur: stopEditing }" /> 

And that’s it – inline editing. If you’ve followed so far, you should now have something that looks like this fiddle.

*We should use other bindings here to detect the Enter key being pressed, but let’s keep things simple (see here for the appropriate bindings if you’re curious).

A little extra

Ok, so far so good, but what happens if we have a few of these editors on our page for different forms? If we apply this approach as-is, we’d have two functions and one extra property for each inline editor. That could get out of hand quickly. Fortunately, we can apply an approach similar to the one I mentioned in my previous post – extending each model property that we want an inline editor for:

ko.extenders.liveEditor = function (target) {
    target.editing = ko.observable(false);

    target.edit = function () {
        target.editing(true);
    };

    target.stopEditing = function () {
        target.editing(false);
    };

    return target;
};

Applying this extender to our model property adds the required observable to track the editing state, plus the trigger methods for toggling that state.
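With the extender registered (repeated below so the snippet stands alone), applying it to a model property is a one-liner. A small sketch (the property name and value are hypothetical; assumes Knockout is available, e.g. via npm in Node):

var ko = require('knockout'); // in the browser, ko is a global

// the liveEditor extender from above
ko.extenders.liveEditor = function (target) {
    target.editing = ko.observable(false);
    target.edit = function () { target.editing(true); };
    target.stopEditing = function () { target.editing(false); };
    return target;
};

var item = ko.observable('milk').extend({ liveEditor: true });

console.log(item.editing()); // false
item.edit();                 // e.g. bound to dblclick in the view
console.log(item.editing()); // true
item.stopEditing();          // e.g. bound to blur
console.log(item.editing()); // false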

Going one step further still, we can actually have this extender applied as part of a binding handler – that way our model doesn’t have to care about the editor extender at all:

ko.bindingHandlers.liveEditor = {
    init: function (element, valueAccessor) {
        var observable = valueAccessor();
        observable.extend({ liveEditor: this });
    },
    update: function (element, valueAccessor) {
        var observable = valueAccessor();
        ko.bindingHandlers.css.update(element, function () {
            return { editing: observable.editing };
        });
    }
};

Link to full sample on jsFiddle.net

Simple client storage for view models with AmplifyJS and Knockout

From time to time, for a variety of reasons, it can be desirable to store data in client storage. There are many ways to do this, and AmplifyJS is a very neat library that provides a consistent client-storage API that works across most browsers. This post explores a handy technique for utilizing client storage with KnockoutJS-based view models.

Why would I ever use client storage?

The canonical example is using client storage to improve user experience: remembering user preferences or previously entered values so that the user doesn’t have to start all over.

A good example of client storage being applied can be found in the tutorials for KnockoutJS. User progress in the tutorial is “remembered”, and the option to restore it is given when revisiting the site at a later time. The Knockout tutorial site uses AmplifyJS under the hood to record user progress into client storage.

AmplifyJs

It’s pretty easy to use AmplifyJS on your site. After adding the appropriate script references, you can use amplify like this:

var item = { foo: "bar" };

amplify.store( "storeExample1", item );

The value “storeExample1” is the key that the item is stored against, and can be used to later retrieve the value as follows:

var item = amplify.store( "storeExample1" );

Now that we’ve seen how easy it is to store and retrieve values using AmplifyJS, we can use this API to store user data when it changes and restore the value when the user comes back to the page later. This can be easier said than done, though; we have to remember to call amplify to store the new value whenever the data changes.

Knockout to the rescue

One of Knockout’s primary components, the observable, notifies subscribers of changes to its underlying data. Its counterpart, the computed observable, is a function dependent on one or more other observables, re-evaluating every time any of these dependencies change. This allows us to easily call amplify to store data each and every time it is updated:

var target = ko.observable(amplify.store( "key" )); //populate from amplify

ko.computed( function() {
    amplify.store( "key", target()); //store new value on every change
});

target("some new value"); //setting the new value, triggering the computed observable

Using this technique, we can keep our client-storage up-to-date on every change to an observable property.

One unfortunate side effect of this approach is that we now have to write a computed observable for every observable property we wish to store in client storage. Fortunately, we can apply another Knockout technique to keep our codebase DRY.

Extending Observables

Knockout includes a neat extension point that allows developers to easily augment knockout observables with additional functionality – extenders.

Applying this technique to our observable gives us a tidy way to apply client storage to any observable property:

var target = ko.observable( "default value" ).extend( { localStore: "key" } );

Any changes made to the observable value will be stored using amplify, and will be restored into the observable value when returning to the site at a later time.

The extender shown can be implemented as follows:

ko.extenders.localStore = function (target, key) {
    var value = amplify.store(key) || target();

    var result = ko.computed({
        read: target,
        write: function(newValue) {
            amplify.store(key, newValue);
            target(newValue);
        }
    });

    result(value);

    return result;
};

Here is a fiddle that demonstrates the localStore extender in action: http://jsfiddle.net/craigcav/QSCsK/
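If you want to experiment with the extender outside the browser, amplify can be stubbed with a plain in-memory store. The stub below is my own stand-in, not part of AmplifyJS, and the property names are hypothetical (assumes Knockout is available, e.g. via npm in Node):

var ko = require('knockout'); // in the browser, ko is a global

// in-memory stand-in for amplify.store, so the extender runs headless;
// a one-argument call reads, a two-argument call writes
var backing = {};
var amplify = {
    store: function (key, value) {
        if (arguments.length === 1) return backing[key];
        backing[key] = value;
    }
};

// the localStore extender, exactly as above
ko.extenders.localStore = function (target, key) {
    var value = amplify.store(key) || target();

    var result = ko.computed({
        read: target,
        write: function (newValue) {
            amplify.store(key, newValue);
            target(newValue);
        }
    });

    result(value);

    return result;
};

var userName = ko.observable('default value').extend({ localStore: 'userName' });
userName('new value');

console.log(amplify.store('userName')); // 'new value'

Every write to the extended observable now flows through the computed’s write function, landing in the store before updating the underlying observable.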

Code Kata: FizzBuzz–How did you fare?

In my last post, I introduced the concept of a code-kata (coding exercises) and gave an example problem to solve:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

For those of you who tried the example, how did you find it? Pretty simple right?

Did you make it look this easy, though? Whoever recorded that video nicely demonstrates the benefits of honing their development environment – they even go as far as disabling the mouse to train themselves to use the keyboard more effectively.

How well was your code designed? Is it adaptable to change?

Let’s move the goal posts

In real life, requirements change all the time. We can easily practice writing adaptable code by throwing some new requirements into the mix.

Try to take these steps one by one, as if a client were drip-feeding them to you – try not to read ahead too far!

  • Try extending the application to support another range of numbers, such as 15-175
  • Try extending the application such that a user could provide any range of numbers (such as from the console, or from configuration)
  • Try extending the application to support new rules – output “Baz” for numbers divisible by 4
  • Instead of printing the numbers to the console, try extending the app to write to a file
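One shape the code can take after a few of these changes is to treat the rules and the output target as data, so a new divisor or destination is configuration rather than an edit. A sketch in JavaScript (my own illustration; the C# kata linked below takes its own approach):

// Rules as data: adding "Baz" for 4 or changing the range means
// changing configuration, not rewriting the loop.
function fizzBuzz(start, end, rules, write) {
    for (var n = start; n <= end; n++) {
        var words = rules
            .filter(function (rule) { return n % rule.divisor === 0; })
            .map(function (rule) { return rule.word; })
            .join('');
        write(words || String(n));
    }
}

var rules = [
    { divisor: 3, word: 'Fizz' },
    { divisor: 5, word: 'Buzz' },
    { divisor: 4, word: 'Baz' }
];

// Writing to the console here; passing a different `write` function
// covers the file-output requirement without touching the core logic.
fizzBuzz(1, 15, rules, console.log); // 12 -> "FizzBaz", 15 -> "FizzBuzz"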

Put your money where your mouth is, show me the codez!

Ok, my C# kata can be found on my GitHub page here: https://github.com/CraigCav/FizzBuzz. Over time, perhaps more implementations will appear there in other languages, such as F#. Interestingly enough, I used this example to hone my own environment: to learn the Git commands (instead of using a GUI) by practicing them over and over with this small, simple example.

Code Kata: Training for your mind

The concept of a code-kata is a simple one, and its premise is borrowed from its martial arts counterpart:

Kata (型 or 形 literally: form) is a Japanese word describing detailed choreographed patterns of movements practiced either solo or in pairs.

Wikipedia

In a code-kata, we practice coding problems to train our mind’s muscle memory, so that when faced with real-world coding problems, the solutions are at the forefront of our mind.

The general idea is to practice solving problems in new ways, new languages, or in new environments. By practicing techniques that are just outside of our comfort zone, we can push ourselves to learn. Additionally, we can use this same approach to tune our environment in an observable manner in order to reduce any friction in the way we are working.

 

FizzBuzz

I came across the FizzBuzz code-kata a few years back and I’ve seen a few slightly different versions of it in my travels. In fact, this “problem” even came up in an interview I had once, so practicing the kata definitely paid off!

Here’s the FizzBuzz problem description:

Imagine the scene. You are eleven years old, and in the five minutes before the end of the lesson, your math teacher decides he should make his class more "fun" by introducing a "game". He explains that he is going to point at each pupil in turn and ask them to say the next number in sequence, starting from one. The "fun" part is that if the number is divisible by three, you instead say "Fizz" and if it is divisible by five you say "Buzz". So now your math teacher is pointing at all of your classmates in turn, and they happily shout "one!", "two!", "Fizz!", "four!", "Buzz!"… until he very deliberately points at you, fixing you with a steely gaze… time stands still, your mouth dries up, your palms become sweatier and sweatier until you finally manage to croak "Fizz!". Doom is avoided, and the pointing finger moves on.

So of course in order to avoid embarrassment in front of your whole class, you have to get the full list printed out so you know what to say. Your class has about 33 pupils and he might go round three times before the bell rings for break time. Next math lesson is on Thursday. Get coding!

Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".

This kata is best used to introduce the concepts of Test-Driven Development: red, green, refactor! If you’re new to TDD, or even if you’re just looking to hone your technique, this is a neat little example to try, and there are dozens (if not more) of ways to solve this problem.

Write your first test, watch it fail, fill out the implementation, make it pass, then clean up (refactor).

In my next post, I’ll spice it up a little. If you haven’t tried the kata yet, don’t peek – I don’t want to spoil your fun!

Applying Conventions in ASP.NET MVC

I recently came across a interesting post from @ntcoding demonstrating the flexibility and power of FubuMvc’s HTML conventions. The post demonstrates the benefits of applying custom conventions, and provides examples such as displaying a dropdown list of Enums whenever a view model has a Enum property.

Applying these types of conventions really helps to DRY up your codebase, and typically this is something FubuMVC shines at. That said, if you’re stuck using ASP.NET MVC, not all is lost; there are some handy extension points you can use to keep things DRY.

Keeping the Context

Let’s keep Nick’s example view model, as it makes for a nice comparison between the different approaches. As a reminder, we’re using a model called CreateBookInputModel that looks like this:

    public class CreateBookInputModel
    {
        public String Title { get; set; }

        public String Genre { get; set; }

        public String Description_BigText { get; set; }

        public BookStatus BookStatus { get; set; }

        public IList<string> Authors { get; set; }

        public HttpPostedFileBase Image { get; set; }
    }

The great thing about Nick’s example is that it provides a nice variety of opportunities to demonstrate the application of conventions.

Editor Templates

The first variation I’m going to make from Nick’s FubuMVC example is that rather than using Spark and listing out a label and input for each property on the model, I’ll call into ASP.NET MVC’s editor template HTML helper to build up a view:

@Html.EditorForModel()

This helper will attempt to resolve a view for the model object. In this case, since there is no custom view for the CreateBookInputModel, ASP.NET MVC will use the default editor template for Object. Brad Wilson from the MVC team does a great job of summarizing the behavior and responsibilities of the default object template:

The Object template’s primary responsibility is displaying all the properties of a complex object, along with labels for each property. However, it’s also responsible for showing the value of the model’s NullDisplayText if it’s null, and it’s also responsible for ensuring that you only show one level of properties (also known as a “shallow dive” of an object).

Using the default ASP.NET MVC editor template, this helper will render an input for each of the properties on our model, wrapped in some additional markup to provide hooks for styling with CSS.

To keep the markup comparable, and to demonstrate the first ASP.NET MVC extension point, let’s take a look at providing our own template:

Shared/EditorTemplates/Object.cshtml

@if (Model == null) {
    <text>@ViewData.ModelMetadata.NullDisplayText</text>
} else if (ViewData.TemplateInfo.TemplateDepth > 1) {
    <text>@ViewData.ModelMetadata.SimpleDisplayText</text>
} else {
        foreach (var prop in ViewData.ModelMetadata.Properties.Where(pm => pm.ShowForDisplay && !ViewData.TemplateInfo.Visited(pm))) {
            if (prop.HideSurroundingHtml) {
                <text>@Html.Editor(prop.PropertyName)</text>
            } else {
                <p>
                    @Html.Label(prop.PropertyName)
                    @Html.Editor(prop.PropertyName)
                    @Html.ValidationMessage(prop.PropertyName)
                </p>
            }
        }
}

The only deviation from the default template here is that I’ve replaced the "wrapper" markup with a paragraph tag.

String Template

Much like Fubu MVC, simple strings work straight out of the box, providing a simple input box for the user. You can customize this default freely by dropping your own Editor Template for string into your views. The default string template looks something like this:

Shared/EditorTemplates/String.cshtml

@Html.TextBox("", ViewData.TemplateInfo.FormattedModelValue)

Custom Conventions

The next interesting case Nick covers is overriding the default String convention to instead display a textarea for any property with a "_BigText" suffix. It is fairly typical in ASP.NET MVC to see a UIHintAttribute applied to each property that requires a custom view to be rendered:

[UIHint("BigText")]
public String Description_BigText { get; set; }

By applying this attribute, ASP.NET MVC will try to find a view called "BigText" to use for this property, before falling back to the default String template:

Shared/EditorTemplates/BigText.cshtml

@Html.TextArea("", ViewData.TemplateInfo.FormattedModelValue.ToString(),
                  0, 0, new { @class = "text-box multi-line" })

This approach works just fine; however, we now need to litter our model with this attribute for every property with a "_BigText" suffix – an approach that is a little too error prone for my liking. Let’s see if we can do better.

ModelMetadataProvider

Using the UIHintAttribute is one way we can provide ASP.NET MVC with additional metadata to guide it to finding a view to render the view model property. ASP.NET MVC internally uses the DataAnnotationsModelMetadataProvider (by default) to provide all the information about a view model, including the metadata from the attributes that a view model may be decorated with. Although attributes provide an easy entry point for providing metadata, it’s easy enough to provide an overridden implementation that also includes additional metadata, based on some custom convention:

public class CustomModelMetadataProvider : DataAnnotationsModelMetadataProvider
{
    protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName)
    {
        var attributeList = attributes.ToList();
        var modelMetadata = base.CreateMetadata(attributeList, containerType, modelAccessor, modelType, propertyName);

        ProvideTextAreaForBigText(modelMetadata, propertyName);

        return modelMetadata;
    }

    private void ProvideTextAreaForBigText(ModelMetadata modelMetadata, string propertyName)
    {
        if (propertyName != null && propertyName.EndsWith("_BigText") && string.IsNullOrEmpty(modelMetadata.TemplateHint))
            modelMetadata.TemplateHint = "BigText";
    }
}

By inheriting DataAnnotationsModelMetadataProvider, we continue to support custom attributes to add metadata about our model (such as UIHint), but I’ve added an additional step here to provide a "TemplateHint" to any property with the "_BigText" suffix. The TemplateHint tells ASP.NET MVC to try to find a view called "BigText" to use to render this property. If the view is not found, it will simply fall back to the default template for this object (String). The last requirement to use this metadata provider is that it needs to be registered either in ASP.NET MVC’s DependencyResolver, or at the static registration point ModelMetadataProviders.Current on application startup.
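Stripped of the MVC machinery, the convention itself is just a suffix check. A minimal sketch, independent of any framework (templateHintFor is a hypothetical name, not part of ASP.NET MVC):

```javascript
// Minimal sketch of the naming convention outside any framework:
// a property name with the "_BigText" suffix yields the "BigText" hint,
// otherwise we signal "use the default template" with null.
function templateHintFor(propertyName) {
    if (propertyName && propertyName.endsWith('_BigText')) {
        return 'BigText';
    }
    return null; // fall back to the default template
}

console.log(templateHintFor('Description_BigText')); // BigText
console.log(templateHintFor('Title'));               // null
```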

 

Getting smart with Enums

One particularly nice part of Nick’s post is where he provides a convention such that if a view model property is an Enum, a select input will be rendered containing each of the Enum items.

Doing this with ASP.NET MVC isn’t too hard either, mind you. Again, we can fall back on the ModelMetadataProvider extension point to apply this convention:

private void ProvideSelectListForEnums(ModelMetadata modelMetadata, Type modelType)
{
    if (modelType != null && modelType.IsEnum && string.IsNullOrEmpty(modelMetadata.TemplateHint))
    {
        modelMetadata.TemplateHint = "SelectList";
        var values = Enum.GetValues(modelType).Cast<object>();
        var items = values.Select(entry => new SelectListItem { Text = Enum.GetName(modelType, entry), Value = entry.ToString() });
        var selectList = new SelectList(items, "Value", "Text", modelMetadata.Model);
        modelMetadata.AdditionalValues.Add("SelectList", selectList);
    }
}

This is slightly more involved than the previous convention, but let me walk you through it. First of all, we only want to apply the convention if the model property is an Enum. If it is, we provide a "TemplateHint" to tell ASP.NET MVC to try to find an appropriate view for the select list – let’s call this view "SelectList" so that it’s discoverable by other developers. Next, we retrieve the values for the select list, and use them to create SelectListItems. Finally, we add the select list values as additional metadata for our model, so that they are available in the view.

Speaking of the view, it looks a little like this:

@{
    var modelMetadata = ViewContext.ViewData.ModelMetadata;
   
    var values = modelMetadata.AdditionalValues.ContainsKey("SelectList")
        ? modelMetadata.AdditionalValues["SelectList"] as IEnumerable<SelectListItem>
        : Enumerable.Empty<SelectListItem>();
}
@Html.DropDownList("", values, "Choose..")
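The mapping at the heart of this convention – enum names and values into select-list items – can be sketched language-neutrally. This is an illustration only (BookFormat and toSelectListItems are hypothetical names), mirroring what Enum.GetValues and Enum.GetName do in the C# above:

```javascript
// Hypothetical enum-like object standing in for a C# enum.
const BookFormat = { Paperback: 0, Hardback: 1, Ebook: 2 };

// Turn each name/value pair into a { Text, Value } item, the same shape
// SelectListItem exposes to the view.
function toSelectListItems(enumObj) {
    return Object.keys(enumObj).map(function (name) {
        return { Text: name, Value: String(enumObj[name]) };
    });
}

console.log(toSelectListItems(BookFormat));
// [{ Text: 'Paperback', Value: '0' }, { Text: 'Hardback', Value: '1' }, { Text: 'Ebook', Value: '2' }]
```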

Let’s wrap this up – Comparison and Final Thoughts

So we’ve pretty much covered the building blocks necessary to apply a DRY-er, convention-driven approach to building MVC apps. But wait – what about file uploads and string collections? Nick provided these in his example! You’re right – I didn’t bother implementing these, mainly because they follow exactly the same pattern already laid out above. I’ll leave them as an exercise for the reader (and am happy to take pull requests).

What I do find interesting is that by following this approach in ASP.NET MVC, a few of Nick’s concerns are addressed:

  • What happens if the markup is complex and is not so easily created in code?
  • What happens if I need to apply specific classes?
  • What about when I want to override conventions – what pitfalls await me?

As views can be used to specify the markup for our conventions, complex markup is not so much of an issue, as we have the full power of the MVC view engine at our disposal. Additionally, since the existing ASP.NET MVC extension points still apply, it is not too difficult to handle specific cases where the conventions should not apply. For example, we can override our convention for a given property by providing a UIHintAttribute – it will take precedence over the convention, and all is happy in the world.

I will mention, however, that one thing much, much more favorable about FubuMVC’s html conventions is that it is easier to break our conventions out into standalone, pluggable parts.

This gets rather tricky in ASP.NET MVC since the ModelMetadataProvider is designed to be a singly registered component. Of course, as Nick points out, you can always just use FubuMVC’s html conventions in ASP.NET MVC ;)

More refactorings from the trenches

A little earlier this week I came across these JavaScript functions while visiting some code in the application I’m working on:

function view_map(location) {
    window.open('http://www.google.co.uk/maps?q=' + location.value);
    return false;
}

function view_directions(form) {
    var fromLocation = jQuery(form).find('.from_point').get(0);
    var toLocation = jQuery(form).find('.to_point').get(0);

    window.open('http://www.google.co.uk/maps?q=from:' + fromLocation.value + ' to:' + toLocation.value);
    return false;
}

 

These functions are used to open links to Google Maps* pages.

Take note – to consume either of these JavaScript functions, you have to provide specific DOM elements, and the form needs to have other fields stashed away in the mark-up.

So let’s see how these were being consumed:

 

<a href="#" title="Click to show map" onclick="view_map(jQuery(this).closest('.journey').find('.to_point').get(0));">To</a>

 

<%-- and much further down in the markup --%>

<input name="FromLatLng" class="hidden from_point" type="text" value="<%= Model.FromLatLng %>"/>
<input name="ToLatLng" class="hidden to_point" type="text" value="<%= Model.ToLatLng %>"/>

 

Yuck. There’s not a lot going for this code.

Particularly nasty is that the code traverses the DOM looking for particular fields. It then builds up a query string based on the field values and, once the link is clicked, opens it in a new window.

 

Let’s try again.

In this example, the values eventually passed to the JavaScript function to construct the URI are actually known upfront. So instead, let’s build a small helper extension to construct our Uri from a given location. We’ll hang this helper off of the UrlHelper for convenience:

 

public static string GoogleMap(this UrlHelper urlHelper, string location)
{
    var uri = new UriBuilder("http://www.google.co.uk/maps")
                {
                    Query = string.Format("q={0}", HttpUtility.UrlEncode(location))
                };

    return uri.ToString();
}

 

Then, let’s consume it:

 

<a href="<%= Url.GoogleMap(Model.ToLatLng) %>" target="_blank" title="Click to show map">To</a>

 

This is much less brittle as we no longer traverse the DOM for no reason. We can also remove the unnecessary JavaScript functions, instead using the target attribute of the anchor to provide the same functionality.
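For comparison, the encoding the helper performs can be sketched client-side too. This is an illustration only (googleMapUrl is a hypothetical function, and the coordinate value is made up); encodeURIComponent plays the role of HttpUtility.UrlEncode:

```javascript
// Sketch of the same URL construction, client-side. Assumes the location
// is a simple "lat,lng" string; the comma is percent-encoded as %2C.
function googleMapUrl(location) {
    return 'http://www.google.co.uk/maps?q=' + encodeURIComponent(location);
}

console.log(googleMapUrl('51.5074,-0.1278'));
// http://www.google.co.uk/maps?q=51.5074%2C-0.1278
```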

 

 

 

*Google provides means to embed maps inside your application and you should use them – the approach shown above directs users away from your site.

A refactoring from the trenches

A little earlier this week I came across this JavaScript function while visiting some code in the application I’m working on:

 

function filter_form(form) {
    var form = jQuery(form);
    var action = form[0].action;

    //get the serialized form data
    var serializedform = form.serialize();

    //redirect
    location.href = action + "?" + serializedform;
}
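As an aside, the serialize() call above roughly amounts to URL-encoding each field’s name and value and joining the pairs with "&". A minimal sketch of that idea (serializePairs and the field names are illustrative, not jQuery’s actual implementation – note jQuery additionally encodes spaces as "+" rather than "%20"):

```javascript
// Rough sketch of form serialization: URL-encode each name/value pair
// and join the pairs with '&', as in a query string.
function serializePairs(pairs) {
    return pairs.map(function (pair) {
        return encodeURIComponent(pair[0]) + '=' + encodeURIComponent(pair[1]);
    }).join('&');
}

console.log(serializePairs([['from', 'London'], ['to', 'New York']]));
// from=London&to=New%20York
```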

 

The function takes a form, serializes it, appends the serialized values to the URL, and redirects to it. I found the need to do this a little odd, so I had a look at the calling code:

 

<form action="<%= Url.Action<SomeController>(x => x.FilterAction()) %>"
      method="post"
      class="form-filter">

       <%-- some input fields here --%>

        <a href="#" onclick="var form = jQuery(this).closest('.form-filter'); filter_form(form); return false;"
           title="Filter this"
           class="ui-button ui-state-default"
           id="filter-form"
           rel="filter">Go</a>

</form>

 

The exhibited behaviour is that clicking the link (styled as a button) causes a redirect to the target page with the addition of some query parameters.

Although we can see the developer’s intent (providing filters as part of the query string), the implementation leaves a lot to be desired.

Let’s see if we can do better.

 

We have a couple of clues here already; the form represents “search filters” and so is idempotent (i.e., causes no side effects).

 

Instead of a JavaScript-driven redirect, why not submit the form using HTTP GET:

get: With the HTTP "get" method, the form data set is appended to the URI specified by the action attribute (with a question-mark ("?") as separator) and this new URI is sent to the processing agent.

 

<form action="<%= Url.Action<SomeController>(x => x.FilterAction()) %>" method="get">
       <%-- some input fields here --%>
       <button title="Filter this" type="submit" value="Go">Go</button>
</form>

 

This way we get the same behaviour, minus the unnecessary JavaScript.
