Does your LMS play nicely with others?

With the recent release of Learn with Mobile’s latest RESTful and webhook APIs, I found myself feeling very grateful that open interoperability and integration have started to become standard practice for leading cloud service providers.

Yes, there is still software in every industry whose vendor continues to refuse to play nicely with others.  If you work in a large company or SME, it's probable that your LMS still fits this category, even if it's only a few years old.  But leading newer services all seem to understand the value of sharing and being open with each other.

Sharing and APIs have become a “must have” feature for the L&D thought leaders that will guide learning software choices in the next few years.  But with so many large providers in the learning industry still lagging behind, it’s worth sharing some history that helps us understand the risk of accepting this behaviour.

It's not so many years ago that software industry giants almost succeeded in taking this openness and opportunity away from not only learners, but all businesses, just as IT's impact started to grow.  We all need to learn from past mistakes to empower those who will be creating the future.

An explosion of choice

It's fair to say the world is full of apps.  Between Google's Play Store and Apple's App Store alone there are now more than 5 million apps available, doing everything from keeping you connected with friends to ordering your pizza.

Likewise, over the last decade we have also seen an explosion in online cloud services as a replacement for traditional software for both business and individuals, many of which provide free options for personal or small business use.

The combined effect these two trends have had on software in the last ten years cannot be overstated.  Between them they have helped to return our use of technology to its roots, where everything is designed to do one job but sets out to do it really well.

In theory now, we can always choose the best tools to solve each problem or goal we have.  But this only works if everybody plays nicely together.

Like children we all started by playing nicely

There has never been a time when it's been more important for everyone in the technology playground to play nicely together.  What may surprise you, though, is that this is not a new problem at all.  In fact it's a problem Ken Thompson and Dennis Ritchie had already solved in 1969 as they were working on one of the most influential technologies of all time: Unix.

The theory used in the design of each app in Unix was remarkably similar to the design of today’s apps and cloud services: do one job really well, and then share with others so they can pick up where we left off.

In Unix the sharing was done through pipes and streams; nowadays apps share through RESTful APIs and webhooks, but the core principles of well-behaved software are the same.  Unfortunately we had to go through decades of badly behaved companies who didn't want to share before we could get back here again.

If you first got involved with PCs in the '90s or 2000s it may be hard to believe that all software started off in such a friendly environment.  But software has its origins in an environment where everybody listened to one another, built on each other's ideas, and would happily play with anybody else.  Much like young children in a playground, the only thing that really mattered was whether we were going to have fun while working together.

Playground bullies and the teenage years

As the IT industry matured and PCs started appearing in offices and homes, software hit what is probably best described as its teenage years.

Large technology companies started to create huge pieces of software that did too much.  Software that didn't focus on doing a single task well, but rather on doing lots of things just about OK.  The idea here was simple: if we do lots of things OK, people will start to rely on us for everything.  Once they start using us for stock, they will have no choice but to use us for their order processing, and their accounts, and…  Or once they start using us for HR they'll also have to use us for e-learning, and facility management, and…

Why would anybody use an "OK" service if a great service was available as an alternative?  Well, most of this software started off by being really good at just one thing, but vendors started to bolt on additions that were at best "OK" and at worst unusable.  Before long, like a teenager, applications from major vendors stopped talking to anyone who wasn't in their friendship group.  The once open sharing was replaced with grunts, groans, gestures, and a simple refusal to communicate or share anything with anyone.  The goal for companies selling software was that once you had committed to one solution from a vendor, you were locked in to them.  No longer could you choose the best tool for the job and have everybody get along.  Now you simply had two choices: use your existing vendor for everything, or pay somebody to input and manage all your data (at least) twice so you had the right information in each piece of software you used.  Commercially, for those producing the software, this plan worked well in the short term and a lot of money was made.  For everyone else in business, though, it meant IT became nothing but a growing headache with an unsatisfied demand for more money.

It was these teenage years that created giants such as SAP, Microsoft, Sage, and similar large companies, many of which went on to face anti-trust cases around the world.  They controlled our HR, our office suites, our ERPs and CRMs, our accounts, and our learning, and they wouldn't even work with each other, never mind with anyone else.

As the internet gained more attention, other companies started to pop up with friendly public images, such as Google, Facebook, and a revamped Apple, that spoke nicely about "playing with others" but only if you played the games they wanted, how they wanted you to, and let them control your data and every aspect of the game.

Whether an old-style bully that kept demanding more and more of your lunch money, or a new-style one that was friendly as long as you did what they asked and then shared your secrets behind your back, these IT companies managed to hurt all industries at a time when technology should have been enabling people, not restricting them.

By actively choosing to refuse to work with anyone else, the giants of the IT industry took away consumer choice and created a set of playground rules that still cause problems and impact productivity in many businesses today.

The situation got so bad that the open source movement organised itself into worldwide activism to fight these trends and create software built on openness and sharing.  I, like many others, poured countless hours year after year into open source.  We shared a genuine concern that consumer choice and freedom were at risk and, if we did nothing, would be entirely removed from technology within only a few years.  We started driven by fear, but became empowered by sharing.  We thrived on each other's ideas.  We knew we could make a difference by working together.  We wanted to ensure the idea of technology that was open and shared with others never died out.

Reaching adulthood and restoring choice

As software reached its adult years, empowered by the surge in internet usage and the renewed focus on sharing from open source as its activists entered businesses, people started demanding technology that worked together again.  An environment was created where small companies with big ideas could start to succeed.  People started to talk to each other and provide services designed to do one thing really, really well again.

In this more friendly environment ideas started to snowball.  Changes took hold that led to today's 5 million plus apps and countless cloud services.  More importantly, these changes once again restored genuine choice to both businesses and individuals in how they use technology.

Championed by services such as Xero and Learn with Mobile, great software that does one thing really well has, for a few years now, even started to disrupt industries previously dominated by bullies.  Modern services use open RESTful APIs and easy-to-use webhooks to talk to each other.  A new attitude has emerged that sees sharing and interacting with each other as a feature, not a risk.

There are even services such as IFTTT ("If This Then That") that exist simply to connect all these friendly services together in a way that works exactly how you want it to.  Personalised experiences are becoming the norm.

Some of the giants have started to react by being more open and playing nicely again.  Microsoft is a great example of a bully that has turned around not only its image but also its behaviour and services.  Once an example of the worst behaviour in IT, it is now an example for good to others who were once its peers in the playground.  But not everyone wants to change, and it's going to take a long time for all of the giants to make the transition into the new reality.

What does it mean for me?

Whether you're looking at software to solve your learners' next problem, improve your business performance, create a coaching culture, replace your LMS, or really looking into any software or service, you should ask potential vendors one key question: "How well does this play with others?"  Vendors will be used to this question by now, so it shouldn't surprise anyone.  The historical giants are falling way behind others in this support.  But what should you do if the service you are looking at doesn't provide an open API for others to integrate with?  Simply move on and find something that does!

Even if the unfriendly software has a killer feature or two, if it won't talk to others you will end up severely limiting your vendor choice and wasting a lot of money.  What's worse, by getting locked in you'll find yourself falling ever further behind your competition in the years to come.  Far better to find a service that does almost everything you want well and invest in working with them to create a better alternative to that killer feature than to get locked in all over again.

That’s what the thought leaders in software recommend.  It’s what the thought leaders in L&D recommend.  And the same advice is finally being heard from thought leaders in all industries.

Technology, like good learning, should empower, not restrict.

Don’t let MAJOR version number worries stop you using Semantic Versioning (Semver)

Why Should I Use Semantic Versioning?

Semantic Versioning (semver) is a specification for the version numbers of software libraries and similar dependencies.

Its rules are not new, and are similar to how most library version numbers have been managed for years. The idea of semver, however, is that if libraries all use exactly the same rules around version numbers, then developers using those libraries can tell from the version number alone what has changed and whether it is compatible.

The basic details are:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
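
As a rough illustration of those three rules only – the Bump helper and ChangeType enum below are invented for this post, not part of the semver specification or of any library – a version-bump decision can be written down like this:

using System;

public enum ChangeType { BreakingChange, NewFeature, BugFix }

public static class SemverExample
{
    // Decide the next MAJOR.MINOR.PATCH version for a given kind of change.
    public static Version Bump(Version current, ChangeType change)
    {
        switch (change)
        {
            case ChangeType.BreakingChange:
                return new Version(current.Major + 1, 0, 0);                        // 1.4.2 -> 2.0.0
            case ChangeType.NewFeature:
                return new Version(current.Major, current.Minor + 1, 0);            // 1.4.2 -> 1.5.0
            case ChangeType.BugFix:
                return new Version(current.Major, current.Minor, current.Build + 1); // 1.4.2 -> 1.4.3
            default:
                throw new ArgumentOutOfRangeException(nameof(change));
        }
    }
}

So a backwards-compatible bug fix takes 1.4.2 to 1.4.3, new backwards-compatible functionality takes it to 1.5.0, and an incompatible API change takes it to 2.0.0.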

You can read the full specification at

In recent years semver has become very widely used and, in some spaces, expected. If you and the users of your library want to benefit from using a widely recognised version numbering system, I would recommend semver above the other systems available, primarily because of its wide adoption at present.

Although semver will be very similar to the system you are using at present, as you apply its rules strictly you will probably find you made more backwards-incompatible changes than you realised. It's also likely that in the past you used the MAJOR version number as a marketing version number rather than strictly as an API incompatibility indicator. You'll probably also find that your historical version numbers and methods have found their way outside the pure library space too – with packages and even complete software applications matching their version numbers to libraries.

So What are the Major Issues to Expect when Adopting Semver?

Most of the major issues people experience switching from their old versioning system to semver involve the major version number and API incompatibility.

Here are a few of the most common, along with my recommendations on how you can avoid each issue or change your processes to minimise its impact.

Our major version number is used for marketing. We want people to know that version 3.x.x is a very big improvement over 2.x.x

This is an important one as in many cases if semver is accepted and no other policies are changed, you could end up hitting version 42.x.x before you’ve had time to stop it. At that point you’ll have a very awkward choice to make to get your major version under control, and probably a lot of commercial pressure to “roll back” the number and start using the major version number for marketing again.

As you reflect on your own internal policies you may be tempted to come up with a “semver like” versioning system along the lines of MARKETING.MAJOR.MINOR.PATCH. Don’t do it. While I agree that style of numbering achieves what you want, you lose all the benefits of adopting semver if you don’t adopt it exactly and strictly. Remember your library was already using something like semver before you even heard of semver.

Our library is used in a few applications but we really like the freedom of breaking our APIs as we need to right now, so we won't call our library 1.0.0 yet; we'll stay at 0.x.x.

It can be tempting to allow yourself freedom for as long as possible with a 0.x.x version. Maybe your library “is mostly used internally” or maybe “we’ll make it version 1.0.0 when we know the timeline for 2.x.x”.

The truth is the benefits of semver only really kick in after you reach the magic 1.0.0 version number. We can all recall open source projects that still sport 0.9.x version numbers after more than 10 years of development, because people haven’t yet got to a point where everything is perfect.

Remember that the version 1.0.0 is really for users of your library, not for you as the creator. Once you hit version 1.0.0 you are telling people that they should use your library. It functions as it should. They can benefit from it. They can use it in production.

If your library isn't at that stage yet, then yes, stick to a 0.x.x scheme, or a 1.0.0-alpha.x scheme, but in my view "release early, release often" isn't about releasing incomplete software before it's useful; it's about releasing software as soon as it is useful, and allowing its users to be involved in its evolution. This is nowhere more true than in library development.

We just released version 3.0.0 but version 2.32.0 actually had all the new developments in it. All we did for 3.0.0 was remove deprecated classes.

Unless you use completely separate branches for 3.x.x and 2.x.x and a long-term internal or public 3.0.0-alpha.x scheme for your next major release, then yes, the x.0.0 releases can feel a lot less exciting than they used to.

I said before that the 1.0.0 release is for your library's users, not for you. Thankfully, 2.0.0 and later x.0.0 releases are actually better for you than for your users. For your users the version change clearly warns them of incompatibilities they may now need to work with. For you it represents liberation from having to maintain and continually consider your deprecated APIs in everything you do. You took the worries of "how will it actually work in the wild" with your previous 2.32.0 release. So sit back and enjoy the x.0.0 releases.

I want to follow semver strictly, but how do I define an incompatible API change?

There are two general ways to define an incompatible API change (sometimes called a breaking change):

  1. Binary Breakage
  2. Source Breakage

Which one of these you consider appropriate for the management of your API depends on how you make your library available to your user. For example, if your library is managed through installation into a system wide library area (e.g. the Global Assembly Cache (GAC) for .NET or /usr/lib for *NIX) then you should use binary breakage because your distribution method allows post-compile substitution of the library into compiled applications.

More commonly now, libraries are designed to be shipped as part of a bundle for a specific application (ClickOnce installs on Windows, .ipa files on iOS, .apk files on Android, etc.). This means an individual application does not have the ability (or at least the design intent) for its libraries to be substituted post-compile, so in this case you should use source breakage to define your API compatibility. On .NET many libraries are now distributed through NuGet; this is very much in line with the bundling of libraries with software, and so if your library is designed to be managed through NuGet you should consider source breakage the best measure of incompatibility.

A good discussion of some of the ways your API changes can cause source/binary breakage in .NET can be found on Stack Overflow here:

At a minimum you must consider source breakage, and therefore increase your major version number, if any application using your library would require source code changes before it can recompile after updating from the previous version of the library. Changes in the behaviour of a call from one version to the next cannot all be covered (otherwise, in theory, every bug fix changes behaviour that may be undesirable to someone, which would mean increasing the major version number for just about every patch), so use your common sense. If any currently appropriate use of your API will behave worse rather than better after an update, I would consider that a breaking change.
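
To make that concrete, here is a deliberately simplified, hypothetical example – the ReportWriter class and the Widgets namespaces are invented for illustration and aren't taken from any real library:

// Version 1.2.0 of a hypothetical library.
namespace Widgets.V1
{
    public class ReportWriter
    {
        public void Write(string path) { /* write the report */ }
    }
}

// Adding a new method is backwards compatible: existing callers of
// Write(string) recompile unchanged, so this is a MINOR bump (1.3.0).
namespace Widgets.V2
{
    public class ReportWriter
    {
        public void Write(string path) { /* write the report */ }
        public void WriteCompressed(string path) { /* new functionality */ }
    }
}

// Changing the signature of Write is source breaking: every caller must
// edit its code before it will recompile, so this is a MAJOR bump (2.0.0).
namespace Widgets.V3
{
    public class ReportWriter
    {
        public void Write(string path, bool compressed) { /* write the report */ }
    }
}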

I can manage the deprecation process for my concrete and base classes but everything I do to an Interface results in an incompatible API change.

If you are using interfaces as part of the abstraction of your library (and generally you should be – see my post on Dependency Inversion) then you will run into a really frustrating reality: basically everything you do to an interface will end up being a breaking change in your API.

Yes, you can also provide concrete base classes that most people subclass to fulfil the interface; yes, you may have updated the base class with sensible default behaviour; and yes, this does mean that most of your users (if not all in practice) have no code to change in order to recompile after a library update. But some might, and this makes it breaking. The fact you exposed an interface to allow any implementation of those abstracted needs by classes outside your library gives your users a lot of flexibility in the abstraction, but with semver it also means every change you make to an interface really is a breaking change.
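
Here is a small, invented illustration of the problem – IStore, StoreBase, and CustomerStore are hypothetical names, not from any real library:

// Version 1.x.x of a hypothetical library.
public interface IStore
{
    void Save(string key, string value);
    // Adding a new member here, e.g. void Delete(string key), immediately
    // breaks every implementation of IStore that lives outside your library.
}

// You can ship a base class with a sensible default for the new member, and
// most users who subclass it will recompile without any changes at all...
public abstract class StoreBase : IStore
{
    public abstract void Save(string key, string value);
    public virtual void Delete(string key) { /* sensible default behaviour */ }
}

// ...but a user who implemented the interface directly gets a compile error
// the moment Delete is added to IStore, and that is what makes it a MAJOR change.
public class CustomerStore : IStore
{
    public void Save(string key, string value) { /* user's own implementation */ }
}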

You could use techniques like building version numbers into your interface or namespace names to avoid breaking changes if you have to – but that will create far more work for you and your library's users than it saves in the long run. Either accept the change and bump the major version number, or postpone the change until the next planned major version number change. Either way, you may well find you have to plan changes to your interfaces much more strictly after adopting semver than you did before.

A final tip you may want to consider, if you have a lot of "standard" interfaces as part of your library, is to place these into their own library, separate from the rest of your code. You can then use your package manager (e.g. NuGet) to make the main library depend on the interface library/package. You would then be able to bump your interface library's major version number correctly, while keeping tight control over the major version number of the main library people use. It doesn't solve everything, but for many it is a good balance that works around a very specific concern when using semver.

I use NuGet (or a similar packaging tool), but how do I handle changes to version numbers that only affect the package, not the library itself?

Semver does not make specific provisions for packaging tools, so it is down to the packaging tools themselves to recommend an appropriate approach. Like most packaging tools, NuGet's recommendation is to use the same version number as the library being packaged. This makes sense and avoids a lot of confusion for end users; however, it does leave the question: what do I do if I have to put out a new version of the package (e.g. because package scripts or metadata have changed) but the underlying library hasn't changed version number?

Some packaging systems, such as NetBSD's pkgsrc, handle this by having a separate package version number that is appended to the official version number to make the full version number for the package. Where this is available you should make use of these package version numbers as a sequential number and consider it separate from the library's own semver version number. It is good practice to reset the package number to 0 when the library's own version number changes, however.

In the case of NuGet there is not yet a separate package version number, so if you want to stick to the guideline of using the same version number as the library, you're going to have to make a decision when you find a package-only change that needs to be released. If you control both the library and the package, then treat the change the same as any other patch and bump the patch number. If you don't control the library's version number, you will have to decide whether you can risk updating the package "in place" without a version number bump – not recommended, as existing users will not get the changes and user support can be confusing with multiple package versions under the same number – or whether to temporarily move the package (not the library) away from semver and give it a number x.y.z.p, where p is the package revision number, until the next release of the library comes out and lets you move back to matching the library's semver version.

My version number used to represent sideways compatibility with closely related libraries. How can I keep this while using semver?

Semver version strings represent a single aspect of version control: API compatibility. Other things version strings have been used for – such as marketing numbers (covered in detail above) and sideways compatibility with specific versions of other libraries – are not included in the version number design. This means you'll need to handle them a different way.

You may choose to follow semver rules in how your version string is constructed, but match your release cycle to the original library. Although you would normally only reflect your own library's API changes in its version number, not those of its dependencies, if your library is tightly coupled to another and needs to match it, then you could mirror that library's version number in your own. This can be a significant help to your library's users, especially if there may be a slight delay between the dependent package being updated and a compatible version of your own becoming available.

In cases where the coupling is looser, you probably don't want to be tied to the release model and version numbers of the other library anyway, so now would be a good time to break the link. Include compatibility details in the package metadata, release notes, or project website. If you're using a package manager for distribution that manages inter-package dependencies, your end users are unlikely to notice the difference, and those who relied on your old version numbering style will soon adjust.



Above I’ve tried to tackle some of the major worries around semver and controlling the major version number. If you have anything to add please feel free to do so in comments or direct feedback.

As someone who has been involved in creating both libraries and package management systems for many years, I can see only good in a standardised version numbering scheme such as semver becoming as widely used as possible. Yes, it has its problems, but every choice in versioning is a compromise at some point. It would be great, a few years from now, to look back on everyone managing their version strings in different ways as a thing of the past. But this only happens if everyone plays their part and sticks to the standard strictly and completely.

Should you adopt semver? Yes, you should – but make sure you plan for its adoption properly and change your internal processes to match. That way you'll be able to look back on the move to semver as an empowering choice to join other developers in managing dependencies collectively, and not as the point your project lost control of its version numbers.


Universal Software Principles

Universal Software is software that can be used natively on any device and guarantees universal reuse, extension, and maintenance.

The four fundamental principles of universal software are:

  1. The software must run natively on any hardware, Operating System, and network (Environment Independence Principle).
  2. The software must allow reuse of complete or part functionality as modules to create new applications (Modular Reuse Principle).
  3. The software must allow extension of its functionality by addition, enhancement, or replacement of its functionality by new modules or plugins (Inject and Extend Principle).
  4. The software must use a single set of source code that is permissive for maintenance by software engineers other than its original authors (Open Maintenance Principle).


These principles can be further understood by the following guidelines:

Environment Independence Principle

Universal software must run natively on any hardware, Operating System, and network.

This should be understood to mean the software:

  1. Must run natively on any device to which it is deployed.
  2. Must not require particular hardware to operate.
  3. Must not require a particular Operating System (OS) to operate.
  4. Must work in both stateful and stateless environments.
  5. Must not require a network or internet connection to perform non network tasks.

Whenever possible the software:

  1. Should also be available for use within a web browser (separate to the native applications).
  2. Should make use of specific hardware capabilities of the device it is running on where it helps achieve the software’s purpose.
  3. Should follow applicable guidelines and recommendations of an OS when running within that OS.
  4. Should cache non-sensitive data to allow continued use of the software in environments without network or internet access.
  5. Should include a platform-independent version of any platform-dependent code to allow the software to be executed on new platforms that were not available at the point of release (see the sketch after this list).
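
As a sketch of what guideline 5 can look like in practice – the IHapticFeedback interface and both implementations below are hypothetical, chosen purely for illustration – the platform-independent fallback keeps the software usable on platforms that did not exist at release, while a platform-specific implementation can be supplied where the hardware supports it:

// Contract used by the rest of the (hypothetical) application.
public interface IHapticFeedback
{
    void Vibrate(int milliseconds);
}

// Platform-independent fallback: always available, so the software still
// runs on platforms that were unknown at the point of release.
public class SilentHapticFeedback : IHapticFeedback
{
    public void Vibrate(int milliseconds)
    {
        // No hardware capability assumed; quietly do nothing.
    }
}

// Platform-dependent implementation, only compiled and used where the
// device actually exposes a vibration capability.
public class AndroidHapticFeedback : IHapticFeedback
{
    public void Vibrate(int milliseconds)
    {
        // Call the platform-specific vibration API here.
    }
}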

Modular Reuse Principle

The software must allow reuse of complete or part functionality as modules to create new applications.

This should be understood to mean the software:

  1. Must provide all its functionality as a reusable dynamic link library or package.
  2. Must self initialise its data store and other dependencies on first use.
  3. Must provide complete functionality including CRUD operations and user interface.

Whenever possible the software:

  1. Should use the Mvpc Command pattern to allow a containing application to display its functionality alongside that of other modules.
  2. Should be suitable for use as a plugin at run time in environments that allow plugins.

Inject and Extend Principle

The software must allow extension of its functionality by addition, enhancement, or replacement of its functionality by new modules or plugins.

This should be understood to mean the software:

  1. Must allow any service it provides to be extended or replaced by another module.
  2. Must allow any service it invokes to be extended or replaced by another module.
  3. Must allow any user interface screen to be extended or replaced by another module.
  4. Must allow the replacement of a single service or interface screen without requiring related services or interface screens to be replaced as a result.

Whenever possible the software:

  1. Should allow functionality to be disabled at runtime through configuration.

Open Maintenance Principle

The software must use a single set of source code that is permissive for maintenance by software engineers other than its original authors.

This should be understood to mean the software:

  1. Must use a single source code for all functionality across all platforms except those files or classes that directly interact with the hardware or Operating System.
  2. Must provide documented APIs for all public classes and members.
  3. Must not directly create or require software patents.
  4. Must be written in a programming language with a formal ECMA and/or ISO standard that makes it free to implement and has an open source compiler available.

Whenever possible the software:

  1. Should only use version controlled standardised APIs between packages.
  2. Should be extendible in a programming language different to its original language.
  3. Should be made available under a permissive open-source license.


Dependency Inversion Principle (DIP) is much more than using the technique of Dependency Injection

What is the Dependency Inversion Principle (DIP)

The Dependency Inversion Principle (often referred to as DIP) is one of the five basic principles of object-oriented programming and design known as SOLID.

The principle states:

A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions.

To see how code can be made to comply with this principle, let's look at the relationship between two classes in an example:

public class Repository
{
    public void DoSomething()
    {
        // Do something.
    }
}

public class Processor
{
    private Repository repository;

    public Processor()
    {
        repository = new Repository();
    }

    public void ProcessData()
    {
        repository.DoSomething();
        // Other tasks...
    }
}

In this example the Processor class has a dependency on Repository for two reasons:

  1. Processor depends on the functionality of Repository to meet its own promised functionality.
  2. Processor creates a new instance of Repository and is responsible for its lifecycle.

The temptation when reading these reasons is to try and tackle the second point (which we'll do later when we talk about Dependency Injection), but it's actually the first point that is most important when it comes to the Dependency Inversion Principle.

As it currently stands Processor is so dependent on Repository that any change to Repository could directly affect the ability of Processor to fulfil its promised functionality.  At a minimum any change to Repository will require a review/retest of Processor to identify any impact from the change.  So let's go about removing the dependency through abstraction to solve this.

First we need to understand the functionality required of Repository by Processor and state those requirements as an interface/abstract class:

public interface IRepository
{
    void DoSomething();
}

public class Processor
{
    private IRepository repository;

    public Processor()
    {
        repository = new Repository();
    }

    public void ProcessData()
    {
        repository.DoSomething();
        // Other tasks...
    }
}

After this first step Processor is no longer dependent on the functionality of the Repository class (ignore the new Repository() statement – we'll address that in the next section).  Instead it has stated its requirements for repository functionality in IRepository, and has thereby become dependent on the abstraction of its needs as represented by the IRepository interface.  This was our goal.

This next point is one that is often misunderstood – the owner of the IRepository interface is now Processor.  It is not Repository.  Many people get this ownership backwards and set about defining IRepository by starting with Repository and exposing all the functionality of the Repository class through the new interface.  Once the interface is in place they then modify Processor to consume it.  The problem with doing that is that we would not have inverted our dependency to be based on our needs; we would simply have swapped a dependency on the concrete definition of Repository and its functionality for a dependency on an abstract definition of the same functionality, still owned by the detail (Repository) and still exposing Processor to changes in Repository and the interfaces it owns.  The interface itself does nothing to invert our dependencies.  Given that the LSP principle of SOLID already allowed us to replace Repository with a subclassed implementation, we wouldn't really have gained anything.

So you can see that it's not the abstraction of an interface covering Repository's functionality that inverted our dependency; it's the abstraction of an interface that states Processor's needs.

Let's complete the example now with the second step and have Repository implement the needs of Processor through the IRepository interface:

public class Repository : IRepository
{
    public void DoSomething()
    {
        // Do something.
    }
}

At this point you may want to point out that the code we have ended up with is absolutely no different from the code we would have ended up with if we had simply abstracted the functionality of Repository into IRepository – and you would be correct.  This shows that software design is more about planning for initial implementation and long-term maintenance than it is about changing the code you write.  Few programmers like to admit this, but it is true.

The code may be the same, but the design from a maintenance point of view is very different.  Let's take a look at both options.

If we designed IRepository to be an abstraction of Repository:

  1. Any change needed to the public interface of Repository would need to be reflected in IRepository so the abstraction of Repository remains as per our design.
  2. Any change to IRepository (triggered by a change to the public interface of Repository) will still require a review/retest of Processor.
  3. We can make as many classes as we want dependent on the functionality of Repository through IRepository.

Therefore if IRepository is an abstraction of Repository we are still left with the same dependency/coupling from Processor's point of view as we had before the IRepository interface was introduced – only now we have more code to maintain.

If, on the other hand, as we have done here, we designed IRepository as an abstraction of the needs of Processor:

  1. Any change to the needs of Processor will result in a change to IRepository.  This only breaks the Processor class, as it will no longer fulfil its promised functionality until we correct it.
  2. We now have a choice – we can either:
    • Implement the new functionality of IRepository into Repository so it can continue to meet the needs of Processor; or
    • Remove the IRepository interface from Repository's definition, as it no longer meets the needs of Processor (which owns the abstract interface), and provide a new concrete implementation of IRepository to meet those needs.
  3. Other users of Repository (either directly or through their own interfaces expressing their individual needs) do not need to be updated or retested.

As you can see here, although the code in our example is the same, by inverting the dependency between Processor and Repository we have ensured that maintenance of Repository only affects Repository, and maintenance of Processor (and its associated IRepository abstraction) only has to affect Processor.  We may choose to use other classes in solving the new need, but we do not have to; we are not dependent on doing so.  If all users of Repository employ this same principle of abstraction of needs, it also means that any maintenance we do choose to do on Repository to meet the new IRepository needs of Processor cannot ripple on to the other classes that use Repository indirectly, because their own abstractions of their needs protect them.

If at this point you are starting to worry about the number of different interfaces required to keep this inversion, let me jump in and say that yes, it is essential that you create some shared standard interfaces that act as "standards" for common needs classes may have.  Done correctly, these standard interfaces will be tightly version controlled and built for a single purpose.  So if, instead of defining IRepository for Processor, we choose to use a standard abstraction of needs represented by an IStandardRepository interface, Processor still remains the sole class involved in any decision to stop using that standard interface; if Processor's needs change such that the standard no longer meets them, it can go back to defining its own IRepository to represent its needs, and all the benefits of the inverted design remain.  Standardised abstractions are not owned by the class that needs them, nor by those that implement them; they are independent.  But using a standard abstraction that already represents a class's needs does not introduce any new coupled dependencies, and does not take away from the class the ability to change the definition of its needs should they change.

In our example here we have used the technique of an interface (or abstract class in some languages) to achieve our design, but it is critical to understand that it is not the use of the technique that made our design comply with DIP.  It was our choice to design in compliance with DIP that mattered.  Once we made that choice we could use any technique that met those needs.  Our design itself is also not dependent on the technique used to meet it.

What is Dependency Injection

This article is unlike most you will find on the subject of dependency injection: rather than jumping straight into the technique, we have first made sure we have a clear understanding of the Dependency Inversion Principle.  We can now talk about dependency injection from the position of our need, rather than simply as a technique or tool for us to use.

Dependency Injection is a technique we can use to overcome the second dependency between Processor and Repository in the original code (I did promise I'd come back to it) – the fact that we directly instantiate a new Repository() within Processor.

We have three ways to solve this.  Each has its pros and cons, a full discussion of which is outside the scope of this article; which one you use will mostly depend on your team's taste, and will often be restricted by the limitations and style of any existing code-base.  I could point you to a long list of blog posts and articles discussing (with various degrees of objectivity) which is better than the others – personally I prefer a little more balance, and to practise the principle above of deciding on a design and letting any technique be used as long as it meets the design principles, rather than believing one solution fits all problems and designs.

Service Location

OK, if I don't say this now I'm sure many people will jump in to tell me service location is not dependency injection, and I accept that some strong advocates of the two styles of dependency injection described below go so far as to hate service location as if it were the worst sin a developer could commit – but despite these loud opinions, service location is in very many cases a viable and good choice for tasks like the one we are trying to achieve here.

We can use service location to remove the direct dependency "new Repository()" within Processor and replace it with a line that says "give me the best available (or configured) IRepository implementation please".  Here is the code:

public class Processor
{
    private IRepository repository;

    public Processor()
    {
        repository = ServiceLocator.Resolve<IRepository>();
    }

    public void ProcessData()
    {
        repository.DoSomething();
        // Other tasks...
    }
}

The implementation of ServiceLocator is outside the scope of this article; however, the result of all implementations is the same – a class implementing IRepository will be returned from the Resolve() call.  This means that in the future, if we want to use a different implementation of IRepository within Processor (or Repository stops implementing it), we can make that happen with an appropriate code or configuration change to connect the IRepository interface to the new concrete implementation, without having to modify Processor's code.

One very valid argument raised about using a service locator is that the ServiceLocator must have been configured before we can use Processor, yet Processor does not advertise this fact in its public interface.  This can cause unexpected errors if someone creates a new Processor() and is not aware of its implementation detail of using ServiceLocator.  Many of you will already know my best practice principle of "always provide a default implementation of your interfaces" (which one day I'll get round to writing an article on), and if you make sensible use of modern language features like reflection then there is no reason a ServiceLocator can't be self-configuring in these situations – putting its use back where the author of Processor intended it: as an implementation choice rather than a public dependency/requirement of the class.  This argument against the use of service locators is therefore only valid if we allow it to be in the code we write.
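
Purely as a sketch of the idea above – the DefaultImplementationAttribute marker and the assembly-scanning strategy are my own assumptions, one of several reasonable designs rather than a definitive implementation:

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Reflection;

// Hypothetical marker: a library author tags the default implementation
// of each of its interfaces with this attribute.
[AttributeUsage(AttributeTargets.Class)]
public sealed class DefaultImplementationAttribute : Attribute { }

public static class ServiceLocator
{
    private static readonly ConcurrentDictionary<Type, Type> map =
        new ConcurrentDictionary<Type, Type>();

    // Explicit configuration is still possible and overrides the defaults.
    public static void Register<TInterface, TImplementation>()
        where TImplementation : TInterface, new()
    {
        map[typeof(TInterface)] = typeof(TImplementation);
    }

    public static T Resolve<T>()
    {
        var implementation = map.GetOrAdd(typeof(T), FindDefaultImplementation);
        return (T)Activator.CreateInstance(implementation);
    }

    // Self-configuration: scan the loaded assemblies for a class marked as
    // the default implementation of the requested interface.
    private static Type FindDefaultImplementation(Type interfaceType)
    {
        var candidate = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes())
            .FirstOrDefault(type => type.IsClass && !type.IsAbstract
                && interfaceType.IsAssignableFrom(type)
                && type.GetCustomAttribute<DefaultImplementationAttribute>() != null);

        if (candidate == null)
        {
            throw new InvalidOperationException(
                "No default implementation found for " + interfaceType.FullName);
        }

        return candidate;
    }
}

With Repository marked as the default implementation of IRepository, the earlier new Processor() example works with zero configuration, while Register<IRepository, SomeOtherRepository>() still lets a caller override that choice.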

If there is enough interest I’ll create a new article with full code and description of a self-configuring ServiceLocator based on reflection as described above.  Get in touch with me via email, twitter, or in a comment here to let me know.

Property Based Dependency Injection

In a nutshell, the other approach to dependency injection is to let some of the dependencies of your class be passed in from the calling code rather than being instantiated directly within the class.  One common method is to convert our private member variables into properties to allow the injection to take place.  Again, a full implementation is outside the scope of this article, but here is how Processor would look if we were using property-based dependency injection.

public class Processor
{
    public IRepository Repository { get; set; }

    public Processor()
    {
        // The constructor no longer creates the repository.
    }

    public void ProcessData()
    {
        Repository.DoSomething();
        // Other tasks...
    }
}

Here you can see the constructor no longer initialises the repository; instead it has become part of the public interface of the class and is expected to be set from outside the class before ProcessData() is called, e.g.:

var processor = new Processor();
processor.Repository = new Repository();

A very valid argument against this approach is that, again, the calling code has to know to set processor.Repository before calling ProcessData() or any other method that might use it.  This again exposes an implementation detail of the class to those who use it.  You should also ask yourself carefully, for each and every dependency you expose like this, whether you are comfortable making it part of your public interface for the class or whether it really should stay an implementation detail.  The number one mistake I see when people use dependency injection without full thought for the Dependency Inversion Principle is that every class consumed in the implementation of Processor becomes a dependency.  This not only breaks any encapsulation the class may once have had, but also means handling dependency injection by hand becomes completely impractical, so a dependency injection framework has to be used.  Like service locators, dependency injection frameworks also need to be configured before they are first used.

If you do decide that a particular dependency does belong as a public property – but don't want to pass your configuration problems onto the caller or a framework of their choice – can I suggest that you use a second technique, which might be service location, to instantiate your dependencies on first use if they haven't been supplied to you.  This can take your class back to a zero-configuration position, so long as your service locator is self-configuring, while still giving users of your class the ability to override your dependencies for extension or for mocking.

A simple implementation of a self-configuring and therefore optional dependency property would be:

public class Processor
{
    public Processor()
    {
        // No configuration required.
    }

    public IRepository Repository
    {
        get
        {
            // Resolve a default implementation on first use if none was supplied.
            if (m_repository == null)
            {
                m_repository = ServiceLocator.Resolve<IRepository>();
            }
            return m_repository;
        }
        set { m_repository = value; }
    }

    private IRepository m_repository;

    public void ProcessData()
    {
        Repository.DoSomething();
        // Other tasks...
    }
}

Constructor Based Dependency Injection

The main difference between property-based and constructor-based dependency injection is that with constructor-based dependency injection all dependencies must be supplied when instantiating the class, rather than one at a time through properties.  This gives two benefits:

  1. It is not possible to make the state of the class invalid due to dependencies not being set – as all dependencies must be set in the constructor, every instance of the class will always have all of its dependencies satisfied.
  2. Because all dependencies are initialised at the start of the object's life, the implementation details of which methods use which dependencies no longer have to be understood by anyone setting up the class for consumption.

These two reasons help to explain why constructor based dependency injection is the most popular method used by advocates of dependency injection and dependency injection frameworks.

Let's look at the code for using constructor-based dependency injection for Processor.

public class Processor
{
    private IRepository repository;

    public Processor(IRepository repository)
    {
        this.repository = repository;
    }

    public void ProcessData()
    {
        repository.DoSomething();
        // Other tasks...
    }
}

And here is the code to manually initialise it with its dependencies:

var processor = new Processor(
    repository: new Repository());

You can see here that the issue of forgetting to set dependencies is not a concern in the way it was for property-based dependency injection.  However, the concerns about breaking encapsulation with the things you decide to make public dependencies remain, and still need thinking about carefully.  Common to both property- and constructor-based dependency injection is the fact that real object graphs get much more complex than our example (what if Repository had some dependencies of its own that needed to be injected?), and so a dependency injection framework quickly becomes essential to keep the code maintainable in the long term.

I would also repeat my advice of making sure there is a default implementation of each dependency, to save the caller from having to do manual configuration or use a dependency injection framework if they don't want or need to.
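
One way to read that advice, sketched against the same example classes and treating Repository simply as a sensible default rather than anything mandated:

public class Processor
{
    private IRepository repository;

    // Parameterless constructor keeps Processor usable with zero configuration...
    public Processor() : this(new Repository())
    {
    }

    // ...while the injecting constructor stays available for extension and mocking.
    public Processor(IRepository repository)
    {
        this.repository = repository;
    }

    public void ProcessData()
    {
        repository.DoSomething();
        // Other tasks...
    }
}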


You have now seen that the Dependency Inversion Principle is a design principle we take into our software designs and code, focused on inverting dependencies by having classes depend on their needs, and not on concrete or abstract classes/interfaces through which others state capabilities that might happen to meet those needs.

We can use techniques such as interfaces or abstract classes to help us abstract the needs of a class into an abstract dependency that can be met by any class that wants to. This breaks the functional dependency between a class and its detail.

We can use techniques such as dependency injection or service location to help us de-couple any particular implementation of an interface from the class that needs it. This breaks the dependency between a class and the detail used to fulfil its needs.

You will have also observed that simply using dependency injection does not in any way mean we are using dependency inversion, and if used wrongly, can start to damage encapsulation and restrict the environments our code can be reused within. Furthermore if we start with the premise of using a particular technique, such as dependency injection, our designs can end up dependent on the technique we chose, rather than the principle of real dependency inversion.

If we apply the Dependency Inversion Principle to our designs – we can then safely choose to use the most appropriate technique or pattern to meet the needs of that design, but if our design needs change, we will still be able to move to a different technique and always maintain our principle of dependency inversion.


The Open Source vs Commercial Development Myth

The two can Co-Exist

Looking around the internet you could believe that open source software development, and commercial software development, are opposing forces that can never meet or work well together.  All experienced software developers know this is simply untrue, and yet the myth seems to perpetuate anyway.

By being careful, and by keeping with the spirit of the freedoms represented by the open source community, it is possible for open source developments to benefit commercial software, and commercial developments to benefit open source software.  In the almost two decades I have now been involved in software development, both open source and commercial, I have never found a conflict between the two ideals that couldn't be solved in a way that helped everyone.

Understanding the Types of Open Source License

The first thing you need to understand is that not all open source licenses are equal.  There are primarily two kinds of open source license: copyleft and permissive.  For commercial software development I find it useful to divide the copyleft licenses into "strong copyleft" and "weak copyleft".  Let me cover the three categories briefly here:

Strong Copyleft

When a software system or library is placed under a Strong Copyleft license the author is indicating two things:

  1. He wants others to be able to use the code he has produced.
  2. He feels the code produced has value enough to ask others to share their own changes to the code and share systems using the code under the same terms.

The most prominent Strong Copyleft licenses are: GPLv2 and GPLv3.

Strong Copyleft licenses are sometimes described as having a "viral" effect.  This is because any derived work has to be distributed under the same license terms.  In the spirit of the license, a derived work is usually intended to include software that references or links to Strong Copyleft libraries as well as altered versions of the original.  This means, for example, you can only use GPL code in your commercial application if you are happy to distribute your commercial application under the terms of the GPL.

Personally I would describe Strong Copyleft licenses as the least friendly for commercial software development, and many commercial development companies have to avoid them to meet the IP requirements of customers and business owners wanting software developed.

Weak Copyleft

When a software system or library is placed under a Weak Copyleft license the author is indicating three things:

  1. He wants others to be able to use the code he has produced.
  2. He feels the code produced has value enough to ask others to share their own changes to the code under the same terms.
  3. He is happy for the library to be used in closed-source products.

Weak Copyleft licenses are most often used for libraries.  The author wants bug fixes and enhancements added back to the library so it can continue to improve, but doesn’t care about how you license the software that utilises the library.

The most prominent Weak Copyleft licenses are LGPLv2.1, LGPLv3, MPL, and MS-PL.

Weak Copyleft licenses are mostly about encouraging you to share fixes and enhancements to core functionality and keeping those fixes and enhancements "free".  They do not generally have the same "viral" effect as strong copyleft licenses, because software that references or links to weak copyleft libraries is not intended to be considered a derived work.  This means you can use LGPL code in your commercial application and distribute your main software under any license you want; however, any changes you make to the copyleft library itself must be distributed under the original copyleft license.

Personally I would describe Weak Copyleft licenses as very friendly to commercial software development.  They encourage you to benefit from somebody else's effort, and simply ask for improvements to be shared in return.


Permissive

When a software system or library is placed under a permissive license the author is indicating three things:

  1. He wants others to be able to use the code he has produced.
  2. He’d like some credit for the work he has put in.
  3. He is happy for the code to be used in any future open source or closed source application.

The most prominent permissive licenses are BSD, MIT, and Apache.

Permissive licenses are about preventing the wasted effort of people reinventing the wheel.  The functionality of the code is shared freely for open source or commercial use.  There is no "viral" effect, and you do not need to contribute changes back to the original author or project team, although this doesn't mean you shouldn't.

Personally I would describe Permissive licenses as very friendly to commercial software development.  They encourage you to benefit from somebody else's existing effort rather than wasting time reproducing the same functionality.  In exchange, usually all that is asked for is acknowledgement of the code's origins.

Keeping the Spirit of the Original License Choice

It's important to understand that open source licenses work within the restrictions of the existing copyright framework.  This differs slightly from country to country, and generally only forms contracts between the people that code or software is "distributed" to.  If you are not careful you can get caught up in questions of "can I do this with license X?" or comments like "I don't have to do that because license Y says you are not entitled to it".  This is not what the open source community is about.  I believe that as well as keeping to the letter of the licenses, you should honour the spirit of the license regardless of the "additional rights" awarded by copyright laws in your country.

If you take a GPL library you want to use, and try to come up with ways to use the software "indirectly" to avoid putting your own code under the GPL, stop and think again.  The code you want to use was shared by the author because he wanted to see enhancements to it shared too.  You may not want to share your innovations; you may not like the GPL.  But that was the intention of the author, and you should honour it or choose not to use the GPL code.

Likewise, don't think that because you only upload copyleft software you developed to a web server you didn't "distribute" it and so can keep the source closed.  And don't create a "wrapper library" around an LGPL library to hold your new functionality so you don't have to share it.  This isn't what the author intended when they let you use their code, and you shouldn't abuse their intentions for your own gain.

If a product is dual-licensed under copyleft and non-copyleft licenses, don't constantly push the boundaries of what you can do with the copyleft version; contribute to the funding of the library or software by buying a license so it can continue to improve.

Giving Back

There are many ways to give back to the open source community as a commercial software developer.  Some are really simple, but under commercial pressure are often ignored.

1. If you fix a bug in a library, no matter what license it's under, submit the change back to the author or project team.

2. If you make minor enhancements that are generically useful, submit them back to the author or project team.

3. Don’t contribute incomplete changes or changes specific for your needs only back to the author or project team. These changes usually carry no reusable value and can waste the project’s time tidying them up for inclusion.

4. If you create a library or tool because you couldn’t find the one you needed in the market, and it’s not something you want to commercialise on its own, make it available under an open source license you are comfortable with so others don’t have to repeat your effort.

5. If you create a useful program, library, or tool, consider dual licensing a “community” edition under a copyleft license.  Yes it’s true some developers and organisations won’t honour the license, but generally those people would have broken your commercial license terms if they had a chance to.  This dual licensing can be particularly useful for libraries as it can attract the attention of students and others keen to learn your libraries or software, and this can in time become a major source of paid license users in the future.

6. Even if you are working on an open source project, don’t change the license of code from permissive to copyleft by adding enhancements and fixes under your project’s copyleft license rather than the original permissive license.  This practice has long been a point of contention between advocates of permissive licenses and those using their code in copyleft projects.  The overall project can still be copyleft, while honouring the spirit of the license for permissive code you use and giving back under the same license.

Common Pitfalls

Here are a few of the pitfalls that can happen if you’re not careful combining open source software with commercial software products.

1. When you use an open source library don’t assume the author will carry on enhancing it in the future, and don’t assume someone else will fix the bugs.  Many open source projects are maintained in people’s spare time, and so every day some projects go stale as project teams move away or change direction.  If you use an open source library in your commercial product, remember that you must plan to be able to maintain it in the future.

2. Always check the license before using an open source library.  Is it compatible with the IP requirements of you and your customers?  Does it have an explicit non-commercial use clause?  It’s even worth checking this if you think you “know” the license the project is under.  It’s not uncommon for exceptions or additional clauses to be added on top of standard licenses.

3. Don’t expect you can open source a dying product to “hand over” maintenance of it to a community.  Be aware that the original author or team of an open source product always end up being the major contributors to that project long term.

4. Don’t assume that because you didn’t pay for something its cost is zero.  You still need to integrate the software or library into your solution, and you still need to train developers on its code base so you can maintain it within your SLAs.

5. Don’t dismiss a GPL library until you’ve checked the license.  Some projects prefer the use of the GPL over the LGPL but add explicit exceptions for the linking of closed-source software.


Open source and commercial software can benefit each other, even if their license terms sometimes seem contradictory.  By following the license requirements, and keeping to the spirit that caused the author to open source their software in the first place, open source projects can benefit from bug fixes and enhancements from commercial users of their software.

Closed source commercial software can benefit by using stable versions of open source libraries, reducing development times and potentially delivering a richer experience.

Both open source and commercial users benefit from a dual licensing scheme where it is appropriate, with the larger user base providing a useful support and learning network, as well as a pattern of many active open source users progressing to paid services in the future.

The small print: I’m not a lawyer so please take this article as me sharing my experience as an open source and commercial software developer rather than legal advice about licenses and copyright law.

C# Code Guidelines

Coding Guidelines

Every software development team needs guidelines to follow to help them write consistent code that keeps maintenance costs low, and development productivity and code reuse high.

I recently updated the ones I use with my team, and thought I’d share them to save others having to create their own.  Feel free to reuse them in part or in full for yourself or your team, and leave a comment with any general suggestions.  Style can be a very personal thing, so don’t be afraid to adapt them to meet your team’s own preferences; the key is that all the team follow the same convention, and wherever possible that convention matches the underlying framework you are using.

Naming Conventions


Use PascalCasing for namespaces, types, and member names.

Use camelCasing for local variables, parameters, and user interface fields.

Use camelCasing with an “m_” prefix for private non-user interface fields.

In 3rd party templates or code that uses a “_” prefix within a class rather than an “m_” prefix, be consistent within the class and either rename all fields to “m_” or continue with the “_” prefix.
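
As a quick illustration of these naming conventions (the class and member names here are made up purely for the example):

using System;

public class ReportGenerator
{
    // Private non-user interface field: camelCase with an "m_" prefix.
    private int m_retryCount;

    // Namespace, type, and member names: PascalCase.
    public string ReportTitle { get; set; }

    public void GenerateReport(string outputPath)
    {
        // Local variables and parameters: camelCase.
        DateTime reportDate = DateTime.Now;
        Console.WriteLine("{0}: {1} written to {2}", reportDate, ReportTitle, outputPath);
        m_retryCount = 0;
    }
}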

Type Notations in Variable Names

Hungarian notation must not be used.  When the type is an important part of the variable’s purpose, include it in the name by prefixing or postfixing it to the name without abbreviation.

Do not use abbreviations.  If an acronym is widely accepted and used in the framework or toolkit being referenced, then the acronym may be used despite the rule to avoid abbreviations.

.NET Word Conventions

Follow the .NET conventions for the words:

  1. Indexes (not Indices)
  2. UserName (not Username)

Use the following symmetric words when defining pairs of functionality:

  1. Add / Remove
  2. Insert / Delete
  3. Create / Destroy
  4. Initialize / Finalize
  5. Get / Set
  6. LogOn / LogOff
  7. Begin / End
  8. Register / Unregister

Use the following American words rather than the British words for member names:

  1. Color
  2. Initialize

Use of Plural and Singular Words

Name all classes with a singular word or phrase.

Name all collections with plural words or phrases.

Name all namespaces as plural words or phrases unless the namespace has a special meaning in the MVC framework.

Compatibility with COM and other CLR Languages

Do not name two public or protected members with names that differ only by character case.  This would prevent reuse of the classes in case-insensitive languages such as VB.NET.

When an argument for a class is passed in to a method or constructor with the purpose of setting a public member, use of the same name in camelCase rather than PascalCase is recommended over prefixes or suffixes.


Name all static singletons that initialise themselves “Default” unless there is a specific reason to use another name.

Name all static singletons that do not self-initialise “Current” unless there is a specific reason to use another name.
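
A minimal sketch of these two conventions (MyService and UserSession are hypothetical names):

public class MyService
{
    private static readonly MyService m_default = new MyService();

    // Self-initialising singleton, so it is named "Default".
    public static MyService Default
    {
        get { return m_default; }
    }
}

public class UserSession
{
    // Not self-initialising; the application assigns it, for example at log on,
    // so it is named "Current".
    public static UserSession Current { get; set; }
}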

Simple Names

The following simple names are allowed for the uses specified:

  1. obj – to store a generic object whose type is unimportant.
  2. i, j, k – for control of loops.
  3. s – to store a string value of a variable already in scope in a different type.
  4. e, ea, ee – for subclasses of EventArgs.
  5. e, ex – for subclasses of Exception.
  6. item, it – for the control variable in LINQ query expressions.

Use of Types in Variable Declarations

Use the most specific type available in a variable declaration that is not initialised in-line.

Use var for types that are initialised in-line.

Do not use var for types that are initialised in-line to values of a standard type (e.g. int, string, decimal etc.).

Use var for variables that are initialised as the result of method calls that return collections.

Use var for variables that are storing anonymous types.

Use var when the exact type of a variable is unimportant to the method beyond being the return value of an invocation.

Use object for types that are initialised as the result of method calls where the return type is going to be worked with only via reflection.

Use interfaces for variable types if the work being done is dependent only on the interface.

Use dynamic as a variable type only if the type could not be known at compile time, and the code is not going to reflect on the type.
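
A short sketch of how these rules look in practice (LoadCustomers is a hypothetical helper used only for the example):

using System.Collections.Generic;
using System.Text;

public class VarExamples
{
    public void Demonstrate()
    {
        // Standard types initialised in-line: do not use var.
        int count = 10;
        string title = "Report";

        // A collection returned from a method call: use var.
        var customers = LoadCustomers();

        // An anonymous type: var is required.
        var summary = new { Title = title, Count = count };

        // Not initialised in-line: use the most specific type available.
        StringBuilder builder;
        builder = new StringBuilder(summary.Title);
        builder.Append(customers.Count);
    }

    // Hypothetical helper used only for the example.
    private List<string> LoadCustomers()
    {
        return new List<string> { "Alice", "Bob" };
    }
}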


Constructor Performance

Avoid use of database connections in constructors.

Avoid use of network calls in constructors.

User Interface Design Time Requirements

Constructors for User Interface classes must be design time compatible.

Member Initialisation

Do not add constructor overloads that perform member initialisation through their parameters; this should now be handled by the member initialisation syntax.
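
In other words, prefer something like the following (ImportTask is a made-up class for illustration):

public class ImportTask
{
    public string Source { get; set; }
    public int BatchSize { get; set; }
}

public class ImportExamples
{
    public ImportTask CreateTask()
    {
        // Member initialisation syntax removes the need for a constructor
        // overload such as new ImportTask("orders.csv", 50).
        return new ImportTask { Source = "orders.csv", BatchSize = 50 };
    }
}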

Factory and Dependency Injection

Provide a zero parameter constructor unless the class is entirely unusable without a parameter.

Ensure the zero parameter constructor is suitable for use in a class factory or service locator.

Ensure the zero parameter constructor is suitable for use in dependency injection.

Ensure constructors do not over-initialise and so prevent subclasses from changing implementation details.

Virtual Calls

Avoid calling virtual members from constructors as the behaviour is unpredictable.

Static Constructors


Avoid initialising static members within static constructors.

Wherever possible initialise static members on first use.

Asynchronous Code

Use the await and async keywords when creating asynchronous code.

Prefix “Async” onto all methods that need to be awaited.

Do not prefix “Async” onto any method that cannot be awaited, even if the method contains asynchronous code.

When blocking methods are part of a Portable Class Library and cannot use the await and async keywords, use extension methods with the “Async” prefix to provide asynchronous alternatives for platforms that support it.
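
As a minimal sketch of the await and async style, with the “Async” name attached to the awaitable method as described above (ReportStore and SaveReportAsync are hypothetical):

using System.IO;
using System.Threading.Tasks;

public class ReportStore
{
    // Can be awaited, so it carries the "Async" name.
    public async Task SaveReportAsync(string path, string content)
    {
        using (var writer = new StreamWriter(path))
        {
            await writer.WriteAsync(content);
        }
    }

    // Cannot be awaited, so it does not carry the "Async" name.
    public void DeleteReport(string path)
    {
        File.Delete(path);
    }
}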

When to use Properties, Methods, and Extension Methods


Use a property if:

  1. The functionality behaves like a field.
  2. It is a logical attribute of the type.
  3. It is unlikely to throw exceptions.
  4. The contained code has minimal value when debugging.
  5. It does not depend on the order in which properties are set.

Never use a property if:

  1. A get implementation is not provided.


Use a method if:

  1. The operation is a conversion.
  2. There is an observable side-effect from the call.
  3. The order of execution is important.
  4. The method may not return.
  5. The method may run code asynchronously.
  6. The result should be cached for reuse within the calling method for performance.

Do not use a method if:

  1. The return value is a collection that remains linked to the instance.

Extension Methods

Use an extension method if:

  1. The operation needs to extend a sealed type.
  2. The operation needs to apply to anonymous types.
  3. The operation needs to apply to general IEnumerable types.
  4. Base functionality is provided for an Interface.

Do not use extension methods if:

  1. The behaviour may need to be specialised by a subclass.

Place extension methods that form part of a class’s or interface’s core API, or platform specific extensions to the core API, in the same namespace as the class, even if it is provided by another assembly.

Always place extension methods that extend core .NET types outside of the System.Collections namespace in a namespace ending in “.Extensions” to avoid littering the namespace of these core objects.
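
For example, an extension to the sealed string type kept in its own “.Extensions” namespace (the names here are only illustrative):

namespace MyCompany.Extensions
{
    public static class StringExtensions
    {
        // string is sealed, so an extension method is the natural way to extend it.
        public static string Truncate(this string value, int maxLength)
        {
            if (value == null || value.Length <= maxLength) {
                return value;
            }

            return value.Substring(0, maxLength);
        }
    }
}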

Method Arguments

Name method arguments with names that inform the caller of the intended purpose, not of its internal use.

When a Boolean argument is used as a flag to change fundamental functionality of a method, always call it using the “name: true” style.

Use Boolean arguments called with the “name: true” style instead of two-member enumerations for all methods, unless the method extends an API where an existing enumeration is better suited.
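
A small, hypothetical example of the “name: true” style at the call site:

public class Customer
{
    public string Name { get; set; }
}

public class CustomerRepository
{
    public void Save(Customer customer, bool overwriteExisting)
    {
        // Implementation omitted for the example.
    }

    public void Example(Customer customer)
    {
        // The flag changes fundamental behaviour, so it is named when called.
        Save(customer, overwriteExisting: true);
    }
}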


Catching Exceptions

Only catch exceptions of specific types if the error message being returned will be specialised for the type.

Catch the generic Exception type to isolate calls from the calling code.

A user must be informed of an exception with an error or warning message.

An error or warning message can be omitted if an exception is caught and ignored to avoid a known framework or platform issue and the issue is commented within the catch block.

Do not catch the general Exception type and hide its value from the user.
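
A sketch of these rules working together (SettingsLoader and the message text are only for illustration):

using System;
using System.IO;

public class SettingsLoader
{
    public string LoadSettings(string path)
    {
        try {
            return File.ReadAllText(path);
        } catch (FileNotFoundException) {
            // A specific type is caught because the message is specialised for it.
            Console.WriteLine("The settings file {0} could not be found.", path);
            return null;
        } catch (Exception ex) {
            // The general Exception type is caught to isolate the caller,
            // but its value is still shown to the user rather than hidden.
            Console.WriteLine("Unable to load settings: {0}", ex.Message);
            return null;
        }
    }
}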

Throwing Exceptions

Use the existing Exception types to throw your own exceptions.

Only create Exception subclasses when they are to be used multiple times within a library.

LINQ, for, and foreach

Use LINQ for queries to external data sources.

Use LINQ for queries over in memory data that is designed for use as a dictionary or database.

Use foreach () for iterating over any IEnumerable.

Use var as the item type for the foreach() loop whenever the IEnumerable implements IEnumerable<T>.

Use for() when the iteration is controlled by a numeric block.

Use for() to walk hierarchies.

Use for() or while() when the iteration is controlled by a non-numeric exit condition.

Use do { } while () only where it reduces code compared to a for() or while().
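
A brief sketch of these choices (the collections are hypothetical):

using System;
using System.Collections.Generic;
using System.Linq;

public class LoopExamples
{
    public void Demonstrate(Dictionary<int, string> customersById, List<string> names)
    {
        // LINQ for a query over in-memory data designed for use as a dictionary.
        var matches = customersById.Where(pair => pair.Value.StartsWith("A"));

        // foreach with var for anything implementing IEnumerable<T>.
        foreach (var match in matches) {
            Console.WriteLine(match.Value);
        }

        // for when the iteration is controlled by a numeric block.
        for (int i = 0; i < names.Count; i++) {
            Console.WriteLine(names[i]);
        }
    }
}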


Member Comments

Every public member or class must be commented with an XML comment.

Every protected method and property must be commented.

The comment must be written to explain why somebody would want to invoke the code.

The comment must not be a substitute for a bad member name.

The comment should not attempt to list all exceptions that could be thrown, but should document any special Exception subclasses thrown directly by the code.

Do not comment private or internal methods that have obvious use.

Do not comment event handlers unless their implementation is non-obvious.

Ensure member comments are suitable for extraction into API documentation.

Do not repeat the XML comment for overridden methods if the comment has nothing new to add to the base type comment.

Do not repeat the XML comment for the implementation of a member for an interface if the comment has nothing new to add to the interface member comment.
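
For example, an XML comment written to explain why somebody would call the member (Order and CalculateTotalPrice are made up for the example):

public class Order
{
    /// <summary>
    /// Calculates the total price of the order, including tax, so callers can
    /// display or invoice the final amount without repeating the tax rules.
    /// </summary>
    public decimal CalculateTotalPrice()
    {
        return 0m; // Implementation omitted for the example.
    }
}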

Code Block Comments

Comment code with headers within blocks that perform multiple discrete tasks as part of their implementation.

Comment code blocks that cannot be understood by the code alone.

Comment code wherever a special or magic value is used.

Comment code when a condition in a control block could be misunderstood.

Comment Styles

Use /// Comments when commenting members or classes.

Use // Comments when commenting code blocks or statements

Use /* */ comments only if the comment has to be placed in the middle of a line of code or within a Razor syntax document.

Special Developer Comments

Always mark comments that are reminders of code to fix with “TODO” in upper case followed by a message.  This allows all TODO items to be found and completed or removed before the release of any solution.

If the last statement of a non-void method is not a return statement, place the lint style comment “/* NOTREACHED */” at the bottom of the method so developers know to maintain this when editing the code.

Braces and Brackets

Brace Positioning

Place opening braces for classes, namespaces, and members on new lines.

Place opening braces for code blocks such as if, for, foreach, while, do, and switch on the same line as the code control statement.

Place all closing braces on new lines.

Place both the opening and closing braces for automatic properties on the same line as the property name e.g. public int MyProperty { get; set; }

Place both the opening and closing braces for anonymous types used as property bags on the same line e.g. Html.Link("Test", new { @class = "that" })

Place else and else if statements on the same line as the closing brace and keep the next opening brace on the same line too.

Bracket Positioning

Place a space between keywords such as if, for, foreach, and while, and the opening bracket.

Do not place a space between a method name and its opening bracket.

Keep the closing bracket on the same line as the opening bracket if the argument list is small.

If the argument list for a method call is long: keep the opening bracket on the same line as the method name, place each argument on a separate line ending with a comma, and place the closing bracket on a new line.
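
Putting the brace and bracket rules together in one short, made-up example:

namespace MyCompany.Samples
{
    public class BraceExamples
    {
        public int RetryCount { get; set; }

        public void Process(int attempts)
        {
            if (attempts > RetryCount) {
                Log("Giving up.");
            } else {
                Log("Retrying.");
            }

            WriteReport(
                attempts,
                "a longer argument list",
                "split across several lines"
            );
        }

        private void Log(string message)
        {
            System.Console.WriteLine(message);
        }

        private void WriteReport(int attempts, string first, string second)
        {
            for (int i = 0; i < attempts; i++) {
                System.Console.WriteLine("{0} {1} {2}", i, first, second);
            }
        }
    }
}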

17 Years of Porting Software… Finally Solved

A History of Porting Software

I’ve been involved in creating and maintaining commercial and open source software for as long as I can remember, reaching back to 1996 when the world wide web was in its infancy, and Java wasn’t even a year old.

I was attracted to the NetBSD project because of its focus on having its software run on as many hardware platforms as possible.  Its slogan was, and remains “Of course it runs NetBSD”.

Although the NetBSD team worked tirelessly for its operating system to work across every imaginable hardware platform, much of the new open-source software development was taking place on the i386-focused GNU/Linux operating system, not to mention the huge volume of Windows-only software that Wine tried, and mostly failed, to make available to people on non-Windows operating systems.

Advocates of cross-platform software like me were constantly choosing between recreating or porting this software, depending on its license terms and source availability, just so we could use it on our platform of choice.

Some of my early open source contributions that are still available to download demonstrate this really well, such as adding NetBSD/OpenBSD support to AfterStep asmem in early 2000, or allowing CDs to be turned into MP3s on *BSD platforms with MP3c in the same year.

In 2002, when the large and ambitious KDE and GNOME desktops started to dominate the Linux desktop environments, I worked on changes to the 72 separate packages needed to bring GNOME 2 to NetBSD, and became the primary package maintainer for a number of years.

As an early adopter of C# and the Microsoft .NET Framework, I also worked through 2002 and 2003 to make early versions of the Mono project execute C# code on FreeBSD, NetBSD, and OpenBSD too.

The #ifdef solution

How was software ported between platforms back in those days?  Well to be honest, we cheated.

We would find the parts of the code that were platform specific and add #ifdef and #ifndef statements around them with conditions instructing the compiler to compile, or omit, different sections of code depending on the target platform.

Here is an example of read_mem.c from asmem release 1.6:

/*
 * Copyright (c) 1999  Albert Dorofeev <>
 * For the updates see
 * This software is distributed under GPL. For details see LICENSE file.
 */

/* kvm/uvm use (BSD port) code:
 * Copyright (c) 2000  Scott Aaron Bamford <>
 * BSD additions for for this code are licensed BSD style.
 * All other code and the project as a whole is under the GPL.
 * For details see LICENSE.
 * BSD systems dont have /proc/meminfo. it is still posible to get the disired
 * information from the uvm/kvm functions. Linux machines shouldn't have
 * <uvm/vum_extern.h> so should use the /proc/meminfo way. BSD machines (NetBSD
 * i use, but maybe others?) dont have /proc/meminfo so we instead get our info
 * using kvm/uvm.
 */

#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

#include "state.h"

#include "config.h"

#ifdef HAVE_UVM_UVM_EXTERN_H
/* sab - 2000/01/21
 * this should only happen on *BSD and will use the BSD kvm/uvm interface
 * instead of /proc/meminfo
 */
#include <sys/types.h>
#include <sys/param.h>
#include <sys/sysctl.h>

#include <uvm/uvm_extern.h>
#endif /* HAVE_UVM_UVM_EXTERN_H */

extern struct asmem_state state;

#ifndef HAVE_UVM_UVM_EXTERN_H
#define BUFFER_LENGTH 400
int fd;
char buf[BUFFER_LENGTH];
#endif /* !HAVE_UVM_UVM_EXTERN */

void error_handle( int place, const char * message )
{
	int error_num;
	error_num = errno;
	/* if that was an interrupt - quit quietly */
	if (error_num == EINTR) {
		printf("asmem: Interrupted.\n");
		return;
	}
	switch ( place ) {
	case 1: /* opening the /proc/meminfo file */
		switch (error_num) {
		case ENOENT :
			printf("asmem: The file %s does not exist. "
			"Weird system it is.\n", state.proc_mem_filename);
			break;
		case EACCES :
			printf("asmem: You do not have permissions "
			"to read %s\n", state.proc_mem_filename);
			break;
		default :
			printf("asmem: cannot open %s. Error %d: %s\n",
				state.proc_mem_filename, errno,
				sys_errlist[errno]);
			break;
		}
		break;
	default: /* catchall for the rest */
		printf("asmem: %s: Error %d: %s\n",
			message, errno, sys_errlist[errno]);
	}
}

#ifdef DEBUG
/* sab - 2000/01/21
 * Moved there here so it can be used in both BSD style and /proc/meminfo style
 * without repeating code and alowing us to keep the two main functions seperate
 */
#define verb_debug() { \
       printf("+- Total : %ld, used : %ld, free : %ld \n", \
    , state.fresh.used,; \
       printf("|  Shared : %ld, buffers : %ld, cached : %ld \n", \
             state.fresh.shared, state.fresh.buffers, state.fresh.cached); \
       printf("+- Swap total : %ld, used : %ld, free : %ld \n", \
             state.fresh.swap_total, state.fresh.swap_used, state.fresh.swap_free); \
       }
#else
#define verb_debug()
#endif /* DEBUG */

#ifdef HAVE_UVM_UVM_EXTERN_H
/* using kvm/uvm (BSD systems) ... */

#define pagetok(size) ((size) << pageshift)

int read_meminfo()
{
      int pagesize, pageshift;
      int mib[2];
      size_t usize;
      struct uvmexp uvm_exp;

      /* get the info */
      mib[0] = CTL_VM;
      mib[1] = VM_UVMEXP;
      usize = sizeof(uvm_exp);
      if (sysctl(mib, 2, &uvm_exp, &usize, NULL, 0) < 0) {
        fprintf(stderr, "asmem: sysctl uvm_exp failed: %s\n",
                strerror(errno));
        return -1;
      }

      /* setup pageshift */
      pagesize = uvm_exp.pagesize;
      pageshift = 0;
      while (pagesize > 1) {
              pageshift++;
              pagesize >>= 1;
      }

      /* update state */ = pagetok(uvm_exp.npages);
      state.fresh.used = pagetok(; = pagetok(;
      state.fresh.shared = 0;  /* dont know how to get these */
      state.fresh.buffers = 0;
      state.fresh.cached = 0;
      state.fresh.swap_total = pagetok(uvm_exp.swpages);
      state.fresh.swap_used = pagetok(uvm_exp.swpginuse);
      state.fresh.swap_free = pagetok(uvm_exp.swpages-uvm_exp.swpginuse);
      verb_debug();
      return 0;
}

#else
/* default /proc/meminfo (Linux) method ... */

int read_meminfo()
{
	int result;
	result = lseek(fd, 0, SEEK_SET);
	if ( result < 0 ) {
		error_handle(2, "seek");
		return -1;
	}
	result = read(fd, buf, sizeof buf);
	switch (result) {
	case 0 : /* Huh? End of file? Pretend this did not happen... */
	case -1 :
		error_handle(2, "read");
		return -1;
	default :
		break;
	}
	buf[result-1] = 0;
	result = sscanf(buf, "%*[^\n]%*s %ld %ld %ld %ld %ld %ld\n%*s %ld %ld %ld",
		&, &state.fresh.used, &,
		&state.fresh.shared, &state.fresh.buffers, &state.fresh.cached,
		&state.fresh.swap_total, &state.fresh.swap_used, &state.fresh.swap_free);
	switch (result) {
	case 0 :
	case -1 :
		printf("asmem: invalid input character while "
			"reading %s\n", state.proc_mem_filename);
		return -1;
	}
	verb_debug();
	return 0;
}

#endif /* (else) HAVE_UVM_UVM_EXTERN_H */

int open_meminfo()
{
#ifndef HAVE_UVM_UVM_EXTERN_H
	int result;
	if ((fd = open(state.proc_mem_filename, O_RDONLY)) == -1) {
		error_handle(1, "");
		return -1;
	}
#endif /* !HAVE_UVM_UVM_EXTERN_H */
	return 0;
}

int close_meminfo()
{
#ifndef HAVE_UVM_UVM_EXTERN_H
	close(fd);
#endif /* !HAVE_UVM_UVM_EXTERN_H */
	return 0;
}
It wasn’t neat.  It increased code complexity and maintenance costs, but it worked.  And we all accepted it as the best we had for now.

Hopes of a Brave New World

Like many cross-platform advocates, I had big hopes for Java and C# with the Microsoft .NET Platform.  But sadly we never saw the fulfilment of their “platform independent” coding promises.  Too many times we had to choose between using one platform’s GUI toolkit and looking out of place on the others.  Other times we had to P/Invoke to native APIs to get at functionality not exposed or reproduced by the frameworks.  Even now the GUI toolkit Gtk# is recommended over the standard Windows System.Windows.Forms on Mono when creating C# programs for Linux or *BSD.

Cross-platform toolkits such as Swing for Java and Qt for C++ sprang up to abstract the user from the platform they were working with.  But they were primarily GUI toolkits, their APIs only went so far, and eventually, like it or not, all but the simplest applications ended up with a native API call or two wrapped in an #ifdef style condition.

How Web Development Made it Worse

With the rapid increase in Web Development many saw this as finally the way to deliver software across multiple platforms.  Users accessed software via a web browser such as Netscape Navigator and didn’t need the code to work on their own PC or operating system.

Of course behind the scenes the CGI programs were still platform specific or littered with #ifdef statements if they needed to work on more than one server OS.  But the experience of the end user was protected from this, and it looked like a solution may be in the pipeline.

But then the Netscape vs Internet Explorer browser wars happened.  Browsers competed for market share by exposing incompatible features and having sites marked as “recommended for Netscape” or “works best in IE”.  People wanting to support multiple browsers started having to litter their code with the JavaScript equivalents of #ifdef statements.  The same happened again with CSS as it became popular.  Nothing really changed.

Enter the Mobile

Then along came the iPhone, and made a bad situation even worse.

Those who went for a native implementation had to learn the rarely used Objective-C language.  This helped Apple to avoid competition as developers scrambled to be part of the mobile revolution, but it deliberately made portability harder rather than easier.  That still remains part of their strategy today.

People turning again to the web for solutions found that accessing web sites carefully formatted to look great on 1024×768 screens, now viewed on a tiny mobile phone screen in portrait orientation, was ugly at best and more often unusable!  And it wasn’t just about text size.  Touch and other mobile-specific services meant users expected a different way of interacting with their applications, and browser based software felt more out of place than ever.  Yes, Responsive Web Design and HTML 5 go a long way towards solving some of these web-specific mobile issues, but they don’t take us away from the #ifdef style logic that has become an accepted part of web application development, just as it did C and C++ development before it.

So What is to be Done?

Most of this article has been about a history of failures to tackle cross-platform software head on.  Each attempt did bring us a little closer to a solution, but throughout we resigned ourselves to the fact that #ifdef style code was still ultimately necessary.

As application designers and developers we had to choose between how native our applications felt and limiting users from using our software in situations we didn’t plan for.

For almost two decades I’ve been involved in trying to overcome this cross-platform problem.  Now the landscape is more complicated than ever.  Can the same software really run without compromise both inside and outside the browser?  Can we really have a native look and feel to an application on a mobile, a tablet, and a desktop PC?  Is wearable computing going to be the next spanner in the works?

All this is why, to move forward, we went back to basics.  We thought first about how software was designed, rather than the libraries and languages that we used.  We first made the Mvpc design pattern, and only then did we make the reference libraries and the commercial Ambidect Technology (soon to be known as Ambicore).  It’s fair to say that our many years of experience allowed us to finally learn from the past we had been so involved with, rather than repeating our mistakes again and again.

Because Ambicore provides access to the whole .NET Framework, it gives a complete cross-platform API that developers already know.  Use of C# as our first reference language gives us access to the great thinking that went into creating the Java JVM and Microsoft’s IL environments, which really can abstract us from the operating system and help us avoid #ifdef statements.

Providing native GUI interfaces built with each platform’s own recommended toolkit helps applications look and feel native everywhere – simply because they are native to each platform.

Providing a design pattern that works equally well in request-response stateless environments and in rich stateful environments allows us from day one to provide a browser based experience for those who want or need it, as well as a native rich client experience for those wanting to get more from their Windows PCs, phones, tablets, Macs, Linux, *BSD, or…

It’s taken 17 years of personal involvement, and recognising and listening to visionaries in the industry.  But by standing on the shoulders of others we re-thought the problem, knowing #ifdef statements were as much a part of the problem as they were a solution.  We redesigned the development pattern to be portable by default, not as an afterthought.  And we based our reference libraries on trusted platforms from market leaders such as Microsoft to make our technology available to the largest pool of developers possible in a language, framework, and IDE they already know.

We are stepping into a new chapter of software development where the platform and device is there to enable, not restrict, the end user from the software they want.  And just as we stood on the shoulders of giants to get here – we want you to join us in the new world too.

Google Glass – time to make your applications wearable

Wearable Technology

I’ve been having a bit of fun with Google Glass recently.  If you haven’t come across Google Glass before, I’d describe it as a pair of glasses you can wear that gives you a personal, voice-controlled, simple computer.

Now I don’t personally believe Google Glass is a product that is going to go mass-market in the way the iPad did in defining a new style of device, but I think as a prototype it’s worth looking at to understand where wearable computing may go in the future.

As the market progresses we’ll start to put screens in contact lenses and tap into the existing connectivity and power of the smartphones we already have in our pockets, and then somebody will come along, in the same way Apple did with the iPhone, take an ignored “prototype”, and create a device and accompanying marketing buzz that will make wearable computers the next big thing.

The Impact of Wearable Technology on BYOD

Right now many large companies, universities, and other organisations are in the process of bringing in their own BYOD (bring your own device) policies.

One often unconsidered side effect of the flexibility BYOD can bring is that companies adopting these policies are in effect giving up their control over when new devices, or new types of device, enter their workplace.  This may not matter too much if a new smartphone comes out running an alternative operating system like Sailfish, but what about the first time somebody walks in wearing Google Glass or a similar technology?  Do the policies cover use of its real-time recording equipment?  Will any of the required applications run on it?  Is there an environment where its voice operation can be used without affecting others?

Without preparation, wearable computing may see the end of BYOD before it manages to reach its maturity.  To protect BYOD it’s therefore more important now than ever to watch these advances in technology during their infancy and be willing to prepare for the impact they may have in the future.

Wearable Applications

One of the most important, and difficult, areas for people adopting BYOD is making key software and applications available across all platforms.

I’ve already seen people leaving the mobile space because the costs of developing and maintaining separate, fairly basic apps for the three major mobile platforms are too high.  Consider how much more costly this would be if we are talking about CRM or ERP systems, or even stock management.  What if wearable computing takes off in 2014?  Can your systems keep up?

Thanks to the Mvpc design pattern and the Ambidect Technology we use at Ambidect, we are already tackling this problem and sharing our technology with others to encourage them to do the same.  By creating future proof applications we enable genuine BYOD environments to spring up everywhere, without anyone or any device ever having to be labelled a second class citizen because a key application isn’t available for the platform or works poorly in the platform’s web browser.

Google Glass and Mvpc

As part of my experiments with Google Glass I thought I’d be able to give the future proof part of our technology a challenge, but I was actually surprised by the ease with which existing applications could be extended to work with wearable computing.

I started by going through the Google Glass API to get a feel for how applications should be built to feel “native” on the Glass.  Because of the way Google Glass works over the web I was able to reference all the existing .NET framework classes shared between ASP.NET and Windows Desktop as a starting point for the new platform.  This gave us a big head start.

Putting together an IPresenter and an IStartView that served up Timelines and provided menuItems[] allowed navigation through the Command layer within an hour of getting started.  From there it was pretty simple to implement the rest of the standard views and run up a demo project through the emulator; a couple more hours and I had the whole demo working nicely in a read-only way.

I then ran a couple of existing applications through the Glass to see how they worked, and all of them were usable, although extra attention would be needed to break down the amount of information on each page in the timeline, and thought given to how to best edit records, if we were considering using the Glass in a genuine production environment.

So Should I Get One?

Now I’m not expecting everybody to run out and get their own pair of Google Glass, but I do think they provide an interesting look into where the future of “personal” computers may be.  They also provided an exciting test for the Mvpc design pattern and libraries.  If nothing else the Glass would provide you with a bit of fun pretending to be the Terminator for a few days, even if you should really be using the business app we made run on it!

Flexible API Documentation with ApiDoc

What is ApiDoc

ApiDoc is a tool for creating a set of technical API documents to help developers using your libraries or classes.

As a .NET developer you’ve probably used the MSDN references for various .NET classes on a regular basis.  But how do you make the same style of documentation available to your team or users of your libraries from other companies?

This article shows you how to use ApiDoc to generate MSDN style API documentation for your own classes, in a way that is easy to customise, integrate with existing websites, and can have your own branding applied.

Getting ApiDoc

To get started you need to download ApiDoc.  You can do this from codeplex:

Codeplex Download ApiDoc

The download includes full source code for ApiDoc and a fully functional demo website you can use and customise.  We’ll use this demo website in this article so be sure to download and extract the .zip file to a location you can edit it.

In the future if you want to you could add ApiDoc to your existing ASP.NET site or application via NuGet instead (see ApiDoc and ApiDoc.Mvc4).

NuGet ApiDoc

Getting Started

Once you’ve downloaded and extracted the .zip file open the “ApiDoc.sln” in Visual Studio.

ApiDoc - open solution

To run the demo site make sure the ApiMvcApplication is set as the start-up application and press F5.

Demo Site

By default the site is configured to show the documentation for the ApiDoc library itself.  Have a browse around and you will be able to move through the documentation, jumping from class to class as easily as you can with the MSDN.

Documenting your own Code

The next step is to get the demo site working with your own code.  Close the web browser and stop debugging the program so you return to Visual Studio.

Under the ApiMvcApplication expand the Content folder and you will find an “AssembliesToDocument” folder:


Using Explorer, find the .dll files and associated .xml files for your application and drag and drop them into AssembliesToDocument so they show in the list.

Now open HomeController.cs under Controllers and you’ll see the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace ApiMvcApplication.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return RedirectToAction("Namespace", "ApiDocumentation", new { id = "ApiDoc" });
        }
    }
}

The important part here is line 13. You can modify it to match the namespace you want to be your start page by changing id = “ApiDoc” to your own namespace e.g.:

            return RedirectToAction("Namespace", "ApiDocumentation", new { id = "MyApplication" });

You may prefer to open to the index of an assembly:

            return RedirectToAction("Assembly", "ApiDocumentation", new { id = "MyClassLibrary1" });

or directly to your main class:

            return RedirectToAction("Type", "ApiDocumentation", new { id = "MyApplication.MyMainClass" });

Hit F5 again and this time you will see the documentation for your own code rather than for ApiDoc itself.  You can navigate this code exactly as you could before.

Customising the Web Pages

The demo site is a standard ASP.NET MVC 4 site using C# and the razor syntax.  This means if you know how to create standard web applications in this environment customising the site to meet your needs will be very easy.

The views can be found in the default place under Views/ApiDocumentation.  Here you can change the order of documented items, apply custom scripts, or apply your own logic to the API or documentation as you display it.

To customise the style of the site you can edit “Content/Site.css”.  The site uses the standard classes from the MVC project templates, but you can customise specific divs in the views using their member names e.g. “description”, “classes”, or “properties”.

The ApiDocumentation controller can be found under “Controllers/ApiDocumentation.cs” and has a separate action for each member type making it easy to customise or extend.

To make the demo site look nice I’ve used the SyntaxHighlighter scripts to format the code syntax examples in the same way code samples are highlighted on this blog.  If you are customising the demo site for your own purposes you just need to reference the code within a pre block as follows to have the syntax highlighting take place:

    <pre class="brush: csharp">

You should now know everything you need to know to document your own libraries in your own style. If you’d like a link to your library documentation added to the ApiDoc project on Codeplex drop me an email with the link and brief description so I can add it to the site.

Getting Involved

ApiDoc is available under a BSD style license for everyone who needs to generate custom API documentation.  It’s currently beta software and may contain a few issues, which should be pretty easy to sort out and will be fixed as new versions of the library are released.  If you want to get involved with the project itself please visit the Codeplex page, where all feedback, bug reports, and suggestions are welcome through the discussion pages, and anyone wanting to contribute directly to the project will be invited to do so.


If like me you’ve worked with C# and the .NET framework for years then you will probably have written variations on the following code hundreds of times when trying to display values on screen or save values into text based files or SQL statements:

object rawValue = SomeMethodCall();
string displayValue = String.Empty;
if (rawValue != null) {
    displayValue = rawValue.ToString();
}
The code itself is simple enough to get right first time, and easy to read and understand, but after you’ve written it a few dozen times it starts to appear like an unwelcome rash across your code. A programmer’s next natural instinct is to see if there is a way to shorten the code.

Unfortunately the ?? operator can’t help us here unless we are exclusively dealing with strings for rawValue (in which case why are you calling ToString()?). We can however shorten things substantially with the ? operator:

object rawValue = SomeMethodCall();
string displayValue = (rawValue == null? String.Empty: rawValue.ToString());

This has helped and taken our code from five lines to two, and hasn’t noticeably affected readability. But if we are only interested in the displayValue wouldn’t it be nicer if we were able to just do:

string displayValue = SomeMethodCall().ToString();

This code will execute fine, but as soon as a null object is returned from SomeMethodCall() we’ll get a NullReferenceException raised and if we didn’t see it in testing, our end users will see an unhandled exception we should never have introduced.

If we try to use the ? operator directly with the method call, the best we can get is:

string displayValue = SomeMethodCall() == null ? String.Empty : SomeMethodCall().ToString();

You can tell immediately from the code that this would be at best wasteful if not potentially damaging depending on the side effects of SomeMethodCall(). Under normal circumstances, where SomeMethodCall() doesn’t return null, we end up executing SomeMethodCall() twice. If the method accesses a web service or database we will have potentially doubled the impact of the code on the server, and slowed down the user experience.

What can be done then? Should we just put up with the two lines of code where one would work? Until recently I would have said yes, but with LINQ usage on the rise I’ve started to see this particular problem causing ugly code to be written in lambda statements, or worse, developers knowingly being lazy with their handling of potential null values when calling .ToString()!

We can actually use a little used feature of extension methods to help us with this problem. As you will know extension methods allow us to invoke static utility methods in a syntax that mirrors invoking a member of a class. The compiler understands the code that calls an extension method and effectively re-writes the syntax from a member call to a static method call for us. To get an idea of how this works take a look at my previous post.

Because the member-like syntax is converted into a static method call, it is possible to call the member on a null reference.  Therefore the first line of the following code would throw a NullReferenceException, but if MyExtensionMethod() was an extension method that handled a null specially, the second will not.
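
For example, given a variable declared as object value = null; and the hypothetical MyExtensionMethod() described here:

string a = value.ToString();          // first line: throws a NullReferenceException
string b = value.MyExtensionMethod(); // second line: returns safely because the extension method checks for null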


Using this technique we are able to create extension methods that special case null values, but maintain the readable member-style syntax.

I’m going to throw a strong word of warning in here now. When you add an extension method that does not behave like a member method call, particularly one that doesn’t raise a NullReferenceException when called on a null variable, you are moving away from the basics that programmers take for granted when reading code. Used incorrectly this can make code harder to understand and therefore harder to maintain. You should be sure about what you are doing before you add extension methods that expose this non-standard behaviour. It’s been my experience that methods that should follow this behaviour are almost always involved with displaying values as strings, or converting values between types to pass to an ORM or similar module.

My personal convention to make sure I can identify where the technique has been used is to suffix the method name with “Safe”, so the call above would be MyExtensionMethodSafe(). If everybody in the team follows this convention when they feel there is a genuine need for the extension method to treat nulls differently to a member method call then the code remains easy to read. Don’t forget however that even if you’ve adopted this convention throughout your team, you will still have to train new people joining the team on the convention.

Now with that warning having been strongly stated, let’s return to looking at the problem at hand. In this case I believe it makes very good sense to provide a “Safe” extension method companion to ToString(). Here is the code for the ToStringSafe() extension method in full:

namespace Mvpc.Extensions
{
    public static class ObjectExtensions_ToStringSafe
    {
        public static string ToStringSafe(this object value)
        {
            // Nulls just return empty strings.
            if (value == null) {
                return String.Empty;
            }

            return value.ToString();
        }
    }
}
If you read my post on async extension methods you will already know that I recommend placing any extension method that works on string, object, int, or any of the core types of the .NET Framework under a namespace ending in “Extensions” so they don’t litter IntelliSense when the user doesn’t need them. This rule applies here too, as the majority of code will not want to use the ToStringSafe() method.

After adding a using for the Mvpc.Extensions namespace it finally becomes simple to write:

string displayValue = SomeMethodCall().ToStringSafe();

We finally get our five lines of oft-duplicated code down to a single readable line.

Since introducing this new method I’ve completely stopped seeing programmers being lazy with their .ToString() handling of nulls inside lambda statements. Hopefully you will see the same, as well as being able to produce more readable null-safe code.