
Don’t let MAJOR version number worries stop you using Semantic Versioning (Semver)

Why Should I Use Semantic Versioning?

Semantic Versioning (semver) is a specification for version numbers of software libraries and similar dependencies.

Its rules are not new, and are similar to how most library version numbers have been managed for years. However, the idea of semver is that if all libraries use exactly the same rules around version numbers, then developers using those libraries can tell from the version number alone what kind of changes a release contains and whether it is backwards compatible.

The basic details are:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

You can read the full specification at semver.org.

In recent years semver has become very widely used, and in some spaces, expected. If you and the users of your library want to benefit from using a widely recognised version numbering system, I would recommend semver above the other systems available, primarily because of how widely it is already adopted.

Although semver will be very similar to the system you are using at present, as you apply its rules strictly you will probably find you make more backwards-incompatible changes than you realised. It’s also likely that in the past you used the MAJOR version number as a marketing version number rather than strictly as an API incompatibility indicator. You’ll probably then find that your historical version numbers and methods have found their way outside of the pure library space too – with packages and even complete software applications matching their version numbers to the library’s.

So What are the Major Issues to Expect when Adopting Semver

Most of the major issues people experience switching from their old versioning system to semver involve the major version number and API incompatibility.

Here are a few of the most common, along with my recommendations on how you can avoid the issue or change your processes to minimise its impact.

Our major version number is used for marketing. We want people to know that version 3.x.x is a very big improvement over 2.x.x

This is an important one as in many cases if semver is accepted and no other policies are changed, you could end up hitting version 42.x.x before you’ve had time to stop it. At that point you’ll have a very awkward choice to make to get your major version under control, and probably a lot of commercial pressure to “roll back” the number and start using the major version number for marketing again.

As you reflect on your own internal policies you may be tempted to come up with a “semver like” versioning system along the lines of MARKETING.MAJOR.MINOR.PATCH. Don’t do it. While I agree that style of numbering achieves what you want, you lose all the benefits of adopting semver if you don’t adopt it exactly and strictly. Remember your library was already using something like semver before you even heard of semver.

Our library is used in a few applications but we really like the freedom of breaking our APIs as we need to right now, so we won’t call our library 1.0.0 yet, we’ll stay at 0.x.x.

It can be tempting to allow yourself freedom for as long as possible with a 0.x.x version. Maybe your library “is mostly used internally” or maybe “we’ll make it version 1.0.0 when we know the timeline for 2.x.x”.

The truth is the benefits of semver only really kick in after you reach the magic 1.0.0 version number. We can all recall open source projects that still sport 0.9.x version numbers after more than 10 years of development, because people haven’t yet got to a point where everything is perfect.

Remember that the version 1.0.0 is really for users of your library, not for you as the creator. Once you hit version 1.0.0 you are telling people that they should use your library. It functions as it should. They can benefit from it. They can use it in production.

If your library isn’t at that stage yet, then yes, stick to a 0.x.x scheme, or a 1.0.0-alphax scheme, but in my view “release early, release often” isn’t about releasing incomplete software before it’s useful, it’s about releasing software as soon as it’s useful, and allowing its users to be involved in its evolution. This is nowhere more true than in library development.

We just released version 3.0.0 but version 2.32.0 actually had all the new developments in it. All we did for 3.0.0 was remove deprecated classes.

Unless you use completely separate branches for 3.x.x and 2.x.x and a long-term internal or public 3.0.0-alphax scheme for your next major release, then yes, the x.0.0 releases can feel a lot less exciting than they used to.

I said before that the 1.0.0 release is for your library’s users, not for you. Thankfully 2.0.0 and later x.0.0 releases are actually better for you than for your users. For your users the version change clearly warns them of incompatibilities that they may now need to work with. For you it represents liberation from having to maintain and continually consider your deprecated APIs in everything you do. You took the worries of “how will it actually work in the wild” with your previous 2.32.0 release. So sit back and enjoy the x.0.0 releases.

I want to follow semver strictly, but how do I define an incompatible API change?

There are two general ways to define an incompatible API change (sometimes called a breaking change):

  1. Binary Breakage
  2. Source Breakage

Which one of these you consider appropriate for the management of your API depends on how you make your library available to your user. For example, if your library is managed through installation into a system wide library area (e.g. the Global Assembly Cache (GAC) for .NET or /usr/lib for *NIX) then you should use binary breakage because your distribution method allows post-compile substitution of the library into compiled applications.

More commonly now, libraries are designed to be shipped as part of a bundle for a specific application (ClickOnce installs on Windows, .ipa files on iOS, .apk files on Android, etc.). This means that an individual application does not have the ability (or at least the design intent) for its libraries to be post-compile substituted, so in this case you should use source breakage to define your API compatibility. On .NET many libraries are now distributed through Nuget, which is very much in line with this bundling of libraries with software, so if your library is designed to be managed through Nuget you should consider source breakage as the best measure of incompatibility.
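
To make the difference concrete, here is a small hypothetical sketch (the ReportWriter class and its Write method are invented for illustration, not taken from any real library). Adding an optional parameter to a public method is a classic case where the two definitions of breakage disagree:

namespace Library.V1
{
    public static class ReportWriter
    {
        public static void Write(string path) { /* ... */ }
    }
}

namespace Library.V2
{
    public static class ReportWriter
    {
        // Source-compatible: existing calls such as Write("out.txt") recompile unchanged.
        // Binary-incompatible: assemblies compiled against V1 bound to Write(string), so
        // substituting this build without recompiling fails to find that exact signature.
        public static void Write(string path, bool append = false) { /* ... */ }
    }
}

If your library can be substituted post-compile, a change like this needs a major version bump; if it is always bundled and recompiled with the application, it can reasonably be treated as a minor change.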

A good discussion of some of the ways your API changes can cause source/binary breakage in .NET can be found on Stack Overflow here:

At a minimum you must consider source breakage, and therefore increase your major version number, if any application using your library will require source code changes before it can recompile after updating from the previous version of the library. Changes in the behaviour of a call from one version to the next cannot all be covered (in theory every bug fix changes behaviour in a way that may be undesirable to someone, which would mean increasing the major version number for just about every patch change), so use your common sense. If any current appropriate use of your API will behave worse rather than better after an update, I would consider that to be a breaking change.

I can manage the deprecation process for my concrete and base classes but everything I do to an Interface results in an incompatible API change.

If you are using Interfaces as part of the abstraction of your library (and generally you should be – see my post on Dependency Inversion) then you will run into a really frustrating reality – everything you do to an interface will basically end up being a breaking change in your API.

Yes you can also provide concrete base classes that most people subclass to fulfil the interface, and yes you may have updated the base class with sensible default behaviour, and yes this does mean that most of your users (if not all in practice) have no code to change to be able to recompile after a library update. But some might, and this makes it a breaking change. The fact you exposed an interface to allow any implementation of those abstracted needs by classes outside your library does mean your users have a lot of flexibility in the abstraction, but with semver it also means every change you make to an interface really is a breaking change.
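
As a small hypothetical sketch of why this is so (the interface and class names here are made up for illustration):

// Shipped in 1.x.x of the hypothetical library.
public interface IStorageProvider
{
    void Save(string key, byte[] data);
}

// A user's implementation, living outside your library.
public class DiskStorageProvider : IStorageProvider
{
    public void Save(string key, byte[] data) { /* write to disk */ }
}

// If the next release adds "void Delete(string key);" to IStorageProvider, your own
// base class can supply a sensible default, but DiskStorageProvider above no longer
// compiles until its author adds the member, so under semver that release must be a
// new MAJOR version.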

You could use techniques like starting to build version numbers into your interface or namespace names to avoid breaking changes if you have to – but that will create far more work for you and your library’s users than it saves in the long run. Either accept the change and bump the major version number, or postpone the change until the next planned major version number change. Either way you may very well find you have to plan changes to your interfaces a lot more strictly after adopting semver than you did before.

A final tip, if you have a lot of “standard” interfaces as part of your library, is to place these into their own library separate from the rest of your code. You can then use your package manager (e.g. Nuget) to make the main library dependent on the interface library/package. You would then be able to bump your “interface library’s” major version number correctly, but keep tight control over the major version number of the main library people use. This doesn’t solve everything, but for many it is a good balance to work around a very specific concern when using semver.

I use nuget (or a similar packaging tool) but how do I handle changes to version numbers that only affect the package not the library itself?

Semver does not make specific provisions for packaging tools, so it’s down to the packaging tools themselves to recommend an appropriate approach. Like most packaging tools, Nuget’s recommendation is to use the same version number as the library being packaged. This makes sense and avoids a lot of confusion for end users; however it does leave the question: what do I do if I have to put out a new version of the package (e.g. because package scripts or metadata have changed) but the underlying library hasn’t changed version number?

Some packaging systems, such as NetBSD’s pkgsrc, handle this by having a separate package version number that is appended to the official version number to make the full version number for the package. Where this is available you should use the package version number as a sequential number and consider it separate from the library’s own semver version number. It is good practice, however, to reset the package number to 0 when the library’s own version number changes.

In the case of Nuget there is not yet a separate package version number, so if you want to stick to the guideline of using the same version number as the library, you’re going to have to make a decision if you find a package-only change that needs to be released. If you have control over both the library and the package, then consider the change the same as any other “patch” and bump the patch number. If you don’t have control over the library’s version number, you are going to have to decide whether you can risk updating the package “in place” without a version number bump – not recommended, as existing users will not get the changes and user support can be confusing with multiple package versions under the same name – or whether you need to temporarily move the package (not the library) away from semver and give it a number x.y.z.p, where p is the package revision number, until the next release of the library comes out and lets you move back to matching the library’s semver version.

My version number used to represent sideways compatibility with closely related libraries, how can I keep this while using semver?

Semver version strings represent a single aspect of version control – API compatibility. Other things version strings can be used for – such as marketing numbers (covered in detail above) and sideways compatibility with specific versions of other libraries – are not included in the version number design. This means you’ll need to handle them a different way.

You may choose to follow semver rules in how your version string is constructed, but match your release cycle to the original library. Although you would normally only reflect your own library’s API changes in its version number, not those of its dependencies, if your library is tightly coupled to another and needs to track it closely, you could mirror that library’s version number in your own. This can be a significant help to your library’s users at times, especially if there is a slight delay between the dependent package being updated and a compatible version of your own becoming available.

In cases where the coupling is looser, you probably don’t want to be tied to the release model and version numbers of the other library anyway, so now would be a good time to break the link. Document compatibility in the package metadata description, release notes, or project website. If you’re distributing through a package manager that manages inter-package dependencies, your end users are unlikely to notice the difference, and will soon adjust even if they did rely on your old version numbering style.



Above I’ve tried to tackle some of the major worries around semver and controlling the major version number. If you have anything to add please feel free to do so in comments or direct feedback.

As someone involved in both creating libraries and package management systems for many years, I can see only good in a standardised version numbering scheme such as semver becoming as widely used as possible. Yes, such schemes have their problems, but every choice in versioning is a compromise at some point. It would be great if, a few years from now, we could look back on everyone managing their version strings in different ways as a thing of the past. But this only happens if everyone plays their part and sticks to the standard strictly and completely.

Should you adopt semver? Yes, you should – but make sure you plan for its adoption properly and change your internal processes to match. This way you’ll be able to look back on the move to semver as an empowering choice to join other developers in managing dependencies collectively, and not as the point your project lost control of its version numbers.


The Open Source vs Commercial Development Myth

The two can Co-Exist

Looking around the internet you could believe that open source software development, and commercial software development, are opposing forces that can never meet or work well together.  All experienced software developers know this is simply untrue, and yet the myth seems to perpetuate anyway.

By being careful, and keeping with the spirit of the freedoms represented by the open source community, it is possible for open source developments to benefit commercial software, and commercial developments to benefit open source software.  In the almost two decades that I’ve now been involved in software development, both open source and commercial, I have never found a conflict between the two ideals that couldn’t be solved in a way that helped everyone.

Understanding the Types of Open Source License

The first thing you need to understand is that not all open source licenses are equal.  There are primarily two kinds of open source: copyleft and permissive.  For commercial software development I find it easy to divide the copyleft licenses into “strong copyleft” and “weak copyleft”.  Let me cover the three categories briefly here:

Strong Copyleft

When a software system or library is placed under a Strong Copyleft license the author is indicating two things:

  1. He wants others to be able to use the code he has produced.
  2. He feels the code produced has value enough to ask others to share their own changes to the code and share systems using the code under the same terms.

The most prominent Strong Copyleft licenses are: GPLv2 and GPLv3.

Strong Copyleft licenses are sometimes described as having a “viral” effect.  This is due to the fact that any derived work has to be distributed under the same license terms.  In the spirit of the license, a derived work is usually intended to include software that references or links to Strong Copyleft software libraries, as well as altered versions of the original.  This means, for example, you can only use GPL code in your commercial application if you are happy to then distribute your commercial application under the terms of the GPL.

Personally I would describe Strong Copyleft licenses as the least friendly for commercial software development, and many commercial development companies have to avoid them to meet the IP requirements of customers and business owners wanting software developed.

Weak Copyleft

When a software system or library is placed under a Weak Copyleft license the author is indicating three things:

  1. He wants others to be able to use the code he has produced.
  2. He feels the code produced has value enough to ask others to share their own changes to the code under the same terms.
  3. He is happy for the library to be used in closed-source products.

Weak Copyleft licenses are most often used for libraries.  The author wants bug fixes and enhancements added back to the library so it can continue to improve, but doesn’t care about how you license the software that utilises the library.

The most prominent Weak Copyleft licenses are LGPLv2.1, LGPLv3, MPL, and MS-PL.

Weak Copyleft licenses are mostly about encouraging you to share fixes and enhancements to core functionality and keeping those fixes and enhancements “free”.  They do not generally have the same “viral” effect as strong copyleft licenses because software that references or links to weak copyleft software libraries is not intended to be considered a derived work.  This means you can use LGPL code in your commercial application, and distribute your main software under any license you want; however any changes you make to the copyleft library must be distributed under the original copyleft license.

Personally I would describe Weak Copyleft licenses as very friendly to commercial software development.  They encourage you to benefit from somebody else’s effort, and simply ask for improvements to be shared in return.


Permissive

When a software system or library is placed under a permissive license the author is indicating three things:

  1. He wants others to be able to use the code he has produced.
  2. He’d like some credit for the work he has put in.
  3. He is happy for the code to be used in any future open source or closed source application.

The most prominent permissive licenses are BSD, MIT, and Apache.

Permissive licenses are about preventing the wasted effort of people reinventing the wheel.  The functionality of the code is shared freely for open source or commercial use.  There is no “viral” effect, and you do not need to contribute changes back to the original author or project team, although this doesn’t mean you shouldn’t.

Personally I would describe Permissive licenses as very friendly to commercial software development.  They encourage you to benefit from somebody else’s existing effort rather than wasting time reproducing the same functionality.  In exchange usually all that is asked for is acknowledgement of the code’s origins.

Keeping the Spirit of the Original License Choice

It’s important to understand that open source licenses work within the restrictions of the existing copyright framework.  This differs slightly from country to country, and generally only forms contracts between the people code or software is “distributed” to.  If you are not careful you can get caught up in questions of “can I do this with license X?” or comments like “I don’t have to do that because license Y says you are not entitled to it”.  This is not what the open source community is about.  I believe that as well as keeping to the letter of the licenses, you should honour the spirit of the license regardless of the “additional rights” awarded by copyright laws in your country.

If you take a GPL library you want to use, and try to come up with ways to use the software “indirectly” to avoid putting your own code under the GPL, stop and think again.  The code you want to use was shared by the author because he wanted to see enhancements to it shared too.  You may not want to share your innovations, and you may not like the GPL.  But that was the intention of the author, and you should honour it or choose not to use the GPL code.

Likewise don’t think that because you upload copyleft software you developed to a web server you didn’t “distribute” it so you can keep the source closed.  And don’t create a “wrapper library” around an LGPL library where you add your new functionality so you don’t have to share it.  This isn’t the intention of the author when they let you use their code, and you shouldn’t abuse their intentions for your own gain.

If a product is dual licensed under copyleft and non-copyleft licenses, don’t constantly push the boundaries of what you can do with the copyleft version; instead, contribute to the funding of the library or software by buying a license so it can continue to improve.

Giving Back

There are many ways to give back to the open source community as a commercial software developer.  Some are really simple, but under commercial pressures are often ignored.

1. If you fix a bug in a library, no matter what license it’s under, submit the change back to the author or project team.

2. If you make minor enhancements that are generically useful, submit them back to the author or project team.

3. Don’t contribute incomplete changes or changes specific for your needs only back to the author or project team. These changes usually carry no reusable value and can waste the project’s time tidying them up for inclusion.

4. If you create a library or tool because you couldn’t find the one you needed in the market, and it’s not something you are wanting to commercialise on its own, make it available under an open source license you are comfortable with so others don’t have to repeat your effort.

5. If you create a useful program, library, or tool, consider dual licensing a “community” edition under a copyleft license.  Yes, it’s true some developers and organisations won’t honour the license, but generally those people would have broken your commercial license terms if they had the chance too.  This dual licensing can be particularly useful for libraries as it can attract the attention of students and others keen to learn your libraries or software, and this can in time become a major source of paid license users in the future.

6. Even if you are working on an open source project, don’t change the license of code from permissive to copyleft by adding enhancements and fixes under your project’s copyleft license rather than the original permissive license.  This practice has long been a point of contention between advocates of permissive licenses and those using their code in copyleft projects.  The overall project can still be copyleft, while honouring the spirit of the license for permissive code you use and giving back under the same license.

Common Pitfalls

Here are a few of the pitfalls that can happen if you’re not careful combining open source software with commercial software products.

1. When you use an open source library don’t assume the author will carry on enhancing it in the future, and don’t assume someone else will fix the bugs.  Many open source projects are maintained in people’s spare time, and so every day some projects go stale as project teams move away or change direction.  If you use an open source library in your commercial product, remember that you must plan to be able to maintain it in the future.

2. Always check the license before using an open source library.  Is it compatible with the IP requirements of you and your customers?  Does it have an explicit non-commercial use clause?  It’s even worth checking this if you think you “know” the license the project is under.  It’s not uncommon for exceptions or additional clauses to be added on top of standard licenses.

3. Don’t expect you can open source a dying product to “hand over” maintenance of it to a community.  Be aware that the original author or team of an open source product always end up being the major contributors to that project long term.

4. Don’t assume because you didn’t pay for something, its cost is zero.  You still need to integrate the software or library into your solution, and you still need to train developers on its code base so you can maintain it within your SLAs.

5. Don’t dismiss a GPL library until you’ve checked the license.  Some projects prefer the use of the GPL over the LGPL but add explicit exceptions for the linking of closed-source software.


Open source and commercial software can benefit each other, even if their license terms sometimes seem contradictory.  By following the license requirements, and keeping with the spirit that caused the author to open source their software in the first place, the open source projects can benefit from bug fixes and enhancements from commercial users of their software.

Closed source commercial software can benefit by using stable versions of open source libraries reducing development times and potentially delivering a richer experience.

Both open source and commercial users benefit from a dual licensing scheme where it is appropriate, with the larger user base providing a useful support and learning network, as well as a pattern of many active open source users progressing to paid services in the future.

The small print: I’m not a lawyer so please take this article as me sharing my experience as an open source and commercial software developer rather than legal advice about licenses and copyright law.

C# Code Guidelines

Coding Guidelines

Every developer or software development team needs guidelines to follow to help them write consistent code that keeps maintenance costs low, and development productivity and code reuse high.

I recently updated the ones I use with my team, and thought I’d share them to save others having to create their own.  Feel free to reuse in part or full for yourself or your team, and leave a comment with any general suggestions.  Style can be a very personal thing, so don’t be afraid to adapt them to meet your team’s own preferences; the key is that all the team follow the same convention, and wherever possible that convention matches the underlying framework you are using.

Naming Conventions


Use PascalCasing for namespaces, types, and member names.

Use camelCasing for local variables, parameters, and user interface fields.

Use camelCasing with an “m_” prefix for private non-user interface fields.

In 3rd party templates or code that uses a “_” prefix within a class rather than an “m_” prefix, be consistent within the class and either rename all to “m_” or continue with the “_” prefix.
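
A short sketch of these naming rules together (the names are invented for illustration):

namespace MyCompany.Ordering           // PascalCase namespace
{
    public class OrderProcessor        // PascalCase type
    {
        private int m_retryCount;      // "m_" prefix for a private non-UI field

        public int RetryCount          // PascalCase member
        {
            get { return m_retryCount; }
        }

        public void Process(string orderNumber)   // camelCase parameter
        {
            int failedCount = 0;                   // camelCase local variable
            // ...
        }
    }
}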

Type Notations in Variable Names

Hungarian Notation must not be used – when the type is an important part of the variable’s purpose, include it in the name by prefixing or postfixing it to the name without abbreviation.

Do not use abbreviations – if an acronym is widely accepted and used in the framework or toolkit being referenced then the acronym may be used despite the rule to avoid abbreviations.

.NET Word Conventions

Follow .NET conventions for the words:

  1. Indexes (not Indices)
  2. UserName (not Username)

Use the following symmetric words when defining pairs of functionality:

  1. Add / Remove
  2. Insert / Delete
  3. Create / Destroy
  4. Initialize / Finalize
  5. Get / Set
  6. LogOn / LogOff
  7. Begin / End
  8. Register / Unregister

Use the following American words rather than the British words for member names:

  1. Color
  2. Initialize

Use of Plural and Singular Words

Name all classes with a singular word or phrase.

Name all collections with plural words or phrases.

Name all namespaces as plural words or phrases unless the namespace has a special meaning in the MVC framework.

Compatibility with COM and other CLR Languages

Do not name two public or protected members with names that are the same excluding character case.  This would prevent reuse of the classes in languages such as VB.NET.

When an argument for a class is passed in to a method or constructor with the purpose of setting a public member, then use of the same name in camelCase rather than PascalCase is recommended over prefixes or suffixes.


Name all static singletons that initialise themselves “Default” unless there is a specific reason to use another name.

Name all static singletons that do not self-initialise “Current” unless there is a specific reason to use another name.

Simple Names

The following simple names are allowed for the uses specified:

  1. obj – to store a generic object whose type is unimportant.
  2. i, j, k – for control of loops.
  3. s – to store a string value of a variable already in scope in a different type.
  4. e, ea, ee – for subclasses of EventArgs.
  5. e, ex – for subclasses of Exception.
  6. item, it – for the control variable in LINQ query expressions.

Use of Types in Variable Declarations

Use the most specific type available in a variable declaration that is not initialised in-line.

Use var for types that are initialised in-line.

Do not use var for types that are initialised in-line to values of a standard type (e.g. int, string, decimal etc.).

Use var for variables that are initialised as the result of method calls that return collections.

Use var for variables that are storing anonymous types.

Use var when the exact type of a variable is unimportant to the method beyond being the return value of an invocation.

Use object for types that are initialised as the result of method calls where the return type is going to be worked with only via reflection.

Use interfaces for variable types if the work being done is dependent only on the interface.

Use dynamic as a variable type only if the type could not be known at compile time, and the code is not going to reflect on the type.
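
A brief sketch of how these declaration rules look side by side (the class and values are invented for illustration):

using System.Collections.Generic;
using System.Linq;

public static class DeclarationExamples
{
    public static void Show()
    {
        // In-line initialisation of a non-standard type: use var.
        var lookup = new Dictionary<string, decimal>();

        // Standard types initialised in-line: spell out the type.
        int retries = 3;
        string label = "Total";

        // A call returning a collection, and an anonymous type: use var.
        var keys = lookup.Keys.ToList();
        var summary = new { label, Count = keys.Count, retries };

        // The code only depends on the interface: declare the interface type.
        IEnumerable<string> names = lookup.Keys;
    }
}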


Constructor Performance

Avoid use of database connections in constructors.

Avoid use of network calls in constructors.

User Interface Design Time Requirements

Constructors for User Interface classes must be design time compatible.

Member Initialisation

Do not add constructor overloads that perform member initialisation from their parameters; these should now be handled by the member initialisation syntax.

Factory and Dependency Injection

Provide a zero parameter constructor unless the class is entirely unusable without a parameter.

Ensure the zero parameter constructor is suitable for use in a class factory or service locator.

Ensure the zero parameter constructor is suitable for use in dependency injection.

Ensure constructors do not over initialise to prevent subclasses from changing implementation details.

Virtual Calls

Avoid calling virtual members from constructors as the behaviour is unpredictable.

Static Constructors


Avoid initialising static members within static constructors.

Wherever possible initialise static members on first use.

Asynchronous Code

Use the await and async keywords when creating asynchronous code.

Suffix all methods that need to be awaited with “Async”.

Do not suffix any method that cannot be awaited with “Async”, even if the method contains asynchronous code.

When blocking methods are part of a Portable Class Library and cannot use the await and async keywords, use extension methods with the “Async” suffix to provide asynchronous alternatives for platforms that support it.
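
A minimal sketch of this naming rule, assuming a hypothetical CustomerStore class with a blocking LoadAll() method being wrapped for platforms that support async:

using System.Collections.Generic;
using System.Threading.Tasks;

public class CustomerStore
{
    // Blocking call, perhaps defined in a portable class library: no "Async" suffix.
    public IList<string> LoadAll()
    {
        return new List<string>();
    }
}

public static class CustomerStoreExtensions
{
    // Awaitable wrapper, so it carries the "Async" suffix.
    public static async Task<IList<string>> LoadAllAsync(this CustomerStore store)
    {
        return await Task.Factory.StartNew(() => store.LoadAll());
    }
}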

When to use Properties, Methods, and Extension Methods


Properties

Use a property if:

  1. The functionality behaves like a field.
  2. Is a logical attribute of the type.
  3. Is unlikely to throw exceptions.
  4. The contained code has minimum value when debugging.
  5. Does not have a dependency on the order being set.

Never use a property if:

  1. A get implementation is not provided.


Methods

Use a method if:

  1. The operation is a conversion.
  2. There is an observable side-effect from the call.
  3. The order of execution is important.
  4. The method may not return.
  5. The method may run code asynchronously.
  6. The result should be cached for reuse within the calling method for performance.

Do not use a method if:

  1. The return value is a collection that remains linked to the instance.

Extension Methods

Use an extension method if:

  1. The operation needs to extend a sealed type.
  2. The operation needs to apply to anonymous types.
  3. The operation needs to apply to general IEnumerable types.
  4. Base functionality is provided for an Interface.

Do not use extension methods if:

  1. The behaviour may want to be specialised by a base class.

Place extension methods that form part of a class’s or interface’s core API, or platform-specific extensions to the core API, in the same namespace as the class, even if they are provided by another assembly.

Always place extension methods that extend core .NET types (other than those in the System.Collections namespace) in a namespace ending in “.Extensions” to avoid littering the namespace of these core objects.

Method Arguments

Name method arguments with names that inform the caller of the intended purpose, not of its internal use.

When a Boolean argument is used as a flag to change fundamental functionality of a method, always call it using the “name: true” style.

Use Boolean arguments called with the “name: true” style instead of two-value enums for all methods, unless the method extends an API where an existing enum is better suited.
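
For example (the Save method and its overwrite flag are hypothetical):

public static class DocumentWriter
{
    // Hypothetical method where a Boolean flag changes fundamental behaviour.
    public static void Save(string path, bool overwrite)
    {
        // ...
    }

    public static void Example()
    {
        // The call site names the flag, so its meaning is clear without reading the signature.
        Save("report.txt", overwrite: true);
    }
}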


Catching Exceptions

Only catch exceptions of specific types if the error message being returned will be specialised for the type.

Catch the generic Exception type to isolate calls from the calling code.

A user must be informed of an exception with an error or warning message.

An error or warning message can be omitted if an exception is caught and ignored to avoid a known framework or platform issue and the issue is commented within the catch block.

Do not catch the general Exception type and hide its value from the user.
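
A short sketch showing these rules working together (the file name, messages, and warnUser callback are placeholders):

using System;
using System.IO;

public static class SettingsLoader
{
    public static string LoadOrDefault(string path, Action<string> warnUser)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (FileNotFoundException)
        {
            // Specific type caught because the message is specialised for it.
            warnUser("Settings file was not found; defaults will be used.");
            return string.Empty;
        }
        catch (Exception ex)
        {
            // Generic catch isolates the caller, but the value is never hidden from the user.
            warnUser("Settings could not be loaded: " + ex.Message);
            return string.Empty;
        }
    }
}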

Throwing Exceptions

Use the existing Exception types to throw your own exceptions.

Only create Exception subclasses when they are to be used multiple times within a library.

LINQ, for, and foreach.

Use LINQ for queries to external data sources.

Use LINQ for queries over in memory data that is designed for use as a dictionary or database.

Use foreach () for iterating over any IEnumerable.

Use var as the item type for the foreach() loop whenever the IEnumerable implements IEnumerable<T>.

Use for() when the iteration is controlled by a numeric block.

Use for() to walk hierarchies.

Use for() or while() when the iteration is controlled by a non-numeric exit condition.

Use do { } while () only where it reduces code compared to a for() or while().
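
A brief sketch of these loop choices (the collections are hypothetical):

using System.Collections.Generic;
using System.Linq;

public static class LoopExamples
{
    public static void Show(IEnumerable<string> names, IDictionary<string, int> scores)
    {
        // LINQ for a query over in-memory data designed for use as a dictionary.
        var highScores = scores.Where(pair => pair.Value > 100).ToList();

        // foreach with var for any IEnumerable<T>.
        foreach (var name in names) {
            // ...
        }

        // for when the iteration is controlled by a numeric block.
        for (int i = 0; i < highScores.Count; i++) {
            // ...
        }
    }
}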


Member Comments

Every public member or class must be commented with an XML comment.

Every protected method and property must be commented.

The comment must be written to explain why somebody would want to invoke the code.

The comment must not be a substitute for a bad member name.

The comment should not attempt to list all exceptions that could be thrown, but should list any special Exception subclasses thrown directly by the code.

Do not comment private or internal methods that have obvious use.

Do not comment event handlers unless their implementation is non-obvious.

Ensure member comments are suitable for extraction into API documentation.

Do not repeat the XML comment for overridden methods if the comment has nothing new to add to the base type comment.

Do not repeat the XML comment for the implementation of a member for an interface if the comment has nothing new to add to the interface member comment.
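
A small example of a member comment written to explain why a caller would use the member (the pricing rule is invented):

public static class DeliveryPricing
{
    /// <summary>
    /// Calculates the delivery charge for an order so that callers can show an
    /// accurate total before the customer confirms payment.
    /// </summary>
    /// <param name="orderTotal">The value of the goods in the order.</param>
    /// <returns>The charge to add to the order total.</returns>
    public static decimal CalculateDeliveryCharge(decimal orderTotal)
    {
        return orderTotal >= 50m ? 0m : 4.99m;
    }
}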

Code Block Comments

Comment code with headers within blocks that perform multiple discrete tasks as part of their implementation.

Comment code blocks that cannot be understood by the code alone.

Comment code wherever a special or magic value is used.

Comment code when a condition in a control block could be misunderstood.

Comment Styles

Use /// Comments when commenting members or classes.

Use // Comments when commenting code blocks or statements

Use /* */ comments only if the comment has to be placed in the middle of a line of code or within a Razor syntax document.

Special Developer Comments

Always mark comments that are reminders of code to fix with “TODO” in upper case followed by a message.  This allows all TODO items to be found and completed before the release of any solution.

If the last statement of a non-void method is not a return statement, place the lint-style comment “/* NOTREACHED */” at the bottom of the method so developers know to maintain this when editing the code.
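
For example (the parsing method is invented for illustration):

using System;

public static class LevelParser
{
    public static int ParseLevel(string text)
    {
        // TODO replace this temporary parsing with the shared validation helper.
        switch (text) {
            case "high":
                return 2;
            case "low":
                return 1;
            default:
                throw new ArgumentOutOfRangeException("text");
        }
        /* NOTREACHED */
    }
}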

Braces and Brackets

Brace Positioning

Place opening braces for classes, namespaces, and members on new lines.

Place opening braces for code blocks such as if, for, foreach, while, do, and switch on the same line as the code control statement.

Place all closing braces on new lines.

Place both the opening and closing braces for automatic properties on the same line as the property name e.g. public int MyProperty { get; set; }

Place both the opening and closing braces for anonymous types used as property bags on the same line e.g. Html.Link("Test", new { @class = "that" })

Place else and else if statements on the same line as the closing brace and keep the next opening brace on the same line too.
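
A short sketch pulling these brace rules together (the class is invented for illustration):

namespace MyCompany.Samples
{
    public class BraceExamples
    {
        public int Count { get; set; }

        public void Update(bool reset)
        {
            if (reset) {
                Count = 0;
            } else if (Count < 10) {
                Count++;
            } else {
                Count = 10;
            }
        }
    }
}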

Bracket Positioning

Place a space between keywords such as if, for, foreach, and while, and the opening bracket.

Do not place a space between a method name and its opening bracket.

Keep the closing bracket on the same line as the opening bracket if the argument list is small.

If the argument list for a method call is long: keep the opening bracket on the same line as the method name, place each argument on a separate line ending with a comma, and place the closing bracket on a new line.
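
And a sketch of the bracket rules, using a hypothetical method call with a long argument list:

public static class BracketExamples
{
    public static void Send(string serverAddress, string userName, string region, int timeoutSeconds, bool enableLogging)
    {
        // ...
    }

    public static void Example()
    {
        bool ready = true;

        if (ready) {                      // space between the keyword and its bracket
            Send(                         // no space between the method name and its bracket
                "demo.example.com",
                "serviceUser",
                "EU-West",
                30,
                true
            );
        }
    }
}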

ToStringSafe() extension method

If like me you’ve worked with C# and the .NET framework for years, then you will probably have written variations on the following code hundreds of times when trying to display values on screen or save values into text-based files or SQL statements:

object rawValue = SomeMethodCall();
string displayValue = String.Empty;
if (rawValue != null) {
    displayValue = rawValue.ToString();
}

The code itself is simple enough to get right first time, and easy to read and understand, but after you’ve written it a few dozen times it starts to appear like an unwelcome rash across your code. A programmer’s next natural instinct is to see if there is a way to shorten the code.

Unfortunately the ?? operator can’t help us here unless we are exclusively dealing with strings for rawValue (in which case why are you calling ToString()?). We can however shorten things substantially with the ? operator:

object rawValue = SomeMethodCall();
string displayValue = (rawValue == null? String.Empty: rawValue.ToString());

This has helped, taking our code from five lines to two, and hasn’t noticeably affected readability. But if we are only interested in the displayValue wouldn’t it be nicer if we were able to just do:

string displayValue = SomeMethodCall().ToString();

This code will execute fine, but as soon as a null object is returned from SomeMethodCall() we’ll get a NullReferenceException raised and if we didn’t see it in testing, our end users will see an unhandled exception we should never have introduced.

If we try to use the ? operator directly with the method call, the best we can get is:

string displayValue = SomeMethodCall() == null ? String.Empty : SomeMethodCall().ToString();

You can tell immediately from the code that this would be at best wasteful if not potentially damaging depending on the side effects of SomeMethodCall(). Under normal circumstances, where SomeMethodCall() doesn’t return null, we end up executing SomeMethodCall() twice. If the method accesses a web service or database we will have potentially doubled the impact of the code on the server, and slowed down the user experience.

What can be done then? Should we just put up with the two lines of code where one would work? Until recently I would have said yes, but with LINQ usage on the rise I’ve started to see this particular problem causing ugly code to be written for lambda statements, or worse, developers knowingly being lazy with their handling of potential null values when calling .ToString()!

We can actually use a little used feature of extension methods to help us with this problem. As you will know extension methods allow us to invoke static utility methods in a syntax that mirrors invoking a member of a class. The compiler understands the code that calls an extension method and effectively re-writes the syntax from a member call to a static method call for us. To get an idea of how this works take a look at my previous post.

Because the member-like syntax is converted into a static method call, it is possible to call an extension method on a null reference.  A normal member call on a null reference throws a NullReferenceException, but if MyExtensionMethod() is an extension method that handles null specially, calling it on the same null reference will not.

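A minimal sketch of the difference (MyExtensionMethod() here is a hypothetical null-aware extension method):

public static class ObjectExtensions_Example
{
    // Hypothetical extension method that special-cases null instead of dereferencing it.
    public static string MyExtensionMethod(this object value)
    {
        return value == null ? "(null)" : value.ToString();
    }

    public static void Demonstrate()
    {
        object value = null;

        var broken = value.ToString();           // member call on a null reference: throws NullReferenceException
        var handled = value.MyExtensionMethod(); // static call receives the null and returns "(null)" instead
    }
}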

Using this technique we are able to create extension methods that special case null values, but maintain the readable member-style syntax.

I’m going to throw a strong word of warning in here now. When you add an extension method that does not behave like a member method call, particularly one that doesn’t raise a NullReferenceException when called on a null variable, you are moving away from basics that programmers take for granted when reading code. Used incorrectly this can make code harder to understand and therefore harder to maintain. You should be sure about what you are doing before you add extension methods that expose this non-standard behaviour. It’s been my experience that methods that should follow this behaviour are almost always involved with displaying values as strings, or converting values between types to pass to an ORM or similar module.

My personal convention to make sure I can identify where the technique has been used is to suffix the method name with “Safe”, so the call above would be MyExtensionMethodSafe(). If everybody in the team follows this convention when they feel there is a genuine need for the extension method to treat nulls differently to a member method call then the code remains easy to read. Don’t forget however that even if you’ve adopted this convention throughout your team, you will still have to train new people joining the team on the convention.

Now with that warning having been strongly stated, let’s return to the problem at hand. In this case I believe it makes very good sense to provide a “Safe” extension method companion to ToString(). Here is the code for the ToStringSafe() extension method in full:

using System;

namespace Mvpc.Extensions
{
    public static class ObjectExtensions_ToStringSafe
    {
        public static string ToStringSafe(this object value)
        {
            // Nulls just return empty strings.
            if (value == null) {
                return String.Empty;
            }

            return value.ToString();
        }
    }
}

If you read my post on async extension methods you will already know that I recommend placing any extension method that works on string, object, int, or any of the core types of the .NET Framework under a namespace ending in “Extensions” so they don’t litter IntelliSense when the user doesn’t need them. This rule applies here too, as the majority of code will not want to use the ToStringSafe() method.

After adding a using for the Mvpc.Extensions namespace it finally becomes simple to write:

string displayValue = SomeMethodCall().ToStringSafe();

We finally get our five lines of oft-duplicated code down to a single readable line.

Since introducing this new method I’ve completely stopped seeing programmers being lazy with their .ToString() handling of nulls inside lambda statements. Hopefully you will see the same too as well as being able to produce more readable null-safe code.

Async extension method wrappers

Asynchronous APIs are becoming more popular thanks in part to the focus on asynchronous user interface design requirements on platforms such as Windows Store Applications for Windows 8 and Windows RT.

This attempt to change the way developers think about long or unpredictable operations is welcome and necessary as databases and files slowly migrate into the cloud.

Unfortunately System.Threading.Tasks and the async and await keywords are not available inside portable class libraries or on some of the platforms we target with the Ambidect technology.  We could choose to use an alternative style of asynchronous API, such as callbacks, but these are starting to look dated, and require the developer to do a lot more boilerplate work to use.

At this point you may be tempted to give up and provide only a blocking synchronous API and require the developer to manage their own threads on each platform that insists on asynchronous calls; but as I touched on in my previous post about the repository API, we felt it was a much better idea to provide an async API, and did so using extension methods.

This technique can be used to wrap almost any synchronous API; but I have to stress at this stage it should only be used if the platform or platforms you are working on do not have a usable native asynchronous API.

For our example we’ll work with the Find method of the IRepository<> interface, but the principles here will work with any synchronous call that needs to be wrapped.

The first thing we need to do is create a class to host our extension methods:

using System;
using System.Threading.Tasks;

namespace Mvpc
{
    public static class IRepositoryExtensions_Async
    {
        // TODO this is where to put your extension method code.
    }
}

If you are not familiar with extension methods, the reason we mark the class as static is because it’s a requirement of the extension method support in the compiler.  The name of the class can be anything you want, but you can see that I use a simple convention that makes it clear to anybody using the library that the class contains extension methods, so is of no interest to be used directly.

You will note that the class here has been put into the Mvpc namespace to go alongside the class we are wrapping.  When using extension methods that extend the API with asynchronous members this is my recommended approach.  It stops the developer using the class from worrying about how the async methods are provided, and IntelliSense will include them in the list of available class members when working on a platform that supports our asynchronous API.

In cases where the extension methods provide utility functions rather than a core API for a class, it is good practice to keep your extension methods in a separate namespace, e.g. Mvpc.Extensions.  This stops the IntelliSense list being over-populated with extension methods that are not relevant to the code at hand.  When extending one of the CLR’s core types such as object or string I always insist that the extension method goes into a namespace ending in “Extensions”, such as Mvpc.Extensions, that the developer has to explicitly opt into.  This not only helps keep IntelliSense clean, but also stops accidental dependencies on the specific extension methods creeping into code blocks where they don’t belong.

Now we have a class set up and have decided the right namespace for it, let’s add an extension method.  An extension method is exactly the same as a normal static method, except its first parameter is prefixed with the “this” keyword.  This instruction tells the compiler that it can effectively rewrite the extension method call into a static method call, while allowing the developer using the extension method to use a more natural calling convention.  For example instead of having to call:

var repository = ...;
var key = Guid.NewGuid();
IRepositoryExtensions_Async.FindAsync(repository, key);

We can use the much more readable:

var repository = ...;
var key = Guid.NewGuid();
repository.FindAsync(key);

Let’s have a look at the code for the FindAsync() extension method itself now:

        public async static Task<T> FindAsync<T>(this IRepository<T> repository, params Guid[] keys)
        {
            var task = System.Threading.Tasks.Task.Factory.StartNew(() =>
            {
                var ret = repository.Find(keys);
                return ret;
            });

            return await task;
        }

The first thing you will note is that we suffix the name of the method with “Async”. I find this a very good convention to follow for any method that provides an async API that can have “await” applied to it. It helps the person using the method remember that at some point they will likely want to await the result.

If you are unfamiliar with the async and await keywords I suggest you have a look at them in the MSDN documentation; suffice to say here that you use async to mark a method as containing asynchronous code, and await to safely wait for the result of an async method before continuing.

As well as the async keyword you will notice the method is also marked as static, as it will operate without a class instance, and the first parameter is prefixed with the “this” keyword to enable the extension-method-style shorthand call to the method.  We can still call the static method directly if we want, but without the “this” keyword the extension method syntax would not be available when calling the method.

Inside the method we create a new Task with the right return type and use a lambda expression to perform a call to the synchronous API we are wrapping.  This code will be executed in a separate thread before returning its result.  Exactly when the code pauses to wait for the result depends on how we use await when calling the extension method.

More often than not we will want the code to wait for the value before continuing, so we will use await directly on the async call as follows:

var repository = ...;
var key = Guid.NewGuid();
var item = await repository.FindAsync(key);

In this post we’ve wrapped a synchronous call to a repository function with an async extension method, but you can use the technique whenever you find you need to make regular asynchronous calls to a class that couldn’t be built with an asynchronous API, or to which you do not have access to the source to extend with a native asynchronous API yourself.

Increase readability with the var keyword and DRY in C#

When the var keyword was first added to the C# language many developers shied away from it, believing it to be a “Variant” type like that found in VB.NET, or the equivalent of declaring a variable as an object.  Both of these are wrong, but despite this I’ve come across plenty of companies that still ban the use of var in their coding standards, stating that it is not type safe.

The var keyword is completely type safe and is actually the equivalent of typing the name of the type yourself, but deciding it was easier to let the compiler fill in the whole name for you.  I find it better simply to look at the var keyword as a timesaver that tells the compiler there is no reason for you to specify the type because: 1. anybody reading it can see the type immediately, or 2. the exact type is unimportant as long as it meets the requirements of the code.

It’s also worth pointing out at this stage that, with modern IDEs, I consider embedding type names into variable names using Hungarian notation or other similar approaches to be not only bad practice, but dangerous compared to proper use of DRY (Don’t Repeat Yourself) principles.

I use the var keyword all the time and find it makes code more readable, not less readable, especially when working with long type names.

For example line one here is much easier to read than line two. In fact in line two you have to search just to find the name of the variable.

Example 1

MyType hello = new MyType();
System.Collections.Generic.Dictionary<string, MyLibrary.Namespace.MyType2> goodbye = new System.Collections.Generic.Dictionary<string, MyLibrary.Namespace.MyType2>();

Using var both lines are equally easy to understand, and the name of both becomes the primary focus of the line rather than the type:

Example 2

var hello = new MyType();
var goodbye = new System.Collections.Generic.Dictionary<string, MyLibrary.Namespace.MyType2>();

Not to mention that you now have one less place to change if you decide to use MyType3 instead of MyType or MyType2 in this code block.

Using var on lines that already perform a cast can give similar readable and time saving advantages.

Example 3

var world = (Button)sender;
var universe = sender as System.Windows.Forms.Form;

I also recommend using var for variables that are used to store results from methods where the type is unimportant, either because we are simply returning it or passing it to another method, or, in the case of enumerables, because we have to specify the type name when we use it anyway.

Let’s have another couple of real-world examples:

Example 4

var value1 = GetValueFromDatabase(1);
var value2 = GetValueFromDatabase(2);

var value3 = Combine(value1, value2);
return value3;

Reading the example alone you have no idea what type var is. This can upset some people, but if you are using Visual Studio it’s easy enough to mouse-over each “var” keyword to see the type that’s being used. But if you stop and think for a moment, the reason you don’t know each type is because the current code doesn’t need to know. By practicing DRY here you’ve actually created code that’s much easier to maintain and is completely type safe. If in the future GetValueFromDatabase() was changed to return a decimal instead of an int it wouldn’t matter, as long as Combine() had an overload that accepted decimals as parameters, or was changed at the same time. If we don’t use var then we would have to edit the code ourselves to switch value1, value2, and value3 from int to decimal, even though the change has had no real effect on the current code block.

There are of course times when specifying the type explicitly rather than using var gives important extra information to the reader, and then it should be used instead of var, but you will find these situations are few and far between. I might choose to use “int”, for example, if I was getting an int defined and returned from a method call in one line, but only if the fact I am performing integer rather than floating-point maths is important within the current code block. Otherwise I’m just making it hard to change the method’s definition to work with double or decimal in the future.

Example 5

var form = new Form1();
var res = form.ShowDialog();
if (res == DialogResult.Cancel) {
    // ...
}

We’ve already talked about why line 1 is good practice, but we’ve been “lazy” on line 2 and used “var” even though we have full knowledge that the return type will be System.Windows.Forms.DialogResult, and what’s more that return type will never change as it’s part of the core .NET framework. Why is var useful in this context then? The better question is: why would you “repeat” what you already know and specify the type here anyway?

You and everybody else who is used to the System.Windows.Forms namespace knows the DialogResult type inside out. But what about people new to the toolkit – is the code still readable to them? I would argue that because we always specify the enum name when we use it, and the only purpose of the variable res is to be checked, specifying the type explicitly gains us nothing in readability, but does cost us more key presses.

I guess we can’t really say line 2 would “repeat yourself” in the same way as declaring a variable with both an explicit variable type and the new keyword on the same line, but we can say that if we explicitly put the type on line 2 then we know we are planning to repeat ourselves on line 3. So here we are practicing “Don’t Plan to Repeat Yourself” to help us keep to the DRY principle, keeping our code shorter as a result.

If you’ve shied away from the var keyword yourself until now, hopefully you are now inspired to give it a try, and not just when you’re forced to by anonymous types and LINQ. You will find that when you follow the suggestions in this post your code will not only start to practice DRY but will actually increase in readability, as the code you write becomes much more focused on the true dependencies and functionality of the method, and not the types you’re working with.

OfType() and Cast() with System.Type instead of Generics

We all know that whenever possible our code should be written to be type safe. But there are times when it’s simply not possible. One such time, which we came across when putting together the Mvpc libraries behind the Ambidect Technology, involved working with Cast<>() and OfType<>() on IEnumerables of unknown types.

Working with collections of known types is as simple as:

var myCollection = collection.Cast<MyType>();
var myCollection2 = collection.OfType<MyType>();

But what do you do when all you have is a System.Type?  Sure, you can try to avoid the situation, but sometimes it really is bad design for the code to know the element type when all it cares about is the fact we have an IEnumerable.  Yet other times the type may not even exist until it is emitted at runtime, either by ourselves or by a Json or similar library wrapping a web service.

Thanks to reflection it is possible to implement Cast(Type type) and OfType(Type type) in a cross platform way and cope with these cases when they arise.

The first thing we need is a normal generic method we can call. For Cast<>() we can define it as follows:

        private static IEnumerable<T> CastInternal<T>(System.Collections.IEnumerable source)
        {
            return source.Cast<T>();
        }

Nothing noteworthy in that code. Now we just need a method we can pass a System.Type to. First the code then we’ll take it line by line:

        public static System.Collections.IEnumerable Cast(this System.Collections.IEnumerable source, Type elementType) {
            var methodTemplate = typeof(IEnumerableExtensions_UntypedCasts).GetMethod("CastInternal", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static);
            var genericMethod = methodTemplate.MakeGenericMethod(elementType);
            return (System.Collections.IEnumerable)genericMethod.Invoke(null, new[] { source });
        }

This code can look very confusing if you haven’t used the System.Reflection namespace before, but it’s actually very simple.

Line 2 uses reflection on the current type and gets the CastInternal() method we defined in the previous code block. (In the example code we’ve wrapped the extension method in a static class called IEnumerableExtensions_UntypedCasts. You will need to change the type name if you add the code to a class with a different name). At this point the MethodInfo doesn’t point to a method we can call, but a generic definition.

Line 3 uses that generic definition to create a method that can actually be called. No use of the System.Reflection.Emit namespace here so the code will run on all platforms, even those that don’t support dynamic code execution. It also means we can keep it contained in a Portable Class Library.

Line 4 invokes the newly generated method and simply returns its value.

If you add these extension methods to a static class in your own code you can then call Cast() on an IEnumerable when all you have is a System.Type of the target element type:

var type = typeof(MyType);
var collection = originalCollection.Cast(type);

The definition of OfType(System.Type) is exactly the same with the Cast<>() method swapped for OfType<>().
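
For completeness, here is a sketch of that OfType counterpart following the same pattern; it slots into the same IEnumerableExtensions_UntypedCasts static class as the Cast methods above and assumes the same usings (System.Linq for the OfType<T>() call):

        private static IEnumerable<T> OfTypeInternal<T>(System.Collections.IEnumerable source)
        {
            return source.OfType<T>();
        }

        public static System.Collections.IEnumerable OfType(this System.Collections.IEnumerable source, Type elementType) {
            var methodTemplate = typeof(IEnumerableExtensions_UntypedCasts).GetMethod("OfTypeInternal", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static);
            var genericMethod = methodTemplate.MakeGenericMethod(elementType);
            return (System.Collections.IEnumerable)genericMethod.Invoke(null, new[] { source });
        }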

Hope you find them useful for those situations where you simply can’t or shouldn’t know the element type until runtime.