
17 Years of Porting Software… Finally Solved

A History of Porting Software

I’ve been involved in creating and maintaining commercial and open source software for as long as I can remember, reaching back to 1996 when the world wide web was in its infancy, and Java wasn’t even a year old.

I was attracted to the NetBSD project because of its focus on having its software run on as many hardware platforms as possible.  Its slogan was, and remains “Of course it runs NetBSD”.

Although the NetBSD team worked tirelessly for its operating system to work across every imaginable hardware platform, much of the new open-source software development was taking place on the i386-focused GNU/Linux operating system, not to mention the huge volume of Windows-only software that Wine tried, and mostly failed, to make available to people on non-Windows operating systems.

Advocates of cross-platform software like me were constantly choosing between recreating or porting this software, depending on its license terms and source availability, just so we could use it on our platform of choice.

Some of my early open source contributions that are still available to download demonstrate this really well, such as adding NetBSD/OpenBSD support to Afterstep asmem in early 2000, or allowing CDs to be turned into MP3s on *BSD platforms with MP3c in the same year.

In 2002, when the large and ambitious KDE and GNOME desktops started to dominate the Linux desktop environments, I worked on changes to the 72 separate packages needed to bring GNOME 2 to NetBSD, and became the primary package maintainer for a number of years.

As an early adopter of C# and the Microsoft .NET Framework I also worked through 2002 and 2003 to make early versions of the Mono project execute C# code on FreeBSD, NetBSD, and OpenBSD too.

The #ifdef solution

How was software ported between platforms back in those days?  Well to be honest, we cheated.

We would find the parts of the code that were platform specific and add #ifdef and #ifndef statements around them with conditions instructing the compiler to compile, or omit, different sections of code depending on the target platform.

Here is an example of read_mem.c from asmem release 1.6:

/*
 * Copyright (c) 1999  Albert Dorofeev <Albert@mail.dma.be>
 * For the updates see http://bewoner.dma.be/Albert/
 * This software is distributed under GPL. For details see LICENSE file.
 */

/* kvm/uvm use (BSD port) code:
 * Copyright (c) 2000  Scott Aaron Bamford <sab@zeekuschris.com>
 * BSD additions for this code are licensed BSD style.
 * All other code and the project as a whole is under the GPL.
 * For details see LICENSE.
 * BSD systems don't have /proc/meminfo. It is still possible to get the desired
 * information from the uvm/kvm functions. Linux machines shouldn't have
 * <uvm/uvm_extern.h> so should use the /proc/meminfo way. BSD machines (NetBSD
 * I use, but maybe others?) don't have /proc/meminfo so we instead get our info
 * using kvm/uvm.
 */

#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

#include "state.h"

#include "config.h"

#ifdef HAVE_UVM_UVM_EXTERN_H
/* sab - 2000/01/21
 * this should only happen on *BSD and will use the BSD kvm/uvm interface
 * instead of /proc/meminfo
 */
#include <sys/types.h>
#include <sys/param.h>
#include <sys/sysctl.h>

#include <uvm/uvm_extern.h>
#endif /* HAVE_UVM_UVM_EXTERN_H */

extern struct asmem_state state;

#ifndef HAVE_UVM_UVM_EXTERN_H
#define BUFFER_LENGTH 400
int fd;
char buf[BUFFER_LENGTH];
#endif /* !HAVE_UVM_UVM_EXTERN_H */

void error_handle( int place, const char * message )
{
	int error_num;
	error_num = errno;
	/* if that was an interrupt - quit quietly */
	if (error_num == EINTR) {
		printf("asmem: Interrupted.\n");
		return;
	}
	switch ( place ) {
	case 1: /* opening the /proc/meminfo file */
		switch (error_num) {
		case ENOENT :
			printf("asmem: The file %s does not exist. "
				"Weird system it is.\n", state.proc_mem_filename);
			break;
		case EACCES :
			printf("asmem: You do not have permissions "
				"to read %s\n", state.proc_mem_filename);
			break;
		default :
			printf("asmem: cannot open %s. Error %d: %s\n",
				state.proc_mem_filename, errno,
				sys_errlist[errno]);
			break;
		}
		break;
	default: /* catchall for the rest */
		printf("asmem: %s: Error %d: %s\n",
			message, errno, sys_errlist[errno]);
	}
}

#ifdef DEBUG
/* sab - 2000/01/21
 * Moved here so it can be used in both BSD style and /proc/meminfo style
 * without repeating code and allowing us to keep the two main functions separate
 */
#define verb_debug() { \
	printf("+- Total : %ld, used : %ld, free : %ld \n", \
		state.fresh.total, state.fresh.used, state.fresh.free); \
	printf("|  Shared : %ld, buffers : %ld, cached : %ld \n", \
		state.fresh.shared, state.fresh.buffers, state.fresh.cached); \
	printf("+- Swap total : %ld, used : %ld, free : %ld \n", \
		state.fresh.swap_total, state.fresh.swap_used, state.fresh.swap_free); \
	}
#else
#define verb_debug()
#endif /* DEBUG */

#ifdef HAVE_UVM_UVM_EXTERN_H
/* using kvm/uvm (BSD systems) ... */

#define pagetok(size) ((size) << pageshift)

int read_meminfo()
{
	int pagesize, pageshift;
	int mib[2];
	size_t usize;
	struct uvmexp uvm_exp;

	/* get the info */
	mib[0] = CTL_VM;
	mib[1] = VM_UVMEXP;
	usize = sizeof(uvm_exp);
	if (sysctl(mib, 2, &uvm_exp, &usize, NULL, 0) < 0) {
		fprintf(stderr, "asmem: sysctl uvm_exp failed: %s\n",
			strerror(errno));
		return -1;
	}

	/* setup pageshift */
	pagesize = uvm_exp.pagesize;
	pageshift = 0;
	while (pagesize > 1) {
		pageshift++;
		pagesize >>= 1;
	}

	/* update state */
	state.fresh.total = pagetok(uvm_exp.npages);
	state.fresh.used = pagetok(uvm_exp.active);
	state.fresh.free = pagetok(uvm_exp.free);
	state.fresh.shared = 0;  /* don't know how to get these */
	state.fresh.buffers = 0;
	state.fresh.cached = 0;
	state.fresh.swap_total = pagetok(uvm_exp.swpages);
	state.fresh.swap_used = pagetok(uvm_exp.swpginuse);
	state.fresh.swap_free = pagetok(uvm_exp.swpages - uvm_exp.swpginuse);
	verb_debug();
	return 0;
}

#else /* HAVE_UVM_UVM_EXTERN_H */

/* default /proc/meminfo (Linux) method ... */

int read_meminfo()
{
	int result;
	result = lseek(fd, 0, SEEK_SET);
	if ( result < 0 ) {
		error_handle(2, "seek");
		return -1;
	}
	result = read(fd, buf, sizeof buf);
	switch (result) {
	case 0 : /* Huh? End of file? Pretend this did not happen... */
	case -1 :
		error_handle(2, "read");
		return -1;
	}
	buf[result-1] = 0;
	result = sscanf(buf, "%*[^\n]%*s %ld %ld %ld %ld %ld %ld\n%*s %ld %ld %ld",
		&state.fresh.total, &state.fresh.used, &state.fresh.free,
		&state.fresh.shared, &state.fresh.buffers, &state.fresh.cached,
		&state.fresh.swap_total, &state.fresh.swap_used,
		&state.fresh.swap_free);
	switch (result) {
	case 0 :
	case -1 :
		printf("asmem: invalid input character while "
			"reading %s\n", state.proc_mem_filename);
		return -1;
	}
	verb_debug();
	return 0;
}

#endif /* (else) HAVE_UVM_UVM_EXTERN_H */

int open_meminfo()
{
#ifndef HAVE_UVM_UVM_EXTERN_H
	int result;
	if ((fd = open(state.proc_mem_filename, O_RDONLY)) == -1) {
		error_handle(1, "");
		return -1;
	}
#endif /* !HAVE_UVM_UVM_EXTERN_H */
	return 0;
}

int close_meminfo()
{
#ifndef HAVE_UVM_UVM_EXTERN_H
	close(fd);
#endif /* !HAVE_UVM_UVM_EXTERN_H */
	return 0;
}

It wasn’t neat.  It increased code complexity and maintenance costs, but it worked.  And we all accepted it as the best we had at the time.

Hopes of a Brave New World

Like many cross-platform advocates, I had big hopes for Java and for C# with the Microsoft .NET Platform.  But sadly we never saw the fulfilment of their “platform independent” coding promises.  Too many times we had to choose between using a platform’s GUI toolkit and looking out of place.  Other times we had to P/Invoke to native APIs to get at functionality not exposed or reproduced by the frameworks.  Even now the Gtk# GUI toolkit is recommended over the standard Windows System.Windows.Forms on Mono when creating C# programs for Linux or *BSD.

Cross-platform toolkits such as Swing for Java and Qt for C++ sprang up to abstract the user from the platform they were working with.  But they were primarily GUI toolkits; their APIs only went so far, and eventually, like it or not, all but the simplest applications ended up with a native API call or two wrapped in an #ifdef style condition.

How Web Development Made it Worse

With the rapid increase in Web Development many saw this as finally the way to deliver software across multiple platforms.  Users accessed software via a web browser such as Netscape Navigator and didn’t need the code to work on their own PC or operating system.

Of course, behind the scenes the CGI programs were still platform specific, or littered with #ifdef statements if they needed to work on more than one server OS.  But the end user’s experience was shielded from this, and it looked like a solution might be in the pipeline.

But then the Netscape vs Internet Explorer browser wars happened.  Browsers competed for market share by exposing incompatible features and having sites marked as “recommended for Netscape” or “works best in IE”.  People wanting to support multiple browsers started having to litter their code with the JavaScript equivalents of #ifdef statements.  The same happened again with CSS as it became popular.  Nothing really changed.

Enter the Mobile

Then along came the iPhone, and made a bad situation even worse.

Those who went for a native implementation had to learn the rarely used Objective-C language.  This helped Apple avoid competition as developers scrambled to be part of the mobile revolution, but it deliberately made portability harder rather than easier.  That remains part of their strategy today.

People turning again to the web for solutions found that web sites carefully formatted to look great on 1024×768 screens were ugly at best, and more often unusable, when viewed on a tiny mobile phone screen in portrait orientation.  And it wasn’t just about text size.  Touch and other mobile-specific services meant users expected a different way of interacting with their applications, and browser-based software felt more out of place than ever.  Yes, responsive web design and HTML 5 go a long way towards solving some of these web-specific mobile issues, but they don’t take us away from the #ifdef style logic that has become an accepted part of web application development, just as it did in C and C++ development before it.

So What is to be Done?

Most of this article has been about a history of failures to tackle cross-platform software head on.  Each attempt brought us a little closer to a solution, but throughout we resigned ourselves to the fact that #ifdef style code was still ultimately necessary.

As application designers and developers we had to choose between how native our applications felt, and preventing users from using our software in situations we hadn’t planned for.

For almost two decades I’ve been involved in trying to overcome this cross-platform problem.  Now the landscape is more complicated than ever.  Can the same software really run without compromise both inside and outside the browser?  Can we really have a native look and feel for an application on a mobile, a tablet, and a desktop PC?  And is wearable computing going to be the next spanner in the works?

All this is why, to move forward, we went back to basics.  We thought first about how software was designed, rather than the libraries and languages that we used.  We first made the Mvpc design pattern, and only then did we make the reference libraries and the commercial Ambidect Technology (soon to be known as Ambicore).  It’s fair to say that our many years of experience allowed us to finally learn from the past we had been so involved with, rather than repeating our mistakes again and again.

Because Ambicore provides access to the whole .NET Framework, it gives developers a complete cross-platform API they already know.  Using C# as our first reference language gives us access to the great thinking that went into the Java JVM and Microsoft’s IL environments, which really can abstract us from the operating system and help us avoid #ifdef statements.

Providing native GUI interfaces for each platform means applications use each platform’s own recommended toolkit, helping them look and feel native everywhere – simply because they are native to each platform.

Providing a design pattern that works equally well in request-response stateless environments and in rich stateful environments allows us, from day one, to provide a browser-based experience for those who want or need it, as well as a native rich client experience for those wanting to get more from their Windows PCs, phones, tablets, Macs, Linux, *BSD, or…

It’s taken 17 years of personal involvement, and recognising and listening to visionaries in the industry.  But by standing on the shoulders of others we re-thought the problem, knowing #ifdef statements were as much part of the problem as they were a solution.  We redesigned the development pattern to be portable by default, not as an afterthought.  And we based our reference libraries on trusted platforms from market leaders such as Microsoft to make our technology available to the largest pool of developers possible, in a language, framework, and IDE they already know.

We are stepping into a new chapter of software development where the platform and device is there to enable, not restrict, the end user from the software they want.  And just as we stood on the shoulders of giants to get here – we want you to join us in the new world too.

Google Glass – time to make your applications wearable

Wearable Technology

I’ve been having a bit of fun with Google Glass recently.  If you haven’t come across Google Glass before, I’d describe it as a pair of glasses you can wear that give you a simple, personal, voice-controlled computer.

Now I don’t personally believe Google Glass is a product that is going to go mass-market in the way the iPad did in defining a new style of device, but I think as a prototype it’s worth looking at to understand where wearable computing may go in the future.

As the market progresses we’ll start to see screens in contact lenses that tap into the existing connectivity and power of the smartphones we already carry in our pockets, and then somebody will come along, just as Apple did with the iPhone, take an ignored “prototype” category of device, and create a device and accompanying marketing buzz that will make wearable computers the next big thing.

The Impact of Wearable Technology on BYOD

Right now many large companies, universities, and other organisations are in the process of bringing in their own BYOD (bring your own device) policies.

One often unconsidered side effect of the flexibility BYOD can bring is that companies adopting these policies are in effect giving up control over when new devices, or new types of device, enter their workplace.  This may not matter too much if a new smartphone comes out running an alternative operating system like Sailfish, but what about the first time somebody walks in wearing Google Glass or a similar technology?  Do the policies cover use of its real-time recording equipment?  Will any of the required applications run on it?  Is there an environment where its voice operation can be used without affecting others?

Without preparation, wearable computing may see the end of BYOD before it manages to reach maturity.  To protect BYOD it’s therefore more important now than ever to watch these advances in technology during their infancy and be willing to prepare for the impact they may have in the future.

Wearable Applications

One of the most important, and difficult, areas for people adopting BYOD is making key software and applications available across all platforms.

I’ve already seen people leaving the mobile space because the cost of developing and maintaining separate, fairly basic apps for the three major mobile platforms is too high.  Consider how much more costly this would be if we are talking about CRM or ERP systems, or even stock management.  What if wearable computing takes off in 2014 – can your systems keep up?

Thanks to the Mvpc design pattern and the Ambidect Technology we use at Ambidect, we are already tackling this problem and sharing our technology with others to encourage them to do the same.  By creating future-proof applications we enable genuine BYOD environments to spring up everywhere, without anyone or any device ever having to be labelled a second-class citizen because a key application isn’t available for the platform or works poorly in the platform’s web browser.

Google Glass and Mvpc

As part of my experiments with Google Glass I thought I’d be able to give the future-proof part of our technology a challenge, but I was actually surprised how easily existing applications could be extended to work with wearable computing.

I started by going through the Google Glass API to get a feel for how applications should be built to feel “native” on the Glass.  Because of the way Google Glass works over the web I was able to reference all the existing .NET framework classes shared between ASP.NET and Windows Desktop as a starting point for the new platform.  This gave us a big head start.

Putting together an IPresenter and an IStartView that served up timelines and provided menuItems[] allowed navigation through the Command layer within an hour of getting started.  From there it was pretty simple to implement the rest of the standard views and run up a demo project through the emulator.  A couple more hours and I had the whole demo working nicely in a read-only way.
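For flavour, here is a heavily simplified sketch of the shape that code took.  Everything in it (TimelineCard, GetTimeline(), the menu strings) is an illustrative assumption for this post rather than the real Mvpc or Glass API:

using System.Collections.Generic;

namespace My.Modules.Demo.Glass
{
    // Portable views in Mvpc are empty marker interfaces (see the View Layer
    // post below), so the Glass-specific GUI view just implements one.
    public interface IStartView { }

    // Hypothetical card type standing in for a Glass timeline entry.
    public class TimelineCard
    {
        public string Text { get; set; }
        public string[] MenuItems { get; set; }
    }

    // A Glass GUI view serving up timeline cards; in a real project this
    // would be linked to IStartView with the ViewFor attribute.
    public class GlassStartView : IStartView
    {
        public IEnumerable<TimelineCard> GetTimeline()
        {
            // Each card's menu items route the user into the Command layer.
            yield return new TimelineCard
            {
                Text = "Demo",
                MenuItems = new[] { "Open", "Refresh" },
            };
        }
    }
}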

I then ran a couple of existing applications through the Glass to see how they worked, and all of them were usable, although extra attention would be needed to break down the amount of information on each page of the timeline, and thought given to how best to edit records, if we were considering using the Glass in a genuine production environment.

So Should I Get One?

Now I’m not expecting everybody to run out and get their own pair of Google Glass, but I do think they provide an interesting look into where the future of “personal” computers may be.  They also provided an exciting test for the Mvpc design pattern and libraries.  If nothing else the Glass would provide you with a bit of fun pretending to be the Terminator for a few days, even if you should really be using the business app we made run on it!

Installing the Mvpc Visual Studio Add-in and Nuget Feed

Getting Involved with Mvpc

The Mvpc reference libraries used by all developers in Ambidect are also available to invited partners and developers. If you are interested in working with the Mvpc libraries yourself or within your company drop me an email and we’ll see if we can get you building cross-platform multi-device applications too.

Using the Installer

When you are invited to join the Mvpc programme you will be given a unique user name and password which will include access to download the Mvpc installer for Visual Studio.

Installation is straightforward and can be started by running the downloaded executable “Mvpc for Visual Studio Developers Setup.exe” and following the on-screen prompts.


The installer will automatically install itself into the right place depending on whether your machine has a 32-bit or 64-bit operating system, so you can just work your way through the screens of the installer without changing any defaults.

What’s Included

Once you have completed the installation you will have a number of things now added to your computer.

  1. A Visual Studio add-in to make working with Mvpc within Visual Studio simple, and for managing your Ambidect developer account.
  2. Rapid Prototyping tools integrated directly into Visual Studio.
  3. A secure connection to the Mvpc NuGet feed for easy package management and updates.
  4. Project Templates for creating Mvpc modules and applications.
  5. Item templates for creating specialist Mvpc classes and controls.

Registering your Developer Account and Authorising your PC

Before you can use the Mvpc libraries and plugin you will need to register your developer account.  You will be prompted to do this the first time you start Visual Studio after installation, or you can register or modify your details by selecting “Ambidect Developer Account…” from the Tools menu in Visual Studio.

[Screenshot: Tools menu]

When registering you will be prompted to supply your email address and password.  These will have been sent to you in your invite.


When you click “Authorise this PC for Development” the current PC will be added to the list of PCs associated with your account and will be activated to allow you to use it for development.  There is no limit to the number of PCs you can authorise against your account.

Rapid Prototyping Tools

The rapid prototyping tools are included within the project templates for Mvpc when you create new projects, or can be added to existing projects via the “Add New Item…” option.

All rapid prototyping items are included under the Rapid Prototyping sub-folder of the Mvpc group within the “Add New Item…” dialog.


I’ll do an article on Rapid Prototyping soon, but its usage should be very straightforward once you’ve added the item templates and get going.

Nuget Feed

As part of installation and registration a secure NuGet feed will be added to your account and appear in Visual Studio.  The Mvpc NuGet feed allows you to take advantage of all the benefits of NuGet while keeping the packages and updates secure and accessible only to registered developers.


The NuGet feed contains both individual packages, and meta packages to give you as much control or automation as you need on a project by project basis.

The packages are updated regularly and carefully maintained for backwards compatibility so you can safely update individual packages, or all packages across your solution directly from within Visual Studio and NuGet.

Project Templates

Project templates are included and integrated into Visual Studio for each of the common module types and for creating the native parts of an application for each platform if you are doing this yourself.  Some developers choose to use the Ambidect Cross Build-it service included in their developer account rather than creating the native parts of the application themselves.  You can use either the project templates or the service; or even mix and match the two for different platforms or projects.

When invoking “Add Project…” from within Visual Studio you will find all the Mvpc projects templates under the “Mvpc 1.0” group.


Item Templates

When you want to add individual classes or other items to a project you are working on you’ll find the Mvpc specific items in the “Add New Items…” dialog also under the “Mvpc 1.0” group.  To help you find what you want quickly we have also subdivided the group into sub-folders based on the type of item you want to add.


You don’t have to limit yourself to the item templates within the Mvpc group and can happily mix these items alongside classes and items of any other type within a project.

What to Do Next

If you’ve followed the instructions above you will now have your development PC set up and registered, ready for you to create your own cross-platform multi-device software.  To get started we recommend you try out some of the new project types and the rapid prototyping functionality, and you will quickly have applications running on multiple devices.

In the future I’ll do some specific walkthroughs for creating different types of modules and applications.  But for now you should find things pretty straightforward as you start to write programs that work everywhere and on everything for the first time.

Async extension method wrappers

Asynchronous APIs are becoming more popular thanks in part to the focus on asynchronous user interface design requirements on platforms such as Windows Store Applications for Windows 8 and Windows RT.

This attempt to change the way developers think about long or unpredictable operations is welcome and necessary as databases and files slowly migrate into the cloud.

Unfortunately System.Threading.Tasks and the async and await keywords are not available inside portable class libraries or on some of the platforms we target with the Ambidect technology.  We could choose to use an alternative style of asynchronous API, such as callbacks, but these are starting to look dated, and they require the developer to do a lot more boilerplate work.

At this point you may be tempted to give up and provide only a blocking synchronous API, requiring developers to manage their own threads on each platform that insists on asynchronous calls; but as I touched on in my previous post about the repository API, we felt it was a much better idea to provide an async API, and we did so using extension methods.

This technique can be used to wrap almost any synchronous API; but I have to stress at this stage it should only be used if the platform or platforms you are working on do not have a usable native asynchronous API.

For our example we’ll work with the Find method of the IRepository<> interface, but the principles here will work with any synchronous call that needs to be wrapped.

The first thing we need to do is create a class to host our extension methods:

using System;
using System.Threading.Tasks;

namespace Mvpc
{
    public static class IRepositoryExtensions_Async
    {
        // TODO: this is where to put your extension method code.
    }
}

If you are not familiar with extension methods, the reason we mark the class as static is that it’s a requirement of the compiler’s extension method support.  The name of the class can be anything you want, but you can see that I use a simple convention that makes it clear to anybody using the library that the class contains extension methods, so is of no interest to be used directly.

You will note that the class here has been put into the Mvpc namespace to sit alongside the class we are wrapping.  When using extension methods that extend the API with asynchronous members this is my recommended approach.  It stops the developer using the class worrying about how the async methods are provided, and IntelliSense will include them in the list of available class members when working on a platform that supports our asynchronous API.

In cases where the extension methods provide utility functions rather than a core API for a class, it is good practice to keep your extension methods in a separate namespace, e.g. Mvpc.Extensions.  This stops the IntelliSense list being over-populated with extension methods that are not relevant to the code at hand.  When extending one of the CLR’s core types such as object or string I always insist that the extension method goes into a namespace ending in “Extensions”, such as Mvpc.Extensions, that the developer has to explicitly opt into.  This not only helps keep IntelliSense clean, but also stops accidental dependencies on specific extension methods creeping into code blocks where they don’t belong.
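As a purely illustrative example of that convention (the class and method here are mine, not part of the Mvpc libraries), a general-purpose string helper would live in the opt-in namespace:

namespace Mvpc.Extensions
{
    // Callers only see this extension after an explicit "using Mvpc.Extensions;",
    // which keeps IntelliSense on string clean everywhere else.
    public static class StringExtensions_Utility
    {
        public static bool IsMissing(this string value)
        {
            return string.IsNullOrEmpty(value);
        }
    }
}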

Now that we have our extension host class set up in the right namespace, let’s add an extension method.  An extension method is exactly the same as a normal static method, except its first parameter is prefixed with the “this” keyword.  This tells the compiler that it can effectively rewrite the extension method call into a static method call, while allowing the developer using the extension method to use a more natural calling convention.  For example, instead of having to call:

var repository = ...;
var key = Guid.NewGuid();
IRepositoryExtensions_Async.FindAsync(repository, key);

We can use the much more readable:

var repository = ...;
var key = Guid.NewGuid();
repository.FindAsync(key);

Let’s have a look at the code for the FindAsync() extension method itself now:

        public async static Task<T> FindAsync<T>(this IRepository<T> repository, params Guid[] keys)
        {
            var task = System.Threading.Tasks.Task.Factory.StartNew(() =>
            {
                var ret = repository.Find(keys);
                return ret;
            });

            return await task;
        }

The first thing you will note is that we suffix the name of the method with “Async”.  I find this a very good convention to follow for any method that provides an async API that can have “await” applied to it.  It helps the person using the method remember that at some point they will likely want to await the result.

If you are unfamiliar with the async and await keywords I suggest you have a look at them in the MSDN documentation; suffice it to say here that you use async to mark a method as containing asynchronous code, and await to safely wait for the result of an async method before continuing.

As well as the async keyword, you will notice the method is also marked as static, as it operates without a class instance, and that the first parameter is prefixed with the “this” keyword to enable the extension method style shorthand call.  We can still call the static method directly if we want, but without the “this” keyword the extension method syntax would not be available.

Inside the method we create a new Task with the right return type and use a lambda expression to perform a call to the synchronous API we are wrapping.  This code will be executed in a separate thread before returning its result.  Exactly when the code pauses to wait for the result depends on how we use await when calling the extension method.

More often than not we will want the code to wait for the value before continuing, so we will use await directly on the async call as follows:

var repository = ...;
var key = Guid.NewGuid();
var item = await repository.FindAsync(key);

In this post we’ve wrapped a synchronous call to a repository function with an async extension method, but you can use this technique whenever you find you need to make regular asynchronous calls to a class that couldn’t be built with an asynchronous API, or whose source you cannot access to add a native asynchronous API yourself.
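The same shape works for any other synchronous member.  As one more hedged sketch (UpdateAsync is not part of the Mvpc reference libraries; it simply re-applies the technique above to the repository’s Update method):

using System.Threading.Tasks;

namespace Mvpc
{
    public static class IRepositoryExtensions_AsyncUpdate
    {
        // Wraps the synchronous Update call in a Task, exactly as FindAsync
        // wrapped Find above.
        public async static Task<bool> UpdateAsync<T>(this IRepository<T> repository, T item)
        {
            return await Task.Factory.StartNew(() => repository.Update(item));
        }
    }
}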

Mvpc – The View Layer in more Detail

This is the second post in the series covering each layer of Mvpc.  This article covers the View layer.  You can find the previous post, on the Model layer, here.

View Layer Responsibilities

Within Mvpc the view layer has very specific responsibilities.  In particular the view layer is responsible for:

  1. Displaying and formatting data from models to be shown to the end user.
  2. Interacting with the user through the GUI methods (excluding commands).
  3. Internationalisation and multi-language support where applicable.
  4. Making best use of the specialist hardware available on a device.

In design patterns that derive from the original MVC pattern (we’re not talking about the specific ASP.NET MVC libraries here, but the design pattern itself) the view is often seen as the most GUI-dependent layer.  This is true of the Mvpc design pattern too; but we have taken special steps to reduce GUI and platform dependence to a minimum while maximising the portability of any code you write.

Base Types for Views

When we designed the view layer we wanted to keep the class structure as flexible as possible.  As with the model layer we therefore use System.Object as the base class, rather than insisting on a specific base class type or interface implementation.

Making this decision allows the GUI portion of the view layer to use the base control types from the GUI toolkit being used, with which you will already be familiar.  For example, on the Windows Forms platform System.Windows.Forms.Control can be used as the base type; on ASP.NET, System.Web.Mvc.Controller; on Windows Store, Windows.UI.Xaml.Controls.FrameworkElement; and so on.

Portable Views

We wouldn’t be able to write very portable code if we could only reference views within platform-specific code; so beneath the toolkit-specific views we have introduced the concept of a view based entirely on interfaces.

When creating views for a model by hand or using rapid prototyping each model usually has three view interfaces generated by default:

  1. ICreateView
  2. IEditView
  3. IListView

By convention these views exist in a namespace within the module matching the model’s name, but in plural form rather than singular.  For example, if we had a model class “My.Modules.CRM.Models.Customer” then the full names of the three interfaces would be:

  1. My.Modules.CRM.Customers.ICreateView
  2. My.Modules.CRM.Customers.IEditView
  3. My.Modules.CRM.Customers.IListView

By convention we would also place all commands relating to a model under the same namespace with a .Commands appended to the namespace.  So in the same example we would use the namespace “My.Modules.CRM.Customers.Commands”.

This convention has been designed to keep all code relating to the business and application logic for a particular model within a single namespace.  Doing this keeps our code short and readable, as the .NET compiler will search the current namespace, and then each parent namespace, for a matching class or interface name when a short name is given.  So within a command we can simply reference IEditView rather than the full name, and be confident that the code will always reference the right IEditView, as in the sketch below.
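A minimal sketch of that idea, assuming the ViewFactory described later in this post (the command class itself is simplified for illustration):

using Mvpc;

namespace My.Modules.CRM.Customers.Commands
{
    public class EditCommand
    {
        public void Execute()
        {
            // IEditView resolves to My.Modules.CRM.Customers.IEditView because
            // the compiler searches this namespace, then each parent, in turn.
            var view = ViewFactory.Default.Resolve<IEditView>();
        }
    }
}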

An additional benefit this brings is what some refer to as static API inheritance.  That is to say that if the same code was lifted from one namespace, and placed into another namespace, most of the time the code would not need to be modified to refer to the new classes.  When learning the Mvpc libraries this represents a big advantage, as you get used to seeing the same class names over and over again and so quickly become familiar with their API and purpose even though you are technically dealing with different classes and interfaces.

As well as the conventional three views, the user can specify additional views by simply specifying a new interface.  By convention, interfaces that represent portable views are prefixed with “I” and suffixed with “View”, for example IWebBrowserView or IMultipleEditView.

What is perhaps most interesting about the portable view layer is that the definitions of all the interfaces involved are left as empty interfaces.

Although you could specify specific methods or properties available in the interface, we strongly recommend that you don’t.

It’s probably worth me explaining that decision before we continue.  When you write a view in a type-safe language you will always want to write the view control to be as type safe as possible.  We recommend this too.  This means if you had a view that worked on a model of type Customer you would end up with code similar to this on the Windows Forms platform:

    public partial class EditView : Control
    {
        public EditView()
        {
            InitializeComponent();
        }

        public Models.Customer Model { get; set; }

        private void EditView_SomeEvent(object sender, EventArgs e)
        {
            // We can directly access Model as a Models.Customer here due to type safety.
        }
    }

While this is good code and easy for the developer, if we put Model into the interface like so:

    public interface IEditView
    {
        Models.Customer Model { get; set; }
    }

We are actually forcing all view implementations to follow this convention rather than leaving it as best practice.  Furthermore, doing this would actually prevent somebody writing a view that can take a model of any type by accepting a System.Object for Model, unless both the GUI view and the portable view specified it.  To me this is too inflexible, and goes against the idea of keeping dependencies as few as possible within each code block and separating concerns.  As you will see later when we talk about the Standard Views included in the reference Mvpc libraries, it would also prevent us from doing rapid prototyping effectively.

That’s why we keep our portable interfaces simple and empty instead:

    public interface IEditView
    {
    }

Creating View Instances

When working within a command or other code block we reference views by their portable view name.  So for example if I was in a command that handled an Edit request, I would want to use IEditView.  This is great for storing a reference to an instantiated class, but how do we create a new instance of the view, and how does it go on to create the actual GUI view we use?

The answer is that we use dependency injection to overcome these issues.  Specifically we have a specialised service locator called ViewFactory that we use to create all views:

var view = ViewFactory.Default.Resolve<IEditView>();

This code will create an instance of the actual GUI view implementation for IEditView and return it as an IEditView. Most of the time this will be right. However Resolve also comes in an extended form that allows a second generic type argument to be specified to resolve the type as another type. This is most often used to “downgrade” the return value into a System.Object, and is done by all the *CommandBase types included in the Mvpc reference libraries:

var view = ViewFactory.Default.ResolveAs<IEditView, object>();

By downgrading the code like this we are allowing a GUI view to be resolved for interfaces that it isn’t marked with.  At first glance this may sound a silly thing to allow; but when you stop and think, it’s actually essential if we are to create general versions of views, as we have within the StandardViews libraries, or to cope with models that are unknown at compile time but generated at runtime using System.Reflection.Emit or a similar method.

GUI Views for each Platform

When creating the actual GUI for a view we finally put you into platform-specific code.  Our recommendation is that when you write platform-specific code you use the platform’s native GUI toolkit.  For Windows Phone this would be XAML; for the web it would be a Razor or traditional view file with a controller; for Windows Forms it would be the System.Windows.Forms namespace; and so on.

On each platform we do provide a set of useful controls in the Mvpc.GuiToolkit libraries.  However these controls are designed to complement the controls on the native platform, not to replace them, and not to act as a cross-platform GUI toolkit.

This decision stands out against the many cross-platform GUI toolkits that have come and gone.  As I covered in my introduction to Mvpc, there are a lot of pros and cons to a cross-platform GUI toolkit, but by far the biggest con is that the application just doesn’t feel native alongside other applications running on the platform.  Insisting you use a cross-platform GUI toolkit would also prevent you using any existing 3rd party or open source libraries for the platforms, and we believe that would force you to learn a new way of building GUIs rather than maximising the skills you already have.

When designing the GUI for a platform we recommend that you also follow all the conventions of the GUI toolkit you are using.  This means if best practice on your platform is to use a view model and the MVVM technique, as with Windows Phone, do so.  If best practice is to store your GUI in resource files and load them from there, as with Android, do so.  If data binding is available for the platform, as with XAML and Windows.Forms, then use it.

When designing the Mvpc design pattern we were very careful to keep compatibility with the MVC, MVP, and MVVM design patterns for the view layer.  This means that except for decorating your GUI class with an attribute to link it to the portable view layer, you can ignore the fact you are even working with Mvpc if you want.  You can even use core framework controls as simple views.

The ViewFor attribute

Let’s have a look at the attribute we need to use to link the GUI view to its portable interface; you saw it in the example before, but we’ll take a closer look at it here:

    [ViewFor(typeof(IEditView))]
    public partial class EditView : EditViewBase, IEditView

You can see from this example that you simply need to mark a view with the ViewFor attribute, specifying the interface type for the portable view it represents, and the ViewFactory takes care of the rest for us.  By convention we also make the class implement the (empty) portable view interface.  This is considered best practice, but is optional for the reasons already discussed.

It is possible to mark a single GUI view as fulfilling multiple portable views. For example it is often sensible to share a single GUI view for both editing and creating new items as shown in this example code:

    [ViewFor(typeof(IEditView))]
    [ViewFor(typeof(ICreateView))]
    public partial class EditView : EditViewBase, IEditView, ICreateView

The ViewFor attribute is provided by the ViewForAttribute class and, alongside the ViewFactory, handles the dependency injection used by the view layer.  It shares a common API with the attributes used to mark up repositories, commands, and other dependencies.  In addition to specifying the portable view type you can also specify a priority that is used if more than one class is found that implements a particular portable view type:

    [ViewFor(typeof(IEditView), Priority = 20)]
    public partial class EditView : EditViewBase, IEditView

When resolving views, or any dependency, the highest priority is the lowest number; i.e. 1 is considered a higher priority and is therefore used over a priority of 300.  If you don’t specify a priority, a priority of 10,000 is automatically used.

By convention all the dependencies, including views, that are included in the reference Mvpc libraries have priorities close to Int32.MaxValue.  This means you can omit the Priority from the attribute most of the time, and still know your class will be resolved with a higher priority than any included in the Mvpc libraries.

Before moving on from the attributes it’s worth mentioning an additional attribute for marking views, only available on the ASP.NET platform.

By convention on the ASP.NET platform a Controller controls many views, with each method of a Controller representing an Action for GET or POST and having a related GUI view stored in a .cshtml or .aspx file.  To allow you to follow this recommended practice, when creating an ASP.NET GUI view we use an expanded attribute that lets you specify the action within the controller for each supported portable view interface type, e.g.:

        Action = "Index"
        Action = "Edit"
        Action = "Create"
    public class CustomerController : ViewController

The ViewHelper

I mentioned above the reason why we don’t specify a Model property in the interfaces that make up the portable view layer.  What I didn’t explain is how we then set the model on the view.

Views actually have three optional properties that should be considered part of the portable view API when provided:

  1. Model
  2. Repository
  3. Commands

When working with portable views in a command or code block we will often want to set these properties.  We can do this using the static ViewHelper class.  This class gives us access to these properties for any object, and knows how to handle conversion of types to match the model type, and missing properties if the GUI doesn’t supply a property to store the value being set.

The methods come in Get/Set pairs and are respectively:

  1. GetModel() and SetModel()
  2. GetRepository() and SetRepository()
  3. GetCommands() and SetCommands()

We’ll cover GetCommands() and SetCommands() in more detail when we look at the Command layer.  A real-world example of using ViewHelper with the model and repository is shown here:

var view = ViewFactory.Default.ResolveAs<IEditView, object>();
var repository = RepositoryFactory.Default.Resolve<Models.Customer>();

// Create the right model.
var model = repository.CreateUntyped();

// Set the repository and the model.
ViewHelper.SetRepository(view, repository);
ViewHelper.SetModel(view, model);

You can see from this example that usage is really straightforward and easy to understand and remember.

Standard Views

Now you know how to create custom GUIs for each platform, let me see if I can convince you to use your new knowledge sparingly.

Each time you create a custom GUI view for a platform, you are increasing the amount of platform-specific code you will need to maintain.  If you built a custom GUI view for every platform this would start to spiral out of control, and those 90%-100% portable code rates would start to plummet.  There are certainly times to build custom platform-specific views, but we want you to do it only when you have a special reason.  Otherwise we’ve tried to provide you with a rapidly prototyped GUI for every platform that should work for you in many cases.

These rapid prototype GUIs are found in the Mvpc.StandardViews libraries within the reference implementation.  In there you’ll find an implementation of an Edit, Create, and List view for any model type, specialised for each platform.  There is also an IStartView for the initial screen and an IWebBrowserView for commands that want to navigate or display HTML or other web-based content.

These standard views are very flexible, and when combined with the GuiHintsAttribute and model metadata, as described in the previous post in the series, it’s possible to take a lot of control over how, where, and when these views show model data.  It’s also worth pointing out that most of the same GUI hints are applied to your own custom-made GUI views too.  We’ll do a special article on model metadata and GuiHints in the future.

We won’t go into the standard views in too much detail in this post, as it is already long.  I’ll also do a post on the standard views and how to get the most from them after this series is finished.  If you want to see the standard views in action, however, just put together a module without any specific GUI views and make sure you leave rapid prototyping turned on.  When you run the application, all the GUIs that you see will be using these standard views.  Check them on several platforms and you will see that they provide consistent access to data and functionality, yet follow the recommended GUI guidelines for each platform too.

Most applications end up a mixture of these standard views, and custom views.  This is a sensible approach, and consistent with the idea of maximising the amount of portable code compared to platform specific code that you write, while still providing a rich native experience.

In some situations you may be happy with the standard views for most platforms, but want to enhance a specific platform to make use of a device-specific feature such as GPS location, a scanner, or a web camera.  In these cases you can provide custom GUI views for only the platforms you need, safe in the knowledge the standard views will work fine on the others.


We went to great lengths with the Mvpc design pattern and reference libraries to re-think the way views and GUIs are built by developers compared to other design patterns.  Our primary focus has been to maximise the amount of fully portable code that can be written, while still allowing developers to use their existing skills, libraries, and best practice on each platform when they do choose to write custom GUIs.

You will also have started to see how a mixture of specialised and standard GUIs can be used to build applications that run natively and feel native, taking full advantage of a device’s capabilities, while still keeping the vast majority of your code portable and shared across all platforms.

Next in the series we will take a look at the Presenter layer.

Mvpc – The Model Layer in more Detail

Overview of the Model Layer

The Mvpc design pattern is based on four layers:

  1. Model
  2. View
  3. Presenter
  4. Command

This article is part of a series covering each layer of the Mvpc design pattern in detail.  This post covers the Model layer.

The model layer itself takes on four core responsibilities that are exclusive to itself:

  1. The representation of data as models.
  2. The storage and retrieval of data into models via repositories.
  3. The conversion of models to and from different shapes.
  4. The mark-up of metadata against models to provide hints to users of a model.

The representation of data as models

When designing Mvpc we took the decision to define a model in as broad terms as possible.  We wanted to make sure anything a programmer would generally call data could be treated as a model.  For this reason we decided that a model would not be based on any existing framework or data modelling tool, and would not require the subclassing of a specific base type or the implementation of a specific interface.  Instead we decided that a model for the framework would be a property bag; that is to say, a collection of name and value pairs, where the type of the value is usually known.  In practice this means developers are able to treat simple code classes as models, as well as more complex data structures.

Using a property bag as a model gives us a lot of flexibility, and means we are compatible with models generated by almost any existing or future data abstraction or framework, such as datasets, the Entity Framework, POCO, NHibernate, IDataReader, IDictionary<,>, properties defined as simple code classes and structs, and anything else you can imagine.
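To make the property-bag idea concrete, here is a small illustrative sketch (the Customer class and its values are hypothetical); both forms below qualify as models:

using System;
using System.Collections.Generic;

// A plain code class is a valid model: each property is a name/value pair
// whose type is known.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public bool Live { get; set; }
}

public static class PropertyBagExample
{
    public static void Main()
    {
        // A dictionary of name/value pairs satisfies the same definition.
        var asBag = new Dictionary<string, object>
        {
            { "Id", Guid.NewGuid() },
            { "Name", "Acme Ltd" },
            { "Live", true },
        };
        Console.WriteLine(asBag["Name"]);
    }
}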

Using this technique means that the Mvpc pattern and related libraries do not lock a developer into a particular choice of data abstraction or database engine; but allows developers to continue to use the data abstraction layers they are already used to and familiar with.

The storage and retrieval of data into models via repositories

Once we had decided upon a representation of a model, we needed a way to keep the developer’s choice of data abstraction separate from the model and from the use of the model at its point of consumption in a command, view, or code block.  We decided to do that by abstracting the retrieval and storage of data through a simple, well defined repository API (covered in this section), and by allowing a model to change shape to better fit the context of its use (covered in the next section).

The repository is completely responsible for the creation, storage, and retrieval of models.  It is responsible for working directly with a data store, and for returning data in the shape most applicable for the executing code to use.

Unlike for the model itself, we do insist on a simple base interface for a repository: IRepositoryBase.

The IRepositoryBase interface exposes type unsafe methods for the four basic data management operations:

  1. Find
  2. Create
  3. Update
  4. Delete

In addition it supports two key query methods for working with ad-hoc queries:

  1. FindAll
  2. FindFirstOrDefault

Then, based on over a decade’s experience of working with data abstraction models and data processing applications, we also decided to build into the base two specific “predefined” query methods that serve a similar purpose to server-side views in an SQL database.  These methods are:

  1. GetDisplayList
  2. GetSelectionList

Both of these methods accept an optional query name as a parameter, so the number of queries they can support is unlimited.  Importantly, the return type is not expected to be the same type returned by the data operations and ad-hoc query methods, and will usually contain data that is not directly stored in the model.  For example, a repository for a sales order will return or take a SalesOrder model or IEnumerable<SalesOrder> for the data operations; but depending on the query name you pass in, GetDisplayList could return a list of sales order details including customer names and addresses that are not part of the model, or a summed result of total sales broken down by month and year.  For this reason the query methods are both defined as simply returning a System.Collections.IEnumerable, to support any collection type at all.

As many of you will know, there are pros and cons to making a repository completely type safe, or independent of type.  We won’t go into them in detail here, but it can generally be summarised that type-safe repositories are easy for an end user to work with and for developers to specialise, while type-unsafe repositories can make code more reusable and cope with types that didn’t exist at compile time.

In the end we decided that neither approach alone met our design goals completely, so we provided both APIs and let the code using the repository choose whichever it finds most useful.

Therefore we provide an interface, IRepositoryBase, that exposes only the type-unsafe versions of the methods.  These are easily recognisable in the Mvpc libraries as their names all end in “Untyped”, e.g.:

object CreateUntyped()
bool UpdateUntyped(object)

We then provide an interface IRepository<T> that exposes type-safe versions of the methods, typed to the generic parameter T.  These are generally the more convenient methods to use, so we’ve left these with the simple names with no suffix, e.g.:

T Create()
bool Update(T)

It’s worth pointing out at this point that as the GetDisplayList() and GetSelectionList() methods do not return items matching the repository’s model type, they are included in IRepositoryBase and do not have explicitly named Untyped() versions.
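Pulling the pieces above together, the dual API looks roughly like this.  This is a condensed sketch for illustration; the exact signatures in the Mvpc libraries may differ:

using System;
using System.Collections;
using System.Collections.Generic;

namespace Mvpc
{
    // The type-unsafe base API.  The two query methods live here because
    // their results do not match the repository's model type.
    public interface IRepositoryBase
    {
        object CreateUntyped();
        bool UpdateUntyped(object item);
        bool DeleteUntyped(object item);
        object FindUntyped(params Guid[] keys);

        IEnumerable GetDisplayList(string queryName = null);
        IEnumerable GetSelectionList(string queryName = null);
    }

    // The type-safe API layered on top, using the convenient suffix-free names.
    public interface IRepository<T> : IRepositoryBase
    {
        T Create();
        bool Update(T item);
        bool Delete(T item);
        T Find(params Guid[] keys);
        IEnumerable<T> FindAll();
    }
}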

The final piece in the dual API design was to make it possible for a developer consuming a repository to choose either API, while the developer creating the repository implements only a single API.

To achieve this we introduced an abstract base class, RepositoryBase, that by default links the “Untyped” methods to calls to the type-safe methods, so programmers can write type-safe repositories and have the unsafe APIs automatically provided.
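A minimal sketch of that forwarding idea (only two members are shown; the real RepositoryBase covers the whole API and implements the repository interfaces):

namespace Mvpc
{
    public abstract class RepositoryBase<T>
    {
        // Repository authors implement only the type-safe members...
        public abstract T Create();
        public abstract bool Update(T item);

        // ...and the "Untyped" API is provided automatically in terms of them.
        public object CreateUntyped() { return Create(); }
        public bool UpdateUntyped(object item) { return Update((T)item); }
    }
}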

While that approach worked well for people writing a specific repository, it explicitly links a repository implementation to a realised class at compile time, and not all platforms can support dynamic code execution or the System.Reflection.Emit namespace to work around that.

We still didn’t feel this was good enough, so we provided a set of utility methods that allow a repository designer to write a repository that only finds out its model type at run time, but can still handle the type-safe calls by exposing a proxy class through which those calls can be made.

Because of this we were then able to bundle a number of commonly used repository options into the libraries, creating a powerful starting point for applications using them.  To avoid bloat and unwanted references to 3rd party libraries, we made each available as a separate package on NuGet so you only need to reference the types you want to use in your application or module.

For each standard repository type we provided two sets of classes:

  1. A type safe generic base class that provides full implementation of the API and allows extension of the API by a developer via subclassing.
  2. A fully dynamic repository that will adapt itself to any model type provided at run time.

The first option is self-explanatory and works through normal subclassing and overriding of methods.  The second option however allows us to do particularly interesting things, like rapidly prototyping repositories for any Entity Framework .edmx file; switching or adding optional support for a new database engine or web service without having to redevelop or even recompile any of the existing code; or providing Json wrappers around any other repository type.

Here is a list of the supported data abstractions that have full support for repositories in the reference Mvpc libraries:

  1. Entity Framework 5
  2. Entity Framework 4
  3. ADO.NET (System.Data)
  4. POCO
  5. Code Classes via System.Reflection
  6. XML files
  7. JSON Web Services
  8. JSON Database (Mvpc.JsonDatabase – a local database that can work on any device and uses json to store its data)
  9. JsonRepository webservice (a web service that will wrap any other repository and provide secure access to it via a Json based web service).

The JsonRepository webservice is an important option for us to include, as not all data abstraction frameworks work on all platforms.  For example, ADO.NET and the Microsoft Entity Framework can’t be accessed directly from Windows Store applications or Windows Phones.  Rather than saying “you can’t support these platforms if you use EF5 for data abstraction”, we instead provide a built-in wrapper that means you can.  And because of the simple design, the same wrapper can be used if you build a repository that uses NHibernate or another similar tool.

The final note to make about the repository API is that when we designed it we always wanted to support both synchronous and asynchronous operation.  Therefore, on the platforms that support asynchronous operation, we have provided an Async version of each of the typed and untyped repository APIs via extension methods, e.g.:

Task<T> CreateAsync()
Task<bool> UpdateAsync(T)
Task<object> CreateUntypedAsync()
Task<bool> UpdateUntypedAsync(object)

This means if you are building a custom GUI component to enhance a platform, you can use the Async APIs to keep your user interface responsive while carrying out repository access that might be hitting remote servers or other potentially slow resources.

As the Async APIs are provided via extension methods, repository designers do not need to implement their own async versions of the repository and can usually ignore the fact their repository will ever be accessed asynchronously.  This keeps with our design principle that a repository should be designed as if there were a single API, but should allow use by the right API for the task at hand.

The Async APIs are completely compatible with the C# 5 async and await keywords, so they couldn’t be easier to use.  As these keywords are still new to a lot of people, we’ll cover them in a future post.
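
As a rough sketch of how this reads in consuming code, reusing the illustrative Customer model from the earlier sketch.  The IRepository<T> interface below is a simplified stand-in; in the real libraries the Async methods arrive as extension methods over the synchronous API:

using System.Threading.Tasks;

// Simplified stand-in for the repository API described above.
public interface IRepository<T>
{
    Task<T> CreateAsync();
    Task<bool> UpdateAsync(T model);
}

public class CustomerEditor
{
    // Awaiting keeps the UI thread responsive while the repository talks
    // to a database or a remote service.
    public async Task<bool> SaveNewAsync(IRepository<Customer> repository, string name)
    {
        var customer = await repository.CreateAsync();
        customer.Name = name;
        return await repository.UpdateAsync(customer);
    }
}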

Repositories are managed via the RepositoryFactory (which implements IRepositoryFactory and is resolved via dependency injection), which also manages their connection strings.  This means, again, that neither the user of a repository nor its designer needs to worry about how the connection to a specific database or data store is configured; they can leave that to the factory to look after on their behalf.
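
In consuming code that means asking the factory for a repository rather than constructing one yourself.  A sketch follows; the resolution method name is an assumption for illustration, as only RepositoryFactory and IRepositoryFactory themselves are named by the library:

// The member shown here is a hypothetical name for illustration only.
public interface IRepositoryFactory
{
    IRepository<T> GetRepository<T>(); // hypothetical method name
}

public class CustomerService
{
    private readonly IRepositoryFactory _factory;

    // The factory arrives via dependency injection; neither this class nor
    // the repository designer deals with connection strings directly.
    public CustomerService(IRepositoryFactory factory)
    {
        _factory = factory;
    }

    public System.Threading.Tasks.Task<Customer> NewCustomerAsync()
    {
        var repository = _factory.GetRepository<Customer>();
        return repository.CreateAsync();
    }
}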

The conversion of models to and from different shapes

Within a repository it is not unusual to need to take data represented in one class and convert it into another.  To give a concrete example, the data may be loaded by the Entity Framework into a proxy class, but we want to convert that into our more portable model definition, which is a plain old code object.

To do this we provide a model converter behind an IModelConverter interface.  This converter is capable of receiving a model in any format (a Dictionary, a code class, etc.) and mapping it across to a new format or type.  Later, when we need it, we can use the same converter to provide the reverse conversion.

As well as the IModelConverter interface, a concrete default implementation is provided, called simply ModelConverter.  This default implementation will be used unless an alternative is supplied using dependency injection.

The model converter is most useful within repositories, but is also used elsewhere in the framework, and can be used in your own code.

If you have a method that can operate on any class so long as it has a Boolean “Live” property, you can define that “shape” for the model and use the model converter to convert any compatible model into the shape for you to use, and then back again.
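
As a sketch of that “Live” example; the converter member names below are assumptions, as only the IModelConverter interface itself is named by the library:

// Minimal stand-in for the converter described above; the real
// IModelConverter members may differ.
public interface IModelConverter
{
    T ConvertTo<T>(object model) where T : new(); // assumed member name
    void ConvertBack<T>(T shape, object model);   // assumed member name
}

// The "shape" we care about: any model with a Boolean Live property can be
// mapped onto it, whatever the model's real type.
public class LiveShape
{
    public bool Live { get; set; }
}

public static class LiveSwitch
{
    // Map the model to the shape, work on it, then map the change back.
    public static void TakeOffline(IModelConverter converter, object model)
    {
        var shape = converter.ConvertTo<LiveShape>(model);
        shape.Live = false;
        converter.ConvertBack(shape, model);
    }
}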

The command layer for example uses the model converter like this when working out how to supply views or models into a command in the compatible type for the command to operate on.  We’ll cover the command layer in detail in the fourth post in this series.

The model converter is also responsible for changing the shape of a model between formats.  For example, if we receive a JSON representation of a model from a web service, the model converter can turn that into a class for us.  When the time comes to pass it back over the web service, we can simply convert it back.  This is exactly how the JsonRepository web service repository works to extend repository support to platforms it wouldn’t otherwise reach.

Because of the shape-changing ability of models, and the flexibility of the built-in repositories, we are able to use the Rapid Prototyping techniques we’ve pioneered to generate models in a portable shape from Entity Framework .edmx files or other data sources, making sure you never have duplicated code that needs to be maintained.  We’ll have a series of articles on using Rapid Prototyping with Mvpc in the future.

The mark-up of metadata against models to provide hints to users of a model

For most purposes a model that simply contains data is all we need.  But there are times when a name, value, and type don’t quite go far enough.  The most obvious example is when showing data on screen.  If a property in a model represents a foreign key linking relational data to another repository, do we really want to force the GUI layer on each platform to customise the GUI just to present this link as a selection list or combobox?  Wouldn’t it be better to mark the property with the repository needed for a selection list, and then let the GUI sort itself out based on this information?

Metadata is used to identify primary keys, display labels, repositories for foreign keys, required fields, special formatting requirements, and a host of other useful things.

In the Mvpc libraries the metadata API consists of an IModelMetadata interface, along with a concrete implementation that has some hard-coded defaults based on attributes and recommended Microsoft naming conventions.  This can be overridden, using dependency injection, for any individual model or for all models.

A good API shouldn’t require subclassing or overriding too often, so the default implementation also provides the GuiHintAttribute, which can be used to mark up any property of a model and change its metadata without having to create our own IModelMetadata classes.  The attribute is simple to use and would often look like:

        [GuiHint(
            Required = true
        )]
        public string Name { get; set; }

        [GuiHint(
            Required = true,
            Repository = typeof(Customers.IRepository),
            UseMultiColumnDropDown = true
        )]
        public Guid CustomerFK { get; set; }

These attributes can either be put directly on a model, or in a separate class linked with the MetadataForAttribute.

Personally I have a strong preference for using the MetadataForAttribute, as it not only keeps the metadata separate from the model definition, but also allows us to use the same metadata for multiple models if required.  For example, here is the code for the most common case, where the same metadata file is shared between a model and its repositories, so the metadata applies both when editing the model and when displaying the results of GetDisplayList(), which you will recall aren’t actually instances of the model itself:

    public class Metadata
    {
        [GuiHint(
            Required = true
        )]
        public object Name { get; set; }

        [GuiHint(
            Required = true,
            Repository = typeof(Customers.IRepository),
            UseMultiColumnDropDown = true
        )]
        public object CustomerFK { get; set; }

        [GuiHint(
            ListGroupLevel = 1
        )]
        public object Customer { get; set; }
    }

It’s worth noting in the above code that the return type of all the properties in the metadata class is “object”.  This is the convention when writing metadata classes for Mvpc, because the model class itself already exposes the type; repeating it here would only increase our maintenance costs, or at the very least be misleading should we choose to change the type of one of the properties in the future.

We could set the return type of properties in metadata-only classes to anything, as they are never actually used and the classes are never instantiated.  Setting them to anything except the real type, or object, could mislead people reading your code, however, so it’s considered best practice to follow the DRY (don’t repeat yourself) principle here and always use “object”.


When designing the Model layer of the Mvpc design pattern and its commercially available reference library, we wanted to make sure we didn’t restrict developers from choosing any of the very good data abstraction platforms already available.  We did this by choosing a very simple definition of a model: a property bag.

To provide maximum flexibility in how the data was then used, we allowed it to change shape as it passed between web services, databases, files, and commands, taking whichever shape best suited the code being written in each of those areas, in a way seamless to the developer consuming the model.

A flexible repository API, with a lot of standard implementations and full rapid prototyping support, allows us to expose a small and standard API for data operations, ensuring the code we write never becomes dependent on a particular database or data storage engine or style.

Finally, by allowing metadata to be attached to models, we are able to give hints to the GUI and other code on how to make the most of the model, keeping the platform-specific code needed for a model as small as possible.  When we need to make changes to model metadata in the future, we can carry out the change in a single place and have it reflected across all platforms.

I hope this detailed overview has given you a good understanding of the model layer in the Mvpc design pattern and libraries.  The next post in the series should be out next week and will look at the view layer in similar detail.

What is Mvpc and Where did it Come From?

Introducing Mvpc

Mvpc (Model View Presenter Command) is a new design pattern and reference library for designing and building software applications and mobile apps.  The design pattern has been in circulation for a little while, but its true impact is only beginning to be understood, and as the creator of Mvpc I’ve been asked a number of times to explain its origins, why a new approach was required, and how we arrived at the breakthrough.

Over the coming weeks and months we’ll take a look at specific usages of the pattern and each of its layers and techniques in detail; but this post is designed to lay out the basics and help you understand where the thinking behind the new pattern came from.

If you want to look at the reference implementation of the technology contact me or anyone at Ambidect.

The Origins of Mvpc – The Multi-device future

Whenever you are faced with a problem you should always look at how others have approached the problem before, and see what lessons can be learnt from these previous attempts, however successful or otherwise.

One of the biggest problems facing the software industry at present is: how do you create software that can run on any device, without having to build separate programs or apps for each platform?

It used to be that everybody expected to do “real work” on a Windows PC.  But now people want to do real work anywhere and everywhere.  It started with the advent of smart phones, and then doubled as tablets started to dominate the living room computing experience.  And those changes have had unexpected and unpredictable impacts on business, with the establishment of BYOD policies and the significant growth in Macs returning to the workplace.

All this means that not only can we no longer expect a single type of operating system, but we can’t specify a minimum screen size any more, or even assume a keyboard or mouse are available.  With new devices hitting the market every day, the problem grows significantly worse; and if left unaddressed it would become a threat to the entire software industry, not just individual companies.

Businesses buying software are left in a position of having to predict which devices they will want to use three or five years from now and tie the success of their business into those platforms.  How can they make an investment decision with such uncertainty?  Can they even find two experts that agree on which devices will still be around ten years from now?

Businesses wanting to reach a large audience are having to spend large amounts of money, often with different developers or development companies, to keep up with the demands of consumers, many of whom now have access to two or three devices of different types and want to access your service on whichever one happens to be closest to them right now.

Software developers and companies are being forced to specialise in an individual platform to be able to hire and train staff efficiently enough to keep ahead of a demanding, changing environment.  This gives rise to a growing number of small development companies specialising in one platform only: iPhone app development companies, web development companies, Linux developers, Windows Phone specialists, and so the list goes on.

This proliferation means businesses are being forced to repeat the same work with multiple companies at great expense.  And just when the development feels complete, the combined quotes for the first major enhancement bring home the eye-watering reality of how unsustainable the approach is.

Despite all of this, we as users constantly expect more and more from each device we carry.  We are frustrated when our phone can’t open an attachment to an email because the program isn’t available.  We feel restricted when we sit at a Mac and try to do our accounts.  We get asked “can you put this on my iPad?” and have to spend hours trying to explain why it’s not possible, because the great application just doesn’t work on iOS yet.

Mvpc is designed to solve this.  It’s designed to change the way people look at software development.  As its principles are adopted across the software development industry, it will allow a single development to be undertaken that runs on all platforms.  It will reduce development costs without reducing your potential market.

Most importantly of all, it will put the power back in the end user’s hands.  When you find the right piece of software, you won’t even ask if it will work on your new phone, tablet, friend’s PC, or at an internet café in an airport; because in our vision of the future everything will work everywhere.

We genuinely believe that one day all software will be developed using the principles that drove us to put together the Mvpc design pattern.  I hope we’ll look back at the days of awkward single-platform development with the same nostalgia with which some look back at Assembler, Smalltalk, and Windows 3.1.  Each was significant in the way it got us to the next page of the technical future, and we are about to turn another page, to a world where software is independent of device.  A future where users make the choice of how, when, and where to use a service; not developers, and not budgets.

What Others Have Tried and Why They Failed

People have been trying to solve this problem for years, even before we all had so many devices around us.  All the attempts I have come across, however, can be grouped together under three broad categories:

  1. Creating applications that execute on a server and use a web browser to connect and view information.
  2. Providing a cross-platform GUI toolkit.
  3. Encouraging libraries to be built with reusable methods, and then trying to keep the platform dependent code as small as possible.

Each of these approaches has a lot of good in it, and can be useful.  But on their own they bring significant drawbacks.  I’ll address a few of the pros and cons of each here briefly, and in time I’ll add individual posts that go into the details of the points made here:

Creating browser-based applications

Responsive Web Design is becoming a very popular buzzword in the technical and marketing industries right now, with a growing number of browser applications appearing in the SaaS arena.  But the idea of solving the problems of platform independence with a web browser predates even the start of “the end” of Microsoft’s monopoly on the devices we use.  Going back five to ten years, many IT departments decided to adopt a “browser only” policy for introducing new software.  What has this experience taught us?  And what does it suggest about how Responsive Web Design will manage if left to stand alone?


Pros:

  • Existing skills can be used to create screens that are easier to use on different screen sizes.
  • Browser compatibility can be checked with the new software, rather than checking OS compatibility.
  • Can be used from any PC or device with a reliable internet connection.


Cons:

  • Applications can’t be used, and data loss can result, if the connection to the internet is interrupted or unavailable.
  • As the number of users grows, server farms and load balancing are required to handle the growing demand, as all processing takes place on the server.
  • Developers are required to learn to work with an ever-increasing complexity of pre-built scripts, techniques, languages, and browser oddities to try to make the user experience as rich as possible over the limited, stateless, request-response driven technology.
  • Applications do not feel native on any platform.
  • Applications have to be tested on all browser platforms and versions before release, which can be costly.  More often they tend to end up branded with “Best viewed in Internet Explorer 8 or above” or “For best results use Firefox”, as each browser comes with its own set of limitations and problems.  At its most extreme I’ve seen whole IT departments unable to upgrade from IE4 or IE6 because it’s the only browser all their historical applications work reliably under.  This also limits the ability to adopt new applications that understandably require newer browser versions.

Providing a cross-platform GUI toolkit

For people not satisfied with the limitations of a web browser, the primary approach has been to create a common API across all platforms.  Sometimes this is done by creating a whole environment and GUI toolkit, as was done with Java and Swing; others, such as Qt and Gtk+, have focused primarily on the GUI, only later extending to non-GUI operations.

To date these have tended to focus on the desktop rather than mobiles, although some do exist for mobiles.  At their heart they often require the learning of a specific object model, and the creation of code that is heavily tied into the toolkit via inheritance.


Pros:

  • Application runs natively on the device and does not require constant internet connectivity.
  • Application works identically on each platform with all features available.
  • The GUI toolkits tend to be quite large and provide a good selection of controls to build custom interfaces the way you want them.


Cons:

  • Applications often don’t feel native on any platform, or feel built to another platform’s GUI guidelines (what platform does GIMP feel native on, for example?).
  • The cross-platform experience works across the desktop, but doesn’t address tablets or mobiles well.
  • Can’t be used from a friend’s PC or an internet café, as most require installation of the application and its toolkit.
  • Developers are restricted in the libraries they can use to provide underlying functionality to avoid introducing a tie back to a single platform again.

Encouragement to keep GUI code minimal

Another way of trying to solve the problem is to accept that the GUI and some base functionality are always going to need to be built separately for each platform.  Teams of developers are encouraged to keep the platform-specific code as small as possible, and usually need to be organised into groups based on the platforms they can work on.

It’s normal that the platform with the biggest initial target market tends to get the majority of the development effort, with other platforms playing catch-up (for example, have you noticed the difference between using Skype on Windows and on Linux?).  In fact, often features simply never make it to these “second class” platforms.

Generally, by choosing this approach, the decision is made at the start either to target a stateless environment such as the web, or a stateful environment such as a native GUI client.  Once made, this decision is hard to reverse.


Pros:

  • Developers creating a version for a specific platform can use all the tools available for the platform.
  • Applications both look and feel native to the platform they are running on.
  • Code that is able to ignore the GUI and file system altogether can be shared across platforms as a cross-platform library.


Cons:

  • A development team is needed for each platform, and sharing of code between platforms is limited.  With mobile becoming increasingly important and competitive, and Macs making a comeback in business, that means four full development teams are required just to handle Windows, Mac, iPhone/iPad, and Android before you even look at platforms like Windows Phone, Surface, Windows 8 and RT applications, Linux, and BlackBerry.
  • Usually development takes place against the primary platform, often Windows or iPhone depending on the nature of the program.  Users on other platforms often feel second class and unimportant, and many times migrate to an alternative that targets their platform as its primary platform.
  • Every development and enhancement needs to be performed and tested on every platform, making it very expensive to enter the market and keep your product updated.

So Why is Mvpc Different?

To understand why Mvpc is different, you have to understand both what it does and what it doesn’t try to do.

Mvpc Does Not

Mvpc does not try to repeat the mistakes of the past by creating a new library or a specialist way of development that requires full retraining of developers and the throwing away of old code and applications.

Mvpc does not try to compete with the three previous approaches in the hope of having a longer “pro” list and a shorter list of “cons”.

Mvpc Does

Instead, with Mvpc, what we tried to do is embrace how developers already want to work and how users already want to use their devices, and look at the fundamental question of how we understand the software design and development process.

Mvpc actually embraces all three of the ideas described above as parts of a successful solution, rather than as past failures.

Applications built with Mvpc, for example, work natively and are also available separately in a browser.  This means they are available both online and offline, giving the best of browser-based and native applications.

Not only is the GUI completely responsive in its design, but the application can automatically make use of platform-specific features such as GPS or click-to-dial capabilities.

By following Mvpc, 80 to 95% of the code you write will work across all platforms and all device types out of the box.

The reference implementation of Mvpc (licensed commercially as Ambidect Technology) is built with C# and compatible with any Microsoft .NET language, letting you use the skills you already have in your team without significant retraining.

A set of cross-platform toolkit libraries is included alongside the Mvpc reference library, giving you the benefits of cross-platform libraries but without forcing you to use a foreign-looking GUI: there are built-in native GUIs for each platform that can be expanded using the platform’s normal GUI toolkit, not a custom one.

Because of this you will be able to use your existing .NET libraries and applications, and in most cases port substantial portions of them to be cross-platform.

You can access the full native API safely when you are working on a platform-specific enhancement, and know that you will be protected from those APIs when working in the portable code.

Using Rapid Prototyping, which is built into the Mvpc design from the ground up, you will be able to create working applications in days, not months, and see how applications work on every platform from day one.

Using our Cross-Build-it service you will be able to produce native clients for every platform from a single code base as well as a separate fully functional web site version.

Because your apps are native, you can publish them through the Apple iTunes App Store, Windows Store, Mac Store, and the various Android stores.  This opens your apps up for maximum discoverability and the most native feel possible.

Mvpc Embraces Existing Design Patterns

Once developers realise the benefits of Mvpc and want to use it for themselves, I often get asked “how does Mvpc compare to MVC?”, or told “I’m used to MVVM and I don’t want to waste all the time I’ve put into learning the pattern and toolkits.”

Many platforms use a GUI toolkit that not only works better with a specific design pattern, but nowadays requires it.  With Mvpc we don’t try to fight against this, and we don’t want to lose or ignore existing skills that you have worked hard to gain.

The Rapid Prototype and native GUIs we provide are all built using the native design pattern of each GUI.  You will also use the toolkit and pattern you are already familiar with for the GUIs you provide yourself.  This is possible because Mvpc is completely compatible with the MVC, MVP, and MVVM design patterns.  If you are used to working in any of those patterns, you can continue to code pretty much as you have before.  With just a little bit of learning and thinking, the extra features of Mvpc become available to you, and the transition into fully portable software will be very natural.

As you can imagine, at this point the developer talking to me lets out a big sigh of relief, knowing Mvpc is not about forgetting what you know; it’s about learning a new way of positioning the puzzle pieces you already have.

Why is Mvpc Succeeding Where Others Failed?

I believe Mvpc is succeeding because it hasn’t tried to compete with others, but views all past approaches as stepping stones that, when positioned correctly, form part of the best possible solution.

Rather than providing a strange new design pattern and asking you to forget your past learning, we’ve designed the new pattern so it follows all the core principles of the leading design patterns developers already use, but helps position the logic correctly to automatically create portable software.

Rather than joining the decade-long argument over the pros and cons of native applications vs Responsive Web Design and browser-based solutions, we claim both sides are right, and ensure your application is available both natively and as a responsive web application.

Rather than creating a new language, or asking you to learn an obscure toolkit or pre-processor built on top of C or C++, our reference implementation library adopts Microsoft’s flagship development environment, the .NET Framework.  We give you the choice of language, with C#, VB.NET, J#, F#, and managed C++ all supported, and thereby keep all the existing libraries for those languages and platforms available to you.

Rather than asking you to carefully pick from a list of pros and cons and hope your customer or developer agrees, we put the choice back where it belongs: you develop the way you like to develop, and your customer uses the software however, and on whatever device, they want to.

If you stop and think about it, we shouldn’t be asking why Mvpc is succeeding; the real question is why it has taken so long for our industry to understand two very simple principles:

First, the end user should have the choice of what they do with their software, and the developer should also have the choice of how they develop the software.  These two things are not mutually exclusive as we have been told in the past, but part of a win-win solution.

Second, we shouldn’t compete with failed past attempts based on their “cons”.  Instead we should combine them based on their “pros”, so they can make up for each other’s weaknesses.

You see Mvpc, as great as it is, isn’t really some magic new way of doing things.  It’s the simple idea of letting you make a choice that’s right for you, and making sure your choice doesn’t restrict somebody else’s freedom to choose in the future.