How to Avoid Producing Legacy Code at the Speed of Typing

This blog post provides a recipe on how to avoid producing legacy code at the speed of typing by using a proper architecture and unit testing.


As an enterprise software developer, you are in a constant fight against producing legacy code – code that is no longer worthy of maintenance or support. You are constantly struggling to avoid re-writing stuff repeatedly in a faint hope that next time you will get it just right.

The characteristics of legacy code are, among others, poor design and architecture and dependencies on obsolete frameworks or 3rd-party components. Here are a few typical examples that you might recognize:

You and your team produced a nice, feature-rich Windows application. Afterwards, you realize that what was really needed was a browser or mobile application. That is when you recognize the tremendous effort it would take to provide an alternative UI to your application, because you have embedded too much domain functionality within the UI itself.

Another scenario might be that you made a backend that is deeply entangled with a particular ORM – such as NHibernate or Entity Framework – or highly dependent on a certain RDBMS. At some point, you want to change backend strategy to avoid the ORM and use file-based persistence, but then you realize it is practically impossible because your domain functionality and the data layer are tightly coupled.

In both of the above scenarios, you are producing legacy code at the speed of typing.

However, there is still hope. By adopting a few simple techniques and principles, you can change this doomed pattern of yours forever.

The Architectural Evolution

In the following, I will describe 3 phases in a typical architectural evolution for a standard enterprise software developer. Almost any developer will make it to phase 2, but the trick is to make it all the way through phase 3, which will eventually turn you into an architectural Ninja.

Evolution to Ninja

Phase 1 – Doing it Wrong

Most developers have heard about layered architecture, so very often the first attempt at an architecture will look something like this – two layers with separated responsibilities for frontend and backend functionality:


So far so good, but quite soon you will realize that it is a tremendous problem that the domain logic of your application is entangled with the platform-dependent frontend and backend.

Phase 2 – A Step Forward

Thus, the next attempt is to introduce a middle layer – a domain layer – comprising the true business functionality of your application:


This architecture looks deceptively well-structured and de-coupled. However, it is not. The problem is the red dependency arrow indicating that the domain layer has a hard-wired dependency on the backend – typically because, in the domain layer, you are creating instances of backend classes using the new keyword (C# or Java). The domain layer and the backend are tightly coupled. This has numerous disadvantages:

  • The domain layer functionality cannot be reused in isolation in another context. You would have to drag along its dependency, the backend.
  • The domain layer cannot be unit tested in isolation. You would have to involve the dependency, the backend.
  • One backend implementation (using for example a RDBMS for persistence) cannot easily be replaced by another implementation (using for example file persistence).

All of these disadvantages dramatically reduce the potential lifetime of the domain layer. That is why you are producing legacy code at the speed of typing.

Phase 3 – Doing it Right

What you have to do is actually quite simple. You have to turn the direction of that red dependency arrow around. It is a subtle difference, but one that makes all the difference:


This architecture adheres to the Dependency Inversion Principle (DIP) – one of the most important principles of object-oriented design. The point is that once this architecture is established – once the direction of that dependency arrow is turned around – the domain layer dramatically increases its potential lifetime. UI requirements and trends may switch from Windows to browsers or mobile devices, and your preferred persistence mechanism might change from being RDBMS-based to file-based, but all of that is now relatively easily exchangeable without modifying the domain layer, because at this point both the frontend and the backend are de-coupled from the domain layer. Thus, the domain layer becomes a code library that you theoretically never ever have to replace – at least as long as your business domain and overall programming framework remain unchanged. Now, you are efficiently fighting that legacy code.

On a side note, let me give you one simple example of how to implement DIP in practice:

Maybe you have a product service in the domain layer that can perform CRUD operations on products in a repository defined in the backend. This very often leads to a dependency graph like the one shown below, with the dependency arrow pointing in the wrong direction:


This is because somewhere in the product service you will “new” up a dependency to the product repository:
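In sketch form, the tight coupling could look something like this (the member names are illustrative, not the exact original code):

    public class ProductService
    {
        private readonly ProductRepository repository;

        public ProductService()
        {
            // The domain layer creates its own backend dependency – tight coupling
            this.repository = new ProductRepository();
        }

        public Product GetProduct(int id)
        {
            return this.repository.Get(id);
        }
    }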

To invert the direction of the dependency using DIP, you must introduce an abstraction of the product repository in the form of an IProductRepository interface in the domain layer and let the product repository be an implementation of this interface:


Now, instead of “newing” up an instance of the product repository in the product service, you inject the repository into the service through a constructor argument:
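A minimal sketch of the injected variant could look like this – the IProductRepository interface lives in the domain layer, while its concrete implementation lives in the backend:

    public class ProductService
    {
        private readonly IProductRepository repository;

        // The abstraction is injected; the domain layer no longer knows the concrete backend type
        public ProductService(IProductRepository repository)
        {
            this.repository = repository;
        }

        public Product GetProduct(int id)
        {
            return this.repository.Get(id);
        }
    }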

This is known as dependency injection (DI). I have previously explained this in much more detail in a blog post called Think Business First.

Once you have established the correct overall architecture, the objective of the fight against legacy code should be obvious: move as much functionality as you can into the domain layer. Make those frontend and backend layers shrink and make that domain layer grow fat:


A very convenient by-product of this architecture is that it makes it easy to establish unit tests of the domain functionality. Because of the de-coupled nature of the domain layer and the fact that all of its dependencies are represented by abstractions (such as an interface or an abstract base class), it is quite easy to establish fake objects of these abstractions and use them when establishing unit test fixtures. So it is “a walk in the park” to guard the entire domain layer with unit tests. You should strive for nothing less than 100% unit test coverage – making your domain layer extremely robust and solid as a rock. Which again will increase the lifetime of the domain layer.

You are probably starting to realize that not only traditional frontends or backends, but all other components – including the unit tests or, for example, an HTTP-based Web API – should act as consumers of the domain layer. Thus, it makes a lot of sense to depict the architecture as onion layers:


The outer layer components consume the domain library code – either by providing concrete implementations of domain abstractions (interfaces or base classes) or as a direct consumer of domain functionality (domain model and services).

However, still remember: the direction of coupling is always toward the center – toward the domain layer.

At this point, it might all seem a bit theoretical and, well…, abstract. Nevertheless, it does not take a lot to do this in practice. In another CodeProject article of mine, I have described and provided some sample code that complies with all of the principles in this article. The sample code is simple, yet very close to real production code.


Being an enterprise software developer is a constant battle to avoid producing legacy code at the speed of typing. To prevail, do the following:

  • Make sure all those dependency arrows point toward the central and independent domain layer by applying the Dependency Inversion Principle (DIP) and Dependency Injection (DI).
  • Constantly nourish the domain layer by moving as much functionality as possible into it. Make that domain layer grow fat and heavy while shrinking the outer layers.
  • Cover every single piece of functionality in the domain layer with unit tests.

Follow these simple rules and it will all come together. The code that you write will potentially have a dramatically longer lifetime than before because:

  • The domain layer functionality can be reused in many different contexts.
  • The domain layer can be made robust and solid as a rock with a 100% unit test coverage.
  • Implementations of domain layer abstractions (for example persistence mechanisms) can easily be replaced by alternative implementations.
  • The domain layer is easy to maintain.

How I came to love COM interoperability

Well, maybe the title of this post is slightly exaggerated, but this is the story about how I – despite strong reluctance to do so – successfully managed to expose a .NET library of mine to COM. Furthermore, during this process, my original .NET library became even better and I actually ended up kind of liking the additional COM API.

Obviously, like most other .NET developers, I have had to deal with the tremendous amount of unmanaged code out there. Even if I would rather avoid it, from time to time I have had to use the .NET interoperability services to consume some ActiveX component or other type of unmanaged legacy code. And I have learned to live with it. But why would someone ever think of exposing a nice and clean .NET library to a development platform (COM) that was deprecated decades ago?

Anyway, recently I found myself in a situation where I was left no choice. I had made this terrific .NET library with loads of nice functionality when the client required that the same functionality be made available through COM.

There are lots of explanations out there on the Internet on how to expose .NET assemblies to COM – for example this article on CodeProject. The reason why I bother to write this post anyway is that none of the resources that I have found on the Internet describes the exact approach that I eventually chose for my project. Also, I did not find any resources giving an overview of all the small challenges I met in the process. And as we all know, the devil is in the detail.

The Example Code

The example code for this post is based on the legendary “Hello World!” example. Compared to the normally very simple structure of such examples, my code might seem unnecessarily complicated for the simple task of displaying the famous text message, but it is still relatively simple and elegantly illustrates all of the challenges that I met when working with the real code base.

The central class in the example library is a Greeter class that has a dependency on an IMessageWriter instance, which is injected into the Greeter class through the constructor (yes, this is dependency injection in action):
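In sketch form, it could look like this (the exact Greet signature is illustrative):

    public class Greeter
    {
        private readonly IMessageWriter messageWriter;

        public Greeter(IMessageWriter messageWriter)
        {
            this.messageWriter = messageWriter;
        }

        public void Greet(GreetingType greetingType, string name)
        {
            // Use the injected writer to output the greeting
            // (greeting history bookkeeping omitted here)
            this.messageWriter.Write(string.Format("{0} {1}!", greetingType.Greeting, name));
        }
    }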

The IMessageWriter instance is used in the Greet method to write the message. The GreetingType decides the exact phrasing of the greeting (much more about this later). The IMessageWriter interface contains a single Write method:
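Something along these lines:

    public interface IMessageWriter
    {
        void Write(string message);
    }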

The example library comes with a single concrete implementation of the IMessageWriter interface – a ConsoleMessageWriter that writes a text message to the console:
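For example:

    public class ConsoleMessageWriter : IMessageWriter
    {
        public void Write(string message)
        {
            Console.WriteLine(message);
        }
    }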

From a console application the following code creates a silly greeting:
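Something like this (the GreetingType.Silly() factory method is explained later in this post):

    var greeter = new Greeter(new ConsoleMessageWriter());
    greeter.Greet(GreetingType.Silly(), "World");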

The Overall Approach

Now, let’s dig into the matter. Probably because of my initial reluctance to deal with COM interoperability at all, I decided to make a clear rule for myself – a self-imposed dogma, so to speak. I would under no circumstances “pollute” my original .NET library with any COM-related stuff such as COM-specific interfaces or any of the ComVisible, Guid or ClassInterface attributes. I would allow no references to the System.Runtime.InteropServices namespace whatsoever. Also, I would not accept major degradations of my original library. So I ended up with a project structure like this:


All the COM-specific stuff is encapsulated in the ClassLibrary.Interop assembly while my original ClassLibrary assembly remains a clean .NET library.

In the ClassLibrary.Interop assembly I explicitly define all the COM interfaces and decorate them with the Guid attribute:
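For the Greeter class, the explicitly defined COM interface could look roughly like this (the member list and the GUID are illustrative; generate your own GUIDs):

    using System.Runtime.InteropServices;

    namespace ClassLibrary.Interop
    {
        [Guid("11111111-1111-1111-1111-111111111111")] // placeholder – generate your own GUID
        public interface IGreeter
        {
            void Initialize(IMessageWriter messageWriter);
            void GreetNeutral(string name);
            string[] GreetingHistory { get; }
        }
    }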

Furthermore I create new classes inheriting from the original ones and implementing the corresponding explicitly defined COM interface. I decorate the classes with the Guid attribute and the ClassInterface attribute with the ClassInterfaceType.None parameter. The ClassInterfaceType.None parameter prevents the class interface from being automatically generated when the class metadata is exported to a COM type library. So in the below example, only the members of the IGreeter interface will be exposed:
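In sketch form (assuming the IMessageWriter abstraction is also exposed through the interop assembly, and using placeholder GUIDs):

    using System.Linq;
    using System.Runtime.InteropServices;
    using ClassLibrary;

    namespace ClassLibrary.Interop
    {
        [Guid("22222222-2222-2222-2222-222222222222")] // placeholder – generate your own GUID
        [ClassInterface(ClassInterfaceType.None)]
        public class Greeter : ClassLibrary.Greeter, IGreeter
        {
            public void Initialize(IMessageWriter messageWriter)
            {
                MessageWriter = messageWriter; // protected property on the base class
            }

            public void GreetNeutral(string name)
            {
                Greet(name); // forwards to the original neutral Greet overload
            }

            public new string[] GreetingHistory
            {
                get { return base.GreetingHistory.ToArray(); }
            }
        }
    }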

I don’t bother decorating the individual classes with the ComVisible attribute because the whole point is that in the ClassLibrary.Interop assembly I only deal with .NET types that I want to expose for COM, so instead I declare this once and for all in the AssemblyInfo file:
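That is, an assembly-level attribute along these lines:

    using System.Runtime.InteropServices;

    // Make all public types in this assembly visible to COM
    [assembly: ComVisible(true)]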

Dealing with the Challenges

As mentioned earlier, I met a few challenges on the way – mostly because of .NET/C# features that are not supported in COM. In the following, I will describe the individual challenges and the solutions to them.

Constructors with parameters

COM does not support constructors with parameters. COM requires default (parameterless) constructors.

As shown earlier, the Greeter class uses dependency injection and requires an instance of an IMessageWriter interface provided through its constructor:

So what I did was to create an additional protected parameterless default constructor and a protected MessageWriter property. The fact that these two additional members are protected is an important point, because then I can use them from my Greeter extension class in the ClassLibrary.Interop assembly to provide COM interoperability while still hiding these members from “normal” use of the Greeter class within the .NET Framework – thus forcing the consumer to use the public constructor:
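A simplified sketch of the reworked Greeter class:

    public class Greeter
    {
        // Public constructor for "normal" .NET consumers
        public Greeter(IMessageWriter messageWriter)
        {
            MessageWriter = messageWriter;
        }

        // Protected default constructor – only reachable from the COM-enabling subclass
        protected Greeter()
        {
        }

        // Protected property – allows the subclass to set the dependency after construction
        protected IMessageWriter MessageWriter { get; set; }

        // Greet methods etc. as before
    }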

Then I can introduce an Initialize method in the COM interface of the Greeter class and use this method to set the MessageWriter property.

So now, from a COM consumer I will have to first create the Greeter object using the default constructor and then call the Initialize method.

Overloaded methods

Overloaded methods are not supported in COM. In the Greeter class I do have two Greet methods with different signatures – one always making a neutral greeting and one where I can provide a specific greeting type as a parameter:
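Roughly like this (the greeting phrases and formatting are illustrative):

    public void Greet(string name)
    {
        // Neutral greeting
        Greet(GreetingType.Neutral(), name);
    }

    public void Greet(GreetingType greetingType, string name)
    {
        MessageWriter.Write(string.Format("{0} {1}!", greetingType.Greeting, name));
    }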

The only way to deal with this problem is to introduce different names in the COM interface:
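For example, in the IGreeter COM interface the two overloads could be given distinct names like these (IGreetingType stands for the COM-visible greeting type abstraction defined in the interop assembly, shown later in this post):

    // In the IGreeter COM interface, the two Greet overloads get distinct names:
    void GreetNeutral(string name);
    void GreetWithType(IGreetingType greetingType, string name);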


Generics

.NET generics are gibberish to COM. So if you have made any generic classes or methods, or if you use any of the built-in generic types, you have to be a bit creative. In the Greeter class I am using the generic ReadOnlyCollection<> to keep the greeting history:
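Something along these lines, as members of the original Greeter class:

    private readonly List<string> greetingHistory = new List<string>();

    public ReadOnlyCollection<string> GreetingHistory
    {
        get { return greetingHistory.AsReadOnly(); }
    }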

The solution to this problem is pretty straightforward. Simply let the Greeter extension in the ClassLibrary.Interop assembly return an array of strings instead:
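For example, by hiding the base property with an array-returning one (ToArray() requires a using for System.Linq):

    public new string[] GreetingHistory
    {
        get { return base.GreetingHistory.ToArray(); }
    }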


Structs and enums

One challenge that I met is in a different category than the others. This challenge was not due to missing COM support for certain .NET/C# features. Rather, it was due to my self-imposed dogma about keeping my original .NET library free of COM-related stuff. As I wanted to extend the original .NET types with COM interoperability using inheritance, only inheritable types could be extended. .NET types like struct and enum are not inheritable.

So I had to change a couple of structs to classes in my original library, which didn’t really bother me too much.

The enums, however, were a bit trickier. What I did was to introduce my own Enumeration class instead of using enums. This was one of the changes that I actually consider a major improvement to my original code. I have always found it annoying that enums cannot be extended with, for example, a display name (including spaces). By introducing an Enumeration class, exactly this can be done:
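A minimal sketch of such an enumeration base class (modeled on the well-known enumeration-class pattern; the exact members are illustrative):

    public abstract class Enumeration
    {
        protected Enumeration(int value, string displayName)
        {
            Value = value;
            DisplayName = displayName;
        }

        public int Value { get; private set; }

        // Unlike a plain enum, an enumeration class can carry a friendly
        // display name – including spaces
        public string DisplayName { get; private set; }

        public override string ToString()
        {
            return DisplayName;
        }
    }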

The whole discussion about using enumeration classes instead of enums is worth a post by itself, but another advantage worth mentioning is that this approach can reduce the number of switch statements that inevitably follow from the usage of enums. Look how elegantly the greeting text, in the form of the Greeting property, has become a detail of a greeting type:
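In sketch form:

    public abstract class GreetingType : Enumeration
    {
        protected GreetingType(int value, string displayName)
            : base(value, displayName)
        {
        }

        // The greeting phrase is a detail of the concrete greeting type –
        // no switch statement needed in the Greeter
        public abstract string Greeting { get; }
    }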

Now the individual greeting types can be defined, e.g. a neutral greeting type:
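For example (the phrases are made up):

    public class GreetingTypeNeutral : GreetingType
    {
        public GreetingTypeNeutral() : base(1, "Neutral greeting") { }

        public override string Greeting
        {
            get { return "Hello"; }
        }
    }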

Or a silly greeting type:
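    public class GreetingTypeSilly : GreetingType
    {
        public GreetingTypeSilly() : base(3, "Silly greeting") { }

        public override string Greeting
        {
            get { return "Howdy-doody"; }
        }
    }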

Static methods

The GreetingType enumeration class brings us to the last of the challenges. In the GreetingType enumeration class I define 3 static methods – one for each of the greeting types.
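They could look something like this, as static factory methods on the GreetingType class (the names are illustrative):

    public static GreetingType Neutral()
    {
        return new GreetingTypeNeutral();
    }

    public static GreetingType Casual()
    {
        return new GreetingTypeCasual();
    }

    public static GreetingType Silly()
    {
        return new GreetingTypeSilly();
    }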

But unfortunately static methods are not supported in COM. So, for the COM interface I have to expose the 3 greeting type classes instead – here illustrated by the GreetingTypeCasual class:
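A sketch of how that could look in the interop assembly, together with an IGreetingType COM interface (placeholder GUIDs, illustrative members):

    using System.Runtime.InteropServices;

    namespace ClassLibrary.Interop
    {
        [Guid("33333333-3333-3333-3333-333333333333")] // placeholder – generate your own GUID
        public interface IGreetingType
        {
            string DisplayName { get; }
            string Greeting { get; }
        }

        [Guid("44444444-4444-4444-4444-444444444444")] // placeholder – generate your own GUID
        [ClassInterface(ClassInterfaceType.None)]
        public class GreetingTypeCasual : ClassLibrary.GreetingTypeCasual, IGreetingType
        {
            // The inherited public DisplayName and Greeting members satisfy IGreetingType
        }
    }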

This is why I had to make the original greeting types public. If I wasn’t going to expose my assembly to COM, I would have made the GreetingTypeNeutral (and the other greeting types) internal – or even private classes within the GreetingType class.

COM Registration

When all challenges are overcome and the ClassLibrary.Interop assembly is ready, it must be properly registered.

In my ClassLibrary.Interop project I have checked the “Register for COM interop” option under the project’s Build properties. This will do the trick on your own machine.

If you want to deploy the COM version of the library to other machines, you have to use the assembly registration tool RegAsm. If you call it from a Windows batch file placed in the same folder as the assembly itself, you can for example use the following syntax:
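Something along these lines (the framework version and file name depend on your project; /tlb exports and registers the type library, /codebase records the assembly location so it does not have to be in the GAC):

    "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe" "%~dp0ClassLibrary.Interop.dll" /tlb /codebase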

This approach requires that the assembly is signed with a strong name (even if not put in the GAC).

My guess is that most COM consumers run in 32 bit. If you want to register for 64 bit consumers, you should call the 64 bit version of RegAsm found in c:\Windows\Microsoft.NET\Framework64.

VBA Sample

And finally, here is some sample code using the COM API from a Visual Basic for Applications (VBA) macro:


This post describes an approach for exposing a .NET assembly to COM by handling all the COM-specifics in a dedicated ClassLibrary.Interop assembly without having to compromise the original ClassLibrary assembly.

Exposing .NET assembly functionality to COM does not necessarily need to be a hassle. Yes, there are indeed some challenges to overcome and, for sure, my personal preference will always be to use the .NET assembly directly. However, I do see some advantages in providing a dedicated COM API acting as a sort of “higher level” scripting API for other than hardcore .NET programmers. I kind of like the way that the explicitly defined COM interfaces in the ClassLibrary.Interop assembly act as a facade to the full functionality, and how, for example, abstract base classes and interfaces are hidden from the COM API user.

The source code can be downloaded from my CodeProject article.

Lightweight Domain Services Library

If you have more than a few years of experience within domain-driven design (DDD), most certainly, you have recognized some kind of overall pattern in the type of problems you have to solve – regardless of the type of applications you are working on. I certainly know that I have.

No matter whether you develop desktop applications, web applications or web APIs, you will almost always find yourself in a situation where you have to establish a mechanism for creating, persisting and maintaining the state of various entities in the application domain model. So, every time you start up a new project, you have to do a lot of yak shaving to establish this persistence mechanism, when what you really want to do is to work on establishing the domain model – the actual business functionality of your application.

After several iterations through various projects, I have established a practice that works for me in almost any situation. This practice allows you to abstract the entity persistence (the yak shaving…) so that you can easily isolate the nitty-gritty implementation details of this and focus on developing your genuine business functionality. Eventually, of course, you have to deal with the implementation of the persistence layer, but the value of being able to develop – and not least test – your domain model in isolation without having to care about the persistence details is tremendous. Then, you can start out with developing and testing your domain model against fake repositories. Whether you eventually end up making simple file-based repositories or decide to go full-blown RDBMS doesn’t matter at this point in time.

I have digested this practice of mine into something I call a Domain Services Library and written a CodeProject article about this. This framework is super lightweight comprising only a few plain vanilla C# classes. No ORM is involved – the repositories can be anything from in-memory objects to RDBMS. No 3rd party dependencies whatsoever. Source code download is provided in the article.

REST API versioning

In any client/server system, managing changes in the server can be challenging. RESTful web services are certainly no exception – especially publicly available APIs. The consumers of a RESTful web service (the clients) rely on the web service not to break the contract.

Normally, the main concern will be to maintain backward compatibility. This means that if you update the server you must ensure that existing consumers will still work flawlessly.

There are several techniques to avoid disruptive changes and thus maintain compatibility – for example continuing to support existing query parameter formats, even if new ones are introduced, treating new query parameters as optional, not removing or renaming fields from resource representation bodies etc.

You should always do everything possible to avoid versioning of your web service (Cool URIs don’t change). However, even when doing your utmost, you might eventually find yourself in a situation where maintaining compatibility is impossible and some kind of versioning of your web service is inevitable.

Once you reach this point, there are two principal techniques to choose from:

  • URI versioning
  • Media type versioning

URI versioning involves including a version number in the URI. You do not necessarily have to version every resource of your service. Here is an example:
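For example (the host name and version numbers are made up):

    https://api.example.com/v1/users/john_doe
    https://api.example.com/v2/users/john_doe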

URI versioning is definitely regarded as good practice, and many well-known RESTful APIs use URI versioning – for example LinkedIn and Groupon. However, the solution I will describe in more detail is the other one: media type versioning.

Media type versioning is based on content negotiation using the HTTP headers Accept and Content-Type. The idea is that the web service has defined, for each incoming request, which versions of a resource representation (a.k.a. the media type) it can consume and/or produce.

Likewise, when the consumer makes a request, it must include headers defining the version of the resource representation that it provides (if any) and the version of the resource representation that it expects to receive back (if any). Then the web service can easily detect whether it supports the provided/requested version of a representation. If not, it can respond with a well-defined error code.

The media types must be defined as so-called vendor-specific media types. Vendor-specific media types are specialized alternatives to the standard media types such as application/xml, application/json etc. A vendor-specific media type can comprise information about the resource type (e.g. user), the version (e.g. v2) and the format (e.g. JSON):
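For example (using a made-up vendor name):

    application/vnd.mycompany.user-v1+json
    application/vnd.mycompany.user-v2+json
    application/vnd.mycompany.user-v2+xml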

Each GET request must comprise an Accept header defining the resource representation that the consumer expects to get back. In the below example, a request to get a representation of the user called john_doe is shown:
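Something like this (host name and media type are illustrative):

    GET /users/john_doe HTTP/1.1
    Host: api.example.com
    Accept: application/vnd.mycompany.user-v2+json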

If the web service cannot provide the resource representation as stated in the Accept header, according to RFC2616, a response with HTTP error code 406 (Not Acceptable) shall be returned.

Each POST and PUT request providing a request body with a resource representation must provide a Content-Type header defining the resource representation. In the below example, a request to update the profile of a user called john_doe is shown:
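Along these lines (again with made-up names and fields):

    PUT /users/john_doe HTTP/1.1
    Host: api.example.com
    Content-Type: application/vnd.mycompany.user-v2+json

    { "userName": "john_doe", "email": "john.doe@example.com" }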

If the web service does not accept the resource representation as stated in the Content-Type header, according to RFC2616, a response with HTTP error code 415 (Unsupported Media Type) shall be returned.

Here is a recipe on how this versioning approach can be easily applied in a Jersey (Java) based solution:

For each resource representation class, include a static field defining the vendor-specific media type. For a User class it can look for example like this:
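A sketch (the constant name and vendor string are illustrative):

    public class User {

        // Vendor-specific media type for this representation: resource type "user", version 2, JSON format
        public static final String MEDIA_TYPE_JSON = "application/vnd.mycompany.user-v2+json";

        private String userName;
        private String email;

        // Getters and setters omitted
    }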

In the web service class, use this static field when setting the media type:
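For example, inside the Jersey resource class (the userService field and method names are assumed):

    @GET
    @Path("{userName}")
    @Produces(User.MEDIA_TYPE_JSON)
    public User getUser(@PathParam("userName") String userName) {
        return userService.get(userName);
    }

    @PUT
    @Path("{userName}")
    @Consumes(User.MEDIA_TYPE_JSON)
    public Response updateUser(@PathParam("userName") String userName, User user) {
        userService.update(user);
        return Response.noContent().build();
    }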

The cool thing is that Jersey automatically responds with the correct HTTP error codes (406 and 415) if the media types (the versions) don’t match.

It’s my personal experience that the majority of contract-breaking changes in RESTful web services come from breaking changes in the resource representations rather than in the resource model of the web service – i.e. the URIs (the resource identifiers) of your service. This is why, in many cases, media-type versioning solves most of your versioning challenges – while keeping the URIs of your service intact.

REST with Java in practice

RESTful web services are generally hyped these days – and for many good reasons: among others, the fact that they are easily consumed by almost any kind of client – browsers, mobile apps, desktop apps etc.

One technology stack for building RESTful services in a Java environment could comprise Jersey, Gson and Guice (nice alliteration, by the way…). Without prior knowledge of any of these technologies, my team and I managed to successfully establish a RESTful web service consumed by, for example, this website.

I will briefly introduce these 3 frameworks:

Jersey and JAX-RS

Jersey is one of several implementations of JAX-RS – the Java API for RESTful web services.

Jersey provides a servlet that analyses an incoming HTTP request by scanning underlying classes for RESTful resources, and selecting the correct class and method to respond to this request. The RESTful resources are defined by decorating classes and methods with the appropriate JAX-RS annotations.

If you for example have a UserService class that you want to expose through a RESTful API, you can wrap it in a UserWebService class and decorate this class and its methods with JAX-RS annotations:
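A sketch of such a wrapper class (the UserService API is assumed):

    import java.util.List;

    import javax.inject.Inject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("user")
    public class UserWebService {

        private final UserService userService;

        @Inject
        public UserWebService(UserService userService) {
            this.userService = userService;
        }

        @GET
        @Path("list")
        @Produces(MediaType.APPLICATION_JSON)
        public List<User> GetUserList() {
            // Pass-through to the underlying domain service
            return userService.getAll();
        }
    }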

The @Path annotation specifies on which (relative) URL path this method will be invoked. The @GET annotation specifies that the HTTP method GET has to be used, and the @Produces annotation declares the format of the response.

So, the following http-request:

GET http://localhost:8080/myservice/api/user/list

will invoke the GetUserList() method, which basically is a pass-through to the UserService.getAll() method, and return a response with a list of users in JSON format.

JSON support using Gson

One of the decisions you have to make when establishing a RESTful service is which representation formats (media types) to support. Very often JSON will be the obvious choice – especially if the services are to be consumed by browser-based clients which typically use JavaScript.

In order to produce and consume JSON you need a serialization mechanism that turns a Java object into a JSON document and vice versa (under-the-hood the representation bodies will very often be POJO objects). Our choice was to use Google Gson for this purpose.

You simply need to implement the two interfaces MessageBodyWriter and MessageBodyReader, and decorate the implementing classes with the JAX-RS @Provider annotation. Here is the writer:
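A minimal Gson-based writer could look like this (a sketch, not the exact original code):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.lang.annotation.Annotation;
    import java.lang.reflect.Type;
    import java.nio.charset.StandardCharsets;

    import javax.ws.rs.Produces;
    import javax.ws.rs.WebApplicationException;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.MultivaluedMap;
    import javax.ws.rs.ext.MessageBodyWriter;
    import javax.ws.rs.ext.Provider;

    import com.google.gson.Gson;

    @Provider
    @Produces(MediaType.APPLICATION_JSON)
    public class GsonMessageBodyWriter implements MessageBodyWriter<Object> {

        private final Gson gson = new Gson();

        @Override
        public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
            return true;
        }

        @Override
        public long getSize(Object object, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
            return -1; // the size cannot be determined in advance
        }

        @Override
        public void writeTo(Object object, Class<?> type, Type genericType, Annotation[] annotations,
                            MediaType mediaType, MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream)
                throws IOException, WebApplicationException {
            // Serialize the Java object to JSON and write it to the response stream
            Writer writer = new OutputStreamWriter(entityStream, StandardCharsets.UTF_8);
            gson.toJson(object, genericType, writer);
            writer.flush();
        }
    }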

And here is the reader:
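Again a sketch of what a Gson-based reader could look like:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.lang.annotation.Annotation;
    import java.lang.reflect.Type;
    import java.nio.charset.StandardCharsets;

    import javax.ws.rs.Consumes;
    import javax.ws.rs.WebApplicationException;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.MultivaluedMap;
    import javax.ws.rs.ext.MessageBodyReader;
    import javax.ws.rs.ext.Provider;

    import com.google.gson.Gson;

    @Provider
    @Consumes(MediaType.APPLICATION_JSON)
    public class GsonMessageBodyReader implements MessageBodyReader<Object> {

        private final Gson gson = new Gson();

        @Override
        public boolean isReadable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
            return true;
        }

        @Override
        public Object readFrom(Class<Object> type, Type genericType, Annotation[] annotations,
                               MediaType mediaType, MultivaluedMap<String, String> httpHeaders, InputStream entityStream)
                throws IOException, WebApplicationException {
            // Deserialize the JSON request body into the expected Java type
            Reader reader = new InputStreamReader(entityStream, StandardCharsets.UTF_8);
            return gson.fromJson(reader, genericType);
        }
    }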

Guice as DI container

In a previous post I showed how to use Guice as a DI container in a Jersey application. So, what is left now is to bind the Gson writer and reader – as well as other types, such as the RESTful resource classes – in the Guice injector:
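For example, in a JerseyServletModule (the module and class names are illustrative):

    import com.sun.jersey.guice.JerseyServletModule;
    import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;

    public class WebApiModule extends JerseyServletModule {

        @Override
        protected void configureServlets() {
            // RESTful resource classes
            bind(UserWebService.class);

            // Gson-based JSON providers
            bind(GsonMessageBodyWriter.class);
            bind(GsonMessageBodyReader.class);

            // Let the Guice-aware Jersey container handle the requests
            serve("/api/*").with(GuiceContainer.class);
        }
    }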

To summarize, a well-proven technology stack for implementing a RESTful web service in Java comprises Jersey as the REST framework, Google Guice as the DI container to support dependency injection and Google Gson for JSON serialization and de-serialization of the representation body objects. The service can be deployed on, for example, a Glassfish server.

Implementing Role-Based Authorization using Guice

Besides magically composing the entire object graph for your application, one of the main advantages of introducing a DI container is its ability to perform runtime interception. Because you are delegating the responsibility of composing objects to a 3rd party component (the DI container) you also give this container the possibility to intercept a call from the consumer to a service and execute some additional code before passing the call on to the service itself.

This is a perfect way to implement cross-cutting concerns (aspects) such as logging, validation, authorization etc. so that the underlying services don’t have to care about this and can easily follow the Single Responsibility Principle.

In this post, I will show you an example of how to implement role-based authorization using aspect oriented programming (AOP) in Guice.

The example is a RESTful web service built using Jersey. The web service is using the basic authentication scheme, which forces the client to authenticate itself with a user ID and a password for each request. The client must compute a Base64 encoding of <user>:<password> and include the value in an Authorization header in the request.

The web service operates with 4 different roles: Guest, User, Editor and Admin.

The roles are meant to be hierarchical in the sense that e.g. an editor inherits all the privileges of a guest and a user. This is done by defining a RoleSet enum with an abstract method called getRoles(), which each enum constant has to override.

So each user does not have a role, but a role set. Through the hasRole() method it can be queried whether a user has the privileges of a particular role:
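A sketch of such an enum (the constant and method names are illustrative):

    import java.util.EnumSet;

    public enum RoleSet {

        Guest {
            @Override
            protected EnumSet<RoleSet> getRoles() {
                return EnumSet.of(Guest);
            }
        },
        User {
            @Override
            protected EnumSet<RoleSet> getRoles() {
                return EnumSet.of(Guest, User);
            }
        },
        Editor {
            @Override
            protected EnumSet<RoleSet> getRoles() {
                return EnumSet.of(Guest, User, Editor);
            }
        },
        Admin {
            @Override
            protected EnumSet<RoleSet> getRoles() {
                return EnumSet.of(Guest, User, Editor, Admin);
            }
        };

        protected abstract EnumSet<RoleSet> getRoles();

        // Does this role set include the privileges of the given role?
        public boolean hasRole(RoleSet role) {
            return getRoles().contains(role);
        }
    }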

Now, to setup the role-based authorization, you need to do the following:

  1. Create a new annotation
  2. Implement the method interceptor
  3. Decorate the methods that must be protected with the new annotation
  4. Register the interceptor

Creating the annotation

First, you need to define an annotation that can be used to decorate the methods that you want protected by role-based authorization:
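Something along these lines (the element name is illustrative):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Authorize {
        // The role required to invoke the decorated method
        RoleSet role();
    }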

Implementing the interceptor

The interceptor itself is made by implementing the org.aopalliance.intercept.MethodInterceptor interface:
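A condensed sketch of such an interceptor (the UserService and User APIs are assumed; error handling is simplified):

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    import javax.ws.rs.WebApplicationException;
    import javax.ws.rs.core.HttpHeaders;
    import javax.ws.rs.core.Response;

    import org.aopalliance.intercept.MethodInterceptor;
    import org.aopalliance.intercept.MethodInvocation;

    import com.google.inject.Inject;

    public class AuthorizationInterceptor implements MethodInterceptor {

        @Inject
        private UserService userService; // used to look up users and verify passwords (assumed API)

        @Override
        public Object invoke(MethodInvocation methodInvocation) throws Throwable {
            // The role required by the intercepted method
            RoleSet requiredRole = methodInvocation.getMethod().getAnnotation(Authorize.class).role();

            // Find the HttpHeaders argument of the intercepted method
            HttpHeaders httpHeaders = null;
            for (Object argument : methodInvocation.getArguments()) {
                if (argument instanceof HttpHeaders) {
                    httpHeaders = (HttpHeaders) argument;
                }
            }
            if (httpHeaders == null) {
                throw new WebApplicationException(Response.Status.UNAUTHORIZED);
            }

            // Decode the Basic Authorization header: Base64 of <user>:<password>
            String authorization = httpHeaders.getRequestHeader(HttpHeaders.AUTHORIZATION).get(0);
            String decoded = new String(Base64.getDecoder().decode(authorization.replace("Basic ", "")),
                    StandardCharsets.UTF_8);
            String userName = decoded.split(":")[0];
            String password = decoded.split(":")[1];

            User user = userService.get(userName);
            if (user == null || !user.getPassword().equals(password)) {
                throw new WebApplicationException(Response.Status.UNAUTHORIZED);
            }
            if (!user.getRoleSet().hasRole(requiredRole)) {
                throw new WebApplicationException(Response.Status.FORBIDDEN);
            }

            // All good – let the intercepted method proceed
            return methodInvocation.proceed();
        }
    }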

Through the methodInvocation variable, the interceptor has the opportunity to inspect the call: the method, its arguments, and the receiving instance. In this case, you will inspect the httpHeaders argument to get and parse the Authorization header of the http request. The Authorization header includes a Base64 encoding of <user>:<password>, so you can decode this string and extract the username and password of the caller. The required role is extracted from the @Authorize annotation. If the calling user does not have the required role, or if the provided password is wrong, an appropriate exception is raised. Otherwise, the intercepted method is allowed to proceed.

Decorating the methods

The usage of the @Authorize annotation is straightforward: simply decorate the method and declare the required role in the annotation parameter.
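For example (an illustrative Jersey resource method; the HttpHeaders argument is what the interceptor inspects):

    @GET
    @Path("list")
    @Produces(MediaType.APPLICATION_JSON)
    @Authorize(role = RoleSet.Editor)
    public List<User> GetUserList(@Context HttpHeaders httpHeaders) {
        return userService.getAll();
    }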

Register the interceptor

Finally, the interceptor needs to be registered. You need to create matchers for the classes and methods to be intercepted. In this case you must match any class – but only the methods decorated with the @Authorize annotation. Because you need to inject dependencies into the interceptor, use requestInjection() alongside the standard bindInterceptor() call.
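Inside the configure() method of your module, that could look like this:

    AuthorizationInterceptor authorizationInterceptor = new AuthorizationInterceptor();

    // Give Guice the chance to inject the interceptor's own dependencies
    requestInjection(authorizationInterceptor);

    // Intercept any class, but only methods carrying the @Authorize annotation
    bindInterceptor(Matchers.any(), Matchers.annotatedWith(Authorize.class), authorizationInterceptor);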

As for all other calls to Guice methods, this code must reside in the bootstrapper component of the application where the whole object graph is wired up.

Now, if a method decorated with the @Authorize annotation is called, the authorizationInterceptor will be executed before the method itself. This is Guice and AOP in action.

Dependency Injection with Java using Guice

I generally code in .NET, and in a previous post I described how to use Microsoft Unity as the DI container in an ASP.NET MVC project. Also, the whole inspiration to dig into DI came from the book Dependency Injection in .NET. However, the first “real-life” project where I decided to let DI be the driving design principle happened to be a Java project…

Anyway, that constraint turned out to be no problem whatsoever – thanks to the fact that the above mentioned book reaches way beyond the .NET framework in the description of DI techniques, and the fact that Google provides the terrific DI container Guice for Java.

One of the golden rules of DI is not to “new” up objects. Guice introduces the @Inject annotation as an alternative to the new keyword. You can think of @Inject as the new new.

To prepare for constructor injection, you have to add the @Inject annotation to your constructor:
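For example (the IRepository<User> abstraction and the getAll() method are illustrative):

    public class UserService {

        private final IRepository<User> repository;

        @Inject
        public UserService(IRepository<User> repository) {
            this.repository = repository;
        }

        public List<User> getAll() {
            return repository.getAll();
        }
    }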

Then, during object composition, all of the dependencies of that constructor will be automatically filled in by Guice.

You might argue that by adding this @Inject annotation, you add a dependency in your UserService class to Guice itself. However, Guice does support the standard JSR 330 annotations, so you actually don't need to introduce Guice-specific annotations in your code at all.

When composing the object graph, Guice uses bindings to map types to their actual implementations. The bindings define how dependencies are resolved during object composition. For example, to tell Guice which implementation to use for the IRepository<User> interface, you will need a linked binding. The below example maps the IRepository<User> interface to the UserRepository class using the to() clause.
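Inside the configure() method of a module, that could look like this:

    bind(new TypeLiteral<IRepository<User>>() {}).to(UserRepository.class);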

Note that because IRepository<User> is a generic interface, an anonymous subclass of TypeLiteral must be used in the declaration.

Now that the IRepository<User> interface mapping is in place, you can use an untargeted binding to bind the concrete UserService class. An untargeted binding has no to() clause:
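    bind(UserService.class);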

All the binding declarations must be gathered in the configure() method of a module. A module is a class extending the AbstractModule class:
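A sketch (the module name is illustrative):

    public class MainModule extends AbstractModule {

        @Override
        protected void configure() {
            bind(new TypeLiteral<IRepository<User>>() {}).to(UserRepository.class);
            bind(UserService.class);
        }
    }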

The actual object composition is done using a so-called injector:
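    Injector injector = Guice.createInjector(new MainModule());
    UserService userService = injector.getInstance(UserService.class);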

Obviously, this kind of code should not be scattered all over your code base. All calls to Guice types – including the injector – should be isolated in some top-level component – the composition root (bootstrapper) of the application where the whole object graph is wired up.

I showed in a previous post how ASP.NET MVC has built-in support for the Unity DI container. Likewise, Jersey – the Java library for building REST APIs – has seamless support for Guice so that you don't have to manually call the getInstance() method to create objects. Guice Servlet provides a utility that you can subclass in order to register your own ServletContextListener:
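A sketch of such a listener, including the JerseyServletModule mentioned below (the class names are illustrative):

    import com.google.inject.Guice;
    import com.google.inject.Injector;
    import com.google.inject.TypeLiteral;
    import com.google.inject.servlet.GuiceServletContextListener;
    import com.sun.jersey.guice.JerseyServletModule;
    import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;

    public class WebApiServletConfig extends GuiceServletContextListener {

        @Override
        protected Injector getInjector() {
            return Guice.createInjector(new JerseyServletModule() {

                @Override
                protected void configureServlets() {
                    // Guice bindings
                    bind(new TypeLiteral<IRepository<User>>() {}).to(UserRepository.class);
                    bind(UserService.class);
                    bind(UserWebService.class);

                    // Let the Guice-aware Jersey container serve the REST resources
                    serve("/api/*").with(GuiceContainer.class);
                }
            });
        }
    }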

To create a Guice injector you need to pass a JerseyServletModule where you are overriding the configureServlets() method. In here you must define the Guice bindings.

Like most other DI containers, Guice also provides lifetime management. The lifetime of objects can be handled through scopes. Guice supports the scopes singleton, session and request. Scopes can be configured in the bind statements using the in() clause. Here is an example of setting the scope of the user repository to singleton:
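    bind(new TypeLiteral<IRepository<User>>() {}).to(UserRepository.class).in(Singleton.class);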

The final Guice feature I will mention is support for method interception through aspect oriented programming. This is a very powerful feature for solving cross-cutting concerns such as logging or authorization in your application. In a later post I will show how to implement role based authorization using aspect oriented programming in Guice.

Putting it all Together – DI part 4

Dependency Injection – and the low coupling between components that it leads to – goes hand in hand with high cohesion. It is now time to grab the individual components and put them together to form a “real” application.

In the first post in this series, it was explained how the whole point of dependency injection is to move the burden of composing objects away from the individual components themselves, and instead delegate this responsibility to a single well-defined location as close as possible to the entry point of the application – also denoted the composition root of the application.

This object composition can very well be done manually by simply “newing” up all the objects – which is sometimes referred to as “poor man’s DI” – but a good alternative is to leave the responsibility of solving the object graph to a DI container. A DI container is a third-party library that can automate the object composition and lifetime management. Furthermore, some DI containers support runtime interception which is a very powerful technique for solving cross-cutting concerns such as logging or authorization (more about this in a later post).

And yes, when using a DI container you are, ironically enough, introducing a new dependency to solve the dependencies! But obviously, the DI container object itself should be created manually, and the DI container library should only be referenced from the composition root.

Anyway, here is an example of wiring up the application using Microsoft's DI container called Unity in an ASP.NET MVC application. Adding a reference to the Unity.Mvc3 library (for example using the NuGet Package Manager) will automatically create a static helper class called Bootstrapper. In the BuildUnityContainer() method you need to register which concrete type should be mapped to the IRepository<Product> abstraction during run time. In this case an XmlProductRepository class is used. XmlProductRepository itself has a dependency on a string defining the path to the XML file used as the physical repository.
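The registration could look roughly like this (the surrounding Bootstrapper boilerplate is generated by Unity.Mvc3):

    private static IUnityContainer BuildUnityContainer()
    {
        var container = new UnityContainer();

        // Map the abstraction to the XML-based repository and supply the
        // file path that its constructor needs
        container.RegisterType<IRepository<Product>, XmlProductRepository>(
            new InjectionConstructor(@"c:\data\repository.xml"));

        return container;
    }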

To use ProductService in one of the controllers (e.g. the HomeController), you need to inject ProductService using constructor injection:
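In sketch form (the GetProducts method name is illustrative):

    public class HomeController : Controller
    {
        private readonly ProductService productService;

        public HomeController(ProductService productService)
        {
            this.productService = productService;
        }

        public ActionResult Index()
        {
            return View(this.productService.GetProducts());
        }
    }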

That’s it. The DI container takes care of the rest.

The first time you are introduced to the concept of DI containers you might become a bit mystified, and even worried, about all the “magic” that apparently goes on behind the scenes. I certainly know that I was. However, trying to dig a bit deeper into what actually goes on might help overcome this scepticism. This is what happens during an incoming request to go to the Home page of the application:


MvcApplication receives a request to go to the Home page. The DependencyResolver is asked to resolve HomeController (i.e. create the whole object graph) – and this is where the magic starts! The dependency resolver detects the dependencies (HomeController -> ProductService -> IRepository<Product> -> string) and starts creating the object graph from the bottom and up. First an instance of XmlProductRepository is created. During registration you declared that this was the concrete type to be used for the IRepository<Product> abstraction. You also declared the path to the physical file “c:\data\repository.xml” during registration. Then this XmlProductRepository instance is injected into ProductService, using constructor injection, when creating the ProductService instance. Finally, this ProductService instance is injected into the HomeController when creating the HomeController instance. The dependency resolver has done its job for this incoming request.

Subsequently, the Index() method of the HomeController is called and the HomeController can use the injected ProductService to retrieve a list of products, which can then be displayed in the browser.

This is how the dependency graph of your application looks:


MvcApplication (found in the Global.asax file) acts as the composition root taking care of object composition. The business component has no dependencies to other components, so the Dependency Inversion Principle is still respected.

Unit Testing Made Easy – DI part 3

I claimed in a previous post that low coupling using dependency injection made the code base more testable – i.e. properly prepared for unit testing. Let’s dig a bit deeper into that assertion.

The ProductService class is an obvious candidate for unit testing. It is a relatively small component with a well-defined responsibility (adhering to the Single Responsibility Principle). It is also properly isolated from its dependency (the repository), by an abstraction (the interface). Let’s create a unit test method for the ProductService component:
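A sketch of such a test, assuming the service exposes a GetDiscountedPrice method and the Product members shown here (the exact API is illustrative):

    [Fact]
    public void GetDiscountedPrice_ReturnsCorrectlyDiscountedPrice()
    {
        // Fixture setup – an in-memory test double replaces the real repository
        var mockRepository = new MockRepository<Product>();
        var product = new Product { Id = 1, Name = "Beer", Price = 10.0m };
        mockRepository.Add(product);
        var sut = new ProductService(mockRepository);

        // Exercise the system under test
        var discountedPrice = sut.GetDiscountedPrice(product.Id, 0.25m);

        // Verify the expected outcome: 25% off 10.0 is 7.5
        Assert.Equal(7.5m, discountedPrice);
    }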

This unit test method verifies that the ProductService functionality for calculating a discounted price of a product works correctly. It follows the standard sequence of a unit test: First it sets up the fixed baseline environment for the test (also called the test fixture). Then it exercises the system under test (in this case the product service). Finally, it verifies the expected outcome. A “tear down” phase is not necessary, as the fixture objects automatically get out of scope and will be garbage-collected.

As ProductService does not care about the actual implementation of the product repository dependency, you can inject a “stand-in” for this dependency in the test. This stand-in is better known as a test double. The mockRepository variable holds an instance of such a product repository test double.

In the final application you are probably going to implement the repository so that the products are persisted in, for example, an SQL database, or maybe a file, but the elegant thing is that, at this moment in time, you do not need to care about this. In the context of the unit test, you can just make a mock implementation of the repository which does not implement persistence of the products at all, but just keeps them in memory. This is our test double. Obviously, an implementation like this would never make it into the final application, but it is sufficient to test the ProductService functionality in isolation.

Such a mock implementation of a repository is easily done. Of course, you make a generic version that can be used as a test double for all entity repositories:
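A sketch of such a test double, assuming the entities expose an integer Id through an IEntity interface and that IRepository<T> has the members shown here:

    public class MockRepository<T> : IRepository<T> where T : IEntity
    {
        // Entities are kept in memory in a dictionary, keyed by their id
        private readonly Dictionary<int, T> entities = new Dictionary<int, T>();

        public void Add(T entity)
        {
            this.entities[entity.Id] = entity;
        }

        public T Get(int id)
        {
            return this.entities[id];
        }

        public IEnumerable<T> GetAll()
        {
            return this.entities.Values;
        }

        public void Remove(int id)
        {
            this.entities.Remove(id);
        }
    }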

A Dictionary object is used to hold the entities in memory during the test.

Testability is not necessarily the main purpose for doing dependency injection, but the ability to replace dependencies with test-specific mock objects is indeed a very useful by-product.

By the way, the unit test method above is written using the xUnit.net testing framework, which explains the Fact attribute and the Equal assertion. xUnit.net is a nice and very lean testing framework – compared to, for example, MSTest, which is the one integrated with Visual Studio. With xUnit.net you don't need to create a specific unit test project. Also, you get rid of the auto-generated .vsmdi files and .testsettings files from MSTest.

To further refine and automate your unit tests, you should consider using supplementary unit test frameworks like AutoFixture and Moq to help you streamline fixture setup and mocking. Both are available from within the “NuGet Package Manager” Visual Studio Extension. I have written a comprehensive CodeProject article about using xUnit.net, AutoFixture and Moq.