Tag Archives: Dependency Injection

How to Avoid Producing Legacy Code at the Speed of Typing

This blog post provides a recipe on how to avoid producing legacy code at the speed of typing by using a proper architecture and unit testing.

Introduction

As an enterprise software developer, you are in a constant fight against producing legacy code – code that is no longer worthy of maintenance or support. You are constantly struggling to avoid re-writing stuff repeatedly in a faint hope that next time you will get it just right.

The characteristics of legacy code are, among others, poor design and architecture or dependencies on obsolete frameworks or 3rd party components. Here are a few typical examples that you might recognize:

You and your team produced a nice, feature-rich Windows application. Afterwards, you realize that what was really needed was a browser or mobile application. That is when you recognize the tremendous effort it would take to provide an alternative UI to your application, because you have embedded too much domain functionality within the UI itself.

Another scenario might be that you made a backend that is deeply entangled with a particular ORM – such as NHibernate or Entity Framework – or highly dependent on a certain RDBMS. At some point, you want to change backend strategy to avoid the ORM and use file-based persistence, but then you realize it is practically impossible because your domain functionality and the data layer are tightly coupled.

In both of the above scenarios, you are producing legacy code at the speed of typing.

However, there is still hope. By adapting a few simple techniques and principles, you can change this doomed pattern of yours forever.

The Architectural Evolution

In the following, I will describe 3 phases in a typical architectural evolution for a standard enterprise software developer. Almost any developer will make it to phase 2, but the trick is to make it all the way through phase 3, which will eventually turn you into an architectural Ninja.

[Figure: Evolution to Ninja]

Phase 1 – Doing it Wrong

Most developers have heard about layered architecture, so very often the first attempt at an architecture will look something like this – two layers with separated responsibilities for frontend and backend functionality:

[Figure: Phase 1 – a two-layer architecture with frontend and backend]

So far so good, but quite soon you will realize that it is a tremendous problem that the domain logic of your application is entangled in the platform-dependent frontend and backend.

Phase 2 – A Step Forward

Thus, the next attempt is to introduce a middle layer – a domain layer – comprising the true business functionality of your application:

[Figure: Phase 2 – frontend, domain layer and backend, with the domain layer depending on the backend]

This architecture looks deceptively well-structured and de-coupled. However, it is not. The problem is the red dependency arrow indicating that the domain layer has a hard-wired dependency on the backend – typically because, in the domain layer, you are creating instances of backend classes using the new keyword (C# or Java). The domain layer and the backend are tightly coupled. This has numerous disadvantages:

  • The domain layer functionality cannot be reused in isolation in another context. You would have to drag along its dependency, the backend.
  • The domain layer cannot be unit tested in isolation. You would have to involve the dependency, the backend.
  • One backend implementation (using for example an RDBMS for persistence) cannot easily be replaced by another implementation (using for example file persistence).

All of these disadvantages dramatically reduce the potential lifetime of the domain layer. That is why you are producing legacy code at the speed of typing.

Phase 3 – Doing it Right

What you have to do is actually quite simple. You have to turn the direction of that red dependency arrow around. It is a subtle difference, but one that makes all the difference:

[Figure: Phase 3 – the dependency arrow inverted, so the backend depends on the domain layer]

This architecture adheres to the Dependency Inversion Principle (DIP) – one of the most important principles of object-oriented design. The point is that once this architecture is established – once the direction of that dependency arrow is turned around – the domain layer dramatically increases its potential lifetime. UI requirements and trends may switch from Windows to browsers or mobile devices, and your preferred persistence mechanism might change from being RDBMS-based to file-based, but all of that is now relatively easily exchangeable without modifying the domain layer, because at this point the frontend as well as the backend are de-coupled from the domain layer. Thus, the domain layer becomes a code library that you theoretically never have to replace – at least as long as your business domain and overall programming framework remain unchanged. Now, you are efficiently fighting that legacy code.

On a side note, let me give you one simple example of how to implement DIP in practice:

Maybe you have a product service in the domain layer that can perform CRUD operations on products in a repository defined in the backend. This very often leads to a dependency graph like the one shown below, with the dependency arrow pointing in the wrong direction:

[Figure: Dependency graph – the product service in the domain layer depends on the product repository in the backend]

This is because somewhere in the product service you will “new” up a dependency to the product repository:
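In C#, that could look something like this simplified sketch:

    public class ProductService
    {
        private readonly ProductRepository repository;

        public ProductService()
        {
            // Hard-wired dependency: the service itself creates the concrete
            // backend repository with the new keyword.
            this.repository = new ProductRepository();
        }
    }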

To invert the direction of the dependency using DIP, you must introduce an abstraction of the product repository in the form of an IProductRepository interface in the domain layer and let the product repository be an implementation of this interface:

[Figure: Dependency graph – the product repository in the backend implements the IProductRepository interface owned by the domain layer]

Now, instead of “newing” up an instance of the product repository in the product service, you inject the repository into the service through a constructor argument:
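In sketch form, the service now depends only on the abstraction:

    public class ProductService
    {
        private readonly IProductRepository repository;

        // The concrete repository implementation is handed in from the outside.
        public ProductService(IProductRepository repository)
        {
            this.repository = repository;
        }
    }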

This is known as dependency injection (DI). I have previously explained this in much more detail in a blog post called Think Business First.

Once you have established the correct overall architecture, the objective of the fight against legacy code should be obvious: move as much functionality as you can into the domain layer. Make those frontend and backend layers shrink and make that domain layer grow fat:

[Figure: A fat domain layer surrounded by thin frontend and backend layers]

A very convenient by-product of this architecture is that it makes it easy to establish unit tests of the domain functionality. Because of the de-coupled nature of the domain layer and the fact that all of its dependencies are represented by abstractions (such as an interface or an abstract base class), it is quite easy to establish fake objects of these abstractions and use them when establishing unit test fixtures. So it is “a walk in the park” to guard the entire domain layer with unit tests. You should strive for nothing less than 100% unit test coverage – making your domain layer extremely robust and solid as a rock. Which again will increase the lifetime of the domain layer.

You are probably starting to realize that not only traditional frontends or backends, but all other components – including the unit tests or for example an HTTP-based Web API – should act as consumers of the domain layer. Thus, it makes a lot of sense to depict the architecture as onion layers:

[Figure: Onion layers with the domain layer at the center]

The outer layer components consume the domain library code – either by providing concrete implementations of domain abstractions (interfaces or base classes) or as a direct consumer of domain functionality (domain model and services).

However, still remember: the direction of coupling is always toward the center – toward the domain layer.

At this point, it might all seem a bit theoretical and, well…, abstract. Nevertheless, it does not take a lot to do this in practice. In another CodeProject article of mine, I have described and provided some sample code that complies with all of the principles in this article. The sample code is simple, yet very close to real production code.

Summary

Being an enterprise software developer is a constant battle to avoid producing legacy code at the speed of typing. To prevail, do the following:

  • Make sure all those dependency arrows point toward the central and independent domain layer by applying the Dependency Inversion Principle (DIP) and Dependency Injection (DI).
  • Constantly nourish the domain layer by moving as much functionality as possible into it. Make that domain layer grow fat and heavy while shrinking the outer layers.
  • Cover every piece of functionality in the domain layer with unit tests.

Follow these simple rules and it will all come together. The code that you write will potentially have a dramatically longer lifetime than before because:

  • The domain layer functionality can be reused in many different contexts.
  • The domain layer can be made robust and solid as a rock with 100% unit test coverage.
  • Implementations of domain layer abstractions (for example persistence mechanisms) can easily be replaced by alternative implementations.
  • The domain layer is easy to maintain.

Lightweight Domain Services Library

If you have more than a few years of experience with domain-driven design (DDD), you have most certainly recognized some kind of overall pattern in the type of problems you have to solve – regardless of the type of applications you are working on. I certainly know that I have.

No matter whether you develop desktop applications, web applications or web APIs, you will almost always find yourself in a situation where you have to establish a mechanism for creating, persisting and maintaining the state of various entities in the application domain model. So, every time you start up a new project, you have to do a lot of yak shaving to establish this persistence mechanism, when what you really want to do is to work on establishing the domain model – the actual business functionality of your application.

After several iterations through various projects, I have established a practice that works for me in almost any situation. This practice allows you to abstract away the entity persistence (the yak shaving…) so that you can isolate its nitty-gritty implementation details and focus on developing your genuine business functionality. Eventually, of course, you have to deal with the implementation of the persistence layer, but the value of being able to develop – and not least test – your domain model in isolation, without having to care about the persistence details, is tremendous. You can start out by developing and testing your domain model against fake repositories. Whether you eventually end up making simple file-based repositories or decide to go full-blown RDBMS doesn’t matter at this point in time.

I have digested this practice of mine into something I call a Domain Services Library and written a CodeProject article about it. This framework is super lightweight, comprising only a few plain vanilla C# classes. No ORM is involved – the repositories can be anything from in-memory objects to an RDBMS. No 3rd party dependencies whatsoever. A source code download is provided in the article.

REST with Java in practice

RESTful web services are generally hyped these days – and for many good reasons: among others, the fact that they are easily consumed by almost any kind of client – browsers, mobile apps, desktop apps etc.

One technology stack for building RESTful services in a Java environment could comprise Jersey, Gson and Guice (nice alliteration, by the way…). With no prior knowledge of any of these technologies, my team and I managed to successfully establish a RESTful web service consumed by, for example, this website.

I will briefly introduce these 3 frameworks:

Jersey and JAX-RS

Jersey is one of several implementations of JAX-RS – the Java API for RESTful web services.

Jersey provides a servlet that analyses an incoming HTTP request, scans the underlying classes for RESTful resources, and selects the correct class and method to respond to the request. The RESTful resources are defined by decorating classes and methods with the appropriate JAX-RS annotations.

If you for example have a UserService class that you want to expose through a RESTful API, you can wrap it in a UserWebService class and decorate this class and its methods with JAX-RS annotations:
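A sketch of what this wrapper could look like (assuming a UserService exposing the getAll() method mentioned below):

    import java.util.List;
    import javax.inject.Inject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/user")
    public class UserWebService {

        private final UserService userService;

        @Inject
        public UserWebService(UserService userService) {
            this.userService = userService;
        }

        // Responds to GET <base>/user/list with the users rendered as JSON.
        @GET
        @Path("/list")
        @Produces(MediaType.APPLICATION_JSON)
        public List<User> GetUserList() {
            return userService.getAll();
        }
    }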

The @Path annotation specifies on which (relative) URL path this method will be invoked. The @GET annotation specifies that the HTTP method GET has to be used, and the @Produces annotation declares the format of the response.

So, the following http-request:

GET http://localhost:8080/myservice/api/user/list

will invoke the GetUserList() method, which is basically a pass-through to the UserService.getAll() method, and return a response with a list of users in JSON format.

JSON support using Gson

One of the decisions you have to make when establishing a RESTful service is which representation formats (media types) to support. Very often JSON will be the obvious choice – especially if the services are to be consumed by browser-based clients which typically use JavaScript.

In order to produce and consume JSON, you need a serialization mechanism that turns a Java object into a JSON document and vice versa (under the hood, the representation bodies will very often be POJOs). Our choice was to use Google Gson for this purpose.

You simply need to implement the two interfaces javax.ws.rs.ext.MessageBodyWriter and javax.ws.rs.ext.MessageBodyReader, and decorate the implementing classes with the JAX-RS @Provider annotation. Here is the writer:
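A sketch of such a Gson-based writer could look along these lines (the class name is illustrative; do not close the response stream – the container owns it):

    import com.google.gson.Gson;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.lang.annotation.Annotation;
    import java.lang.reflect.Type;
    import java.nio.charset.StandardCharsets;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.MultivaluedMap;
    import javax.ws.rs.ext.MessageBodyWriter;
    import javax.ws.rs.ext.Provider;

    @Provider
    @Produces(MediaType.APPLICATION_JSON)
    public class GsonMessageBodyWriter implements MessageBodyWriter<Object> {

        private final Gson gson = new Gson();

        @Override
        public boolean isWriteable(Class<?> type, Type genericType,
                Annotation[] annotations, MediaType mediaType) {
            return true;
        }

        @Override
        public long getSize(Object object, Class<?> type, Type genericType,
                Annotation[] annotations, MediaType mediaType) {
            return -1; // Unknown in advance; the container determines the length.
        }

        @Override
        public void writeTo(Object object, Class<?> type, Type genericType,
                Annotation[] annotations, MediaType mediaType,
                MultivaluedMap<String, Object> httpHeaders,
                OutputStream entityStream) throws IOException {
            // Serialize the POJO as JSON onto the response stream.
            Writer writer = new OutputStreamWriter(entityStream, StandardCharsets.UTF_8);
            gson.toJson(object, genericType, writer);
            writer.flush();
        }
    }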

And here is the reader:
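And a corresponding sketch of the reader:

    import com.google.gson.Gson;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.lang.annotation.Annotation;
    import java.lang.reflect.Type;
    import java.nio.charset.StandardCharsets;
    import javax.ws.rs.Consumes;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.MultivaluedMap;
    import javax.ws.rs.ext.MessageBodyReader;
    import javax.ws.rs.ext.Provider;

    @Provider
    @Consumes(MediaType.APPLICATION_JSON)
    public class GsonMessageBodyReader implements MessageBodyReader<Object> {

        private final Gson gson = new Gson();

        @Override
        public boolean isReadable(Class<?> type, Type genericType,
                Annotation[] annotations, MediaType mediaType) {
            return true;
        }

        @Override
        public Object readFrom(Class<Object> type, Type genericType,
                Annotation[] annotations, MediaType mediaType,
                MultivaluedMap<String, String> httpHeaders,
                InputStream entityStream) throws IOException {
            // Deserialize the JSON request body into the expected Java type.
            Reader reader = new InputStreamReader(entityStream, StandardCharsets.UTF_8);
            return gson.fromJson(reader, genericType);
        }
    }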

Guice as DI container

In a previous post I showed how to use Guice as a DI container in a Jersey application. So, what is left now is to bind the Gson writer and reader – as well as other types, such as the RESTful resource classes – in the Guice injector:
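A sketch, using the writer, reader and resource class names from above and the Guice/Jersey servlet integration described in that previous post (the module name is illustrative):

    import com.sun.jersey.guice.JerseyServletModule;
    import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;

    public class RestModule extends JerseyServletModule {
        @Override
        protected void configureServlets() {
            // RESTful resource classes.
            bind(UserWebService.class);

            // Gson-based JSON providers.
            bind(GsonMessageBodyWriter.class);
            bind(GsonMessageBodyReader.class);

            // Let the Guice-aware Jersey container serve the REST resources.
            serve("/api/*").with(GuiceContainer.class);
        }
    }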

To summarize, a well-proven technology stack for implementing a RESTful web service in Java comprises Jersey as the REST framework, Google Guice as the DI container to support dependency injection, and Google Gson for JSON serialization and de-serialization of the representation body objects. The service can be deployed on, for example, a GlassFish server.

Dependency Injection with Java using Guice

I generally code in .NET, and in a previous post I described how to use Microsoft Unity as the DI container in an ASP.NET MVC project. Also, the whole inspiration to dig into DI came from the book Dependency Injection in .NET. However, the first “real-life” project where I decided to let DI be the driving design principle happened to be a Java project…

Anyway, that constraint turned out to be no problem whatsoever – thanks to the fact that the above-mentioned book reaches way beyond the .NET framework in its description of DI techniques, and the fact that Google provides the terrific DI container Guice for Java.

One of the golden rules of DI is not to “new” up objects. Guice introduces the @Inject annotation as an alternative to the new keyword. You can think of @Inject as “the new new”.

To prepare for constructor injection, you have to add the @Inject annotation to your constructor:
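A sketch, using the UserService and repository types that appear in the binding examples below:

    import javax.inject.Inject;

    public class UserService {

        private final IRepository<User> userRepository;

        // Guice supplies the repository when composing the object graph.
        @Inject
        public UserService(IRepository<User> userRepository) {
            this.userRepository = userRepository;
        }
    }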

Then, during object composition, all of the dependencies of that constructor will be automatically filled in by Guice.

You might argue that by adding this @Inject annotation, you add a dependency in your UserService class on Guice itself. However, Guice supports the standard JSR 330 annotations, so you actually don’t need to introduce Guice-specific annotations in your code at all.

When composing the object graph, Guice uses bindings to map types to their actual implementations. The bindings define how dependencies are resolved during object composition. For example, to tell Guice which implementation to use for the IRepository<User> interface, you will need a linked binding. The example below maps the IRepository<User> interface to the UserRepository class using the to() clause:
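In sketch form:

    // Linked binding: map the generic interface to its concrete implementation.
    bind(new TypeLiteral<IRepository<User>>() {}).to(UserRepository.class);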

Note that because IRepository<User> is a generic interface, an anonymous subclass of TypeLiteral must be used in the declaration.

Now that the IRepository<User> interface mapping is in place, you can use an untargeted binding to bind the concrete UserService class. An untargeted binding has no to() clause:
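    // Untargeted binding: the concrete class is bound to itself.
    bind(UserService.class);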

All the binding declarations must be gathered in the configure() method of a module. A module is a class extending the AbstractModule class:
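Something like this (the module name is arbitrary):

    import com.google.inject.AbstractModule;
    import com.google.inject.TypeLiteral;

    public class MainModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(new TypeLiteral<IRepository<User>>() {}).to(UserRepository.class);
            bind(UserService.class);
        }
    }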

The actual object composition is done using a so-called injector:
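For example:

    // The injector composes the whole object graph based on the module's bindings.
    Injector injector = Guice.createInjector(new MainModule());
    UserService userService = injector.getInstance(UserService.class);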

Obviously, this kind of code should not be scattered all over your code base. All calls to Guice types – including the injector – should be isolated in some top-level component – the composition root (bootstrapper) of the application, where the whole object graph is wired up.

I showed in a previous post how ASP.NET MVC has built-in support for the Unity DI container. Likewise, Jersey – the Java library for building REST APIs – has seamless support for Guice, so that you don’t have to manually call the getInstance() method to create objects. Guice Servlet provides a utility that you can subclass in order to register your own ServletContextListener:
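A sketch of such a listener could look along these lines (the class name is illustrative):

    import com.google.inject.Guice;
    import com.google.inject.Injector;
    import com.google.inject.TypeLiteral;
    import com.google.inject.servlet.GuiceServletContextListener;
    import com.sun.jersey.guice.JerseyServletModule;
    import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;

    public class WebServiceConfig extends GuiceServletContextListener {
        @Override
        protected Injector getInjector() {
            return Guice.createInjector(new JerseyServletModule() {
                @Override
                protected void configureServlets() {
                    // Define the Guice bindings...
                    bind(new TypeLiteral<IRepository<User>>() {}).to(UserRepository.class);
                    bind(UserService.class);
                    bind(UserWebService.class);

                    // ...and route requests through the Guice-aware Jersey container.
                    serve("/api/*").with(GuiceContainer.class);
                }
            });
        }
    }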

To create a Guice injector you need to pass a JerseyServletModule where you are overriding the configureServlets() method. In here you must define the Guice bindings.

Like most other DI containers, Guice also provides lifetime management. The lifetime of objects can be handled through scopes. Guice supports the scopes singleton, session and request. Scopes can be configured in the bind statements using the in() clause. Here is an example of setting the scope of the user repository to singleton:
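In sketch form:

    // One shared UserRepository instance for the lifetime of the application.
    bind(new TypeLiteral<IRepository<User>>() {})
        .to(UserRepository.class)
        .in(Singleton.class);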

The final Guice feature I will mention is support for method interception through aspect-oriented programming. This is a very powerful feature for solving cross-cutting concerns such as logging or authorization in your application. In a later post I will show how to implement role-based authorization using aspect-oriented programming in Guice.

Putting it all Together – DI part 4

Dependency Injection – and the low coupling between components that it leads to – goes hand in hand with high cohesion. It is now time to grab the individual components and put them together to form a “real” application.

In the first post in this series, it was explained how the whole point of dependency injection is to move the burden of composing objects away from the individual components themselves, and instead delegate this responsibility to a single well-defined location as close as possible to the entry point of the application – also known as the composition root of the application.

This object composition can very well be done manually by simply “newing” up all the objects – which is sometimes referred to as “poor man’s DI” – but a good alternative is to leave the responsibility of solving the object graph to a DI container. A DI container is a third-party library that can automate the object composition and lifetime management. Furthermore, some DI containers support runtime interception which is a very powerful technique for solving cross-cutting concerns such as logging or authorization (more about this in a later post).

And yes, when using a DI container you are, ironically enough, introducing a new dependency to solve the dependencies! But obviously, the DI container object itself should be created manually, and the DI container library should only be referenced from the composition root.

Anyway, here is an example of wiring up the application using Microsoft’s DI container, called Unity, in an ASP.NET MVC application. Adding a reference to the Unity.Mvc3 library (for example using the NuGet Package Manager) will automatically create a static helper class called Bootstrapper. In the BuildUnityContainer() method you need to register which concrete type should be mapped to the IRepository<Product> abstraction at run time. In this case an XmlProductRepository class is used. XmlProductRepository itself has a dependency on a string defining the path to the XML file used as the physical repository:
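A sketch of that registration (the XML file path is the one used in the walkthrough below):

    private static IUnityContainer BuildUnityContainer()
    {
        var container = new UnityContainer();

        // Map the IRepository<Product> abstraction to the XML-based implementation
        // and hand it the path to the physical XML repository file.
        container.RegisterType<IRepository<Product>, XmlProductRepository>(
            new InjectionConstructor(@"c:\data\repository.xml"));

        return container;
    }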

To use ProductService in one of the controllers (e.g. the HomeController), you need to inject ProductService using constructor injection:
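A sketch (assuming the service exposes a GetProducts() method used by the Index view):

    public class HomeController : Controller
    {
        private readonly ProductService productService;

        // The DI container injects a fully composed ProductService.
        public HomeController(ProductService productService)
        {
            this.productService = productService;
        }

        public ActionResult Index()
        {
            return View(productService.GetProducts());
        }
    }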

That’s it. The DI container takes care of the rest.

The first time you are introduced to the concept of DI containers, you might become a bit mystified, and even worried, about all the “magic” that apparently goes on behind the scenes. I certainly know that I was. However, digging a bit deeper into what actually goes on might help overcome this scepticism. This is what happens during an incoming request to go to the Home page of the application:

[Figure: Sequence diagram of the object composition during an incoming request]

MvcApplication receives a request to go to the Home page. The DependencyResolver is asked to resolve HomeController (i.e. create the whole object graph) – and this is where the magic starts! The dependency resolver detects the dependencies (HomeController -> ProductService -> IRepository<Product> -> string) and starts creating the object graph from the bottom up. First an instance of XmlProductRepository is created. During registration you declared that this was the concrete type to be used for the IRepository<Product> abstraction. You also declared the path to the physical file “c:\data\repository.xml” during registration. Then this XmlProductRepository instance is injected into ProductService, using constructor injection, when creating the ProductService instance. Finally, this ProductService instance is injected into the HomeController when creating the HomeController instance. The dependency resolver has done its job for this incoming request.

Subsequently, the Index() method of the HomeController is called, and the HomeController can use the injected ProductService to retrieve a list of products, which can then be displayed in the browser.

This is how the dependency graph of your application looks:

[Figure: Dependency graph of the application, with MvcApplication as the composition root]

MvcApplication (found in the Global.asax file) acts as the composition root taking care of object composition. The business component has no dependencies to other components, so the Dependency Inversion Principle is still respected.

Unit Testing Made Easy – DI part 3

I claimed in a previous post that low coupling using dependency injection made the code base more testable – i.e. properly prepared for unit testing. Let’s dig a bit deeper into that assertion.

The ProductService class is an obvious candidate for unit testing. It is a relatively small component with a well-defined responsibility (adhering to the Single Responsibility Principle). It is also properly isolated from its dependency (the repository), by an abstraction (the interface). Let’s create a unit test method for the ProductService component:
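A sketch of such a test (the service method, the product properties and the MockRepository<T> test double – shown further below – are illustrative names):

    using Xunit;

    public class ProductServiceTests
    {
        [Fact]
        public void CalculateDiscountedPrice_ReturnsReducedPrice()
        {
            // Fixture: a mock repository holding a single product.
            var mockRepository = new MockRepository<Product>();
            mockRepository.Add(new Product { Id = 1, Name = "Beer", Price = 100m });
            var productService = new ProductService(mockRepository);

            // Exercise the system under test: apply a 10% discount.
            var discountedPrice = productService.CalculateDiscountedPrice(1, 10m);

            // Verify the expected outcome.
            Assert.Equal(90m, discountedPrice);
        }
    }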

This unit test method verifies that the ProductService functionality for calculating a discounted price of a product works correctly. It follows the standard sequence of a unit test: First it sets up the fixed baseline environment for the test (also called the test fixture). Then it exercises the system under test (in this case the product service). Finally, it verifies the expected outcome. A “tear down” phase is not necessary, as the fixture objects automatically go out of scope and will be garbage-collected.

As ProductService does not care about the actual implementation of the product repository dependency, you can inject a “stand-in” for this dependency in the test. This stand-in is better known as a test double. The mockRepository variable holds an instance of such a product repository test double.

In the final application you are probably going to implement the repository so that the products are persisted in, for example, an SQL database, or maybe a file, but the elegant thing is that, at this moment in time, you do not need to care about this. In the context of the unit test, you can just make a mock implementation of the repository which does not implement persistence of the products at all, but just keeps them in memory. This is our test double. Obviously, an implementation like this would never make it into the final application, but it is sufficient to test the ProductService functionality in isolation.

Such a mock implementation of a repository is easily done. Of course, you make a generic version that can be used as a test double for all entity repositories:
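A sketch of such a generic test double, implementing the IRepository<T> interface from the previous post:

    using System.Collections.Generic;

    public class MockRepository<T> : IRepository<T> where T : IEntity
    {
        // Holds the entities in memory for the duration of the test.
        private readonly Dictionary<int, T> entities = new Dictionary<int, T>();

        public void Add(T entity) { entities[entity.Id] = entity; }
        public T Get(int id) { return entities[id]; }
        public IEnumerable<T> GetAll() { return entities.Values; }
        public void Update(T entity) { entities[entity.Id] = entity; }
        public void Remove(int id) { entities.Remove(id); }
    }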

A Dictionary object is used to hold the entities in memory during the test.

Testability is not necessarily the main purpose for doing dependency injection, but the ability to replace dependencies with test-specific mock objects is indeed a very useful by-product.

By the way, the unit test method above is written using the xUnit.net testing framework, which explains the Fact attribute and the Equal assertion. xUnit.net is a nice and very lean testing framework – compared to, for example, MSTest, which is the one integrated with Visual Studio. With xUnit.net you don’t need to create a specific unit test project. Also, you get rid of the auto-generated .vsmdi and .testsettings files from MSTest.

To further refine and automate your unit tests, you should consider using supplementary unit test frameworks like AutoFixture and Moq to help you streamline fixture setup and mocking. Both are available from within the “NuGet Package Manager” Visual Studio Extension. I have written a comprehensive CodeProject article about using xUnit.net, AutoFixture and Moq.

Using Abstractions to Ensure Low Coupling – DI part 2

As shown in my previous post, the abstraction of the data layer component handling products – in the form of the IProductRepository interface – was the key to making the business layer independent of the data layer.

Generally, abstractions play a crucial part in ensuring low coupling between software components. These abstractions allow you to define the behaviour of a component without actually caring about the concrete type and implementation behind the abstraction. Low coupling is good for several reasons. It makes your code more extensible, maintainable and, maybe most importantly, more testable. I will come back to the latter in another post.

Of course you cannot entirely decouple everything. Even in the purest POCO component, there will obviously be dependencies on types in the .NET Framework. Rather, you should strive for the “natural” level of decoupling – whatever that is. The abstractions should form well-defined points of interaction – also called seams – between various components in your system with well-defined responsibilities (adhering to the Single Responsibility Principle).

For example, it definitely does make sense to create a seam between a component handling the business functionality of a product and the component that is responsible for the persistence (i.e. between the ProductService and the ProductRepository in the example in my previous post). If you did not introduce a seam here, it would be very difficult to replace for example a SQL Server database implementation in the data layer with some other kind of technology.

Abstractions are typically defined using interfaces. The IProductRepository could for example look like this:
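For example (the exact set of CRUD methods is illustrative):

    public interface IProductRepository
    {
        void Add(Product product);
        Product Get(int id);
        IEnumerable<Product> GetAll();
        void Update(Product product);
        void Remove(int id);
    }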

However, as our application is likely to deal with other entities than products (for example customers and orders), it would make sense to generalize this interface using C# generics:
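In sketch form:

    public interface IRepository<T> where T : IEntity
    {
        void Add(T entity);
        T Get(int id);
        IEnumerable<T> GetAll();
        void Update(T entity);
        void Remove(int id);
    }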

IEntity is a simple interface ensuring that an entity always has a unique ID and a name:
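    public interface IEntity
    {
        int Id { get; }
        string Name { get; }
    }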

Another fundamental object-oriented design principle is the Interface Segregation Principle:

Clients should not be forced to depend on methods that they do not use.

This basically means that interfaces preferably should be as small and specific as possible. Actually, an interface with a single method can be a very good interface.

You might have some entity services for which you do not want to expose full CRUD functionality – a read-only entity service, so to speak. For this purpose, it would make sense to define a specific IReadOnlyRepository interface:
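For example:

    public interface IReadOnlyRepository<T> where T : IEntity
    {
        T Get(int id);
        IEnumerable<T> GetAll();
    }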

Then the IRepository interface could be simplified to an extension of the IReadOnlyRepository interface:
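In sketch form:

    public interface IRepository<T> : IReadOnlyRepository<T> where T : IEntity
    {
        void Add(T entity);
        void Update(T entity);
        void Remove(int id);
    }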

This is interface segregation in action. You transformed the one big “header” interface into a number of smaller more specific role interfaces.

Now, you have a nice well-defined seam between the product service – or any other entity service – and the corresponding repository.

Think Business First – DI part 1

When sitting with a blank Visual Studio project in front of you, ready to start up that new killer application of yours, where do you actually start?

Almost every programmer has heard about and used the highly acclaimed 3-layered architecture:

[Figure: The classic 3-layered architecture – presentation, business and data layers]

My guess is that most developers have a tendency to start developing either from the top with the presentation layer or from the bottom with the data layer. However, even if it is understandable to have some initial thoughts about the presentation layer and the data layer, the recommended starting point is the business layer – or the domain layer as some would call it.

The business layer is the natural starting point, because the business layer should have no dependencies to either the presentation layer or the data layer – or any 3rd party libraries for that matter. It should be possible to build and test the business layer in total isolation from the two other layers.

This might sound a bit awkward to some, because most developers have probably made tons of projects where the business layer depends on the data layer. I know for sure that I have… But this is not the right way to do it. The business layer should consist of nothing but POCOs (Plain Old CLR Objects) – or POJOs (Plain Old Java Objects) if you are in the Java world – and abstractions. This means that the business layer should not reference anything other than the basic .NET framework libraries (i.e. System, System.Core etc. – those files that are automatically added to your project file references when you create a new project in Visual Studio). No other dependencies whatsoever. Period!

Why is this so important? Because, besides making unit testing simpler, it dramatically increases the maintainability and reusability of the whole code base.

Doing it wrong…

Let’s say you have a classical application with some sort of product service in the business layer that can perform CRUD operations on products in a repository in the data layer. This very often leads to a dependency graph like the one shown below:

[Figure: Dependency graph – the business layer depends directly on the data layer]

This is because somewhere in the product service you will “new” up a dependency to the product repository:
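In C#, that could look something like this simplified sketch:

    public class ProductService
    {
        private readonly ProductRepository repository;

        public ProductService()
        {
            // The business layer itself creates – and thereby depends on –
            // the concrete data layer class.
            this.repository = new ProductRepository();
        }
    }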

So, you have a situation where the higher level component (the business layer) depends directly on a lower level component (the data layer). This has at least the following disadvantages:

  • The product service cannot be reused in isolation in another context
  • The product service cannot be unit tested in isolation
  • One product repository implementation (using for example a RDBMS for persistence) cannot easily be replaced by another implementation (using for example an XML file for persistence)

Doing it Right…

Let’s say that, instead of “newing” up an instance of the product repository in the product service, you inject the repository into the service through a constructor argument:
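In sketch form:

    public class ProductService
    {
        private readonly IProductRepository repository;

        // The concrete repository is supplied from the outside.
        public ProductService(IProductRepository repository)
        {
            this.repository = repository;
        }
    }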

Now the dependency graph looks like this:

[Figure: Inverted dependency graph – the data layer depends on the business layer’s abstraction]

An abstraction of the product repository in the form of an interface is introduced. It is the high-level component (the business layer) that owns this abstraction. The product repository is an implementation of this interface.

But now the data layer is depending on the business layer? Yes, indeed. Everything is turned upside-down, because now this object graph adheres to the Dependency Inversion Principle (DIP):

High-level modules should not depend on low-level modules. Both should depend on abstractions.

Abstractions should not depend upon details. Details should depend upon abstractions.

DIP was first introduced by Robert C. Martin (Uncle Bob) back in the mid-90s and is one of the main principles of object-oriented design.

Instead of directly creating an instance of the repository using the new keyword, the product service has relinquished control of the repository dependency and delegated that responsibility to a third party. This technique is called Constructor Injection – a sub-pattern of Dependency Injection. How this instance of the repository is created, and by whom, is of no concern to the product service. You can push this burden all the way to the top of the application into a single component – what Mark Seemann refers to in his book Dependency Injection in .NET as the Composition Root of the application. Depending on the type of application, the actual composition root can vary, but more about this in the last post in this series.

Anyway, now the product service can easily be reused in another context and unit tested in isolation, because it no longer depends on any particular implementation of the product repository. In the unit test, the fixture of the test class itself can act as the composition root, creating an instance of some mock repository that can be injected into the product service. Furthermore, an alternative implementation of the repository can easily be created and used in the application.

So, it all begins with the business layer. During the lifetime of the project, make it a habit to regularly check those project references in the business layer project file to make sure no reference to some obscure 3rd party library – or even worse, the presentation layer or the data layer – has accidentally sneaked in. You should consider using some tool, or even making automatic tests, to check that the POCO policy for the business layer is enforced.