Monthly Archives: August 2013

Putting it all Together – DI part 4

Dependency Injection – and the low coupling between components that it leads to – goes hand in hand with high cohesion. It is now time to grab the individual components and put them together to form a “real” application.

In the first post in this series, it was explained how the whole point of dependency injection is to remove the burden of composing objects from the individual components themselves, and instead delegate this responsibility to a single well-defined location as close as possible to the entry point of the application – also known as the composition root of the application.

This object composition can very well be done manually by simply “newing” up all the objects – which is sometimes referred to as “poor man’s DI” – but a good alternative is to leave the responsibility of resolving the object graph to a DI container. A DI container is a third-party library that can automate object composition and lifetime management. Furthermore, some DI containers support runtime interception, which is a very powerful technique for solving cross-cutting concerns such as logging or authorization (more about this in a later post).

And yes, when using a DI container you are, ironically enough, introducing a new dependency to solve the dependencies! But obviously, the DI container object itself should be created manually, and the DI container library should only be referenced from the composition root.

Anyway, here is an example of wiring up the application using Microsoft’s DI container, Unity, in an ASP.NET MVC application. Adding a reference to the Unity.Mvc3 library (for example using the NuGet Package Manager) will automatically create a static helper class called Bootstrapper. In the BuildUnityContainer() method you need to register which concrete type should be mapped to the IRepository<Product> abstraction at run time. In this case an XmlProductRepository class is used. XmlProductRepository itself has a dependency on a string defining the path to the XML file used as the physical repository.
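The registration could look something like this (a sketch – the surrounding Bootstrapper boilerplate is generated by the Unity.Mvc3 package; the file path is the one used later in this post):

```csharp
using System.Web.Mvc;
using Microsoft.Practices.Unity;
using Unity.Mvc3;

public static class Bootstrapper
{
    public static void Initialise()
    {
        var container = BuildUnityContainer();
        DependencyResolver.SetResolver(new UnityDependencyResolver(container));
    }

    private static IUnityContainer BuildUnityContainer()
    {
        var container = new UnityContainer();

        // Map the IRepository<Product> abstraction to the XML-based
        // implementation, and supply the file path its constructor needs.
        container.RegisterType<IRepository<Product>, XmlProductRepository>(
            new InjectionConstructor(@"c:\data\repository.xml"));

        return container;
    }
}
```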

To use ProductService in one of the controllers (e.g. the HomeController), you inject it using constructor injection:
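A sketch of what that could look like (the Index() action shown here is just one way the controller might use the service – the GetAll() call is an illustrative assumption):

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    private readonly ProductService productService;

    public HomeController(ProductService productService)
    {
        // The DI container supplies this instance at run time.
        this.productService = productService;
    }

    public ActionResult Index()
    {
        // Use the injected service, e.g. to retrieve the products to display.
        return View(this.productService.GetAll());
    }
}
```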

That’s it. The DI container takes care of the rest.

The first time you are introduced to the concept of DI containers, you might become a bit mystified, and even worried, about all the “magic” that apparently goes on behind the scenes. I certainly know that I was. However, digging a bit deeper into what actually goes on might help dispel this scepticism. This is what happens during an incoming request for the Home page of the application:

[Sequence diagram: resolving the HomeController object graph during an incoming request]

MvcApplication receives a request to go to the Home page. The DependencyResolver is asked to resolve HomeController (i.e. create the whole object graph) – and this is where the magic starts! The dependency resolver detects the dependencies (HomeController -> ProductService -> IRepository<Product> -> string) and starts creating the object graph from the bottom up. First, an instance of XmlProductRepository is created. During registration you declared that this was the concrete type to be used for the IRepository<Product> abstraction, and you also declared the path to the physical file “c:\data\repository.xml”. Then this repository instance is injected into ProductService, using constructor injection, when the ProductService instance is created. Finally, this ProductService instance is injected into HomeController when the HomeController instance is created. The dependency resolver has done its job for this incoming request.

Subsequently, the Index() method of the HomeController is called, and the HomeController can use the injected ProductService to retrieve a list of products, which can then be displayed in the browser.

This is how the dependency graph of your application looks:

[Diagram: the dependency graph of the application]

MvcApplication (found in the Global.asax file) acts as the composition root, taking care of object composition. The business component has no dependencies on other components, so the Dependency Inversion Principle is still respected.

Unit Testing Made Easy – DI part 3

I claimed in a previous post that the low coupling achieved with dependency injection makes the code base more testable – i.e. properly prepared for unit testing. Let’s dig a bit deeper into that assertion.

The ProductService class is an obvious candidate for unit testing. It is a relatively small component with a well-defined responsibility (adhering to the Single Responsibility Principle). It is also properly isolated from its dependency (the repository), by an abstraction (the interface). Let’s create a unit test method for the ProductService component:
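It could look something like this (a sketch – the Product price property, the GetDiscountedPrice() method and the 10% discount are illustrative assumptions; MockRepository<Product> is the test double described below):

```csharp
using Xunit;

public class ProductServiceTests
{
    [Fact]
    public void GetDiscountedPriceWillReturnReducedPrice()
    {
        // Fixture setup: a repository test double holding a single product.
        var mockRepository = new MockRepository<Product>();
        mockRepository.Create(new Product { Id = 1, Name = "Lemonade", Price = 100m });
        var sut = new ProductService(mockRepository);

        // Exercise the system under test.
        var discountedPrice = sut.GetDiscountedPrice(1);

        // Verify the expected outcome (assuming a 10% discount).
        Assert.Equal(90m, discountedPrice);
    }
}
```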

This unit test method verifies that the ProductService functionality for calculating a discounted price of a product works correctly. It follows the standard sequence of a unit test: first it sets up the fixed baseline environment for the test (also called the test fixture), then it exercises the system under test (in this case the product service), and finally it verifies the expected outcome. A “tear down” phase is not necessary, as the fixture objects automatically go out of scope and will be garbage-collected.

As ProductService does not care about the actual implementation of the product repository dependency, you can inject a “stand-in” for this dependency in the test. This stand-in is better known as a test double. The mockRepository variable holds an instance of such a product repository test double.

In the final application you are probably going to implement the repository so that the products are persisted in, for example, a SQL database or a file, but the elegant thing is that, at this moment in time, you do not need to care about this. In the context of the unit test, you can just make a mock implementation of the repository which does not persist the products at all, but simply keeps them in memory. This is our test double. Obviously, an implementation like this would never make it into the final application, but it is sufficient for testing the ProductService functionality in isolation.

Such a mock implementation of a repository is easily done. Of course, you can make a generic version that can be used as a test double for all entity repositories:
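A minimal sketch, reusing the illustrative method names from the IRepository<T> interface of the previous post:

```csharp
using System.Collections.Generic;

public class MockRepository<T> : IRepository<T> where T : IEntity
{
    // Entities are held in memory only – no persistence whatsoever.
    private readonly Dictionary<int, T> entities = new Dictionary<int, T>();

    public T Get(int id) { return this.entities[id]; }
    public IEnumerable<T> GetAll() { return this.entities.Values; }
    public void Create(T entity) { this.entities.Add(entity.Id, entity); }
    public void Update(T entity) { this.entities[entity.Id] = entity; }
    public void Delete(int id) { this.entities.Remove(id); }
}
```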

A Dictionary object is used to hold the entities in memory during the test.

Testability is not necessarily the main purpose for doing dependency injection, but the ability to replace dependencies with test-specific mock objects is indeed a very useful by-product.

By the way, the unit test method above is written using the xUnit.net testing framework, which explains the Fact attribute and the Equal assertion. xUnit.net is a nice and very lean testing framework – compared to, for example, MSTest, which is the one integrated with Visual Studio. With xUnit.net you don’t need to create a specific unit test project. Also, you get rid of the auto-generated .vsmdi and .testsettings files from MSTest.

To further refine and automate your unit tests, you should consider using supplementary unit test frameworks like AutoFixture and Moq to help you streamline fixture setup and mocking. Both are available from within the “NuGet Package Manager” Visual Studio Extension. I have written a comprehensive CodeProject article about using xUnit.net, AutoFixture and Moq.
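As a small taste, Moq would let you replace the hand-written test double above with a dynamically generated one (a sketch, reusing the illustrative names from the test above):

```csharp
// Requires the Moq NuGet package (using Moq;).
// Create a repository stub and define just the behaviour this test needs.
var repositoryStub = new Mock<IRepository<Product>>();
repositoryStub.Setup(r => r.Get(1))
              .Returns(new Product { Id = 1, Name = "Lemonade", Price = 100m });

var sut = new ProductService(repositoryStub.Object);
```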

Using Abstractions to Ensure Low Coupling – DI part 2

As shown in my previous post, the abstraction of the data layer component handling products – in the form of the IProductRepository interface – was the key to making the business layer independent of the data layer.

Generally, abstractions play a crucial part in ensuring low coupling between software components. These abstractions allow you to define the behaviour of a component without actually caring about the concrete type and implementation behind the abstraction. Low coupling is good for several reasons. It makes your code more extensible, maintainable and, maybe most importantly, more testable. I will come back to the latter in another post.

Of course you cannot entirely decouple everything. Even in the purest POCO component, there will obviously be dependencies on types in the .NET Framework. Rather, you should strive for the “natural” level of decoupling – whatever that is. The abstractions should form well-defined points of interaction – also called seams – between the various components in your system, each with well-defined responsibilities (adhering to the Single Responsibility Principle).

For example, it definitely makes sense to create a seam between a component handling the business functionality of a product and the component that is responsible for persistence (i.e. between the ProductService and the ProductRepository in the example in my previous post). If you did not introduce a seam here, it would be very difficult to replace, for example, a SQL Server implementation in the data layer with some other kind of persistence technology.

Abstractions are typically defined using interfaces. The IProductRepository could for example look like this:
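A sketch (the exact CRUD method signatures are illustrative):

```csharp
using System.Collections.Generic;

public interface IProductRepository
{
    Product Get(int id);
    IEnumerable<Product> GetAll();
    void Create(Product product);
    void Update(Product product);
    void Delete(int id);
}
```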

However, as our application is likely to deal with entities other than products (for example customers and orders), it would make sense to generalize this interface using C# generics:
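The generalized interface could look like this (same illustrative method set, now constrained to IEntity):

```csharp
using System.Collections.Generic;

public interface IRepository<T> where T : IEntity
{
    T Get(int id);
    IEnumerable<T> GetAll();
    void Create(T entity);
    void Update(T entity);
    void Delete(int id);
}
```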

IEntity is a simple interface ensuring that an entity always has a unique ID and a name:
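Something like:

```csharp
public interface IEntity
{
    int Id { get; }
    string Name { get; }
}
```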

Another fundamental object-oriented design principle is the Interface Segregation Principle:

Clients should not be forced to depend on methods that they do not use.

This basically means that interfaces preferably should be as small and specific as possible. Actually, an interface with a single method can be a very good interface.

You might have some entity services for which you do not want to expose full CRUD functionality – a read-only entity service, so to speak. For this purpose, it would make sense to define a specific IReadOnlyRepository interface:
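A sketch, keeping only the read operations from the illustrative method set above:

```csharp
using System.Collections.Generic;

public interface IReadOnlyRepository<T> where T : IEntity
{
    T Get(int id);
    IEnumerable<T> GetAll();
}
```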

Then the IRepository interface could be simplified to an extension of the IReadOnlyRepository interface:
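Like this:

```csharp
public interface IRepository<T> : IReadOnlyRepository<T> where T : IEntity
{
    // Get and GetAll are inherited from IReadOnlyRepository<T>.
    void Create(T entity);
    void Update(T entity);
    void Delete(int id);
}
```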

This is interface segregation in action. You have transformed the one big “header” interface into a number of smaller, more specific role interfaces.

Now, you have a nice well-defined seam between the product service – or any other entity service – and the corresponding repository.

Think Business First – DI part 1

When sitting with a blank Visual Studio project in front of you, ready to start up that new killer application of yours, where do you actually start?

Almost every programmer has heard about and used the highly acclaimed 3-layered architecture:

[Figure: the classic 3-layered architecture – presentation layer, business layer, data layer]

My guess is that most developers have a tendency to start developing either from the top with the presentation layer or from the bottom with the data layer. However, even if it is understandable to have some initial thoughts about the presentation layer and the data layer, the recommended starting point is the business layer – or the domain layer as some would call it.

The business layer is the natural starting point, because the business layer should have no dependencies on either the presentation layer or the data layer – or any 3rd party libraries for that matter. It should be possible to build and test the business layer in total isolation from the two other layers.

This might sound a bit awkward to some, because most developers have probably made tons of projects where the business layer depends on the data layer. I know for sure that I have… But this is not the right way to do it. The business layer should consist of nothing other than POCOs (Plain Old CLR Objects) – or POJOs (Plain Old Java Objects) if you are in the Java world – and abstractions. This means that the business layer should not reference anything other than the basic .NET Framework libraries (i.e. System, System.Core etc. – those files that are automatically added to your project references when you create a new project in Visual Studio). No other dependencies whatsoever. Period!

Why is this so important? Because, besides making unit testing far simpler to establish, it dramatically increases the maintainability and reusability of the whole code base.

Doing it wrong…

Let’s say you have a classic application with some sort of product service in the business layer that can perform CRUD operations on products in a repository in the data layer. This very often leads to a dependency graph like the one shown below:

[Figure: dependency graph – the business layer depends directly on the data layer]

This is because somewhere in the product service you will “new” up a dependency to the product repository:
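Something along these lines:

```csharp
public class ProductService
{
    private readonly ProductRepository repository;

    public ProductService()
    {
        // Hard-wired dependency on a concrete data layer class.
        this.repository = new ProductRepository();
    }
}
```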

So, you have a situation where the higher level component (the business layer) depends directly on a lower level component (the data layer). This has at least the following disadvantages:

  • The product service cannot be reused in isolation in another context
  • The product service cannot be unit tested in isolation
  • One product repository implementation (using for example an RDBMS for persistence) cannot easily be replaced by another implementation (using for example an XML file for persistence)

Doing it Right…

Let’s say that instead of “newing” up an instance of the product repository in the product service, you inject the repository into the service through a constructor argument:
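A sketch of the refactored service:

```csharp
public class ProductService
{
    private readonly IProductRepository repository;

    public ProductService(IProductRepository repository)
    {
        // The service only knows the abstraction; the concrete
        // implementation is supplied from the outside.
        this.repository = repository;
    }
}
```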

Now the dependency graph looks like this:

[Figure: inverted dependency graph – the data layer now depends on an abstraction owned by the business layer]

An abstraction of the product repository in the form of an interface is introduced. It is the high-level component (the business layer) that owns this abstraction. The product repository is an implementation of this interface.

But now the data layer is depending on the business layer? Yes, indeed. Everything is turned upside-down, because now this object graph adheres to the Dependency Inversion Principle (DIP):

High-level modules should not depend on low-level modules. Both should depend on abstractions.

Abstractions should not depend upon details. Details should depend upon abstractions.

DIP was first introduced by Robert C. Martin (Uncle Bob) back in the mid-90s and is one of the main principles of object-oriented design.

Instead of directly creating an instance of the repository using the new keyword, the product service has relinquished control of the repository dependency and delegated that responsibility to a third party. This technique is called Constructor Injection – a sub-pattern of Dependency Injection. How this instance of the repository is created, and by whom, is of no concern to the product service. You can push this burden all the way to the top of the application into a single component – what Mark Seemann refers to in his book Dependency Injection in .NET as the Composition Root of the application. Depending on the type of application, the actual composition root can vary, but more about this in the last post in this series.

Anyway, now the product service can easily be reused in another context and unit tested in isolation, because it no longer depends on any particular implementation of the product repository. In a unit test, the fixture of the test class itself can act as the composition root, creating an instance of some mock repository that can be injected into the product service. Furthermore, an alternative implementation of the repository can easily be created and used in the application.

So, it all begins with the business layer. During the lifetime of the project, make it a habit to regularly check the project references in the business layer project file to make sure no reference to some obscure 3rd party library – or even worse, the presentation layer or the data layer – has accidentally sneaked in. You should consider using a tool, or even writing automated tests, to check that the POCO policy for the business layer is enforced.
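For example, a simple reflection-based test could act as such a guard (a hypothetical sketch using xUnit.net – the allow-list of assembly names must be adapted to the actual project):

```csharp
using Xunit;

public class ArchitectureTests
{
    [Fact]
    public void BusinessLayerOnlyReferencesFrameworkAssemblies()
    {
        // Hypothetical allow-list of basic framework assemblies.
        var allowed = new[] { "mscorlib", "System", "System.Core" };

        // ProductService lives in the business layer assembly.
        var businessAssembly = typeof(ProductService).Assembly;

        foreach (var reference in businessAssembly.GetReferencedAssemblies())
        {
            Assert.Contains(reference.Name, allowed);
        }
    }
}
```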