Tag Archives: C#

How to Avoid Producing Legacy Code at the Speed of Typing

This blog post provides a recipe for how to avoid producing legacy code at the speed of typing by using a proper architecture and unit testing.

Introduction

As an enterprise software developer, you are in a constant fight against producing legacy code – code that is no longer worthy of maintenance or support. You are constantly struggling to avoid rewriting stuff repeatedly in the faint hope that next time you will get it just right.

The characteristics of legacy code are, among others, poor design and architecture and dependencies on obsolete frameworks or 3rd party components. Here are a few typical examples that you might recognize:

You and your team produced a nice, feature-rich Windows application. Afterwards, you realize that what was really needed was a browser or mobile application. That is when you recognize the tremendous effort it would take to provide an alternative UI to your application, because you have embedded too much domain functionality within the UI itself.

Another scenario might be that you made a backend that is deeply entangled with a particular ORM – such as NHibernate or Entity Framework – or highly dependent on a certain RDBMS. At some point, you want to change backend strategy to avoid the ORM and use file-based persistence, but then you realize it is practically impossible because your domain functionality and the data layer are tightly coupled.

In both of the above scenarios, you are producing legacy code at the speed of typing.

However, there is still hope. By adapting a few simple techniques and principles, you can change this doomed pattern of yours forever.

The Architectural Evolution

In the following, I will describe 3 phases in a typical architectural evolution for a standard enterprise software developer. Almost any developer will make it to phase 2, but the trick is to make it all the way through phase 3, which will eventually turn you into an architectural Ninja.

[Figure: Evolution to Ninja]

Phase 1 – Doing it Wrong

Most developers have heard about layered architecture, so very often the first attempt at an architecture will look something like this – two layers with separated responsibilities for frontend and backend functionality:

[Figure: Phase 1 – two layers: frontend and backend]

So far so good, but quite soon you will realize that it is a tremendous problem that the domain logic of your application is entangled in the platform-dependent frontend and backend.

Phase 2 – A Step Forward

Thus, the next attempt is to introduce a middle layer – a domain layer – comprising the true business functionality of your application:

[Figure: Phase 2 – frontend, domain layer and backend]

This architecture looks deceptively well-structured and de-coupled. However, it is not. The problem is the red dependency arrow indicating that the domain layer has a hard-wired dependency on the backend – typically because you are creating instances of backend classes in the domain layer using the new keyword (C# or Java). The domain layer and the backend are tightly coupled. This has numerous disadvantages:

  • The domain layer functionality cannot be reused in isolation in another context. You would have to drag along its dependency, the backend.
  • The domain layer cannot be unit tested in isolation. You would have to involve the dependency, the backend.
  • One backend implementation (using for example an RDBMS for persistence) cannot easily be replaced by another implementation (using for example file persistence).

All of these disadvantages dramatically reduce the potential lifetime of the domain layer. That is why you are producing legacy code at the speed of typing.

Phase 3 – Doing it Right

What you have to do is actually quite simple. You have to turn the direction of that red dependency arrow around. It is a subtle difference, but one that makes all the difference:

[Figure: Phase 3 – the dependency arrow inverted so that the backend depends on the domain layer]

This architecture adheres to the Dependency Inversion Principle (DIP) – one of the most important principles of object-oriented design. The point is that once this architecture is established – once the direction of that dependency arrow is turned around – the domain layer dramatically increases its potential lifetime. UI requirements and trends may switch from Windows to browsers or mobile devices, and your preferred persistence mechanism might change from being RDBMS-based to file-based, but all of that is now relatively easily exchangeable without modifying the domain layer, because both the frontend and the backend are decoupled from it. Thus, the domain layer becomes a code library that you theoretically never have to replace – at least as long as your business domain and overall programming framework remain unchanged. Now, you are efficiently fighting that legacy code.

On a side note, let me give you one simple example of how to implement DIP in practice:

Maybe you have a product service in the domain layer that can perform CRUD operations on products in a repository defined in the backend. This very often leads to a dependency graph like the one shown below, with the dependency arrow pointing in the wrong direction:

[Figure: Dependency graph – the product service in the domain layer depends on the product repository in the backend]

This is because somewhere in the product service you will “new” up a dependency to the product repository:
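In code, that could look something like this (a sketch – the member names are illustrative, but the idea is the same):

    public class ProductService
    {
        private readonly ProductRepository repository;

        public ProductService()
        {
            // The service creates its own dependency - the domain layer
            // and the backend are now tightly coupled.
            this.repository = new ProductRepository();
        }
    }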

To invert the direction of the dependency using DIP, you must introduce an abstraction of the product repository in the form of an IProductRepository interface in the domain layer and let the product repository be an implementation of this interface:

[Figure: Dependency graph – the product repository implements the IProductRepository abstraction owned by the domain layer]

Now, instead of “newing” up an instance of the product repository in the product service, you inject the repository into the service through a constructor argument:
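Something along these lines (a sketch):

    public class ProductService
    {
        private readonly IProductRepository repository;

        public ProductService(IProductRepository repository)
        {
            // The service only knows the abstraction - any implementation
            // of IProductRepository can be supplied from the outside.
            this.repository = repository;
        }
    }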

This is known as dependency injection (DI). I have previously explained this in much more detail in a blog post called Think Business First.

Once you have established the correct overall architecture, the objective of the fight against legacy code should be obvious: move as much functionality as you can into the domain layer. Make those frontend and backend layers shrink and make that domain layer grow fat:

[Figure: Thin frontend and backend layers, fat domain layer]

A very convenient by-product of this architecture is that it makes it easy to establish unit tests of the domain functionality. Because of the de-coupled nature of the domain layer and the fact that all of its dependencies are represented by abstractions (such as an interface or an abstract base class), it is quite easy to establish fake objects of these abstractions and use them when establishing unit test fixtures. So it is “a walk in the park” to guard the entire domain layer with unit tests. You should strive for nothing less than 100% unit test coverage – making your domain layer extremely robust and solid as a rock, which again will increase its lifetime.

You are probably starting to realize that not only traditional frontends or backends, but all other components – including the unit tests or, for example, an HTTP-based Web API – should act as consumers of the domain layer. Thus, it makes a lot of sense to depict the architecture as onion layers:

[Figure: Onion layers – the domain layer at the center]

The outer layer components consume the domain library code – either by providing concrete implementations of domain abstractions (interfaces or base classes) or as a direct consumer of domain functionality (domain model and services).

However, still remember: the direction of coupling is always toward the center – toward the domain layer.

At this point, it might all seem a bit theoretical and, well…, abstract. Nevertheless, it does not take a lot to do this in practice. In another CodeProject article of mine, I have described and provided some sample code that complies with all of the principles in this article. The sample code is simple, yet very close to real production code.

Summary

Being an enterprise software developer is a constant battle to avoid producing legacy code at the speed of typing. To prevail, do the following:

  • Make sure all those dependency arrows point toward the central and independent domain layer by applying the Dependency Inversion Principle (DIP) and Dependency Injection (DI).
  • Constantly nourish the domain layer by moving as much functionality as possible into it. Make that domain layer grow fat and heavy while shrinking the outer layers.
  • Cover every single functionality of the domain layer by unit tests.

Follow these simple rules and it will all come together. The code that you write will potentially have a dramatically longer lifetime than before because:

  • The domain layer functionality can be reused in many different contexts.
  • The domain layer can be made robust and solid as a rock with 100% unit test coverage.
  • Implementations of domain layer abstractions (for example persistence mechanisms) can easily be replaced by alternative implementations.
  • The domain layer is easy to maintain.

How I came to love COM interoperability

Well, maybe the title of this post is slightly exaggerated, but this is the story of how I – despite strong reluctance – successfully managed to expose a .NET library of mine to COM. Furthermore, during this process, my original .NET library became even better, and I actually ended up kind of liking the additional COM API.

Obviously, like most other .NET developers, I have had to deal with the tremendous amount of unmanaged code out there. Even if I would rather avoid it, from time to time I have had to use the .NET interoperability services to consume some ActiveX component or other type of unmanaged legacy code. And I have learned to live with it. But why would someone ever think of exposing a nice and clean .NET library to a development platform (COM) that was deprecated decades ago?

Anyway, recently I found myself in a situation where I was left no choice. I had made this terrific .NET library with loads of nice functionality when the client required that the same functionality be made available through COM.

There are lots of explanations out there on the Internet of how to expose .NET assemblies to COM – for example this article on CodeProject. The reason why I bother to write this post anyway is that none of the resources I have found describes the exact approach that I eventually chose for my project. Also, I did not find any resources giving an overview of all the small challenges I met in the process. And as we all know, the devil is in the detail.

The Example Code

The example code for this post is based on the legendary “Hello World!” example. Compared to the normally very simple structure of such examples, my code might seem unnecessarily complicated for the simple task of displaying the famous text message, but it is still relatively simple and elegantly illustrates all of the challenges that I met when working with the real code base.

The central class in the example library is a Greeter class that has a dependency on an IMessageWriter instance that is injected into the Greeter class through the constructor (yes, this is dependency injection in action):
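In outline, the class could look like this (a sketch – the exact Greet signatures are discussed in the sections below):

    public class Greeter
    {
        private readonly IMessageWriter messageWriter;

        public Greeter(IMessageWriter messageWriter)
        {
            // The writer dependency is injected through the constructor.
            this.messageWriter = messageWriter;
        }

        public void Greet(GreetingType greetingType)
        {
            // The greeting phrase is a detail of the greeting type.
            this.messageWriter.Write(greetingType.Greeting);
        }
    }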

The IMessageWriter instance is used in the Greet method to write the message. The GreetingType decides the exact phrasing of the greeting (much more about this later). The IMessageWriter interface contains a single Write method:
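Something like:

    public interface IMessageWriter
    {
        void Write(string message);
    }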

The example library comes with a single concrete implementation of the IMessageWriter interface – a ConsoleMessageWriter that writes a text message to the console:
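A sketch:

    public class ConsoleMessageWriter : IMessageWriter
    {
        public void Write(string message)
        {
            // Writes the message to standard output.
            Console.WriteLine(message);
        }
    }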

From a console application the following code creates a silly greeting:
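Roughly like this (GreetingType.Silly() is a guessed factory name – more about the greeting types below):

    var greeter = new Greeter(new ConsoleMessageWriter());
    greeter.Greet(GreetingType.Silly());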

The Overall Approach

Now, let’s dig into the matter. Probably because of my initial reluctance to deal with COM interoperability at all, I decided to make a clear rule for myself – a self-imposed dogma, so to speak. I would under no circumstances “pollute” my original .NET library with any COM-related stuff such as COM-specific interfaces or any of the ComVisible, Guid or ClassInterface attributes. I would allow no references to the System.Runtime.InteropServices namespace whatsoever. Also, I would not accept major degradations of my original library. So I ended up with a project structure like this:

[Figure: Project structure – ClassLibrary.Interop depends on ClassLibrary]

All the COM-specific stuff is encapsulated in the ClassLibrary.Interop assembly while my original ClassLibrary assembly remains a clean .NET library.

In the ClassLibrary.Interop assembly I explicitly define all the COM interfaces and decorate them with the Guid attribute:
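For the Greeter, the interface could look along these lines (the GUID is a placeholder – generate your own; the member list is built up over the following sections):

    using System.Runtime.InteropServices;

    namespace ClassLibrary.Interop
    {
        [Guid("11111111-2222-3333-4444-555555555501")]   // placeholder GUID
        public interface IGreeter
        {
            void Greet();
        }
    }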

Furthermore, I create new classes inheriting from the original ones and implementing the corresponding explicitly defined COM interface. I decorate the classes with the Guid attribute and the ClassInterface attribute with the ClassInterfaceType.None parameter. The ClassInterfaceType.None parameter prevents the class interface from being automatically generated when the class metadata is exported to a COM type library. So in the example below, only the members of the IGreeter interface will be exposed:
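A sketch (again with a placeholder GUID; the default constructor this relies on is covered under “Constructors with parameters” below):

    namespace ClassLibrary.Interop
    {
        [Guid("11111111-2222-3333-4444-555555555502")]   // placeholder GUID
        [ClassInterface(ClassInterfaceType.None)]
        public class Greeter : ClassLibrary.Greeter, IGreeter
        {
            // All COM-visible members come from the explicitly defined
            // IGreeter interface - no class interface is generated.
        }
    }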

I don’t bother decorating the individual classes with the ComVisible attribute because the whole point is that in the ClassLibrary.Interop assembly I only deal with .NET types that I want to expose for COM, so instead I declare this once and for all in the AssemblyInfo file:
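That is, something like:

    // In ClassLibrary.Interop\Properties\AssemblyInfo.cs:
    [assembly: ComVisible(true)]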

Dealing with the Challenges

As mentioned earlier, I met a few challenges on the way – mostly because of .NET/C# features that are not supported in COM. In the following, I will describe the individual challenges and the solutions to them.

Constructors with parameters

COM does not support constructors with parameters. COM requires default (parameterless) constructors.

As shown earlier, the Greeter class uses dependency injection and requires an IMessageWriter instance provided through its constructor.

So what I did was to create an additional protected parameterless default constructor and a protected MessageWriter property. The fact that these two additional members are protected is an important point, because then I can use them from my Greeter extension class in the ClassLibrary.Interop assembly to provide COM interoperability while still hiding these members from “normal” use of the Greeter class within the .NET Framework – thus forcing the consumer to use the public constructor:
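A sketch of the extended Greeter in the original library:

    public class Greeter
    {
        private IMessageWriter messageWriter;

        public Greeter(IMessageWriter messageWriter)
        {
            this.messageWriter = messageWriter;
        }

        // For the COM interop subclass only - hidden from "normal"
        // consumers, who are forced to use the public constructor.
        protected Greeter()
        {
        }

        protected IMessageWriter MessageWriter
        {
            get { return this.messageWriter; }
            set { this.messageWriter = value; }
        }

        // (Greet methods and greeting history as before.)
    }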

Then I can introduce an Initialize method in the COM interface of the Greeter class and use this method to set the MessageWriter property.
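In the ClassLibrary.Interop Greeter, that method simply sets the protected property (a sketch):

    public void Initialize(IMessageWriter messageWriter)
    {
        // Takes over the role of the original constructor argument.
        MessageWriter = messageWriter;
    }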

So now, from a COM consumer I will have to first create the Greeter object using the default constructor and then call the Initialize method.

Overloaded methods

Overloaded methods are not supported in COM. In the Greeter class I do have two Greet methods with different signatures – one always making a neutral greeting and one where I can provide a specific greeting type as a parameter:
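That is (a sketch):

    public void Greet()
    {
        // Always greets neutrally.
        Greet(GreetingType.Neutral());
    }

    public void Greet(GreetingType greetingType)
    {
        this.messageWriter.Write(greetingType.Greeting);
    }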

The only way to deal with this problem is to introduce different names in the COM interface:
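For example (the names here are my stand-ins – the point is simply that each COM method needs a unique name):

    [Guid("11111111-2222-3333-4444-555555555501")]   // placeholder GUID
    public interface IGreeter
    {
        void Initialize(IMessageWriter messageWriter);

        // The neutral greeting.
        void Greet();

        // The parameterized variant, renamed; IGreetingType is the
        // corresponding COM interface for greeting types (see below).
        void GreetAs(IGreetingType greetingType);
    }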

Generics

.NET generics is gibberish to COM. So if you have made any generic classes or methods, or if you use any of the built-in generic types, you have to be a bit creative. In the Greeter class I am using the generic ReadOnlyCollection<> to keep the greeting history:
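A sketch (inside the original Greeter; requires System.Collections.Generic and System.Collections.ObjectModel):

    private readonly List<string> greetingHistory = new List<string>();

    public ReadOnlyCollection<string> GreetingHistory
    {
        get { return this.greetingHistory.AsReadOnly(); }
    }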

The solution to this problem is pretty straightforward. Simply let the Greeter extension in the ClassLibrary.Interop assembly return an array of strings instead:
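For example:

    // In the ClassLibrary.Interop Greeter - a plain array is COM-friendly.
    public string[] GetGreetingHistory()
    {
        return GreetingHistory.ToArray();   // requires System.Linq
    }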

Inheritance

One challenge that I met is in a different category than the others. This challenge was not due to missing COM support for certain .NET/C# features. Rather, it was due to my self-imposed dogma about keeping my original .NET library free of COM-related stuff. As I wanted to extend the original .NET types with COM interoperability using inheritance, only inheritable types could be extended. .NET types like struct and enum are not inheritable.

So I had to change a couple of structs to classes in my original library, which didn’t really bother me too much.

The enums, however, were a bit trickier. What I did was to introduce my own Enumeration class instead of using enums. This was one of the changes that I actually consider a major improvement to my original code. I have always found it annoying that enums cannot be extended with, for example, a display name (one including spaces, say). By introducing an Enumeration class, exactly this can be done:
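A minimal version of such a class (modelled on the commonly used enumeration-class pattern; the original may differ in details):

    public abstract class Enumeration
    {
        protected Enumeration(int value, string displayName)
        {
            Value = value;
            DisplayName = displayName;
        }

        public int Value { get; private set; }

        // A human-readable name - exactly what a plain enum cannot carry.
        public string DisplayName { get; private set; }

        public override string ToString()
        {
            return DisplayName;
        }
    }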

The whole discussion about using enumeration classes instead of enums is worth a post by itself, but another advantage worth mentioning is that this approach can reduce the number of switch statements that inevitably follow from the usage of enums. Look how elegantly the greeting text, in the form of the Greeting property, has become a detail of a greeting type:
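A sketch of the GreetingType base class:

    public abstract class GreetingType : Enumeration
    {
        protected GreetingType(int value, string displayName)
            : base(value, displayName)
        {
        }

        // Each concrete greeting type carries its own phrasing -
        // no switch statement needed in the Greeter.
        public abstract string Greeting { get; }

        // GreetingTypeCasual follows the same pattern as the two shown below.
        public static GreetingType Neutral() { return new GreetingTypeNeutral(); }
        public static GreetingType Casual() { return new GreetingTypeCasual(); }
        public static GreetingType Silly() { return new GreetingTypeSilly(); }
    }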

Now the individual greeting types can be defined, e.g. a neutral greeting type:
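For instance (the phrasing is a guess):

    public class GreetingTypeNeutral : GreetingType
    {
        public GreetingTypeNeutral()
            : base(1, "Neutral")
        {
        }

        public override string Greeting
        {
            get { return "Hello World!"; }   // guessed phrasing
        }
    }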

Or a silly greeting type:
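Correspondingly (again with guessed phrasing):

    public class GreetingTypeSilly : GreetingType
    {
        public GreetingTypeSilly()
            : base(3, "Silly")
        {
        }

        public override string Greeting
        {
            get { return "Howdy-doody World!"; }   // guessed phrasing
        }
    }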

Static methods

The GreetingType enumeration class brings us to the last of the challenges. In the GreetingType enumeration class I define 3 static methods – one for each of the greeting types.

But unfortunately static methods are not supported in COM. So, for the COM interface I have to expose the 3 greeting type classes instead – here illustrated by the GreetingTypeCasual class:
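A sketch (IGreetingType being the explicitly defined COM interface for greeting types, with its own placeholder GUID):

    namespace ClassLibrary.Interop
    {
        [Guid("11111111-2222-3333-4444-555555555503")]   // placeholder GUID
        [ClassInterface(ClassInterfaceType.None)]
        public class GreetingTypeCasual : ClassLibrary.GreetingTypeCasual, IGreetingType
        {
            // Exposed as a creatable COM class because COM cannot call
            // the static factory methods on GreetingType.
        }
    }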

This is why I had to make the original greeting types public. If I wasn’t going to expose my assembly to COM, I would have made the GreetingTypeNeutral (and the other greeting types) internal – or even private classes within the GreetingType class.

COM Registration

When all challenges are overcome and the ClassLibrary.Interop assembly is ready, it must be properly registered.

In my ClassLibrary.Interop project I have checked the “Register for COM interop” option under the project’s Build properties. This will do the trick on your own machine.

If you want to deploy the COM version of the library to other machines, you have to use the assembly registration tool RegAsm. If you call it from a Windows batch file placed in the same folder as the assembly itself, you can for example use the following syntax:
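For example (a sketch – adjust the framework version to your setup):

    @echo off
    rem /codebase lets COM load the assembly from this folder without
    rem installing it in the GAC; /tlb also exports and registers the
    rem type library.
    "%windir%\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe" ClassLibrary.Interop.dll /tlb /codebase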

This approach requires that the assembly is signed with a strong name (even if not put in the GAC).

My guess is that most COM consumers run in 32-bit processes. If you want to register for 64-bit consumers, you should call the 64-bit version of RegAsm found in c:\Windows\Microsoft.NET\Framework64.

VBA Sample

And finally, here is some sample code using the COM API from a Visual Basic for Applications (VBA) macro:
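Something along these lines (the ProgIDs here are the default namespace-plus-class names and may differ in your setup):

    Sub HelloWorld()
        Dim writer As Object
        Dim greeter As Object

        ' Late binding against the registered COM classes.
        Set writer = CreateObject("ClassLibrary.Interop.ConsoleMessageWriter")
        Set greeter = CreateObject("ClassLibrary.Interop.Greeter")

        ' The Initialize method replaces the .NET constructor argument.
        greeter.Initialize writer
        greeter.Greet
    End Sub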

Summary

This post describes an approach for exposing a .NET assembly to COM by handling all the COM-specifics in a dedicated ClassLibrary.Interop assembly without having to compromise the original ClassLibrary assembly.

Exposing .NET assembly functionality to COM does not necessarily need to be a hassle. Yes, there are indeed some challenges to overcome and, for sure, my personal preference will always be to use the .NET assembly directly. However, I do see some advantages in providing a dedicated COM API acting as a sort of “higher level” scripting API for others than hardcore .NET programmers. I kind of like the way the explicitly defined COM interfaces in the ClassLibrary.Interop assembly act as a facade to the full functionality, and how, for example, abstract base classes and interfaces are hidden from the COM API user.

The source code can be downloaded from my CodeProject article.

Lightweight Domain Services Library

If you have more than a few years of experience with domain-driven design (DDD), you have most certainly recognized some kind of overall pattern in the type of problems you have to solve – regardless of the type of applications you are working on. I certainly know that I have.

No matter whether you develop desktop applications, web applications or web APIs, you will almost always find yourself in a situation where you have to establish a mechanism for creating, persisting and maintaining the state of various entities in the application domain model. So, every time you start up a new project, you have to do a lot of yak shaving to establish this persistence mechanism, when what you really want to do is work on establishing the domain model – the actual business functionality of your application.

After several iterations through various projects, I have established a practice that works for me in almost any situation. This practice allows you to abstract the entity persistence (the yak shaving…) so that you can easily isolate its nitty-gritty implementation details and focus on developing your genuine business functionality. Eventually, of course, you have to deal with the implementation of the persistence layer, but the value of being able to develop – and not least test – your domain model in isolation without having to care about the persistence details is tremendous. You can start out developing and testing your domain model against fake repositories. Whether you eventually end up making simple file-based repositories or decide to go full-blown RDBMS doesn’t matter at this point in time.

I have digested this practice of mine into something I call a Domain Services Library and written a CodeProject article about this. This framework is super lightweight comprising only a few plain vanilla C# classes. No ORM is involved – the repositories can be anything from in-memory objects to RDBMS. No 3rd party dependencies whatsoever. Source code download is provided in the article.

Putting it all Together – DI part 4

Dependency Injection – and the low coupling between components that it leads to – goes hand in hand with high cohesion. It is now time to grab the individual components and put them together to form a “real” application.

In the first post in this series, it was explained how the whole point of dependency injection is to move the burden of composing objects away from the individual components themselves and instead delegate this responsibility to a single well-defined location as close as possible to the entry point of the application – also denoted the composition root of the application.

This object composition can very well be done manually by simply “newing” up all the objects – which is sometimes referred to as “poor man’s DI” – but a good alternative is to leave the responsibility of solving the object graph to a DI container. A DI container is a third-party library that can automate object composition and lifetime management. Furthermore, some DI containers support runtime interception, which is a very powerful technique for solving cross-cutting concerns such as logging or authorization (more about this in a later post).

And yes, when using a DI container you are, ironically enough, introducing a new dependency to solve the dependencies! But obviously, the DI container object itself should be created manually, and the DI container library should only be referenced from the composition root.

Anyway, here is an example of wiring up the application using Microsoft’s DI container called Unity in an ASP.NET MVC application. Adding a reference to the Unity.Mvc3 library (for example using the NuGet Package Manager) will automatically create a static helper class called Bootstrapper. In the BuildUnityContainer() method you need to register which concrete type should be mapped to the IRepository<Product> abstraction during run time. In this case an XmlProductRepository class is used. XmlProductRepository itself has a dependency on a string defining the path to the XML file used as the physical repository.
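The registration could look like this (a sketch – the Bootstrapper scaffold itself comes with Unity.Mvc3; requires Microsoft.Practices.Unity):

    private static IUnityContainer BuildUnityContainer()
    {
        var container = new UnityContainer();

        // Map the abstraction to the XML-based implementation and supply
        // the file path that XmlProductRepository expects in its constructor.
        container.RegisterType<IRepository<Product>, XmlProductRepository>(
            new InjectionConstructor(@"c:\data\repository.xml"));

        return container;
    }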

To use ProductService in one of the controllers (e.g. the HomeController), you need to inject ProductService using constructor injection:
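A sketch (the action method is illustrative):

    public class HomeController : Controller
    {
        private readonly ProductService productService;

        public HomeController(ProductService productService)
        {
            // Supplied by the DI container via the dependency resolver.
            this.productService = productService;
        }

        public ActionResult Index()
        {
            return View(this.productService.GetProducts());
        }
    }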

That’s it. The DI container takes care of the rest.

The first time you are introduced to the concept of DI containers, you might become a bit mystified, and even worried, about all the “magic” that apparently goes on behind the scenes. I certainly know that I was. However, digging a bit deeper into what actually goes on might help overcome this scepticism. This is what happens during an incoming request to go to the Home page of the application:

[Figure: Sequence diagram – resolving and using the HomeController object graph]

MvcApplication receives a request to go to the Home page. The DependencyResolver is asked to resolve HomeController (i.e. create the whole object graph) – and this is where the magic starts! The dependency resolver detects the dependencies (HomeController -> ProductService -> IRepository<Product> -> string) and starts creating the object graph from the bottom up. First an instance of XmlProductRepository is created. During registration you declared that this was the concrete type to be used for the IRepository<Product> abstraction. You also declared the path to the physical file “c:\data\repository.xml” during registration. Then this XmlProductRepository instance is injected into ProductService, using constructor injection, when creating the ProductService instance. Finally, this ProductService instance is injected into the HomeController when creating the HomeController instance. The dependency resolver has done its job for this incoming request.

Subsequently, the Index() method of the HomeController is called, and the HomeController can use the injected ProductService to retrieve a list of products, which can then be displayed in the browser.

This is how the dependency graph of your application looks:

[Figure: Application dependency graph – MvcApplication as the composition root]

MvcApplication (found in the Global.asax file) acts as the composition root taking care of object composition. The business component has no dependencies to other components, so the Dependency Inversion Principle is still respected.

Unit Testing Made Easy – DI part 3

I claimed in a previous post that low coupling using dependency injection made the code base more testable – i.e. properly prepared for unit testing. Let’s dig a bit deeper into that assertion.

The ProductService class is an obvious candidate for unit testing. It is a relatively small component with a well-defined responsibility (adhering to the Single Responsibility Principle). It is also properly isolated from its dependency (the repository) by an abstraction (the interface). Let’s create a unit test method for the ProductService component:
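It could look something like this (a sketch – the method and property names, and the 10% discount rule, are illustrative):

    [Fact]
    public void GetDiscountedPriceReturnsCorrectlyReducedPrice()
    {
        // Fixture setup - a test double instead of a real repository.
        var mockRepository = new MockRepository<Product>();
        mockRepository.Add(new Product { Id = 1, Name = "Beer", Price = 10.0m });
        var sut = new ProductService(mockRepository);

        // Exercise the system under test.
        var discountedPrice = sut.GetDiscountedPrice(1);

        // Verify the expected outcome.
        Assert.Equal(9.0m, discountedPrice);
    }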

This unit test method verifies that the ProductService functionality for calculating a discounted price of a product works correctly. It follows the standard sequence of a unit test: First it sets up the fixed baseline environment for the test (also called the test fixture). Then it exercises the system under test (in this case the product service). Finally, it verifies the expected outcome. A “tear down” phase is not necessary, as the fixture objects automatically go out of scope and will be garbage collected.

As ProductService does not care about the actual implementation of the product repository dependency, you can inject a “stand-in” for this dependency in the test. This stand-in is better known as a test double. The mockRepository variable holds an instance of such a product repository test double.

In the final application you are probably going to implement the repository so that the products are persisted in, for example, an SQL database, or maybe a file, but the elegant thing is that, at this moment in time, you do not need to care about this. In the context of the unit test, you can just make a mock implementation of the repository which does not implement persistence of the products at all, but just keeps them in memory. This is our test double. Obviously, an implementation like this would never make it into the final application, but it is sufficient to test the ProductService functionality in isolation.

Such a mock implementation of a repository is easily done. Of course you make a generic version that can be used as a test double for all entity repositories:
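A sketch, using the IRepository<T> abstraction from the previous post (requires System.Collections.Generic):

    public class MockRepository<T> : IRepository<T> where T : IEntity
    {
        // Entities are kept in memory only - no persistence involved.
        private readonly Dictionary<int, T> entities = new Dictionary<int, T>();

        public IEnumerable<T> GetAll()
        {
            return this.entities.Values;
        }

        public T Get(int id)
        {
            return this.entities[id];
        }

        public void Add(T entity)
        {
            this.entities[entity.Id] = entity;
        }

        public void Update(T entity)
        {
            this.entities[entity.Id] = entity;
        }

        public void Remove(int id)
        {
            this.entities.Remove(id);
        }
    }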

A Dictionary object is used to hold the entities in memory during the test.

Testability is not necessarily the main purpose of dependency injection, but the ability to replace dependencies with test-specific mock objects is indeed a very useful by-product.

By the way, the unit test method above is written using the xUnit.net testing framework, which explains the Fact attribute and the Equal assertion. xUnit.net is a nice and very lean testing framework – compared to, for example, MSTest, which is the one integrated with Visual Studio. With xUnit.net you don’t need to create a specific unit test project. Also, you get rid of the auto-generated .vsmdi and .testsettings files from MSTest.

To further refine and automate your unit tests, you should consider using supplementary unit test frameworks like AutoFixture and Moq to help you streamline fixture setup and mocking. Both are available from within the “NuGet Package Manager” Visual Studio Extension. I have written a comprehensive CodeProject article about using xUnit.net, AutoFixture and Moq.

Using Abstractions to Ensure Low Coupling – DI part 2

As shown in my previous post, the abstraction of the data layer component handling products – in the form of the IProductRepository interface – was the key to making the business layer independent of the data layer.

Generally, abstractions play a crucial part in ensuring low coupling between software components. These abstractions allow you to define the behaviour of a component without actually caring about the concrete type and implementation behind the abstraction. Low coupling is good for several reasons. It makes your code more extensible, maintainable and, maybe most importantly, more testable. I will come back to the latter in another post.

Of course you cannot entirely decouple everything. Even in the purest POCO component, there will obviously be dependencies on types in the .NET Framework. Rather, you should strive for the “natural” level of decoupling – whatever that is. The abstractions should form well-defined points of interaction – also called seams – between various components in your system with well-defined responsibilities (adhering to the Single Responsibility Principle).

For example, it definitely does make sense to create a seam between a component handling the business functionality of a product and the component that is responsible for the persistence (i.e. between the ProductService and the ProductRepository in the example in my previous post). If you did not introduce a seam here, it would be very difficult to replace, for example, a SQL Server database implementation in the data layer with some other kind of technology.

Abstractions are typically defined using interfaces. The IProductRepository could for example look like this:
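For example (a sketch – the exact member list may differ):

    public interface IProductRepository
    {
        IEnumerable<Product> GetAll();
        Product Get(int id);
        void Add(Product product);
        void Update(Product product);
        void Remove(int id);
    }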

However, as our application is likely to deal with entities other than products (for example customers and orders), it would make sense to generalize this interface using C# generics:
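Along these lines:

    public interface IRepository<T> where T : IEntity
    {
        IEnumerable<T> GetAll();
        T Get(int id);
        void Add(T entity);
        void Update(T entity);
        void Remove(int id);
    }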

IEntity is a simple interface ensuring that an entity always has a unique ID and a name:
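Something like:

    public interface IEntity
    {
        int Id { get; }
        string Name { get; }
    }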

Another fundamental object-oriented design principle is the Interface Segregation Principle:

Clients should not be forced to depend on methods that they do not use.

This basically means that interfaces preferably should be as small and specific as possible. Actually, an interface with a single method can be a very good interface.

You might have some entity services for which you do not want to expose full CRUD functionality – a read-only entity service, so to speak. For this purpose, it would make sense to define a specific IReadOnlyRepository interface:
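A sketch:

    public interface IReadOnlyRepository<T> where T : IEntity
    {
        IEnumerable<T> GetAll();
        T Get(int id);
    }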

Then the IRepository interface could be simplified to an extension of the IReadOnlyRepository interface:
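That is:

    public interface IRepository<T> : IReadOnlyRepository<T> where T : IEntity
    {
        void Add(T entity);
        void Update(T entity);
        void Remove(int id);
    }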

This is interface segregation in action. You have transformed the one big “header” interface into a number of smaller, more specific role interfaces.

Now, you have a nice well-defined seam between the product service – or any other entity service – and the corresponding repository.

Think Business First – DI part 1

When sitting with a blank Visual Studio project in front of you, ready to start up that new killer application of yours, where do you actually start?

Almost every programmer has heard about and used the highly acclaimed 3-layered architecture:

[Figure: The classic 3-layered architecture – presentation, business and data layers]

My guess is that most developers have a tendency to start developing either from the top with the presentation layer or from the bottom with the data layer. However, even if it is understandable to have some initial thoughts about the presentation layer and the data layer, the recommended starting point is the business layer – or the domain layer as some would call it.

The business layer is the natural starting point, because the business layer should have no dependencies to either the presentation layer or the data layer – or any 3rd party libraries for that matter. It should be possible to build and test the business layer in total isolation from the two other layers.

This might sound a bit awkward to some, because most developers have probably made tons of projects where the business layer depends on the data layer. I know for sure that I have… But this is not the right way to do it. The business layer should consist of nothing else than POCOs (Plain Old CLR Objects) – or POJOs (Plain Old Java Objects) if you are in the Java world – and abstractions. This means that the business layer should not reference anything other than the basic .NET Framework libraries (i.e. System, System.Core etc. – those files that are automatically added to your project file references when you create a new project in Visual Studio). No other dependencies whatsoever. Period!

Why is this so important? Because, besides making it simple to establish unit tests, it dramatically increases the maintainability and reusability of the whole code base.

Doing it wrong…

Let’s say you have a classical application with some sort of product service in the business layer that can perform CRUD operations on products in a repository in the data layer. This very often leads to a dependency graph like the one shown below:

[Figure: Dependency graph – the business layer depends on the data layer]

This is because somewhere in the product service you will “new” up a dependency to the product repository:
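In code (a sketch):

    public class ProductService
    {
        private readonly ProductRepository repository;

        public ProductService()
        {
            // The business layer creates a concrete data layer class -
            // the two layers are now tightly coupled.
            this.repository = new ProductRepository();
        }
    }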

So, you have a situation where the higher level component (the business layer) depends directly on a lower level component (the data layer). This has at least the following disadvantages:

  • The product service cannot be reused in isolation in another context
  • The product service cannot be unit tested in isolation
  • One product repository implementation (using for example an RDBMS for persistence) cannot easily be replaced by another implementation (using for example an XML file for persistence)

Doing it Right…

Let’s say that instead of “newing” up an instance of the product repository in the product service, you inject the repository into the service through a constructor argument:
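A sketch:

    public class ProductService
    {
        private readonly IProductRepository repository;

        public ProductService(IProductRepository repository)
        {
            // Only the abstraction is known here; the concrete repository
            // is supplied by a third party (the composition root).
            this.repository = repository;
        }
    }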

Now the dependency graph looks like this:

[Figure: Dependency graph – the data layer depends on an abstraction owned by the business layer]

An abstraction of the product repository in the form of an interface is introduced. It is the high-level component (the business layer) that owns this abstraction. The product repository is an implementation of this interface.

But now the data layer depends on the business layer? Yes, indeed. Everything is turned upside-down, because now this object graph adheres to the Dependency Inversion Principle (DIP):

High-level modules should not depend on low-level modules. Both should depend on abstractions.

Abstractions should not depend upon details. Details should depend upon abstractions.

DIP was first introduced by Robert C. Martin (Uncle Bob) back in the mid-90s and is one of the main principles of object-oriented design.

Instead of directly creating an instance of the repository using the new keyword, the product service has relinquished control of the repository dependency and delegated that responsibility to a third party. This technique is called Constructor Injection – a sub-pattern of Dependency Injection. How this instance of the repository is created and by whom is of no concern to the product service. You can push this burden all the way to the top of the application into a single component – what is referred to by Mark Seemann in his book Dependency Injection in .NET as the Composition Root of the application. Depending on the type of application, the actual composition root can vary, but more about this in the last post in this series.

Anyway, now the product service can easily be reused in another context and unit tested in isolation, because it is no longer depending on any particular implementation of the product repository. In the unit test, the fixture of the test class itself can act as the composition root, creating an instance of some mock repository that can be injected into the product service. Furthermore, an alternative implementation of the repository can easily be created and used in the application.

So, it all begins with the business layer. During the lifetime of the project, make it a habit to regularly check those project references in the business layer project file to make sure no reference to some obscure 3rd party library – or even worse, the presentation layer or the data layer – has accidentally sneaked in. You should consider using some tool, or even making automated tests, to check that the POCO policy for the business layer is enforced.