Test Driven Development – Sample

In this post, I’m going to create a small program and show how to use Test Driven Development (TDD) to make your life easier as a programmer.  Instead of creating the typical throw-away program, I’m going to do something a little more complicated: a .Net Core Web API application that calls another API to get an address from a database.  The “other” API will not be presented in this post because we don’t need it in order to write our code.  I’m going to show how to set up unit tests that pretend results are coming back from the other API, and then write the code based on those unit tests.  In real life, this scenario comes up during parallel development efforts, where a fake API sometimes must be built so the work that depends on it can continue.  In this case, I’m going to skip the fake API and just mock the call to the API, feeding sample data back.

To get started, all we need is an empty business logic project and a unit test project.  We don’t need to wire-up any of the API front end stuff because we’re going to exercise the business logic from unit tests.  Here’s the scenario:

  1. Our API will accept an address in JSON format, plus a person id from a database.
  2. The result will be true if the database contains the same address information submitted to the API for the person id given.

Ultimately, we’ll have a business object that is instantiated by the IOC container.  Let’s focus on the business object and see if we can come up with an empty shell.  For now we’ll assume that we need an address to compare with and the person id.

public class CheckAddress
{
  public bool IsEqual(int personId, AddressPoco currentAddress)
  {
    throw new NotImplementedException();
  }
}

There will be a POCO for the address information.  We’ll use the same POCO for the data returned as for the method above:

public class AddressPoco
{
  public string Address { get; set; }
  public string City { get; set; }
  public string State { get; set; }
  public string Zip { get; set; }
}

So far, so good.  Inside the IsEqual() method in the CheckAddress class above, we’re going to call our address lookup API and then compare the result with the “currentAddress” value.  If they are equal, then we’ll return true.  Otherwise false.  To call another API, we could write an object like this:

public class AddressApiLookup
{
  private const string Url = "http://myurl.com";

  public AddressPoco Get(int personId)
  {
    using (var webClient = new WebClient())
    {
      webClient.Headers["Accept"] = "application/json";
      webClient.Headers["Content-Type"] = "application/json";

      var arr = webClient.DownloadData(new Uri($"{Url}/api/GetAddress/{personId}"));
      return JsonConvert.DeserializeObject<AddressPoco>(Encoding.UTF8.GetString(arr));
    }
  }
}

In our IOC container, we’ll need to make sure that we break dependencies with the AddressApiLookup, which means we’ll need an interface.  We’ll also need an interface for our CheckAddress object, but that interface will not be needed for this round of unit tests.  Here’s the interface for the AddressApiLookup object:

public interface IAddressApiLookup
{
  AddressPoco Get(int personId);
}

Now we can mock the AddressApiLookup by using Moq.  Don’t forget to add the interface to the class declaration, like this:

public class AddressApiLookup : IAddressApiLookup

One last change you’ll need to perform: The CheckAddress object will need to have the AddressApiLookup injected in the constructor.  Your IOC container is going to perform the injection for you when your code is complete, but for now, we’re going to inject our mocked object into the constructor.  Change your object to look like this:

public class CheckAddress
{
  private IAddressApiLookup _addressApiLookup;

  public CheckAddress(IAddressApiLookup addressApiLookup)
  {
    _addressApiLookup = addressApiLookup;
  }

  public bool IsEqual(int personId, AddressPoco currentAddress)
  {
    throw new NotImplementedException();
  }
}

You can set up the usual unit tests, like this:

  1. Both addresses are alike
  2. Addresses are different
  3. No address returned from the remote API

You’ll probably want to test other scenarios like a 500 error, but you’ll need to change the behavior of the API calling method to make sure you return the result code.  We’ll stick to the simple unit tests for this post.  Here is the first unit test:

[Fact]
public void EqualAddresses()
{
  // arrange
  var address = new AddressPoco
  {
    Address = "123 main st",
    City = "Baltimore",
    State = "MD",
    Zip = "12345"
  };

  var addressApiLookup = new Mock<IAddressApiLookup>();
  addressApiLookup.Setup(x => x.Get(1)).Returns(address);

  // act
  var checkAddress = new CheckAddress(addressApiLookup.Object);
  var result = checkAddress.IsEqual(1, address);

  //assert
  Assert.True(result);
}

In the arrange segment, the address POCO is populated with some dummy data.  This data is used by both the API call (the mocked call) and the CheckAddress object, which guarantees that we get an equal result.  We’ll use “1” as the person id, which means we’ll need to use “1” in the mock setup and “1” in the call to the IsEqual() method.  Alternatively, we can code the setup to use “It.IsAny<int>()” as a matching input parameter and pass any number in the IsEqual() method call.
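As a sketch of that looser alternative (reusing the AddressPoco, IAddressApiLookup, and CheckAddress types defined earlier in this post), the arrange and act sections would change like this:

```csharp
// Looser arrange: match any person id instead of hard-coding "1".
// Sketch only -- assumes the AddressPoco, IAddressApiLookup, and
// CheckAddress types shown earlier in this post.
var addressApiLookup = new Mock<IAddressApiLookup>();
addressApiLookup
    .Setup(x => x.Get(It.IsAny<int>()))
    .Returns(address);

// Any person id now returns the same address, so 42 works as well as 1.
var checkAddress = new CheckAddress(addressApiLookup.Object);
var result = checkAddress.IsEqual(42, address);
```

The hard-coded setup is stricter: it also verifies that IsEqual() passes the person id through to the lookup, which is usually worth keeping.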

The act section creates an instance of the CheckAddress object and injects the mocked AddressApiLookup object.  Then the result is obtained from a call to the IsEqual with the same address passed as a parameter.  The assert just checks to make sure it’s all true.

If you execute your unit tests here, you’ll get a failure.  For now, let’s go ahead and write the other two unit tests:

[Fact]
public void DifferentAddresses()
{
  // arrange
  var addressFromApi = new AddressPoco
  {
    Address = "555 Bridge St",
    City = "Washington",
    State = "DC",
    Zip = "22334"
  };

  var address = new AddressPoco
  {
    Address = "123 main st",
    City = "Baltimore",
    State = "MD",
    Zip = "12345"
  };

  var addressApiLookup = new Mock<IAddressApiLookup>();
  addressApiLookup.Setup(x => x.Get(1)).Returns(addressFromApi);

  // act
  var checkAddress = new CheckAddress(addressApiLookup.Object);
  var result = checkAddress.IsEqual(1, address);

  //assert
  Assert.False(result);
}

[Fact]
public void NoAddressFound()
{
  // arrange
  var addressFromApi = new AddressPoco
  {
  };

  var address = new AddressPoco
  {
    Address = "123 main st",
    City = "Baltimore",
    State = "MD",
    Zip = "12345"
  };

  var addressApiLookup = new Mock<IAddressApiLookup>();
  addressApiLookup.Setup(x => x.Get(1)).Returns(addressFromApi);

  // act
  var checkAddress = new CheckAddress(addressApiLookup.Object);
  var result = checkAddress.IsEqual(1, address);

  //assert
  Assert.False(result);
}

In the DifferentAddresses test, I had to set up two addresses: one to be returned by the mocked object and one to be fed into the IsEqual() method call.  For the final unit test, I created an empty POCO for the address returned by the API.

Now the only task left is to write the code that makes all the tests pass.  To perform TDD to the letter, you would create the first unit test and then write just enough code to make that unit test pass.  In this case, you could simply return true and the first unit test would pass.  Then you would create the second unit test and refactor the code to make that one pass as well.  Writing two or more unit tests before you start coding can sometimes save you the time of creating a trivial code solution, and that’s what I’ve done here.  So let’s take a stab at writing the code:

public bool IsEqual(int personId, AddressPoco currentAddress)
{
  return currentAddress == _addressApiLookup.Get(personId);
}

Next, run your unit tests and they will all pass.

Except there is one possible problem with the tests that were created: the equality check might be comparing only object references (the memory addresses of the two AddressPoco instances).  In that case, the equal-addresses unit test would not actually be testing whether the data inside the objects is the same.  So let’s change the equal-addresses unit test to use two different instances containing the same address (copy one of them and change the variable name):

[Fact]
public void EqualAddresses()
{
  // arrange
  var addressFromApi = new AddressPoco
  {
    Address = "123 main st",
    City = "Baltimore",
    State = "MD",
    Zip = "12345"
  };
  var address = new AddressPoco
  {
    Address = "123 main st",
    City = "Baltimore",
    State = "MD",
    Zip = "12345"
  };

  var addressApiLookup = new Mock<IAddressApiLookup>();
  addressApiLookup.Setup(x => x.Get(1)).Returns(addressFromApi);

  // act
  var checkAddress = new CheckAddress(addressApiLookup.Object);
  var result = checkAddress.IsEqual(1, address);

  //assert
  Assert.True(result);
}

Now, if you run the unit tests, the EqualAddresses test fails while the other two still pass.

Aha!  Just as I suspected.  This means that we need to refactor our method to properly compare the two POCO objects.  You’ll have to implement IComparable<AddressPoco> inside the AddressPoco:

public class AddressPoco : IComparable<AddressPoco>
{
    public string Address { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }

    public int CompareTo(AddressPoco other)
    {
        if (ReferenceEquals(this, other)) return 0;
        if (ReferenceEquals(null, other)) return 1;
        var addressComparison = string.Compare(Address, other.Address, StringComparison.Ordinal);
        if (addressComparison != 0) return addressComparison;
        var cityComparison = string.Compare(City, other.City, StringComparison.Ordinal);
        if (cityComparison != 0) return cityComparison;
        var stateComparison = string.Compare(State, other.State, StringComparison.Ordinal);
        if (stateComparison != 0) return stateComparison;
        return string.Compare(Zip, other.Zip, StringComparison.Ordinal);
    }
}

I have ReSharper installed on my machine, and it can auto-generate a CompareTo() method; that is what created the method and the code inside it.  You can also override Equals() and use the equal sign, but this is simpler.  Next, you’ll have to modify the IsEqual() method to use the CompareTo() method:
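The Equals() override mentioned above, had we gone that route, might look something like this (a sketch only; the CompareTo() approach is what this post actually uses):

```csharp
// Alternative to IComparable: value equality via Equals/GetHashCode.
// Sketch only -- not the route taken in this post.
public class AddressPoco
{
    public string Address { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }

    public override bool Equals(object obj) =>
        obj is AddressPoco other &&
        Address == other.Address &&
        City == other.City &&
        State == other.State &&
        Zip == other.Zip;

    // Keep GetHashCode consistent with Equals.
    public override int GetHashCode() =>
        (Address, City, State, Zip).GetHashCode();
}
```

With this in place, IsEqual() could call currentAddress.Equals(...) directly; using the equal sign on values would additionally require overloading operator ==.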

public bool IsEqual(int personId, AddressPoco currentAddress)
{
  return currentAddress.CompareTo(_addressApiLookup.Get(personId)) == 0;
}

Now run your unit tests again; this time all three should pass.

Where to Get the Sample Code

You can go to my GitHub account to download the sample code used in this blog post by going here.  I swear by ReSharper and I have purchased the Ultimate version (so I can use the unit test coverage tool).  This product for an individual is approximately $150 (first time) or less for the upgrade price.  ReSharper is one of the best software products I’ve ever bought.

 

Unit Testing EF Data With Moq

Introduction

I’ve discussed using the in-memory Entity Framework unit tests in a previous post (here).  In this post, I’m going to demonstrate a simple way to use Moq to unit test a method that uses Entity Framework Core.

Setup

For this sample, I used the POCOs, context and config files from this project (here).  You can copy the cs files from that project, or you can just download the sample project from GitHub at the end of this article.

You’ll need several parts to make your unit tests work:

  1. IOC container – Not in this post
  2. List object to DbSet Moq method
  3. Test data
  4. Context Interface

I found a method on Stack Overflow (here) that I use everywhere.  I created a static unit test helper class and placed it in my unit test project:

public static class UnitTestHelpers
{
  public static DbSet<T> GetQueryableMockDbSet<T>(List<T> sourceList) where T : class
  {
    var queryable = sourceList.AsQueryable();

    var dbSet = new Mock<DbSet<T>>();
    dbSet.As<IQueryable<T>>().Setup(m => m.Provider).Returns(queryable.Provider);
    dbSet.As<IQueryable<T>>().Setup(m => m.Expression).Returns(queryable.Expression);
    dbSet.As<IQueryable<T>>().Setup(m => m.ElementType).Returns(queryable.ElementType);
    dbSet.As<IQueryable<T>>().Setup(m => m.GetEnumerator()).Returns(() => queryable.GetEnumerator());
    dbSet.Setup(d => d.Add(It.IsAny<T>())).Callback<T>(sourceList.Add);

    return dbSet.Object;
  }
}

The next piece is the pretend data that you will use to test your method.  You’ll want to keep this as simple as possible.  In my implementation, I allow for multiple data sets.

public static class ProductTestData
{
  public static List<Product> Get(int dataSetNumber)
  {
    switch (dataSetNumber)
    {
      case 1:
      return new List<Product>
      {
        new Product
        {
          Id=0,
          Store = 1,
          Name = "Cheese",
          Price = 2.5m
        },
        ...

      };
    }
    return null;
  }
}

Now you can setup a unit test and use Moq to create a mock up of your data and then call your method under test.  First, let’s take a look at the method and see what we want to test:

public class ProductList
{
  private readonly IDatabaseContext _databaseContext;

  public ProductList(IDatabaseContext databaseContext)
  {
    _databaseContext = databaseContext;
  }

  public List<Product> GetTopTen()
  {
    var result = (from p in _databaseContext.Products select p).Take(10).ToList();

    return result;
  }
}

The ProductList class will be set up from an IOC container.  It has a dependency on the database context, which will be injected by the IOC container through the class constructor.  In my sample code, I set up the class for this standard pattern.  For unit testing purposes, we don’t need the IOC container; we’ll just inject our mocked context into the class when we create an instance of the object.
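The IDatabaseContext interface itself (item 4 in the setup list) is not shown in this post; based on how the test code uses it, it presumably looks something like this (the Products property name comes from the code below, the rest is assumption):

```csharp
// Assumed shape of the context interface used for mocking.
// Only the Products DbSet is needed by these tests.
public interface IDatabaseContext
{
    DbSet<Product> Products { get; set; }
}

// The real EF Core context would then implement the interface:
public class DatabaseContext : DbContext, IDatabaseContext
{
    public DbSet<Product> Products { get; set; }
}
```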

Let’s mock the context:

[Fact]
public void TopTenProductList()
{
  var demoDataContext = new Mock<IDatabaseContext>();

}

As you can see, Moq uses interfaces to create a mocked object.  This is the only line of code we need for the context mocking.  Next, we’ll mock some data.  We’re going to tell Moq to return data set 1 if the Products getter is called:

[Fact]
public void TopTenProductList()
{
  var demoDataContext = new Mock<IDatabaseContext>();
  demoDataContext.Setup(x => x.Products).Returns(UnitTestHelpers.GetQueryableMockDbSet(ProductTestData.Get(1)));

}

I’m using the GetQueryableMockDbSet() unit test helper method in order to convert my list into the required DbSet object.  Any time my method tries to read Products from the context, data set 1 will be returned.  This data set contains 12 items, and as you can see from the method under test, only ten items should be returned.  Let’s add the method-under-test setup:

[Fact]
public void TopTenProductList()
{
  var demoDataContext = new Mock<IDatabaseContext>();
  demoDataContext.Setup(x => x.Products).Returns(UnitTestHelpers.GetQueryableMockDbSet(ProductTestData.Get(1)));

  var productList = new ProductList(demoDataContext.Object);

  var result = productList.GetTopTen();
  Assert.Equal(10,result.Count);
}

The object under test is very basic: just create an instance and pass in the mocked context (you have to use .Object to get the mocked object).  Next, just call the method to test.  Finally, perform an assert to conclude your unit test.  If the GetTopTen() method returns an amount that is not ten, then there is an issue (for this data set).  Now, we should test an empty set.  Add this to the test data switch statement:

case 2:
  return new List<Product>
  {
  };

Now the unit test:

[Fact]
public void TopTenProductListEmpty()
{
  var demoDataContext = new Mock<IDatabaseContext>();
  demoDataContext.Setup(x => x.Products).Returns(UnitTestHelpers.GetQueryableMockDbSet(ProductTestData.Get(2)));

  var productList = new ProductList(demoDataContext.Object);

  var result = productList.GetTopTen();
  Assert.Empty(result);
}

All the work has been done to set up the static test data object, so I only had to add one case to it.  The unit test is identical to the previous one, except that the ProductTestData.Get() call has a parameter of 2, instead of 1, representing the data set number.  Finally, I changed the assert to test for an empty set instead of ten items.  Execute the tests and both should pass.

Now you can continue to add unit tests to test for different scenarios.

Where to Get the Code

You can go to my GitHub account and download the sample code (click here).  If you would like to create the sample tables to make this program work (you’ll need to add your own console app to call the GetTopTen() method), you can use the following MS SQL Server script:

CREATE TABLE [dbo].[Store](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[Name] [varchar](50) NULL,
	[Address] [varchar](50) NULL,
	[State] [varchar](50) NULL,
	[Zip] [varchar](50) NULL,
 CONSTRAINT [PK_Store] PRIMARY KEY CLUSTERED 
(
	[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[Product](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[Store] [int] NOT NULL,
	[Name] [varchar](50) NULL,
	[Price] [money] NULL,
 CONSTRAINT [PK_Product] PRIMARY KEY NONCLUSTERED 
(
	[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

SET ANSI_PADDING OFF
GO

ALTER TABLE [dbo].[Product]  WITH CHECK ADD  CONSTRAINT [FK_store_product] FOREIGN KEY([Store])
REFERENCES [dbo].[Store] ([Id])
GO

ALTER TABLE [dbo].[Product] CHECK CONSTRAINT [FK_store_product]
GO
 

Unit Tests are not an Option!

Introduction

I’ve been writing software since 1978, which is to say that I’ve seen many paradigm changes.  I witnessed the inception of object oriented programming.  I first became aware of objects when I read a Byte magazine article on a language called Smalltalk (the August 1981 issue).  I read and re-read that article many times to try to understand the purpose of object oriented programming.  It took ten more years before programmers began to recognize object oriented programming.  In the early 90’s, the University of Michigan taught only a few classes using object oriented C++; it was still new and shiny.  Now all languages are object oriented, or they are legacy languages.

The web was another major paradigm shift that I witnessed.  Before the browser was invented (while I was in college), all programs were written to be executed on the machine they ran on.  I was immersed in the technology of the Internet while I was a student at UofM, and we used tools such as Telnet, FTP, Archie, DNS, and Gopher (to name a few) to navigate and find information.  The Internet was primarily composed of data about programming.  When Mosaic came along, as well as HTML, the programming world went crazy.  The technology didn’t mature until the early 2000’s.  Many programming languages were thrown together to accommodate the new infrastructure (I’m looking at you, “Classic ASP”).

Extreme programming came of age in the late 90’s.  I did not get involved in XP until the mid 2000’s.  Waterfall was the way things were done.  The industry was struggling with automated testing suites.  Unit testing came onto the scene, but breaking dependencies was an unknown quantity.  It took some years before somebody came up with the idea of inversion of control.  The idea was so abstract that most programmers ignored it and moved on.

The latest paradigm change, and it’s much bigger than most will give it credit for, is the IOC container.  Even Microsoft has incorporated this technology into its latest tools; IOC is part of .Net Core.  If you’re a programmer and you haven’t used IOC containers yet, or you don’t understand the underlying reason for them, you had better get on the bandwagon.  I predict that within five years, IOC will be recognized as the industry standard, even for companies that build software for their internal use only.  It will be difficult to get a job as a programmer without understanding this technology.  Don’t believe me?  Pretend you’re a software engineer with no object oriented knowledge.  Now search for a job and see what results come up.  Grim, isn’t it?

Where am I going with this?  I currently work for a company that builds medical software.  We have a huge collection of legacy code.  I’m too embarrassed to admit how large this beast is.  It just makes me cry.  Our company uses the latest tools, and we have advanced developers who know how to build IOC containers, unit tests, properly scoped objects, etc.  We also practice XP, to a limited degree.  We do the SCRUMs, stand-ups, code reviews (sometimes), demos, and sprint planning meetings.  What we don’t do is unit testing.  Oh, we have unit tests, but the company mandate is that they are optional.  When there is extra time to build software, unit tests are incorporated.  Only a small handful of developers incorporate unit tests into their software development process.  Even I have built some new software without unit tests (and I’ve paid the price).

The bottom line is that unit tests are not part of the software construction process.  The company is staffed with programmers who are unfamiliar with TDD (Test Driven Development), and in fact, most are unfamiliar with unit testing altogether.  Every developer has heard of unit tests, but I suspect that many are not sold on the concept.  Many developers look at unit testing as just more work.  There are the usual arguments against unit testing: they need to be maintained, they become obsolete, they break when I refactor code, etc.  These are old arguments that were disproved years ago, but, like myths, they get perpetuated forever.

I’m going to digress a bit here, just to show how crazy this is.

Our senior developers have gathered in many meetings to discuss the agreed-upon architecture that we are aiming for.  That architecture is not much different from any other company’s: break our monolithic application into smaller APIs, use IOC containers, separate database concerns from business logic and business logic from the front end.  We have a couple dozen APIs, and they were written with this architecture in mind.  They are all written with IOC containers.  We use Autofac for our .Net applications, and .Net Core has its own IOC container technology.  Some of these APIs have unit tests.  These tests were primarily added after the code was written, which is OK.  Some of our APIs have no unit tests.  This is not OK.

So the big question is: Why go through the effort of using an IOC container in the first place, if there is no plan for unit tests?

The answer is usually “to break dependencies.”  Which is correct, except, why?  Why did anybody care about breaking dependencies?  Just breaking dependencies gains nothing.  The IOC container itself does not help with the maintainability of the code.  Is it safer to refactor code with an IOC container?  No.  Is it easier to troubleshoot and fix bugs in code that has dependencies broken?  Not unless you’re using unit tests.

My only conclusion to this crazy behavior is that developers don’t understand the actual purpose of unit testing.

Unit Tests are Part of the Development Process

The most difficult part of creating unit tests is breaking dependencies.  IOC containers make that a snap.  Every object (with some exceptions) should be put into the container.  If an object instance must be created by another object, then it must be created inside the IOC container.  This will break the dependency for you.  Now unit testing is easy.  Just focus on one object at a time and write tests for that object.  If the object needs other objects to be injected, then use a mocking framework to mock those objects.
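As a concrete sketch, registering the objects from the earlier CheckAddress example in .Net Core’s built-in container might look like this (IAddressApiLookup and AddressApiLookup are the types shown earlier; ICheckAddress is the interface that post mentions but never shows, so its name is an assumption):

```csharp
// Sketch of IOC registration with .NET Core's built-in container.
// IAddressApiLookup/AddressApiLookup come from the earlier example;
// ICheckAddress is assumed (mentioned in the post but not shown).
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IAddressApiLookup, AddressApiLookup>();
    services.AddScoped<ICheckAddress, CheckAddress>();
}
```

With these registrations in place, the container resolves the AddressApiLookup and injects it into the CheckAddress constructor automatically, which is exactly the seam the unit tests exploit with a mock.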

As a programmer, you’ll need to go further than this.  If you want to build code that can be maintained, you’ll need to build your unit tests first, or at least concurrently.  You cannot run through like the Tasmanian devil, building your code, and then follow up with a handful of unit tests.  You might think you’re clever by using a coverage tool to make sure you have full code coverage, but I’m going to show an example where code coverage is not the only reason for unit testing.  Your workflow must change.  At first, it will slow you down, like learning a new language.  Keep working at it and eventually you won’t have to think about the process.  You’ll just know.

I can tell you from experience, that I don’t even think about how I’m going to build a unit test.  I just look at what I’ll need to test and I know what I need to do.  It took me years to get to this point, but I can say, hands down, that unit testing makes my workflow faster.  Why?  Because I don’t have to run the program in order to test for all the edge cases.  I write one unit test at a time and I run that test against my object.  I use unit testing as a harness for my objects.  That is the whole point of using an IOC container.  First, you take care of the dependencies, then you focus on one object at a time.

Example

I’m sure you’re riveted by my rambling prose, but I’m going to prove what I’m talking about.  At least on a small scale.  Maybe this will change your mind, maybe it won’t.

Let’s say for example, I was writing some sort of API that needed to return a set of patient records from the database.  One of the requirements is that the calling program can feed filter parameters to select a date range for the records desired.  There is a start date and an end date filter parameter.  Furthermore, each date parameter can be null.  If both are null, then give me all records.  If the start parameter is null, then give me up to the end date.  If the end date is null, then give me from the start date to the latest record.  The data in the database will return a date when the patient saw the doctor.  This is hypothetical, but based on a real program that my company uses.  I’m sure this scenario is used by any company that queries a database for web use, so I’m going to use it.

Let’s say the program is progressing like this:

public class PatientData
{
  private readonly DataContext _context;

  public PatientData(DataContext context)
  {
    _context = context;
  }

  public List<PatientVisit> GetData(int patientId, DateTime? startDate, DateTime? endDate)
  {
    var filterResults = _context.PatientVisits
      .Where(x => x.PatientId == patientId && x.BetweenDates(startDate, endDate));

    return filterResults.ToList();
  }
}

You don’t want to include the date-range logic in your LINQ query, so you create an extension method to handle that part.  Your next task is to write the ugly code called “BetweenDates()”.  This will be a static extension class that can be used with any of your PatientVisit POCOs.  If you’re unfamiliar with a POCO (Plain Old CLR Object), here’s a simple example:

public class PatientVisit
{
  public int PatientId { get; set; }
  public DateTime VisitDate { get; set; }
}

This is used by Entity Framework in a context.  If you’re still confused, please search through my blog for Entity Framework subjects and get acquainted with the technique.

Back to the “BetweenDates()” method.  Here’s the empty shell of what needs to be written:

public static class PatientVisitHelpers
{
  public static bool BetweenDates(this PatientVisit patientVisit, DateTime? startDate, DateTime? endDate)
  {
    throw new NotImplementedException();
  }
}

Before you start to put logic into this method, start thinking about all the edge cases that you will be required to test.  If you run in like a tribe of Comanche Indians and throw the logic into this method, you’ll be left with a manual testing job that will probably take you half a day (assuming you’re thorough).  Later, down the road, if someone discovers a bug, you will need to fix this method and then redo all the manual tests.

Here’s where unit tests are going to make your job easy.  The unit tests are going to be part of the discovery process.  What discovery?  One aspect of writing software that is different from any other engineering discipline is that every project is new.  We don’t know what has to be built until we start to build it.  Then we “discover” aspects of the problem that we never anticipated.  In this sample, I’ll show how that occurs.

Let’s list the rules:

  1. If both dates are null, give me all records.
  2. If the start date is null, give me all records up to and including the end date.
  3. If the end date is null, give me all records from the start date forward (including the start date).
  4. If both dates exist, then give me all records between the two dates, including the start and end dates.

According to this list, there should be at least four unit tests.  If you discover any edge cases, you’ll need a unit test for each one.  If a bug is discovered, you’ll need to add a unit test that simulates the bug and then fix the bug.  Which tells you that you’ll keep adding unit tests to a project every time you fix a bug or add a feature (unless one or more unit tests were incorrect in the first place).  An incorrect unit test usually occurs when you misinterpret the requirements; in that instance, you’ll fix the unit test and then fix your code.

Now that we have determined that we need four unit tests, create four empty unit test methods:

public class PatientVisitBetweenDates
{
  [Fact]
  public void BothDatesAreNull()
  {

  }
  [Fact]
  public void StartDateIsNull()
  {

  }
  [Fact]
  public void EndDateIsNull()
  {

  }
  [Fact]
  public void BothDatesPresent()
  {

  }
}

I have left out the IOC container code from my examples.  I am testing a static object that has no dependencies, therefore, it does not need to go into a container.  Once you have established an IOC container and you have broken dependencies on all objects, you can focus on your code just like the samples I am showing here.

Now for the next step: Write the unit tests.  You already have the method stubbed out.  So you can complete your unit tests first and then write the code to make the tests pass.  You can do one unit test, followed by writing code, then the next test, etc.  Another method is to write all the unit tests and then write the code to pass all tests.  I’m going to write all the unit tests first.  By now, you might have analyzed my empty unit tests and realized what I meant earlier by “discovery”.  If you haven’t, then this will be a good lesson.

For the first test, we’ll need the setup data.  We don’t have to concern ourselves with any of the Entity Framework code other than the POCO itself.  In fact, the “BetweenDates()” method only looks at one instance, or rather, one record.  If the date of the record will be returned with the set, then the method will return true.  Otherwise, it should return false.  The tiny scope of this method makes our unit testing easy.  So put one record of data in:

[Fact]
public void BothDatesAreNull()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };
}

Next, set up the object and perform an assert.  This unit test should return true for the data given, because both the start date and the end date passed into our method will be null, and we return all records.

[Fact]
public void BothDatesAreNull()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };

  var result = testSample.BetweenDates(null,null);
  Assert.True(result);
}

This test doesn’t reveal anything yet.  Technically, you can put code into your method that just returns true, and this test will pass.  At this point, it would be valid to do so.  Then you can write your next test and then refactor to return the correct value.  This would be the method used for pure Test Driven Development.  Only use the simplest code to make the test pass.  The code will be completed when all unit tests are completed and they all pass.
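In other words, the pure-TDD first step for this method would literally be:

```csharp
// Pure TDD: the simplest code that makes BothDatesAreNull pass.
// This gets refactored away as each new unit test is added.
public static bool BetweenDates(this PatientVisit patientVisit,
  DateTime? startDate, DateTime? endDate)
{
  return true;
}
```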

I’m going to go on to the next unit test, since I know that the first unit test is a trivial case.  Let’s use the same data we used on the last unit test:

[Fact]
public void StartDateIsNull()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };
}

Did you “discover” anything yet?  If not, then go ahead and put the method setup in:

[Fact]
public void StartDateIsNull()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };
  
  var result = testSample.BetweenDates(null, DateTime.Parse("1/8/2015"));
}

Now, you’re probably scratching your head because we need at least two test cases and probably three.  Here are the test cases we need when the start date is null but the end date is filled in:

  1. Return true if the visit date is before the end date.
  2. Return false if the visit date is after the end date.

What if the date is equal to the end date?  Maybe we should test for that edge case as well.  Break the “StartDateIsNull()” unit test into three unit tests:

[Fact]
public void StartDateIsNullVisitDateIsBefore()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };
  var result = testSample.BetweenDates(null, DateTime.Parse("1/8/2015"));
  Assert.True(result);
}

[Fact]
public void StartDateIsNullVisitDateIsAfter()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };
  var result = testSample.BetweenDates(null, DateTime.Parse("1/3/2015"));
  Assert.False(result);
}

[Fact]
public void StartDateIsNullVisitDateIsEqual()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };
  var result = testSample.BetweenDates(null, DateTime.Parse("1/7/2015"));
  Assert.True(result);
}

Now you can begin to see the power of unit testing.  Would you have manually tested all three cases?  Maybe.

That also reveals that we will be required to expand the other two tests that contain dates.  The test case where we have a null end date will have a similar set of three unit tests and the in-between dates test will have more tests.  For the in-between, we now need:

  1. Visit date is less than start date.
  2. Visit date is greater than start date but less than end date.
  3. Visit date is greater than end date.
  4. Visit date is equal to start date.
  5. Visit date is equal to end date.
  6. Visit date is equal to both start and end date (start and end are equal).

That makes six unit tests for the in-between case, bringing our total to thirteen tests.
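As a sketch, here are two of the six in-between tests; the other four follow the same pattern (the test names are my own):

```csharp
[Fact]
public void InBetweenVisitDateIsBeforeStartDate()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };
  var result = testSample.BetweenDates(DateTime.Parse("1/8/2015"), DateTime.Parse("1/10/2015"));
  Assert.False(result);
}

[Fact]
public void InBetweenVisitDateIsInRange()
{
  var testSample = new PatientVisit
  {
    PatientId = 1,
    VisitDate = DateTime.Parse("1/7/2015")
  };
  var result = testSample.BetweenDates(DateTime.Parse("1/5/2015"), DateTime.Parse("1/10/2015"));
  Assert.True(result);
}
```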

Fill in the code for the remaining tests.  When that is completed, verify each test to make sure they are all valid cases.  Once this is complete, you can write your code for the helper method.  You now have a complete detailed specification for your method written in unit tests.
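When you get to that point, one possible implementation that satisfies all thirteen tests looks like this (a sketch only; your unit tests remain the authoritative specification):

```csharp
public class PatientVisit
{
  public int PatientId { get; set; }
  public DateTime VisitDate { get; set; }

  // A null start or end date means that side of the range is unbounded.
  // Visit dates equal to a boundary count as inside the range, per the
  // edge-case unit tests above.
  public bool BetweenDates(DateTime? startDate, DateTime? endDate)
  {
    if (startDate != null && VisitDate < startDate.Value)
    {
      return false;
    }
    if (endDate != null && VisitDate > endDate.Value)
    {
      return false;
    }
    return true;
  }
}
```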

Was that difficult?  Not really.  Most unit tests fall into this category.  Sometimes you’ll need to mock an object that your object under test depends on.  That is made easy by the IOC container.

Also, you can execute your code directly from the unit test.  Instead of writing a test program to send inputs to your API, or using your API in a full system where you are typing data in manually, you just execute the unit test you are working with.  You type in your code, then run all the unit tests for this method.  As you create code to account for each test case, you’ll see your unit tests start to turn green.  When all unit tests are green, your work is done.

Now, if QA finds a bug that leads to this method, you can re-verify your unit tests for the case that QA found.  You might discover a bug in code that is outside your method, or it could be a case missed by your unit tests.  Once you have fixed the bug, you can re-run the unit tests instead of manually testing each case.  In the long run, this will save you time.

Code Coverage

You should strive for 100% code coverage.  You’ll never get it, but the more code you can cover, the safer it will be to refactor code in the future.  Any code not covered by unit tests is at risk of failure when code is refactored.  As I mentioned earlier, code coverage doesn’t solve all your problems.  In fact, if I wrote the helper code for the previous example and then created unit tests afterwards, I bet I could create two or three unit tests that cover 100% of the code in the helper method.  What I might not cover are edge cases, like the visit date equal to the start date.  It’s best to use code coverage tools after the code and unit tests are written.  The code coverage will be your verification that you didn’t miss something.

Another problem with code coverage tools is that they can make you lazy.  You can easily look at the code and come up with a unit test that executes the code inside an “if” statement, then create a unit test to execute the code inside the “else” part.  Those unit tests might not be valid.  You need to understand the purpose of the “if” and “else” and the purpose of the code itself.  Keep that in mind.  If you are writing new code, create the unit tests first or concurrently.  Only use the code coverage tool after all your tests pass, to verify you covered all of your code.

Back to the 20,000-Foot View

Let’s take a step back and talk about what the purpose of the exercise was.  If you’re a hold-out for a world of programming without unit tests, then you’re still skeptical of what was gained by performing all that extra work.  There is extra code.  It took time to write that code.  Now there are thirteen extra methods that must be maintained going forward.

Let’s pretend this code was written five years ago and it’s been humming along for quite some time without any bugs being detected.  Now some entry-level developer comes on the scene and he/she decides to modify this code.  Maybe the developer in question thinks that tweaking this method is an easy short-cut to creating some enhancement that was demanded by the customer.  If the code is changed and none of the unit tests break, then we’re OK.  If the code is changed and one or more unit tests break, then the programmer modifying the code must look at those unit tests and determine if the individual behaviors should be changed, or whether they broke because the change is not correct.  If the unit tests don’t exist, the programmer modifying the code has no idea what thought process and/or testing went into the original design.  The programmer probably doesn’t know the full specification of the code when it was written.  The suite of unit tests makes the purpose unambiguous.  Any programmer can look at the unit tests and see exactly what the specification is for the method under test.

What if a bug is found and all unit tests pass?  What you have discovered is an edge case that was not known at the time the method was written.  Before fixing the bug, the programmer must create a unit test with the edge case that causes the bug.  That unit test must fail with the current code, and it should fail in the same manner as the real bug.  Once the failing unit test is created, then the bug should be fixed to make the unit test pass.  Once that has been accomplished, run all unit tests and make sure you didn’t break previous features when fixing the bug.  This method ends the whack-a-mole technique of trying to fix bugs in software.

Next, try to visualize a future where all your business code is covered by unit tests.  If each class and method had unit testing to the level of specification that this method has, it would be safe and easy to refactor code.  Any refactor that breaks code down the line will show up in the unit tests (as broken tests).  Adding enhancements would be easy and quick.  You would be virtually guaranteed to produce a quality product after adding an enhancement.  That’s because you are designing the software to be maintainable.

Not all code can be covered by unit tests, and in my view, this is a shame.  Unfortunately, there are sections of code that cannot be put into a unit test for one reason or another.  With an IOC container, your solution should be divided into projects that are unit testable and projects that are not.  Projects such as the one containing your Entity Framework repository are not unit testable.  That’s OK, but you should limit how much actual code exists in such a project.  It should contain only POCOs and some connecting code.  Your web interface should be limited to code that connects the outside world to your business classes.  Any code that is outside the realm of unit testing is going to be difficult to test, so try to limit its complexity.

Finally…

I have looked over the shoulder of students building software for school projects at the University of Maryland, and I noticed that they incorporated unit testing into a Java project.  That made me smile.  While the project did not contain an IOC container, it’s a step in the right direction.  Hopefully, within the next few years, universities will begin to produce students who understand that unit tests are necessary.  There is still a large gap between those students and those in the industry who have never used unit tests.  That gap must be filled in a self-taught manner.  If you are one of the many who don’t incorporate unit testing into your software development process, then you had better start doing it.  Now is the time to learn and get good at it.  If you wait too long, you’ll be one of those COBOL developers wondering who moved their cheese.

 

XML Serialization

Summary

In this post I’m going to demonstrate the proper way to serialize XML and set up unit tests using xUnit and .Net Core.  I will also be using Visual Studio 2017.

Generating XML

JSON is rapidly taking over as the data encoding standard of choice.  Unfortunately, government agencies are decades behind the technology curve and XML is going to be around for a long time to come.  One of the largest industries still using XML for a majority of their data transfer encoding is the medical industry.  Documents required by meaningful use are mostly encoded in XML.  I’m not going to jump into the gory details of generating a CCD.  Instead, I’m going to keep this really simple.

First, I’m going to show a method of generating XML that I’ve seen many times, usually coded by a programmer with little or no formal education in Computer Science.  Sometimes programmers just take a short-cut because it appears to be the simplest way to get the product out the door.  So I’ll show the technique and then I’ll explain why it turns out that this is a very poor way of designing an XML generator.

Let’s say for instance we wanted to generate XML representing a house.  First we’ll define the house as a record that can contain square footage.  That will be the only data point assigned to the house record (I mentioned this was going to be simple, right?).  Inside of the house record will be a list of walls and a list of roofs (assume a house could have two or more roofs, like a tri-level configuration).  Next, I’m going to make a list of windows for the walls.  The window block will have a “Type” that is a free-form string input, and the roof block will also have a “Type” that is a free-form string.  That is the whole definition.

public class House
{
  public List<Wall> Walls = new List<Wall>();
  public List<Roof> Roofs = new List<Roof>();
  public int Size { get; set; }
}

public class Wall
{
  public List<Window> Windows { get; set; }
}

public class Window
{
  public string Type { get; set; }
}

public class Roof
{
  public string Type { get; set; }
}

The “easy” way to create XML from this is to use the StringBuilder and just build XML tags around the data in your structure.  Here’s a sample of the possible code that a programmer might use:

public class House
{
  public List<Wall> Walls = new List<Wall>();
  public List<Roof> Roofs = new List<Roof>();
  public int Size { get; set; }

  public string Serialize()
  {
    var @out = new StringBuilder();

    @out.Append("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
    @out.Append("<House xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">");

    foreach (var wall in Walls)
    {
      wall.Serialize(ref @out);
    }

    foreach (var roof in Roofs)
    {
      roof.Serialize(ref @out);
    }

    @out.Append("<size>");
    @out.Append(Size);
    @out.Append("</size>");

    @out.Append("</House>");

    return @out.ToString();
  }
}

public class Wall
{
  public List<Window> Windows { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    if (Windows == null || Windows.Count == 0)
    {
      @out.Append("<wall />");
      return;
    }

    @out.Append("<wall>");
    foreach (var window in Windows)
    {
      window.Serialize(ref @out);
    }
    @out.Append("</wall>");
  }
}

public class Window
{
  public string Type { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    @out.Append("<window>");
    @out.Append("<Type>");
    @out.Append(Type);
    @out.Append("</Type>");
    @out.Append("</window>");
  }
}

public class Roof
{
  public string Type { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    @out.Append("<roof>");
    @out.Append("<Type>");
    @out.Append(Type);
    @out.Append("</Type>");
    @out.Append("</roof>");
  }
}

The example I’ve given is a rather clean one.  I have seen XML generated with much uglier code.  This is the manual method of serializing XML.  One obvious weakness is that the output produced is a single line of XML, which is not human-readable.  In order to produce human-readable XML output with an on/off switch, extra logic would need to be incorporated to append newlines and add tabs for indents.  Another problem with this method is that it contains a lot of unnecessary code.  One typo and the XML is incorrect.  Future editing is hazardous because tags might not match up if code is inserted in the middle and care is not taken to test such conditions.  Unit testing something like this is an absolute must.

The proper method is to use the built-in XML serializer.  To produce the correct output, it is sometimes necessary to add attributes to the properties of the objects being serialized.  Here is the object definition that produces the same output:

public class House
{
  [XmlElement(ElementName = "wall")]
  public List<Wall> Walls = new List<Wall>();

  [XmlElement(ElementName = "roof")]
  public List<Roof> Roofs = new List<Roof>();

  [XmlElement(ElementName = "size")]
  public int Size { get; set; }
}

public class Wall
{
  [XmlElement(ElementName = "window")]
  public List<Window> Windows { get; set; }

  // The XmlSerializer convention is ShouldSerialize{PropertyName}():
  // return true to emit the property, false to skip it.
  public bool ShouldSerializeWindows()
  {
    return Windows != null;
  }
}

public class Window
{
  public string Type { get; set; }
}

public class Roof
{
  public string Type { get; set; }
}

In order to serialize the above objects into XML, you use the XmlSerializer object:

public static class CreateXMLData
{
  public static string Serialize(this House house)
  {
    var xmlSerializer = new XmlSerializer(typeof(House));

    var settings = new XmlWriterSettings
    {
      NewLineHandling = NewLineHandling.Entitize,
      IndentChars = "\t",
      Indent = true
    };

    using (var stringWriter = new Utf8StringWriter())
    {
      using (var writer = XmlWriter.Create(stringWriter, settings))
      {
        xmlSerializer.Serialize(writer, house);
      }

      return stringWriter.GetStringBuilder().ToString();
    }
  }
}

You’ll also need to create a Utf8StringWriter Class:

public class Utf8StringWriter : StringWriter
{
  public override Encoding Encoding
  {
    get { return Encoding.UTF8; }
  }
}

Unit Testing

I would recommend unit testing each section of your XML.  Test with sections empty as well as containing one or more items.  You want to make sure you capture instances of null lists or empty items that should not generate XML output.  If there are any special attributes, make sure that the XML generated matches the specification.  For my unit testing, I stripped newlines and tabs to compare with a sample XML file that is stored in my unit test project.  As a first attempt, I created a helper for my unit tests:

public static class XmlResultCompare
{
  public static string ReadExpectedXml(string expectedDataFile)
  {
    var assembly = Assembly.GetExecutingAssembly();
    using (var stream = assembly.GetManifestResourceStream(expectedDataFile))
    {
      using (var reader = new StreamReader(stream))
      {
        return reader.ReadToEnd().RemoveWhiteSpace();
      }
    }
  }

  public static string RemoveWhiteSpace(this string s)
  {
    s = s.Replace("\t", "");
    s = s.Replace("\r", "");
    s = s.Replace("\n", "");
    return s;
  }
}

If you look carefully, I’m compiling my XML test data right into the unit test DLL.  Why am I doing that?  The company that I work for, as well as most serious companies, uses continuous integration tools such as a build server.  The problem with a build server is that your files might not end up in the same directory location on the build server as they are on your PC.  To ensure that the test files are there, compile them into the DLL and reference them from the namespace using Assembly.GetExecutingAssembly().  To make this work, you’ll have to mark your XML test files as an Embedded Resource (click on the XML file and change the Build Action property to Embedded Resource).  To access a file contained in a virtual directory called “TestData”, you’ll need to use the namespace, the virtual directory and the full file name:

XMLCreatorTests.TestData.XMLHouseOneWallOneWindow.xml
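If you prefer editing the project file directly, the same setting can be made in an SDK-style .csproj (this snippet assumes the test files live under a TestData folder):

```xml
<ItemGroup>
  <EmbeddedResource Include="TestData\*.xml" />
</ItemGroup>
```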

Now for a sample unit test:

[Fact]
public void TestOneWallNoWindow()
{
  // one wall, no windows
  var house = new House { Size = 2000 };
  house.Walls.Add(new Wall());

  Assert.Equal(XmlResultCompare.ReadExpectedXml("XMLCreatorTests.TestData.XMLHouseOneWallNoWindow.xml"), house.Serialize().RemoveWhiteSpace());
}

Notice how I filled in the house object with the size and added one wall.  The ReadExpectedXml() method removes whitespace automatically, so it’s important to remove it from the serialized version of house in order to match.

Where to Get the Code

As always, you can go to my GitHub account and download the sample application (click here).  I would recommend downloading the application and modifying it as a test to see how all the pieces work.  Add a unit test to see if you can match your expected XML with the XML serializer.

 


The Case for Unit Tests

Introduction

I’ve written a lot of posts on how to unit test, break dependencies, mocking objects, creating fakes, dependency injection and IOC containers.  I am a huge advocate of writing unit tests.  Unit tests are not the solution to everything, but they do solve a large number of problems that occur in software that is not unit tested.  In this post, I’m going to build a case for unit testing.

Purpose of Unit Tests

First, I’m going to assume that the person reading this post is not sold on the idea of unit tests.  So let me start by defining what a unit test is and what is not a unit test.  Then I’ll move on to defining the process of unit testing and how unit tests can save developers a lot of time.

A unit test is a tiny, simple test on a method or logic element in your software.  The goal is to create a test for each logical purpose that your code performs.  For a given “feature” you might have a hundred unit tests (more or less, depending on how complex the feature is).  For a method, you could have one, a dozen or hundreds of unit tests.  You’ll need to make sure you can cover different cases that can occur for the inputs to your methods and test for the appropriate outputs.  Here’s a list of what you should unit test:

  • Fence-post inputs.
  • Obtain full code coverage.
  • Nullable inputs.
  • Zero or empty string inputs.
  • Illegal inputs.
  • Representative set of legal inputs.

Let me explain what all of this means.  Fence-post inputs are dependent on the input data type.  If you are expecting an integer, what happens when you input a zero?  What about the maximum possible integer (int.MaxValue)?  What about minimum integer (int.MinValue)?

Obtaining full coverage means that you want to make sure you hit all the code inside your “if” statements as well as the “else” portions.  Here’s an example of a method:

public class MyClass
{
    public int MyMethod(int input1)
    {
        if (input1 == 0)
        {
            return 4;
        }
        else if (input1 > 0)
        {
            return 2;
        }
        return input1;
    }
}

How many unit tests would you need to cover all the code in this method?  You would need three:

  1. Test with input1 = 0, that will cover the code up to the “return 4;”
  2. Test with input = 1 or greater, that will cover the code to “return 2;”
  3. Test with input = -1 or less, that will cover the final “return input1;” line of code.

That will get you full coverage.  In addition to those three tests, you should account for min and max int values.  This is a trivial example, so min and max tests are overkill.  For larger code you might want to make sure that someone doesn’t break your code by changing the input data type.  Anyone changing the data type from int to something else would get failed unit tests that will indicate that they need to review the code changes they are performing and either fix the code or update the unit tests to provide coverage for the redefined input type.
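As a sketch, the three coverage tests plus the two fence-post tests could look like this in xUnit (the test names are my own):

```csharp
public class MyClassTests
{
  [Fact]
  public void InputIsZeroReturnsFour()
  {
    var myObject = new MyClass();
    Assert.Equal(4, myObject.MyMethod(0));
  }

  [Fact]
  public void InputIsPositiveReturnsTwo()
  {
    var myObject = new MyClass();
    Assert.Equal(2, myObject.MyMethod(1));
  }

  [Fact]
  public void InputIsNegativeReturnsInput()
  {
    var myObject = new MyClass();
    Assert.Equal(-1, myObject.MyMethod(-1));
  }

  [Fact]
  public void InputIsMaxValueReturnsTwo()
  {
    var myObject = new MyClass();
    Assert.Equal(2, myObject.MyMethod(int.MaxValue));
  }

  [Fact]
  public void InputIsMinValueReturnsInput()
  {
    var myObject = new MyClass();
    Assert.Equal(int.MinValue, myObject.MyMethod(int.MinValue));
  }
}
```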

Nullable data can be a real problem.  Many programmers don’t account for all null inputs.  If you are using an input type that can have null data, then you need to account for what will happen to your code when it receives that input type.

The number zero can have bad consequences.  If someone adds code and the input is in the denominator, then you’ll get a divide by zero error, and you should catch that problem before your code crashes.  Even if you are not performing a divide, you should probably test for zero, to protect a future programmer from adding code to divide and cause an error.  You don’t necessarily have to provide code in your method to handle zero.  The example above just returns the number 4.  But, if you setup a unit test with a zero for an input, and you know what to expect as your output, then that will suffice.  Any future programmer that adds a divide with that integer and doesn’t catch the zero will get a nasty surprise when they execute the unit tests.

If your method allows input data types like “string”, then you should check for illegal characters.  Does your method handle carriage returns?  Unprintable characters?  What about an empty string?  Strings can be null as well.

Don’t forget to test for your legal data.  The three tests in the previous example test for three different legal inputs.
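For illustration, here is what those string-input tests might look like for a hypothetical NameFormatter class whose Clean() method collapses line breaks and returns an empty string for null or empty input (both the class and its behavior are invented for this example):

```csharp
[Fact]
public void CleanNullInputReturnsEmptyString()
{
  var formatter = new NameFormatter();   // hypothetical object
  Assert.Equal(string.Empty, formatter.Clean(null));
}

[Fact]
public void CleanEmptyInputReturnsEmptyString()
{
  var formatter = new NameFormatter();
  Assert.Equal(string.Empty, formatter.Clean(""));
}

[Fact]
public void CleanCarriageReturnsAreReplaced()
{
  var formatter = new NameFormatter();
  Assert.Equal("John Smith", formatter.Clean("John\r\nSmith"));
}
```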

Fixing Bugs

The process of creating unit tests should occur as you are creating objects.  In fact, you should constantly think in terms of how you’re going to unit test your object before you start writing it.  Creating software is a lot like a sausage factory; even I sometimes write objects before unit tests, as well as the other way around.  I prefer to create an empty object with one or two proposed methods that I want to start with.  Then I’ll think up the unit tests I’ll need ahead of time.  Then I add some code, and that might trigger a thought for another unit test.  The unit tests go with the code you are writing, and it’s much easier to write the unit tests before, or just after, you create a small piece of code.  That’s because the code you just created is fresh in your mind and you know what it’s supposed to do.

Now suppose you have a monster that was created over several sprints.  Thousands of lines of code and four hundred unit tests.  You deploy your code to a Quality environment and a QA person discovers a bug.  Something you would never have thought about, but it’s an easy fix.  Yeah, it was something stupid; the fix will take about two seconds and you’re done!

Not so fast!  If you find a bug, create a unit test first.  Make sure the unit test triggers the bug.  If this is something that blew up one of your objects, then you need to create one or more unit tests that feeds the same input into your object and forces it to blow up.  Then fix the bug.  The unit test(s) should pass.

Now why did we bother?  If you’re a seasoned developer like me, there have been numerous times that another developer unfixes your bug fix.  It happens so often, that I’m never surprised when it does happen.  Maybe your fix caused an issue that was unreported.  Another developer secretly fixes your bug by undoing your fix, not realizing that they are unfixing a bug.  If you put a unit test in to account for a bug, then a developer that unfixes the bug will get an error from your unit test.  If your unit test is named descriptively, then that developer will realize that he/she is doing something wrong.  This episode just performed a regression test on your object.

Building Unit Tests is Hard!

At first unit tests are difficult to build.  The problem with unit testing has more to do with object dependency than with the idea of unit testing.  First, you need to learn how to write code that isn’t tightly coupled.  You can do this by using an IOC container.  In fact, if you’re not using an IOC container, then you’re just writing legacy code.  Somewhere down the line, some poor developer is going to have to “fix” your code so that they can create unit tests.

The next most difficult concept to overcome is learning how to mock or fake an object that is not the one being unit tested.  These can be wrappers around devices: database access, file I/O, SMTP drivers, etc.  For devices, learn how to use interfaces and wrappers.  Then you can use Moq to mock the wrappers in your unit tests.

Unit Tests are Small

You need to be conscious of what you are unit testing.  Don’t create a unit test that checks a whole string of objects at once (unless you want to consider those as integration tests).  Limit your unit tests to the smallest amount of code you need in order to test your functionality.  No need to be fancy.  Just simple.  Your unit tests should run fast.  A pile of slow-running unit tests brings no benefit to the quality of your product.  Developers will avoid running unit tests if it takes 10 minutes to run them all.  If your unit tests are taking too long to run, you’ll need to analyze what should be scaled back.  Maybe your program is too large and should be broken into smaller pieces (like APIs).

There are other reasons to keep your unit tests small and simple: some day one or more unit tests are going to fail.  The developer modifying code will need to look at the failing unit test and analyze what it is testing.  The quicker a developer can analyze and determine what is being tested, the quicker he/she can fix the bug that was caused, or update the unit test for the new functionality.  A philosophy of keeping code small should translate into your entire programming work pattern.  Keep your methods small as well.  That will keep your code from being nested too deep.  Make sure your methods serve a single purpose.  That will make unit testing easier.

A unit test only tests the methods of one object.  The only time you’ll break other objects’ unit tests is if you change your object’s public interface (its constructor parameters or public methods and properties).  If you change something in a private method, only the unit tests for the object you’re working on will fail.

Run Unit Tests Often

For a continuous integration environment, your unit tests should run right after your build.  If you have a build server (and you should), your build server must run the unit tests.  If your tests do not pass, then the build needs to be marked as broken.  If you only run your unit tests after you end your sprint, then you’re going to be in for a nasty surprise when hundreds of unit tests fail and you need to spend days trying to fix all the problems.  Your programming pattern should be: type some code, build, test, repeat.  If you test after each build, then you’ll catch mistakes as you make them.  Your failing unit tests will be minimal and you can fix your problem while you are focused on the logic that caused the failure.

Learning to Unit Test

There are a lot of resources on the Internet for the subject of unit testing.  I have written many blog posts on the subject that you can study by clicking on the following links:

 

Mocking Your File System

Introduction

In this post, I’m going to talk about basic dependency injection and mocking a method that is used to access hardware.  The method I’ll be mocking is System.IO.Directory.Exists().

Mocking Methods

One of the biggest headaches with unit testing is that you have to make sure you mock any objects that your method under test is calling.  Otherwise your test results could be dependent on something you’re not really testing.  As an example for this blog post, I will show how to apply unit tests to this very simple program:

class Program
{
    static void Main(string[] args)
    {
        var myObject = new MyClass();
        Console.WriteLine(myObject.MyMethod());
        Console.ReadKey();
    }
}

The object that is used above is:

public class MyClass
{
    public int MyMethod()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

Now, we want to create two unit tests to cover all the code in the MyMethod() method.  Here’s an attempt at one unit test:

[TestMethod]
public void test_temp_directory_exists()
{
    var myObject = new MyClass();
    Assert.AreEqual(3, myObject.MyMethod());
}

The problem with this unit test is that it will pass if your computer contains the c:\temp directory.  If your computer doesn’t contain c:\temp, then it will always fail.  If you’re using a continuous integration environment, you can’t control whether the directory exists or not.  To compound the problem, you really need to test both possibilities to get full test coverage of your method.  Adding a unit test to cover the case where c:\temp doesn’t exist would guarantee that one test would pass and the other fail.

The newcomer to unit testing might think: “I could just add code to my unit tests to create or delete that directory before the test runs!”  Except that would be a unit test that modifies your machine.  It would destroy anything you have in your c:\temp directory if you happen to use that directory for something.  Unit tests should not modify anything outside the unit test itself.  A unit test should never modify database data.  A unit test should not modify files on your system.  You should avoid creating physical files if possible, even temp files, because temp file usage will make your unit tests slower.

Unfortunately, you can’t just mock System.IO.Directory.Exists().  The way to get around this is to create a wrapper object, then inject the object into MyClass and then you can use Moq to mock your wrapper object to be used for unit testing only.  Your program will not change, it will still call MyClass as before.  Here’s the wrapper object and an interface to go with it:

public class FileSystem : IFileSystem
{
  public bool DirectoryExists(string directoryName)
  {
    return System.IO.Directory.Exists(directoryName);
  }
}

public interface IFileSystem
{
    bool DirectoryExists(string directoryName);
}

Your next step is to provide an injection point into your existing class (MyClass).  You can do this by creating two constructors, the default constructor that initializes this object for use by your method and a constructor that expects a parameter of IFileSystem.  The constructor with the IFileSystem parameter will only be used by your unit test.  That is where you will pass along a mocked version of your filesystem object with known return values.  Here are the modifications to the MyClass object:

public class MyClass
{
    private readonly IFileSystem _fileSystem;

    public MyClass(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public MyClass()
    {
        _fileSystem = new FileSystem();
    }

    public int MyMethod()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

This is the point where your program should operate as normal.  Notice how I did not need to modify the original call to MyClass that occurred in the “Main()” of the program.  The default MyClass() constructor will create a FileSystem wrapper instance and use that object instead of calling System.IO.Directory.Exists() directly.  The result will be the same.  The difference is that now you can create two unit tests with mocked versions of IFileSystem in order to test both possible outcomes of the existence of “c:\temp”.  Here is an example of the two unit tests:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(3, myObject.MyMethod());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(5, myObject.MyMethod());
}

Make sure you include the NuGet package for Moq.  You’ll notice that in the first unit test, we’re testing MyClass with a mocked up version of a system where “c:\temp” exists.  In the second unit test, the mock returns false for the directory exists check.

One thing to note: You must provide a matching input on x.DirectoryExists() in the mock setup.  If it doesn’t match what is used in the method, then you will not get the results you expect.  In this example, the directory being checked is hard-coded in the method and we know that it is “c:\temp”, so that’s how I mocked it.  If there is a parameter that is passed into the method, then you can mock some test value, and pass the same test value into your method to make sure it matches (the actual test parameter doesn’t matter for the unit test, only the results).
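As a side note, if you don’t want a test to depend on the exact path string, Moq’s It.IsAny&lt;string&gt;() matcher will accept any argument.  Here’s a hedged sketch, reusing the IFileSystem interface and MyClass from above:

```csharp
[TestMethod]
public void test_directory_check_with_any_path()
{
    var mockFileSystem = new Mock<IFileSystem>();

    // It.IsAny<string>() matches every call to DirectoryExists,
    // no matter which path the method under test passes in.
    mockFileSystem.Setup(x => x.DirectoryExists(It.IsAny<string>())).Returns(true);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(3, myObject.MyMethod());
}
```

This is handy when the path is computed inside the method; the trade-off is that the test no longer verifies which directory was actually checked.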

Using an IOC Container

This sample is setup to be extremely simple.  I’m assuming that you have existing .Net legacy code and you’re attempting to add unit tests to it.  Normally, legacy code is hopelessly un-unit-testable.  In other words, it’s usually not worth the effort to apply unit tests because of the tightly coupled nature of legacy code.  There are situations where adding unit tests to legacy code is not too difficult.  This can occur if the code is relatively new and the developer(s) took some care in how they built it.  If you are building new code, you can use this same technique from the beginning, but you should also plan your entire project to use an IOC container.  I would not recommend refactoring an existing project to use an IOC container.  That is a level of madness that I have attempted more than once, with many man-hours of wasted time trying to figure out what is wrong with the scoping of my objects.

If your code is relatively new and you have refactored to use constructors as your injection points, you might be able to adapt it to an IOC container.  If you are building your code from the ground up, you should use an IOC container.  Do it now and save yourself the headache of trying to figure out how to inject objects three levels deep.  What am I talking about?  Here’s an example of a program that is tightly coupled:

class Program
{
    static void Main(string[] args)
    {
        var myRootClass = new MyRootClass();

        myRootClass.Increment();

        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}
public class MyRootClass
{
  readonly ChildClass _childClass = new ChildClass();

  public bool CountExceeded()
  {
    if (_childClass.TotalNumbers() > 5)
    {
        return true;
    }
    return false;
  }

  public void Increment()
  {
    _childClass.IncrementIfTempDirectoryExists();
  }
}

public class ChildClass
{
    private int _myNumber;

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}

The example code above is very typical legacy code.  The “Main()” calls the first object, “MyRootClass”, and that object calls a child class that uses System.IO.Directory.Exists().  You can use the previous example to unit test ChildClass for the cases when c:\temp exists and when it doesn’t.  When you start to unit test MyRootClass, there’s a nasty surprise: how do you inject your directory wrapper into that class?  If you had to inject class wrappers and mocked classes for every child class of a class, the constructor could become incredibly large.  This is where IOC containers come to the rescue.

As I’ve explained in other blog posts, an IOC container is like a dictionary of your objects.  When you create your objects, you must create a matching interface for each object.  The index of the IOC dictionary is the interface name that represents your object.  Then you only call other objects using the interface as your data type, and ask the IOC container for the object that is in the dictionary.  I’m going to make up a simple IOC container object just for demonstration purposes.  Do not use this for your code; use something like AutoFac for your IOC container.  This sample is just to show the concept of how it all works.  Here’s the container object:

public class IOCContainer
{
  private static readonly Dictionary<string,object> ClassList = new Dictionary<string, object>();
  private static IOCContainer _instance;

  public static IOCContainer Instance => _instance ?? (_instance = new IOCContainer());

  public void AddObject<T>(string interfaceName, T theObject)
  {
    ClassList.Add(interfaceName,theObject);
  }

  public object GetObject(string interfaceName)
  {
    return ClassList[interfaceName];
  }

  public void Clear()
  {
    ClassList.Clear();
  }
}

This object is a singleton (global object) so that it can be used by any object in your project/solution.  Basically it’s a container that holds pointers to all of your object instances.  This is a very simple example, so I’m going to ignore scoping for now.  I’m going to assume that all your objects contain no special dependent initialization code.  In a real-world example, you’ll have to analyze what is initialized when your objects are created and determine how to set up the scoping in the IOC container.  AutoFac has options for when each object will be created.  This example creates all the objects before the program starts to execute.  There are many reasons why you might not want to create an object until it’s actually used.  Keep that in mind when you are looking at this simple example program.
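For reference, AutoFac expresses those creation-time choices through its lifetime methods.  This is only a sketch of the options (the registrations are illustrative, but SingleInstance, InstancePerDependency, and InstancePerLifetimeScope are real AutoFac calls):

```csharp
var builder = new ContainerBuilder();

// One shared instance for the life of the container.
builder.RegisterType<FileSystem>().As<IFileSystem>().SingleInstance();

// A brand-new instance every time the dependency is resolved.
builder.RegisterType<ChildClass>().As<IChildClass>().InstancePerDependency();

// One instance per lifetime scope (e.g. per web request).
builder.RegisterType<MyRootClass>().As<IMyRootClass>().InstancePerLifetimeScope();

var container = builder.Build();
```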

In order to use the above container, we’ll need the same FileSystem object and interface from the previous program.  Then create an interface for MyRootClass and ChildClass.  Next, you’ll need to go through your program and find every location where an object is instantiated (look for the “new” keyword).  Replace those instances like this:

public class ChildClass : IChildClass
{
    private int _myNumber;
    private readonly IFileSystem _fileSystem = (IFileSystem)IOCContainer.Instance.GetObject("IFileSystem");

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}

Instead of creating a new instance of FileSystem, you’ll ask the IOC container to give you the instance that was created for the interface called IFileSystem.  Notice how there is no injection in this object.  AutoFac and other IOC containers have facilities to perform constructor injection automatically.  I don’t want to introduce that level of complexity in this example, so for now I’ll just pretend that we need to go to the IOC container object directly for the main program as well as the unit tests.  You should be able to see the pattern from this example.

Once all your classes are updated to use the IOC container, you’ll need to change your “Main()” to setup the container.  I changed the Main() method like this:

static void Main(string[] args)
{
    ContainerSetup();

    var myRootClass = (IMyRootClass)IOCContainer.Instance.GetObject("IMyRootClass");
    myRootClass.Increment();

    Console.WriteLine(myRootClass.CountExceeded());
    Console.ReadKey();
}

private static void ContainerSetup()
{
    IOCContainer.Instance.AddObject<IChildClass>("IChildClass",new ChildClass());
    IOCContainer.Instance.AddObject<IMyRootClass>("IMyRootClass",new MyRootClass());
    IOCContainer.Instance.AddObject<IFileSystem>("IFileSystem", new FileSystem());
}

Technically the MyRootClass object does not need to be included in the IOC container since no other object is dependent on it.  I included it to demonstrate that all objects should be inserted into the IOC container and referenced from the instance in the container.  This is the design pattern used by IOC containers.  Now we can write the following unit tests:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);

    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);

    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);

    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(true,myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);

    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}

In these unit tests, we put the mocked up object used by the object under test into the IOC container.  I have provided a “Clear()” method to reset the IOC container for the next test.  When you use AutoFac or other IOC containers, you will not need the container object in your unit tests.  That’s because IOC containers like the one built into .Net Core and AutoFac use the constructor of the object to perform injection automatically.  That makes your unit tests easier because you just use the constructor to inject your mocked up object and test your object.  Your program uses the IOC container to magically inject the correct object according to the interface used by your constructor.

Using AutoFac

Take the previous example and create a new constructor for each class and pass the interface as a parameter into the object like this:

private readonly IFileSystem _fileSystem;

public ChildClass(IFileSystem fileSystem)
{
    _fileSystem = fileSystem;
}

Instead of asking the IOC container for the object that matches the interface IFileSystem, I have set up the object to expect the fileSystem object to be passed in as a parameter to the class constructor.  Make this change for each class in your project.  Next, change your main program to include AutoFac (NuGet package) and refactor your IOC container setup to look like this:

static void Main(string[] args)
{
    IOCContainer.Setup();

    using (var myLifetime = IOCContainer.Container.BeginLifetimeScope())
    {
        var myRootClass = myLifetime.Resolve<IMyRootClass>();

        myRootClass.Increment();

        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}

public static class IOCContainer
{
    public static IContainer Container { get; set; }

    public static void Setup()
    {
        var builder = new ContainerBuilder();

        builder.Register(x => new FileSystem())
            .As<IFileSystem>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new ChildClass(x.Resolve<IFileSystem>()))
            .As<IChildClass>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new MyRootClass(x.Resolve<IChildClass>()))
            .As<IMyRootClass>()
            .PropertiesAutowired()
            .SingleInstance();

        Container = builder.Build();
    }
}

I have ordered the builder.Register calls from the innermost to the outermost object classes.  This is not really necessary, since resolution will not occur until the IOC container is asked for the object to be used.  In other words, you can define MyRootClass first, followed by FileSystem and ChildClass, or in any order you want.  The Register call just stores your definition of which physical object will be represented by each interface and which dependencies it will depend on.
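As a side note, you don’t have to write the constructor lambdas yourself.  AutoFac’s RegisterType can discover each constructor’s parameters through reflection and resolve them from the container; this sketch should behave the same as the Setup() method above:

```csharp
public static void Setup()
{
    var builder = new ContainerBuilder();

    // RegisterType inspects each class's constructor and resolves
    // its parameters (IFileSystem, IChildClass) from the container.
    builder.RegisterType<FileSystem>().As<IFileSystem>().SingleInstance();
    builder.RegisterType<ChildClass>().As<IChildClass>().SingleInstance();
    builder.RegisterType<MyRootClass>().As<IMyRootClass>().SingleInstance();

    Container = builder.Build();
}
```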

Now you can cleanup your unit tests to look like this:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);

    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(true, myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);

    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}

Do not include the AutoFac NuGet package in your unit test project.  It’s not needed.  Each object is isolated from all other objects.  You will still need to mock any injected objects, but the injection occurs at the constructor of each object.  All dependencies have been isolated so you can unit test with ease.

Where to Get the Code

As always, I have posted the sample code up on my GitHub account.  This project contains four different sample projects.  I would encourage you to download each sample and experiment/practice with them.  You can download the samples by following the links listed here:

  1. MockingFileSystem
  2. TightlyCoupledExample
  3. SimpleIOCContainer
  4. AutoFacIOCContainer
 

Dot Net Core Using the IOC Container

I’ve talked about Inversion Of Control in previous posts, but I’m going to go over it again.  If you’re new to IOC containers, breaking dependencies and unit testing, then this is the blog post you’ll want to read.  So let’s get started…

Basic Concept of Unit Testing

Developing and maintaining software is one of the most complex tasks ever performed by humans.  Software can grow to proportions that cannot be understood by any one person at a time.  To compound the issue of maintaining and enhancing code, there is the problem that one small change in code can affect the operation of something that seems unrelated.  Engineers that build something physical, like, say, a jumbo jet, can identify a problem and fix it.  They usually don’t expect a problem with the wing to affect the passenger seats.  In software, all bets are off.  So there needs to be a way to test everything when a small change is made.

The reason you want to create a unit test is to put in place a tiny automatic regression test.  This test is executed every time you change code to add an enhancement.  If you change some code, the test runs and ensures that you didn’t break a feature that you already coded and tested previously.  Each time you add one feature, you add a unit test.  Eventually, you end up with a collection of unit tests covering each combination of features used by your software.  These tests ride along with your source code forever.  Ideally, you want to always regression test every piece of logic that you’ve written.  In theory this will prevent you from breaking existing code when you add a new enhancement.

To ensure that you are unit testing properly, you need to understand coverage.  Coverage is not everything, but it’s a measurement of how much of your code is exercised by your unit tests, and you should strive to maximize it.  There are tools that can measure this for you, though some are expensive.  One aspect of coverage that you need to be aware of is the compound “if” statement:

if (input == 'A' || input =='B')
{
    // do something
}

This is a really simple example, but your unit test suite might contain a test that feeds the character A into the input and you’ll get coverage for the inner part of the if statement.  However, you have not tested when the input is B and that input might be used by other logic in a slightly different way.  Technically, we don’t have 100% coverage.  I just want you to be aware that this issue exists and you might need to do some analysis of your code coverage when you’re creating unit tests.
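To illustrate, here’s what testing both halves of that condition might look like.  The InputProcessor class is made up for this sketch; the point is simply that each input gets its own test:

```csharp
// Hypothetical class under test (not part of the original sample).
public class InputProcessor
{
    public bool HandleInput(char input)
    {
        if (input == 'A' || input == 'B')
        {
            return true;
        }
        return false;
    }
}

public class InputProcessorTests
{
    [Fact]
    public void handle_input_accepts_a()
    {
        // Covers the left side of the || condition.
        Assert.True(new InputProcessor().HandleInput('A'));
    }

    [Fact]
    public void handle_input_accepts_b()
    {
        // Covers the right side, which a lone 'A' test would miss.
        Assert.True(new InputProcessor().HandleInput('B'));
    }
}
```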

One more thing about unit tests, and this is very important to keep in mind.  When you deploy this software and bugs are reported, you will need to add a unit test for each bug reported.  The unit test must break your code exactly the way the bug did.  Then you fix the bug, and that prevents any other developer from undoing your bug fix.  Of course, your bug fix will be followed by another unit test suite run to make sure you didn’t break anything else.  This will help you make forward progress in your quest for bug-free or low-bug software.

Dependencies

So you’ve learned the basics of unit test writing and you’re creating objects and putting one or more unit tests on each method.  Suddenly you run into an issue.  Your object connects to a device for input.  An example is that you read from a text file or you connect to a database to read and write data.  Your unit test should never cause files to be written or data to be written to a real database.  It’s slow, and the data being written would need to be cleaned out when the test completed.  What if the tests fail?  Your test data might still be in the database.  Even if you set up a test database, you would not be able to run two versions of your unit tests at the same time (think of two developers executing their local copies of the unit test suite).

The device being used is called a dependency.  The object depends on the device and it cannot operate properly without the device.  To get around dependencies, we need to create a fake or mock database or a fake file I/O object to put in place of the real database or file I/O when we run our unit tests.  The problem is that we need to somehow tell the object under test to use the fake or mock instead of the real thing.  The object must also default to the real database or file I/O when not under test.

The current trend in breaking dependencies involves a technique called Inversion Of Control, or IOC.  What IOC does is allow us to define all object creation points at program startup time.  When unit tests are run, we substitute the objects that perform database and I/O functions with fakes.  Then we call our objects under test, and the IOC system takes care of wiring the correct dependencies together.  Sounds easy.

IOC Container Basics

Here are the basics of how an IOC container works.  I’m going to cut out all the complications involved and keep this super simple.

First, there’s the container.  This is a dictionary of interfaces and classes that is used as a lookup.  Basically, you create your object and then you create a matching interface for your object.  When you call one object from another, you use the interface to look up which class to call from your object.  Consider object A being dependent on object B.  Here’s a tiny code sample:

public class A
{
  public void MyMethod()
  {
    var b = new B();

    b.DependentMethod();
  }
}

public class B
{
  public void DependentMethod()
  {
    // do something here
  }
}

As you can see, class B is created inside class A.  To break the dependency we need to create an interface for each class and add them to the container:

public interface IB
{
  void DependentMethod();
}

public interface IA
{
  void MyMethod();
}

Inside Program.cs:

var serviceProvider = new ServiceCollection()
  .AddSingleton<IB, B>()
  .AddSingleton<IA, A>()
  .BuildServiceProvider();

var a = serviceProvider.GetService<IA>();
a.MyMethod();

Then modify the existing objects to use the interfaces and provide for the injection of B into object A:

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public void MyMethod()
  {
    _b.DependentMethod();
  }
}

public class B : IB
{
  public void DependentMethod()
  {
    // do something here
  }
}

The service collection object is where all the magic occurs.  This object is filled with definitions of which interface will be matched with which class.  As you can see from the insides of class A, there is no longer a reference to class B anywhere.  Only the interface is used to reference the object that is passed (injected) into the constructor, and that object must conform to IB (interface B).  The service collection will look up IB, see that it needs to create an instance of B, and pass that along.  When MyMethod() is executed in A, it just calls the _b.DependentMethod() method without worrying about the actual instance of _b.  What does that do for us when we are unit testing?  Plenty.

Mocking an Object

Now I’m going to use a NuGet package called Moq.  This framework is exactly what we need because it can take an interface and create a fake object that we can apply simulated outputs to.  First, let’s modify our A and B class methods to return some values:

public class B : IB
{
  public int DependentMethod()
  {
    return 5;
  }
}

public interface IB
{
  int DependentMethod();
}

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public int MyMethod()
  {
    return _b.DependentMethod();
  }
}

public interface IA
{
  int MyMethod();
}

I have purposely kept this so simple that there’s nothing being done.  As you can see, DependentMethod() just returns the number 5 in real life.  Your methods might perform a calculation and return the result, or you might have a random number generator or it’s a value read from your database.  This example just returns 5 and we don’t care about that because our mock object will return any value we want for the unit test being written.

Now the unit test using Moq looks like this:

[Fact]
public void ClassATest1()
{
    var mockedB = new Mock<IB>();
    mockedB.Setup(b => b.DependentMethod()).Returns(3);

    var a = new A(mockedB.Object);

    Assert.Equal(3, a.MyMethod());
}

The first line of the test creates a mock of object B called “mockedB”.  The next line creates a fake return for any call to the DependentMethod() method.  Next, we create an instance of class A (the real class) and inject the mocked B object into it.  We’re not using the container for the unit test because we don’t need to.  Technically, we could create a container and put the mocked B object into one of the service collection items, but this is simpler.  Keep your unit tests as simple as possible.

Now that there is an instance of class A called “a”, we can assert to test if a.MyMethod() returns 3.  If it does, then we know that the mocked object was called by object “a” instead of a real object of class B (since that always returns a 5).

Where to Get the Code

As always you can get the latest code used by this blog post at my GitHub account by clicking here.

 

Dot Net Core In Memory Unit Testing Using xUnit

When I started using .Net Core and xUnit I found it difficult to find information on how to mock or fake the Entity Framework database code.  So I’m going to show a minimized code sample using xUnit, Entity Framework, In Memory Database with .Net Core.  I’m only going to setup two projects: DataSource and UnitTests.

The DataSource project contains the repository, domain and context objects necessary to connect to a database using Entity Framework.  Normally you would not unit test this project.  It is supposed to be set up as a group of pass-through objects and interfaces.  I’ll setup POCOs (Plain Old C# Object) and their entity mappings to show how to keep your code as clean as possible.  There should be no business logic in this entire project.  In your solution, you should create one or more business projects to contain the actual logic of your program.  These projects will contain the objects under unit test.

The UnitTest project speaks for itself.  It will contain the in-memory Entity Framework fake code with some test data and a sample of two unit tests.  Why two tests?  Because it’s easy to create a demonstration with one unit test.  Two tests will be used to demonstrate how to ensure that your test data initializer doesn’t accidentally get called twice (causing twice as much data to be created).

The POCO

I’ve written about Entity Framework before, and usually I’ll use data annotations, but POCOs are much cleaner.  If you look at some of my blog posts about NHibernate, you’ll see the POCO technique used.  The technique of using POCOs means that you’ll also need to set up a separate class of mappings for each table.  This keeps your code separated into logical parts.  For my sample, I’ll put the mappings into the Repository folder and call them TablenameConfig.  The mapping class will be a static class so that I can use an extension method to apply the mappings.  I’m getting ahead of myself, so let’s start with the POCO:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}

That’s it.  If you have the database defined, you can use a mapping or POCO generator to create this code and just paste each table into its own C# source file.  All the POCO objects are in the Domain folder (there’s only one, and that’s the Product table POCO).

The Mappings

The mappings file looks like this:

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource.Repository
{
    public static class ProductConfig
    {
        public static void AddProduct(this ModelBuilder modelBuilder, string schema)
        {
            modelBuilder.Entity<Product>(entity =>
            {
                entity.ToTable("Product", schema);

                entity.HasKey(p => p.Id);

                entity.Property(e => e.Name)
                    .HasColumnName("Name")
                    .IsRequired(false);

                entity.Property(e => e.Price)
                    .HasColumnName("Price")
                    .IsRequired(false);
            });
        }
    }
}

That is the whole file, so now you know what to include in your usings.  This class will be an extension method to a modelBuilder object.  Basically, it’s called like this:

modelBuilder.AddProduct("dbo");

I passed the schema as a parameter.  If you are only using the dbo schema, then you can just remove the parameter and hard-code it inside the ToTable() method.  You can and should expand your mappings to include relational integrity constraints.  The purpose of mirroring your database constraints in Entity Framework is to give you an early heads-up, as soon as your LINQ queries run against the model, if you are violating a constraint on the database.  In the “good ol’ days,” when accessing a database from code meant you created a string to pass directly to MS SQL Server (remember ADO?), you didn’t know if you would break a constraint until run time.  That makes testing more difficult, since you have to be aware of what constraints exist while you’re focused on creating your business code.  By creating each table as a POCO and a set of mappings, you can focus on creating your database code first.  Then when you are focused on your business code, you can ignore constraints, because they won’t ignore you!
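As an illustration of what a constraint mapping might look like, here’s a hedged sketch of a hypothetical Order table that references Product with a foreign key (the Order POCO and its columns are invented for this example; HasOne/WithMany/HasForeignKey are real EF Core mapping calls):

```csharp
// Hypothetical: an Order row that must reference an existing Product.
modelBuilder.Entity<Order>(entity =>
{
    entity.ToTable("Order", schema);
    entity.HasKey(o => o.Id);

    // Mirrors the database's foreign key constraint so EF knows
    // about the relationship when you write LINQ queries.
    entity.HasOne(o => o.Product)
        .WithMany()
        .HasForeignKey(o => o.ProductId);
});
```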

The EF Context

Sometimes I start by writing my context first, then create all the POCOs and then the mappings.  Kind of a top-down approach.   In this example, I’m pretending that it’s done the other way around.  You can do it either way.  The context for this sample looks like this:

using DataSource.Domain;
using DataSource.Repository;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public class StoreAppContext : DbContext, IStoreAppContext
    {
        public StoreAppContext(DbContextOptions<StoreAppContext> options)
        : base(options)
        {

        }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.AddProduct("dbo");
        }
    }
}

You can see immediately how I put the mapping setup code inside the OnModelCreating() method.  As you add POCOs, you’ll need one of these for each table.  There is also an EF context interface defined, which is never actually used in my unit tests.  The purpose of the interface will be used in actual code in your program.  For instance, if you setup an API you’re going to end up using an IOC container to break dependencies.  In order to do that, you’ll need to reference the interface in your code and then you’ll need to define which object belongs to the interface in your container setup, like this:

services.AddScoped<IStoreAppContext>(provider => provider.GetService<StoreAppContext>());

If you haven’t used IOC containers before, you should know that the above code will add an entry to a dictionary of interfaces and objects for the application to use.  In this instance the entry for IStoreAppContext will match the object StoreAppContext.  So any object that references IStoreAppContext will end up getting an instance of the StoreAppContext object.  But IOC containers are not what this blog post is about (I’ll create a blog post on that subject later).  So let’s move on to the unit tests, which are what this blog post is really about.
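For completeness, the IStoreAppContext interface mentioned above is just a thin mirror of the context’s public surface.  Here’s a minimal sketch (the exact members in the sample project may differ):

```csharp
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public interface IStoreAppContext
    {
        // Expose just enough of the context for business code to
        // query Products and persist changes without knowing the
        // concrete StoreAppContext type.
        DbSet<Product> Products { get; set; }
        int SaveChanges();
    }
}
```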

The Unit Tests

As I mentioned earlier, you’re not actually going to write unit tests against your database repository.  It’s redundant.  What you’re attempting to do is write a unit test covering a feature of your business logic, and the database is getting in your way because your business object calls the database in order to make a decision.  What you need is a fake database in memory that contains the exact data you want your object to read, so you can check and see if it makes the correct decision.  You want to create unit tests for each tiny little decision made by your objects and methods, and you want to be able to feed different sets of data to each test, or you can set up a large set of test data and use it for many tests.

Here’s the first unit test:

[Fact]
public void TestQueryAll()
{
    var temp = (from p in _storeAppContext.Products select p).ToList();

    Assert.Equal(2, temp.Count);
    Assert.Equal("Rice", temp[0].Name);
    Assert.Equal("Bread", temp[1].Name);
}

I'm using xUnit, and this test just checks that there are two items in the product table, one named "Rice" and the other named "Bread".  The _storeAppContext variable needs to be a valid Entity Framework context connected to an in-memory database; we don't want to be changing a real database when we unit test.  The code for setting up the in-memory data looks like this:

var builder = new DbContextOptionsBuilder<StoreAppContext>()
    .UseInMemoryDatabase();
Context = new StoreAppContext(builder.Options);

Context.Products.Add(new Product
{
    Name = "Rice",
    Price = 5.99m
});
Context.Products.Add(new Product
{
    Name = "Bread",
    Price = 2.35m
});

Context.SaveChanges();

This is just a code snippet; I'll show how it fits into your unit test class in a minute.  First, a DbContextOptionsBuilder object (builder) is created.  This gets you an in-memory database with the tables defined in the mappings of StoreAppContext.  Next, you create the context that your unit tests will use from builder.Options.  Once the context exists, you can pretend you're connected to a real database: just add items and save them.  I would create classes for each set of test data and put them in a directory in your unit test project (I usually call the directory TestData).
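For example, a test-data class in that TestData directory might look like this sketch (the ProductTestData name is my own; the sample code may organize it differently):

```csharp
// Hypothetical seeding class kept in a TestData directory.
// Each test setup can call Seed() to load one known scenario
// into the in-memory context.
public static class ProductTestData
{
    public static void Seed(StoreAppContext context)
    {
        context.Products.Add(new Product { Name = "Rice", Price = 5.99m });
        context.Products.Add(new Product { Name = "Bread", Price = 2.35m });
        context.SaveChanges();
    }
}
```

Keeping each scenario in its own named method makes it obvious which data a given test depends on.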

Now you're probably thinking: I can just call this code from each of my unit tests.  Which leads to the thought: I can just put this code in the unit test class constructor.  That sounds good; however, the test runner constructs your test class once for each test method, so you end up adding to the existing in-memory database over and over.  Your first unit test will see two rows of Product data, the second unit test will see four rows, and so on.  Go ahead and copy the above code into your constructor and see what happens: TestQueryAll() will fail because there will be 4 records instead of the expected 2.  So how do we make sure the initialization executes exactly once, before the first unit test runs?  That's where IClassFixture comes in.  This is an interface used by xUnit, and you basically add it to your unit test class like this:

public class StoreAppTests : IClassFixture<TestDataFixture>
{
    // unit test methods
}

Then you define your test fixture class like this:

using System;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace UnitTests
{
    public class TestDataFixture : IDisposable
    {
        public StoreAppContext Context { get; set; }

        public TestDataFixture()
        {
            var builder = new DbContextOptionsBuilder<StoreAppContext>()
                .UseInMemoryDatabase();
            Context = new StoreAppContext(builder.Options);

            Context.Products.Add(new Product
            {
                Name = "Rice",
                Price = 5.99m
            });
            Context.Products.Add(new Product
            {
                Name = "Bread",
                Price = 2.35m
            });

            Context.SaveChanges();
        }

        public void Dispose()
        {

        }
    }
}

Next, you’ll need to add some code to the unit test class constructor that reads the context property and assigns it to an object property that can be used by your unit tests:

private readonly StoreAppContext _storeAppContext;

public StoreAppTests(TestDataFixture fixture)
{
    _storeAppContext = fixture.Context;
}

What happens is that xUnit calls the constructor of the TestDataFixture object one time.  This creates the context and assigns it to the fixture's Context property.  The unit test class constructor is then called for each unit test, but it only copies the fixture's context into the test class so the test methods can reference it.  Now run your unit tests and you'll see that the same data is available for each unit test.

One thing to keep in mind: you'll need to tear down and rebuild your data for each unit test if your tests call methods that insert or update the test data.  For that setup, use the test fixture to populate static lookup tables (tables not modified by any of your business logic).  Then create a data initializer and a data destroyer that fill and clear the tables that are modified by your unit tests.  The data initializer is called inside the unit test class constructor, and the destroyer is called in the class's Dispose() method.
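As a sketch of that shape (the Order POCO and Orders table here are hypothetical, purely for illustration):

```csharp
// Sketch: per-test seed and teardown for mutable tables, assuming a
// hypothetical Orders table.  Static lookup data stays in the fixture;
// this constructor/Dispose pair resets only the data the tests modify.
public class OrderTests : IClassFixture<TestDataFixture>, IDisposable
{
    private readonly StoreAppContext _context;

    public OrderTests(TestDataFixture fixture)
    {
        _context = fixture.Context;

        // Data initializer: runs before every test method.
        _context.Orders.Add(new Order { ProductName = "Rice", Quantity = 1 });
        _context.SaveChanges();
    }

    public void Dispose()
    {
        // Data destroyer: runs after every test method, so the next
        // test starts from a clean table.
        _context.Orders.RemoveRange(_context.Orders);
        _context.SaveChanges();
    }
}
```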

Where to Get the Code

You can get the complete source code from my GitHub account by clicking here.


Unit Testing with Moq

Introduction

There are a lot of articles on how to use Moq, but I’m going to bring out my die roller game example to show how to use Moq to roll a sequence of predetermined results.  I’m also going to do this using .Net Core.

The Setup

My sample program is a game.  The game itself is empty, because I want to show the minimal code needed to demonstrate Moq.  So let's pretend there is a game object that uses a die roll object to get a random outcome.  For those who have never programmed a game before, a die roll can be used to determine the offense or defense of one battle unit attacking another in a turn-based board game.  The randomness is the problem: unit tests must be repeatable, and we must test as much code as possible (maximize our code coverage).

The sample project uses a Game object that is dependent on the DieRoller object.  To break that dependency, I require an IDieRoller instance to be fed into the Game object's constructor:

public class Game
{
    private IDieRoller _dieRoller;

    public Game(IDieRoller dieRoller)
    {
        _dieRoller = dieRoller;
    }

    public int Play()
    {
        return _dieRoller.DieRoll();
    }
}

Now I can feed a Moq object into the Game object and control what the die roll will be.  For the game itself, I can use the actual DieRoller object by default:

public static void Main(string[] args)
{
    var game = new Game(new DieRoller());
}

An IOC container could be used as well, and I would highly recommend it for a real project.  I’ll skip the IOC container for this blog post.

The unit test can look something like this:

[Fact]
public void test_one_die_roll()
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.Setup(x => x.DieRoll())
	.Returns(2);

	var game = new Game(dieRoller.Object);
	var result = game.Play();
	Assert.Equal(2, result);
}

I'm using xUnit and Moq in the above example.  So for my .Net Core project, the project.json file looks like this:

{
	"version": "1.0.0-*",
	"testRunner": "xunit",
	"dependencies": {
		"DieRollerLibrary": "1.0.0-*",
		"GameLibrary": "1.0.0-*",
		"Microsoft.NETCore.App": {
			"type": "platform",
			"version": "1.0.1"
		},
		"Moq": "4.6.38-alpha",
		"xunit": "2.2.0-beta2-build3300",
		"xunit.core": "2.2.0-beta2-build3300",
		"dotnet-test-xunit": "2.2.0-preview2-build1029",
		"xunit.runner.visualstudio": "2.2.0-beta2-build1149"
	},
	"frameworks": {
		"netcoreapp1.0": {
			"imports": "dnxcore50"
		}
	}
}


Make sure you check the versions of these packages, since they were changing constantly as of this blog post.  It's probably best to use the NuGet package window or the console to get the latest versions.

Breaking Dependencies

What does Moq do?  Moq is a quick way to create a fake object instance without writing a fake object by hand.  Moq takes an interface or object definition and creates a local instance with outputs that you control.  In the xUnit sample above, Moq is told to return the number 2 when the DieRoll() method is called.

Why mock an object?  As you create code, you'll end up with objects that call other objects.  These calls create dependencies.  In this example, the Game object is dependent on the DieRoller object.

Each object should have its own unit tests.  If we are testing two or more objects that are connected together, then technically we're performing an integration test.  To break dependencies, we need all objects not under test to be faked or mocked out.  If the Game object has multiple paths (if/then or case statements, for example) that depend on the roll of the die, then we'll need unit tests where we can fix the die roll to a known set of values, execute the Game object, and check the expected results.

First, I’m going to add a method to the Game class that will determine the outcome of an attack.  If the die roll is greater than 4, then the attack is successful (unit is hit).  If the die roll is 4 or less, then it’s a miss.  I’ll use true for a hit and false for a miss.  Here is my new Game class:

public class Game
{
    private IDieRoller _dieRoller;

    public Game(IDieRoller dieRoller)
    {
        _dieRoller = dieRoller;
    }

    public int Play()
    {
        return _dieRoller.DieRoll();
    }

    public bool Attack()
    {
        if (_dieRoller.DieRoll() > 4)
        {
            return true;
        }

        return false;
    }
}


Now if we define a unit test like this:

[Theory]
[InlineData(1)]
[InlineData(2)]
[InlineData(3)]
[InlineData(4)]
public void test_attack_unsuccessful(int dieResult)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.Setup(x => x.DieRoll())
	.Returns(dieResult);

	var game = new Game(dieRoller.Object);
	var result = game.Attack();
	Assert.False(result);
}


We can test all instances where the die roll should produce a false result.  To make sure we have full coverage, we’ll need to test the other two die results (where the die is a 5 or a 6):

[Theory]
[InlineData(5)]
[InlineData(6)]
public void test_attack_successful(int dieResult)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.Setup(x => x.DieRoll())
	.Returns(dieResult);

	var game = new Game(dieRoller.Object);
	var result = game.Attack();
	Assert.True(result);
}

Another Example

Now I’m going to make it complicated.  Sometimes in board games, we use two die rolls to determine an outcome.  First, I’m going to define an enum to allow three distinct results of an attack:

public enum AttackResult
{
	Miss,
	Destroyed,
	Damaged
}


Next, I’m going to create a new method named Attack2():

public AttackResult Attack2()
{
	if (_dieRoller.DieRoll() > 4)
	{
		if (_dieRoller.DieRoll() > 3)
		{
			return AttackResult.Damaged;
		}
		return AttackResult.Destroyed;
	}
	return AttackResult.Miss;
}


As you can see, the die could be rolled up to two times.  So, in order to test the results, you'll need to fake two rolls before calling the game object.  I'm going to use xUnit's "Theory" attribute to feed value pairs that represent a damaged unit.  The pairs need to be the following:

5,4
5,5
5,6
6,4
6,5
6,6

Moq has a SetupSequence() method that lets us queue predetermined results to return: every time the mock object is called, the next value in the sequence is returned.  Here's the xUnit test to handle all die rolls that would result in an AttackResult of damaged:

[Theory]
[InlineData(5, 4)]
[InlineData(5, 5)]
[InlineData(5, 6)]
[InlineData(6, 4)]
[InlineData(6, 5)]
[InlineData(6, 6)]
public void test_attack_damaged(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
	.Returns(dieResult1)
	.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Damaged, result);
}

Next, the unit tests for instances where the Attack2() method returns an AttackResult of destroyed:

[Theory]
[InlineData(5, 1)]
[InlineData(5, 2)]
[InlineData(5, 3)]
[InlineData(6, 1)]
[InlineData(6, 2)]
[InlineData(6, 3)]
public void test_attack_destroyed(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
	.Returns(dieResult1)
	.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Destroyed, result);
}

And finally, the instances where the AttackResult is a miss:

[Theory]
[InlineData(1, 1)]
[InlineData(2, 2)]
[InlineData(3, 3)]
[InlineData(4, 1)]
public void test_attack_miss(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
	.Returns(dieResult1)
	.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Miss, result);
}

In the instance of the miss, the second die roll doesn't really matter, and technically the unit test could be cut back to one input.  To test every possible case, we could feed all six values into the second die.  Why would we do that?  Unit tests are performed for more than one reason.  Initially, they are created to prove our code as we write it; test-driven development is centered around this concept.  However, we also have to recognize that after the code is completed and deployed, the unit tests become regression tests.  These tests should live with the code for the life of the code.  The tests should also be incorporated into your continuous integration environment and executed every time code is checked into your version control system (technically, you should execute the tests every time you build, but your build times might be too long for that).

This will prevent future code changes from accidentally breaking code that was already developed and tested.  In the Attack2() method, a developer could enhance the code to use the second die roll when the first die roll is a 1, 2, 3, or 4.  The unit test above will not necessarily catch this change.  The only thing worse than a broken unit test is one that passes when it shouldn't.

With that said, you should not have to perform an exhaustive test on every piece of code in your program.  I would only recommend that tactic if the input data set is small enough to be reasonable.  For the example above, the die size is 6, and the "Theory" attribute cuts down the code you need in order to perform multiple unit tests.  If you are using Microsoft's test framework instead, you can set up a loop that does the same job as the "Theory" attribute and test all iterations for one expected output in each unit test.
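To illustrate, here's one hedged way to make the miss test exhaustive with xUnit while keeping the code small: a MemberData generator that yields every first roll of 1 through 4 paired with every possible second roll (24 cases in total).  The generator and test names are my own:

```csharp
// All (first, second) pairs that should produce a miss: the first roll
// is 1-4, and the second roll can be anything since it should never
// be consulted.
public static IEnumerable<object[]> AllMissRolls()
{
    for (int first = 1; first <= 4; first++)
        for (int second = 1; second <= 6; second++)
            yield return new object[] { first, second };
}

[Theory]
[MemberData(nameof(AllMissRolls))]
public void test_attack_miss_exhaustive(int dieResult1, int dieResult2)
{
    var dieRoller = new Mock<IDieRoller>();
    dieRoller.SetupSequence(x => x.DieRoll())
        .Returns(dieResult1)
        .Returns(dieResult2);

    var game = new Game(dieRoller.Object);
    Assert.Equal(AttackResult.Miss, game.Attack2());
}
```

If a future change makes the second roll matter when the first roll is 1 through 4, at least one of these 24 cases should fail.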


Where to get the Sample Code

You can download the sample code from my GitHub account by clicking here.


Dependency Injection and IOC Containers

Summary

I’ve done quite a few posts on unit testing in the past.  I keep a list of subjects that I would like to blog about so I have a ready list to choose from.  My list of unit testing subjects is getting large and it’s time to clear the spindle.  So in this post I’m going to do some deep diving on Dependency Injection and introduce Inversion Of Control using Autofac.

The Die Roller

I created a simple program a while back that does a die roll (you can find it by clicking here). I had hoped to write some follow up posts about other methods that can be used to get around the problem of object dependency, but other blog subjects grabbed my interest and took up my time.  So now I’m going to go back and discuss other techniques that I know in order to break or eliminate dependencies in objects.

First, I'm going to show a technique that uses a singleton.  The idea behind this design pattern is to provide a default object that self-instantiates when called from the main program, but also provide an entry point (a setter) that allows the object to be overridden by a fake object in a unit test before the object under test is called.  I've blogged about this approach before, in a post describing the design of a caching system.

The base object looks like this:

public abstract class DieRollerBase
{
	private static DieRollerBase _Instance;

	public static DieRollerBase Instance
	{
		get
		{
			if (_Instance == null)
			{
				_Instance = new DieRoller();
			}

			return _Instance;
		}

		set
		{
			_Instance = value;
		}
	}

	public static int DieRoll()
	{
		return Instance.ReturnDieRoll();
	}

	public abstract int ReturnDieRoll();
}

The die roller object, which is run inside the main program looks like this:

public class DieRoller : DieRollerBase
{
	private Random RandomNumberGenerator = new Random(DateTime.Now.Millisecond);

	public override int ReturnDieRoll()
	{
		// Next() % 6 yields 0-5, so add 1 to produce a standard 1-6 die.
		return RandomNumberGenerator.Next() % 6 + 1;
	}
}

As you can see, the base class instantiates a new DieRoller() object instead of a DieRollerBase object (the base class is abstract and can't be instantiated directly).  The main program calls the die roller using the following syntax:

int result = DieRoller.DieRoll();

The call to the method DieRoll() is static, but it calls the instance method ReturnDieRoll(), which is implemented inside the subclass, not the base class.  The reason for this design is that we can override the default DieRoller with a fake class like this:

public class FakeDieRoller : DieRollerBase
{
	private static int _NextDieRoll = 0;
	private static List<int> _SetDieRoll = new List<int>();
	public static int SetDieRoll
	{
		get
		{
			int nextDieRoll = _SetDieRoll[_NextDieRoll];
			_NextDieRoll++;
			if (_NextDieRoll >= _SetDieRoll.Count)
			{
				_NextDieRoll = 0;
			}

			return nextDieRoll;
		}
		set
		{
			_SetDieRoll.Add(value);
		}
	}

	public static void ClearDieRoll()
	{
		_SetDieRoll.Clear();
		_NextDieRoll = 0;
	}

	public override int ReturnDieRoll()
	{
		return SetDieRoll;
	}
}

Using the setter of the base class for the instance, we can do this in our unit test:

DieRoller.Instance = new FakeDieRoller();

Any method calling the die roller will now execute the fake class instead of the default class.  The point of all this is that we can "load" the dice by stuffing known die roll numbers into the fake before calling our object under test.  Then we get predictable results from objects that use the random die roll object.
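A unit test exercising the loaded die might look like this sketch:

```csharp
// Sketch: stuff two known rolls into the fake die, then verify that
// successive calls return them in order (the real random die is
// bypassed entirely).
[Fact]
public void test_loaded_die_rolls()
{
    DieRoller.Instance = new FakeDieRoller();
    FakeDieRoller.ClearDieRoll();
    FakeDieRoller.SetDieRoll = 5;  // first roll returned
    FakeDieRoller.SetDieRoll = 2;  // second roll returned

    Assert.Equal(5, DieRoller.DieRoll());
    Assert.Equal(2, DieRoller.DieRoll());
}
```

Note the call to ClearDieRoll() at the top: because the fake's queue is static, each test should reset it so the order of test execution doesn't matter.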


Analysis

In my earlier blog post I created a die roller class that looked like this:

public static class DieRoller
{
	private static Random RandomNumberGenerator = new Random(DateTime.Now.Millisecond);

	public static int DieRoll()
	{
		if (UnitTestHelpers.IsInUnitTest)
		{
			return UnitTestHelpers.SetDieRoll;
		}
		else
		{
			return RandomNumberGenerator.Next() % 6;
		}
	}
}


The injection took place using the UnitTestHelpers object, which tested whether the startup dll was a Microsoft test assembly and, if so, executed a built-in fake die.  This is not a clean technique for unit testing, since some test code is compiled into the distributed dlls: mainly the UnitTestHelpers.SetDieRoll method.

The singleton method is much cleaner, because the fake object can be created inside the unit test project and not distributed with the production dlls.  Therefore the final code will not contain the fake die object or any of the test code.  The problem with singletons is that they are complicated to design.

There is a better technique, called Inversion of Control, or IOC.  The idea behind inversion of control is that objects are created independently of each other and then "wired" together at program initialization time.  Unit tests can wire in fake objects before the tests are executed, which automatically bypasses the dependent objects that are not under test.  This approach is cleaner, and I'm going to show the die roller using the Autofac IOC container.

Setting up the Solution

Autofac has an object called the container.  The container is like a dictionary where all the classes and interfaces are registered when the program initializes.  The resolve command then uses the container information to look up which class was set up for each interface.  Inside your class, you call the resolve command and pass the interface, with no reference to the concrete class itself.  This allows Autofac to decide which class will be used for the interface when it's needed.  Because of that, a unit test can register a different class (like a fake class) in the container, and when the object under test calls resolve with the same interface, the fake object is the one Autofac instantiates.

So here are the projects I used in my little demo program:

Container
DieRollerAutoFac
DieRollerLibrary
GameLibrary
DieRollerTests

The program itself will start from the DieRollerAutoFac project.  This is just a console application that initializes the IOC container and runs the game.  The IOC container is stored in a static class called IOCContainer and it’s inside the “Container” library.  The reason I structured it this way is so I can use the container for the program and I can use the container for the unit tests.  I also needed the container for the game class when it performs the resolve operation.  So the container must be in a different project to keep it from being dependent on the game class or the main program.

Next, I created the die roller class and interface inside its own project.  This could live inside the GameLibrary project, but I'm going to pretend that we want to isolate modules (i.e., dlls).
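For reference, the static IOCContainer class in the Container project can be as small as this (a sketch; the actual sample code may differ slightly):

```csharp
using Autofac;

namespace Container
{
    // Static holder for the Autofac container: set once at startup
    // (or at the top of a unit test), read from anywhere.
    public static class IOCContainer
    {
        public static IContainer Container { get; set; }
    }
}
```

Because this class lives in its own project, both the main program and the unit tests can assign the container, and the GameLibrary can read it, without any of those projects depending on each other.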

Next we need to wire everything up.  If you download the sample code and look at the main program, you’ll see this piece of code:

var builder = new ContainerBuilder();
builder.RegisterType<DieRoller>().As<IDieRoller>();
IOCContainer.Container = builder.Build();


This is the code that builds the container.  Once the container is built, it can be used by any object in the program.

Inside the game class the container is used to resolve the die object:

public class Game
{
	public int Play()
	{
		using (var scope = IOCContainer.Container.BeginLifetimeScope())
		{
			var die = scope.Resolve<IDieRoller>();

			return die.DieRoll();
		}
	}
}

To substitute a fake die class in your unit tests, you can do this:

var builder = new ContainerBuilder();
builder.RegisterType<FakeDieRoller>().As<IDieRoller>();
IOCContainer.Container = builder.Build();
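Putting it together, a unit test might look like this sketch (it assumes FakeDieRoller implements IDieRoller and exposes the same SetDieRoll loading mechanism shown in the singleton section):

```csharp
// Sketch: register the fake die in the container, load a known roll,
// then exercise the Game object through its normal entry point.
[Fact]
public void play_returns_loaded_die_roll()
{
    var builder = new ContainerBuilder();
    builder.RegisterType<FakeDieRoller>().As<IDieRoller>();
    IOCContainer.Container = builder.Build();

    FakeDieRoller.ClearDieRoll();
    FakeDieRoller.SetDieRoll = 4;

    var game = new Game();
    Assert.Equal(4, game.Play());
}
```

Notice that the Game class itself is unchanged; only the container wiring differs between the real program and the test.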


Checklist of IOC container rules:
1. Use the NuGet manager to install Autofac.  
2. Make sure you create an interface for each object you intend to use in your IOC container.
3. Setup a container creator in your unit tests with fake or mock objects.

Where to get the code

As always, I have posted all the code from this blog post.  You can download the source from my GitHub account by clicking here.