XML Serialization

Summary

In this post I’m going to demonstrate the proper way to serialize XML and set up unit tests using xUnit and .Net Core.  I will also be using Visual Studio 2017.

Generating XML

JSON is rapidly taking over as the data encoding standard of choice.  Unfortunately, government agencies are decades behind the technology curve and XML is going to be around for a long time to come.  One of the largest industries still using XML for a majority of their data transfer encoding is the medical industry.  Documents required by Meaningful Use are mostly encoded in XML.  I’m not going to jump into the gory details of generating a CCD.  Instead, I’m going to keep this really simple.

First, I’m going to show a method of generating XML that I’ve seen many times, usually coded by a programmer with little or no formal education in Computer Science.  Sometimes programmers just take a short-cut because it appears to be the simplest way to get the product out the door.  So I’ll show the technique and then I’ll explain why this turns out to be a very poor way of designing an XML generator.

Let’s say, for instance, we wanted to generate XML representing a house.  First we’ll define the house as a record that can contain square footage.  That will be the only data point assigned to the house record (I mentioned this was going to be simple, right?).  Inside of the house record will be a list of walls and a list of roofs (assume a house could have two or more roofs, like a tri-level configuration).  Next, I’m going to make a list of windows for the walls.  The window block will have a “Type” that is a free-form string input and the roof block will also have a “Type” that is a free-form string.  That is the whole definition.

public class House
{
  public List<Wall> Walls = new List<Wall>();
  public List<Roof> Roofs = new List<Roof>();
  public int Size { get; set; }
}

public class Wall
{
  public List<Window> Windows { get; set; }
}

public class Window
{
  public string Type { get; set; }
}

public class Roof
{
  public string Type { get; set; }
}

The “easy” way to create XML from this is to use a StringBuilder and just build XML tags around the data in your structure.  Here’s a sample of the code that a programmer might use:

public class House
{
  public List<Wall> Walls = new List<Wall>();
  public List<Roof> Roofs = new List<Roof>();
  public int Size { get; set; }

  public string Serialize()
  {
    var @out = new StringBuilder();

    @out.Append("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
    @out.Append("<House xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">");

    foreach (var wall in Walls)
    {
      wall.Serialize(ref @out);
    }

    foreach (var roof in Roofs)
    {
      roof.Serialize(ref @out);
    }

    @out.Append("<size>");
    @out.Append(Size);
    @out.Append("</size>");

    @out.Append("</House>");

    return @out.ToString();
  }
}

public class Wall
{
  public List<Window> Windows { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    if (Windows == null || Windows.Count == 0)
    {
      @out.Append("<wall />");
      return;
    }

    @out.Append("<wall>");
    foreach (var window in Windows)
    {
      window.Serialize(ref @out);
    }
    @out.Append("</wall>");
  }
}

public class Window
{
  public string Type { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    @out.Append("<window>");
    @out.Append("<Type>");
    @out.Append(Type);
    @out.Append("</Type>");
    @out.Append("</window>");
  }
}

public class Roof
{
  public string Type { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    @out.Append("<roof>");
    @out.Append("<Type>");
    @out.Append(Type);
    @out.Append("</Type>");
    @out.Append("</roof>");
  }
}

The example I’ve given is a rather clean one.  I have seen XML generated with much uglier code.  This is the manual method of serializing XML.  One obvious weakness is that the output produced is a single line of XML, which is not human-readable.  In order to allow human-readable XML output to be produced with an on/off switch, extra logic would need to be incorporated to append newlines and add tabs for indenting.  Another problem with this method is that it contains a lot of unnecessary code.  One typo and the XML is incorrect.  Future editing is hazardous because tags might not match up if code is inserted in the middle and care is not taken to test such conditions.  Unit testing something like this is an absolute must.

The easier method is to use the XmlSerializer class.  To produce the correct output, it is sometimes necessary to add attributes to the properties of the objects being serialized.  Here is the object definition that produces the same output:

public class House
{
  [XmlElement(ElementName = "wall")]
  public List<Wall> Walls = new List<Wall>();

  [XmlElement(ElementName = "roof")]
  public List<Roof> Roofs = new List<Roof>();

  [XmlElement(ElementName = "size")]
  public int Size { get; set; }
}

public class Wall
{
  [XmlElement(ElementName = "window")]
  public List<Window> Windows { get; set; }

  public bool ShouldSerializeWindows()
  {
    // Tells the XmlSerializer to skip the Windows element when the list is null,
    // which produces the self-closing <wall /> tag.
    return Windows != null;
  }
}

public class Window
{
  public string Type { get; set; }
}

public class Roof
{
  public string Type { get; set; }
}

In order to serialize the above objects into XML, you use the XmlSerializer object:

public static class CreateXMLData
{
  public static string Serialize(this House house)
  {
    var xmlSerializer = new XmlSerializer(typeof(House));

    var settings = new XmlWriterSettings
    {
      NewLineHandling = NewLineHandling.Entitize,
      IndentChars = "\t",
      Indent = true
    };

    using (var stringWriter = new Utf8StringWriter())
    {
      // Dispose the writer to flush any buffered output before reading the result.
      using (var writer = XmlWriter.Create(stringWriter, settings))
      {
        xmlSerializer.Serialize(writer, house);
      }

      return stringWriter.ToString();
    }
  }
}

You’ll also need to create a Utf8StringWriter Class:

public class Utf8StringWriter : StringWriter
{
  public override Encoding Encoding
  {
    get { return Encoding.UTF8; }
  }
}
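To see the encoding override in action, here’s a minimal, self-contained sketch (it serializes a single Roof rather than the whole House, just to keep it short):

```csharp
using System;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

public class Utf8StringWriter : StringWriter
{
    // Without this override, StringWriter reports UTF-16 and the
    // XML declaration comes out as encoding="utf-16".
    public override Encoding Encoding => Encoding.UTF8;
}

public class Roof
{
    public string Type { get; set; }
}

public static class Demo
{
    public static string Serialize(Roof roof)
    {
        var serializer = new XmlSerializer(typeof(Roof));
        var settings = new XmlWriterSettings { Indent = true, IndentChars = "\t" };

        using (var stringWriter = new Utf8StringWriter())
        {
            // Dispose the writer to flush buffered output before reading.
            using (var writer = XmlWriter.Create(stringWriter, settings))
            {
                serializer.Serialize(writer, roof);
            }
            return stringWriter.ToString();
        }
    }

    public static void Main()
    {
        // The declaration of the output should read encoding="utf-8".
        Console.WriteLine(Serialize(new Roof { Type = "Gable" }));
    }
}
```

The same pattern scales up to the full House object; only the type passed to the XmlSerializer constructor changes.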

Unit Testing

I would recommend unit testing each section of your XML.  Test with sections empty as well as containing one or more items.  You want to make sure you capture instances of null lists or empty items that should not generate XML output.  If there are any special attributes, make sure that the XML generated matches the specification.  For my unit testing, I stripped newlines and tabs to compare with a sample XML file that is stored in my unit test project.  As a first attempt, I created a helper for my unit tests:

public static class XmlResultCompare
{
  public static string ReadExpectedXml(string expectedDataFile)
  {
    var assembly = Assembly.GetExecutingAssembly();
    using (var stream = assembly.GetManifestResourceStream(expectedDataFile))
    {
      using (var reader = new StreamReader(stream))
      {
        return reader.ReadToEnd().RemoveWhiteSpace();
      }
    }
  }

  public static string RemoveWhiteSpace(this string s)
  {
    return s.Replace("\t", "")
            .Replace("\r", "")
            .Replace("\n", "");
  }
}

If you look carefully, I’m compiling my XML test data right into the unit test dll.  Why am I doing that?  The company I work for, like most serious companies, uses continuous integration tools such as a build server.  The problem with a build server is that your files might not end up in the same directory location on the build server as they are on your PC.  To ensure that the test files are there, compile them into the dll and reference them from the namespace using Assembly.GetExecutingAssembly().  To make this work, you’ll have to mark your XML test files as an Embedded Resource (click on the xml file and change the Build Action property to Embedded Resource).  To access the files, which are contained in a virtual directory called “TestData”, you’ll need to use the namespace, the virtual directory and the full file name:

XMLCreatorTests.TestData.XMLHouseOneWallOneWindow.xml

Now for a sample unit test:

[Fact]
public void TestOneWallNoWindow()
{
  // one wall, no windows
  var house = new House { Size = 2000 };
  house.Walls.Add(new Wall());

  Assert.Equal(XmlResultCompare.ReadExpectedXml("XMLCreatorTests.TestData.XMLHouseOneWallNoWindow.xml"), house.Serialize().RemoveWhiteSpace());
}

Notice how I filled in the house object with the size and added one wall.  The ReadExpectedXml() method removes whitespace automatically, so it’s important to remove it from the serialized version of house in order to match.

Where to Get the Code

As always you can go to my GitHub account and download the sample application (click here).  I would recommend downloading the application and modifying it as a test to see how all the pieces work.  Add a unit test to see if you can match your expected XML with the XML serializer.


The Case for Unit Tests

Introduction

I’ve written a lot of posts on how to unit test: breaking dependencies, mocking objects, creating fakes, dependency injection and IOC containers.  I am a huge advocate of writing unit tests.  Unit tests are not the solution to everything, but they do solve a large number of problems that occur in software that is not unit tested.  In this post, I’m going to build a case for unit testing.

Purpose of Unit Tests

First, I’m going to assume that the person reading this post is not sold on the idea of unit tests.  So let me start by defining what a unit test is and what is not a unit test.  Then I’ll move on to defining the process of unit testing and how unit tests can save developers a lot of time.

A unit test is a tiny, simple test on a method or logic element in your software.  The goal is to create a test for each logical purpose that your code performs.  For a given “feature” you might have a hundred unit tests (more or less, depending on how complex the feature is).  For a method, you could have one, a dozen or hundreds of unit tests.  You’ll need to make sure you can cover different cases that can occur for the inputs to your methods and test for the appropriate outputs.  Here’s a list of what you should unit test:

  • Fence-post inputs.
  • Obtain full code coverage.
  • Nullable inputs.
  • Zero or empty string inputs.
  • Illegal inputs.
  • Representative set of legal inputs.

Let me explain what all of this means.  Fence-post inputs are dependent on the input data type.  If you are expecting an integer, what happens when you input a zero?  What about the maximum possible integer (int.MaxValue)?  What about minimum integer (int.MinValue)?

Obtain full coverage means that you want to make sure you hit all the code that is inside your “if” statements as well as the “else” portion.  Here’s an example of a method:

public class MyClass
{
    public int MyMethod(int input1)
    {
        if (input1 == 0)
        {
            return 4;
        }
        else if (input1 > 0)
        {
            return 2;
        }
        return input1;
    }
}

How many unit tests would you need to cover all the code in this method?  You would need three:

  1. Test with input1 = 0, that will cover the code up to the “return 4;”
  2. Test with input1 = 1 or greater, that will cover the code to “return 2;”
  3. Test with input1 = -1 or less, that will cover the final “return input1;” line of code.

That will get you full coverage.  In addition to those three tests, you should account for min and max int values.  This is a trivial example, so min and max tests are overkill.  For larger code you might want to make sure that someone doesn’t break your code by changing the input data type.  Anyone changing the data type from int to something else would get failed unit tests that will indicate that they need to review the code changes they are performing and either fix the code or update the unit tests to provide coverage for the redefined input type.
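Using xUnit, those three coverage tests plus the min/max fence-posts might look like this (the test names and Theory groupings are my own):

```csharp
using Xunit;

public class MyClass
{
    public int MyMethod(int input1)
    {
        if (input1 == 0)
        {
            return 4;
        }
        else if (input1 > 0)
        {
            return 2;
        }
        return input1;
    }
}

public class MyMethodTests
{
    [Fact]
    public void ZeroInput_ReturnsFour()
    {
        Assert.Equal(4, new MyClass().MyMethod(0));
    }

    [Theory]
    [InlineData(1)]
    [InlineData(int.MaxValue)]  // fence-post: largest possible positive value
    public void PositiveInput_ReturnsTwo(int input1)
    {
        Assert.Equal(2, new MyClass().MyMethod(input1));
    }

    [Theory]
    [InlineData(-1)]
    [InlineData(int.MinValue)]  // fence-post: smallest possible value
    public void NegativeInput_ReturnsInput(int input1)
    {
        Assert.Equal(input1, new MyClass().MyMethod(input1));
    }
}
```

Notice that the fence-post values cost nothing extra: they are just additional InlineData rows on tests you already wrote.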

Nullable data can be a real problem.  Many programmers don’t account for all null inputs.  If you are using an input type that can have null data, then you need to account for what will happen to your code when it receives that input type.

The number zero can have bad consequences.  If someone adds code and the input is in the denominator, then you’ll get a divide by zero error, and you should catch that problem before your code crashes.  Even if you are not performing a divide, you should probably test for zero, to protect a future programmer from adding code to divide and cause an error.  You don’t necessarily have to provide code in your method to handle zero.  The example above just returns the number 4.  But, if you setup a unit test with a zero for an input, and you know what to expect as your output, then that will suffice.  Any future programmer that adds a divide with that integer and doesn’t catch the zero will get a nasty surprise when they execute the unit tests.

If your method allows input data types like “string”, then you should check for illegal characters.  Does your method handle carriage returns?  Unprintable characters?  What about an empty string?  Strings can be null as well.

Don’t forget to test for your legal data.  The three tests in the previous example test for three different legal inputs.
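For string inputs, the same checklist applies.  Here’s a sketch with a hypothetical CountLines() method (the method and its tests are mine, just to illustrate the null, empty and carriage-return cases):

```csharp
using Xunit;

public static class StringHelper
{
    // Hypothetical method under test: counts the lines in a block of text.
    public static int CountLines(string text)
    {
        if (string.IsNullOrEmpty(text))
        {
            return 0;
        }
        return text.Split('\n').Length;
    }
}

public class StringHelperTests
{
    [Fact]
    public void NullInput_ReturnsZero()
    {
        Assert.Equal(0, StringHelper.CountLines(null));
    }

    [Fact]
    public void EmptyString_ReturnsZero()
    {
        Assert.Equal(0, StringHelper.CountLines(""));
    }

    [Fact]
    public void CarriageReturn_IsHandled()
    {
        // Windows line endings: "\r\n" splits into two lines on '\n'.
        Assert.Equal(2, StringHelper.CountLines("line one\r\nline two"));
    }
}
```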

Fixing Bugs

The process of creating unit tests should occur as you are creating objects.  In fact, you should constantly think in terms of how you’re going to unit test your object, before you start writing it.  Creating software is a lot like a sausage factory; even I sometimes write objects before unit tests, as well as the other way around.  I prefer to create an empty object and some proposed methods that I’ll be creating.  Just a small shell with maybe one or two methods that I want to start with.  Then I’ll think up unit tests that I’ll need ahead of time.  Then I add some code and that might trigger a thought for another unit test.  The unit tests go with the code that you are writing and it’s much easier to write the unit tests before or just after you create a small piece of code.  That’s because the code you just created is fresh in your mind and you know what it’s supposed to do.

Now suppose you have a monster that was created over several sprints.  Thousands of lines of code and four hundred unit tests.  You deploy your code to a Quality environment and a QA person discovers a bug.  Something you would have never thought about, but it’s an easy fix.  Yeah, it was something stupid, and the fix will take about two seconds and you’re done!

Not so fast!  If you find a bug, create a unit test first.  Make sure the unit test triggers the bug.  If this is something that blew up one of your objects, then you need to create one or more unit tests that feeds the same input into your object and forces it to blow up.  Then fix the bug.  The unit test(s) should pass.

Now why did we bother?  If you’re a seasoned developer like me, there have been numerous times that another developer unfixes your bug fix.  It happens so often, that I’m never surprised when it does happen.  Maybe your fix caused an issue that was unreported.  Another developer secretly fixes your bug by undoing your fix, not realizing that they are unfixing a bug.  If you put a unit test in to account for a bug, then a developer that unfixes the bug will get an error from your unit test.  If your unit test is named descriptively, then that developer will realize that he/she is doing something wrong.  This episode just performed a regression test on your object.

Building Unit Tests is Hard!

At first unit tests are difficult to build.  The problem with unit testing has more to do with object dependency than with the idea of unit testing.  First, you need to learn how to write code that isn’t tightly coupled.  You can do this by using an IOC container.  In fact, if you’re not using an IOC container, then you’re just writing legacy code.  Somewhere down the line, some poor developer is going to have to “fix” your code so that they can create unit tests.

The next most difficult concept to overcome is learning how to mock or fake an object that is not being unit tested.  These can be devices, like database access, file I/O, SMTP drivers, etc.  For devices, learn how to use interfaces and wrappers.  Then you can use Moq to mock your unit tests.

Unit Tests are Small

You need to be conscious of what you are unit testing.  Don’t create a unit test that checks a whole string of objects at once (unless you want to consider those as integration tests).  Limit your unit tests to the smallest amount of code you need in order to test your functionality.  No need to be fancy.  Just simple.  Your unit tests should run fast.  Many slow running unit tests bring no benefit to the quality of your product.  Developers will avoid running unit tests if it takes 10 minutes to run them all.  If your unit tests are taking too long to run, you’ll need to analyze what should be scaled back.  Maybe your program is too large and should be broken into smaller pieces (like APIs).

There are other reasons to keep your unit tests small and simple: Some day one or more unit tests are going to fail.  The developer modifying code will need to look at the failing unit test and analyze what it is testing.  The quicker a developer can analyze and determine what is being tested, the quicker he/she can fix the bug that was caused, or update the unit test for the new functionality.  A philosophy of keeping code small should translate into your entire programming work pattern.  Keep your methods small as well.  That will keep your code from being nested too deeply.  Make sure your methods serve a single purpose.  That will make unit testing easier.

A unit test only tests the methods of one object.  The only time you’ll break other objects’ tests is if you change your object’s public interface, such as the parameters of its public methods.  If you change something in a private method, only the unit tests for the object you’re working on will fail.

Run Unit Tests Often

For a continuous integration environment, your unit tests should run right after you build.  If you have a build server (and you should), your build server must run the unit tests.  If your tests do not pass, then the build needs to be marked as broken.  If you only run your unit tests after you end your sprint, then you’re going to be in for a nasty surprise when hundreds of unit tests fail and you need to spend days trying to fix all the problems.  Your programming pattern should be: Type some code, build, test, repeat.  If you test after each build, then you’ll catch mistakes as you make them.  Your failing unit tests will be minimal and you can fix your problem while you are focused on the logic that caused the failure.

Learning to Unit Test

There are a lot of resources on the Internet for the subject of unit testing.  I have written many blog posts on the subject that you can study.


Mocking Your File System

Introduction

In this post, I’m going to talk about basic dependency injection and mocking a method that is used to access hardware.  The method I’ll be mocking is System.IO.Directory.Exists().

Mocking Methods

One of the biggest headaches with unit testing is that you have to make sure you mock any objects that your method under test is calling.  Otherwise your test results could be dependent on something you’re not really testing.  As an example for this blog post, I will show how to apply unit tests to this very simple program:

class Program
{
    static void Main(string[] args)
    {
        var myObject = new MyClass();
        Console.WriteLine(myObject.MyMethod());
        Console.ReadKey();
    }
}

The object that is used above is:

public class MyClass
{
    public int MyMethod()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

Now, we want to create two unit tests to cover all the code in the MyMethod() method.  Here’s an attempt at one unit test:

[TestMethod]
public void test_temp_directory_exists()
{
    var myObject = new MyClass();
    Assert.AreEqual(3, myObject.MyMethod());
}

The problem with this unit test is that it will only pass if your computer contains the c:\temp directory.  If your computer doesn’t contain c:\temp, then it will always fail.  If you’re using a continuous integration environment, you can’t control whether the directory exists or not.  To compound the problem, you really need to test both possibilities to get full test coverage of your method.  Adding a unit test to cover the case where c:\temp doesn’t exist would guarantee that one test passes and the other fails.

The newcomer to unit testing might think: “I could just add code to my unit tests to create or delete that directory before the test runs!”  Except that would be a unit test that modifies your machine.  Such behavior would destroy anything you have in your c:\temp directory if you happen to use that directory for something.  Unit tests should not modify anything outside the unit test itself.  A unit test should never modify database data.  A unit test should not modify files on your system.  You should avoid creating physical files if possible, even temp files, because temp file usage will make your unit tests slower.

Unfortunately, you can’t just mock System.IO.Directory.Exists().  The way to get around this is to create a wrapper object, inject it into MyClass, and then use Moq to mock your wrapper object for unit testing only.  Your program will not change; it will still call MyClass as before.  Here’s the wrapper object and an interface to go with it:

public class FileSystem : IFileSystem
{
  public bool DirectoryExists(string directoryName)
  {
    return System.IO.Directory.Exists(directoryName);
  }
}

public interface IFileSystem
{
    bool DirectoryExists(string directoryName);
}

Your next step is to provide an injection point into your existing class (MyClass).  You can do this by creating two constructors, the default constructor that initializes this object for use by your method and a constructor that expects a parameter of IFileSystem.  The constructor with the IFileSystem parameter will only be used by your unit test.  That is where you will pass along a mocked version of your filesystem object with known return values.  Here are the modifications to the MyClass object:

public class MyClass
{
    private readonly IFileSystem _fileSystem;

    public MyClass(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public MyClass()
    {
        _fileSystem = new FileSystem();
    }

    public int MyMethod()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

This is the point where your program should operate as normal.  Notice how I did not need to modify the original call to MyClass that occurred at the “Main()” of the program.  The MyClass() default constructor will create a FileSystem wrapper instance and use that object instead of calling System.IO.Directory.Exists() directly.  The result will be the same.  The difference is that now you can create two unit tests with mocked versions of IFileSystem in order to test both possible outcomes of the existence of “c:\temp”.  Here is an example of the two unit tests:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(3, myObject.MyMethod());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(5, myObject.MyMethod());
}

Make sure you include the NuGet package for Moq.  You’ll notice that in the first unit test, we’re testing MyClass with a mocked up version of a system where “c:\temp” exists.  In the second unit test, the mock returns false for the directory exists check.

One thing to note: You must provide a matching input on x.DirectoryExists() in the mock setup.  If it doesn’t match what is used in the method, then you will not get the results you expect.  In this example, the directory being checked is hard-coded in the method and we know that it is “c:\temp”, so that’s how I mocked it.  If there is a parameter that is passed into the method, then you can mock some test value, and pass the same test value into your method to make sure it matches (the actual test parameter doesn’t matter for the unit test, only the results).
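For the case where you don’t want to pin the mock to one exact value, Moq provides the It.IsAny<string>() matcher, which accepts any string argument.  Here’s a sketch (MyClass and IFileSystem are repeated from above so the example stands alone; the test name is my own):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public interface IFileSystem
{
    bool DirectoryExists(string directoryName);
}

public class MyClass
{
    private readonly IFileSystem _fileSystem;

    public MyClass(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public int MyMethod()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

[TestClass]
public class MyClassTests
{
    [TestMethod]
    public void test_any_directory_exists()
    {
        var mockFileSystem = new Mock<IFileSystem>();

        // It.IsAny<string>() matches no matter what path the method checks.
        mockFileSystem.Setup(x => x.DirectoryExists(It.IsAny<string>())).Returns(true);

        var myObject = new MyClass(mockFileSystem.Object);
        Assert.AreEqual(3, myObject.MyMethod());
    }
}
```

The trade-off is that It.IsAny<string>() no longer verifies that the method checked the path you expected; matching the literal value is the stricter test.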

Using an IOC Container

This sample is setup to be extremely simple.  I’m assuming that you have existing .Net legacy code and you’re attempting to add unit tests to the code.  Normally, legacy code is hopelessly un-unit testable.  In other words, it’s usually not worth the effort to apply unit tests because of the tightly coupled nature of legacy code.  There are situations where legacy code is not too difficult to add unit testing.  This can occur if the code is relatively new and the developer(s) took some care in how they built the code.  If you are building new code, you can use this same technique from the beginning, but you should also plan your entire project to use an IOC container.  I would not recommend refactoring an existing project to use an IOC container.  That is a level of madness that I have attempted more than once with many man-hours of wasted time trying to figure out what is wrong with the scoping of my objects.

If your code is relatively new and you have refactored to use constructors as your injection points, you might be able to adapt to an IOC container.  If you are building your code from the ground up, you need to use an IOC container.  Do it now and save yourself the headache of trying to figure out how to inject objects three levels deep.  What am I talking about?  Here’s an example of a program that is tightly coupled:

class Program
{
    static void Main(string[] args)
    {
        var myRootClass = new MyRootClass();

        myRootClass.Increment();

        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}
public class MyRootClass
{
  readonly ChildClass _childClass = new ChildClass();

  public bool CountExceeded()
  {
    if (_childClass.TotalNumbers() > 5)
    {
        return true;
    }
    return false;
  }

  public void Increment()
  {
    _childClass.IncrementIfTempDirectoryExists();
  }
}

public class ChildClass
{
    private int _myNumber;

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}

The example code above is very typical legacy code.  The “Main()” calls the first object called “MyRootClass()”, then that object calls a child class that uses System.IO.Directory.Exists().  You can use the previous example to unit test the ChildClass for cases when c:\temp exists and when it doesn’t.  When you start to unit test MyRootClass, there’s a nasty surprise.  How do you inject your directory wrapper into that class?  If you have to inject class wrappers and mocked classes for every child class of a class, the constructor of a class could become incredibly large.  This is where IOC containers come to the rescue.

As I’ve explained in other blog posts, an IOC container is like a dictionary of your objects.  When you create your objects, you must create a matching interface for the object.  The index of the IOC dictionary is the interface name that represents your object.  Then you only call other objects using the interface as your data type and ask the IOC container for the object that is in the dictionary.  I’m going to make up a simple IOC container object just for demonstration purposes.  Do not use this for your code, use something like AutoFac for your IOC container.  This sample is just to show the concept of how it all works.  Here’s the container object:

public class IOCContainer
{
  private static readonly Dictionary<string,object> ClassList = new Dictionary<string, object>();
  private static IOCContainer _instance;

  public static IOCContainer Instance => _instance ?? (_instance = new IOCContainer());

  public void AddObject<T>(string interfaceName, T theObject)
  {
    ClassList.Add(interfaceName,theObject);
  }

  public object GetObject(string interfaceName)
  {
    return ClassList[interfaceName];
  }

  public void Clear()
  {
    ClassList.Clear();
  }
}

This object is a singleton object (global object) so that it can be used by any object in your project/solution.  Basically it’s a container that holds all pointers to your object instances.  This is a very simple example, so I’m going to ignore scoping for now.  I’m going to assume that all your objects contain no special dependent initialization code.  In a real-world example, you’ll have to analyze what is initialized when your objects are created and determine how to setup the scoping in the IOC container.  AutoFac has options of when the object will be created.  This example creates all the objects before the program starts to execute.  There are many reasons why you might not want to create an object until it’s actually used.  Keep that in mind when you are looking at this simple example program.
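For comparison, here is roughly what the same registration looks like with AutoFac (assuming the Autofac NuGet package; IFileSystem and FileSystem are repeated from earlier so the sketch stands alone):

```csharp
using Autofac;

public interface IFileSystem
{
    bool DirectoryExists(string directoryName);
}

public class FileSystem : IFileSystem
{
    public bool DirectoryExists(string directoryName)
    {
        return System.IO.Directory.Exists(directoryName);
    }
}

public static class ContainerConfig
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();

        // SingleInstance() gives one shared object, like the single entry
        // stored in the dictionary-based container.  Other scopes include
        // InstancePerDependency() and InstancePerLifetimeScope().
        builder.RegisterType<FileSystem>().As<IFileSystem>().SingleInstance();

        return builder.Build();
    }
}
```

Resolving is then `var fileSystem = ContainerConfig.Build().Resolve<IFileSystem>();`, and AutoFac can also inject an IFileSystem into any constructor that asks for one.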

In order to use the IOCContainer class above, we’ll need to use the same FileSystem object and interface from the previous program.  Then create an interface for MyRootClass and ChildClass.  Next, you’ll need to go through your program and find every location where an object is instantiated (look for the “new” keyword).  Replace those instances like this:

public class ChildClass : IChildClass
{
    private int _myNumber;
    private readonly IFileSystem _fileSystem = (IFileSystem)IOCContainer.Instance.GetObject("IFileSystem");

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}

Instead of creating a new instance of FileSystem, you’ll ask the IOC container to give you the instance that was created for the interface called IFileSystem.  Notice how there is no injection in this object.  AutoFac and other IOC containers have facilities to perform constructor injection automatically.  I don’t want to introduce that level of complexity in this example, so for now I’ll just pretend that we need to go to the IOC container object directly for the main program as well as the unit tests.  You should be able to see the pattern from this example.

Once all your classes are updated to use the IOC container, you’ll need to change your “Main()” to setup the container.  I changed the Main() method like this:

static void Main(string[] args)
{
    ContainerSetup();

    var myRootClass = (IMyRootClass)IOCContainer.Instance.GetObject("IMyRootClass");
    myRootClass.Increment();

    Console.WriteLine(myRootClass.CountExceeded());
    Console.ReadKey();
}

private static void ContainerSetup()
{
    IOCContainer.Instance.AddObject<IChildClass>("IChildClass",new ChildClass());
    IOCContainer.Instance.AddObject<IMyRootClass>("IMyRootClass",new MyRootClass());
    IOCContainer.Instance.AddObject<IFileSystem>("IFileSystem", new FileSystem());
}

Technically the MyRootClass object does not need to be included in the IOC container since no other object is dependent on it.  I included it to demonstrate that all objects should be inserted into the IOC container and referenced from the instance in the container.  This is the design pattern used by IOC containers.  Now we can write the following unit tests:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);

    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);

    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);

    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(true,myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);

    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}

In these unit tests, we put the mocked up object used by the object under test into the IOC container.  I have provided a “Clear()” method to reset the IOC container for the next test.  When you use AutoFac or other IOC containers, you will not need the container object in your unit tests.  That’s because IOC containers like the one built into .Net Core and AutoFac use the constructor of the object to perform injection automatically.  That makes your unit tests easier because you just use the constructor to inject your mocked up object and test your object.  Your program uses the IOC container to magically inject the correct object according to the interface used by your constructor.

Using AutoFac

Take the previous example and create a new constructor for each class and pass the interface as a parameter into the object like this:

private readonly IFileSystem _fileSystem;

public ChildClass(IFileSystem fileSystem)
{
    _fileSystem = fileSystem;
}

Instead of asking the IOC container for the object that matches the interface IFileSystem, I have only setup the object to expect the fileSystem object to be passed in as a parameter to the class constructor.  Make this change for each class in your project.  Next, change your main program to include AutoFac (NuGet package) and refactor your IOC container setup to look like this:

static void Main(string[] args)
{
    IOCContainer.Setup();

    using (var myLifetime = IOCContainer.Container.BeginLifetimeScope())
    {
        var myRootClass = myLifetime.Resolve<IMyRootClass>();

        myRootClass.Increment();

        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}

public static class IOCContainer
{
    public static IContainer Container { get; set; }

    public static void Setup()
    {
        var builder = new ContainerBuilder();

        builder.Register(x => new FileSystem())
            .As<IFileSystem>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new ChildClass(x.Resolve<IFileSystem>()))
            .As<IChildClass>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new MyRootClass(x.Resolve<IChildClass>()))
            .As<IMyRootClass>()
            .PropertiesAutowired()
            .SingleInstance();

        Container = builder.Build();
    }
}

I have ordered the builder.Register calls from the innermost to the outermost object classes.  This is not really necessary, since resolution does not occur until the IOC container is asked for an object.  In other words, you can define MyRootClass first, followed by FileSystem and ChildClass, or use any order you want.  The Register call just stores your definition of which physical object will represent each interface and which dependencies it requires.

Now you can cleanup your unit tests to look like this:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);

    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(true, myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);

    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}

Do not include the AutoFac NuGet package in your unit test project.  It’s not needed.  Each object is isolated from all other objects.  You will still need to mock any injected objects, but the injection occurs at the constructor of each object.  All dependencies have been isolated so you can unit test with ease.

Where to Get the Code

As always, I have posted the sample code up on my GitHub account.  This project contains four different sample projects.  I would encourage you to download each sample and experiment/practice with them.  You can download the samples by following the links listed here:

  1. MockingFileSystem
  2. TightlyCoupledExample
  3. SimpleIOCContainer
  4. AutoFacIOCContainer

Dot Net Core Using the IOC Container

I’ve talked about Inversion Of Control in previous posts, but I’m going to go over it again.  If you’re new to IOC containers, breaking dependencies and unit testing, then this is the blog post you’ll want to read.  So let’s get started…

Basic Concept of Unit Testing

Developing and maintaining software is one of the most complex tasks ever performed by humans.  Software can grow to proportions that cannot be understood by any one person at a time.  Compounding the maintenance problem, one small change in code can affect the operation of something that seems unrelated.  Engineers who build something physical, say a jumbo jet, can identify a problem and fix it.  They usually don’t expect a problem with the wing to affect the passenger seats.  In software, all bets are off.  So there needs to be a way to test everything when a small change is made.

The reason you want to create a unit test is to put in place a tiny automatic regression test.  This test is executed every time you change code to add an enhancement.  If you change some code, the test runs and ensures that you didn’t break a feature that you already coded and tested previously.  Each time you add one feature, you add a unit test.  Eventually, you end up with a collection of unit tests covering each combination of features used by your software.  These tests ride along with your source code forever.  Ideally, you want to always regression test every piece of logic that you’ve written.  In theory this will prevent you from breaking existing code when you add a new enhancement.

To ensure that you are unit testing properly, you need to understand coverage.  Coverage is not everything, but it is a measurement of how much of your code is exercised by your unit tests, and you should strive to maximize it.  There are tools that can measure this for you, though some are expensive.  One aspect of coverage that you need to be aware of is the compound “if” statement:

if (input == 'A' || input =='B')
{
    // do something
}

This is a really simple example, but your unit test suite might contain a test that feeds the character A into the input, and you’ll get coverage for the inner part of the if statement.  However, you have not tested an input of B, and that input might be used by other logic in a slightly different way.  Technically, line coverage will look complete, but branch coverage is not.  I just want you to be aware that this issue exists and that you might need to do some analysis of your code coverage when you’re creating unit tests.
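To illustrate, here is a hypothetical method wrapping the “if” above, with an xUnit Theory that feeds each half of the compound condition plus a miss case (InputProcessor and its tests are mine, not from the sample project):

```csharp
using Xunit;

public static class InputProcessor
{
    // Hypothetical method wrapping the compound "if" statement above.
    public static bool ProcessInput(char input)
    {
        if (input == 'A' || input == 'B')
        {
            return true;   // "do something"
        }
        return false;
    }
}

public class InputProcessorTests
{
    // One InlineData row per half of the "||" condition, plus a miss case,
    // so both sides of the compound condition actually get executed.
    [Theory]
    [InlineData('A', true)]
    [InlineData('B', true)]
    [InlineData('C', false)]
    public void test_process_input(char input, bool expected)
    {
        Assert.Equal(expected, InputProcessor.ProcessInput(input));
    }
}
```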

One more thing about unit tests, and this is very important to keep in mind.  When you deploy this software and bugs are reported, you will need to add a unit test for each bug reported.  The unit test must break your code exactly the way the bug did.  Then you fix the bug, and the test prevents any other developer from undoing your bug fix.  Of course, your bug fix will be followed by another unit test suite run to make sure you didn’t break anything else.  This will help you make forward progress in your quest for bug-free or low-bug software.

Dependencies

So you’ve learned the basics of unit test writing and you’re creating objects and putting one or more unit tests on each method.  Suddenly you run into an issue.  Your object connects to a device for input.  For example, it reads from a text file or connects to a database to read and write data.  Your unit test should never cause files to be written or data to be written to a real database.  It’s slow, and the data being written would need to be cleaned out when the test completes.  What if the tests fail?  Your test data might still be in the database.  Even if you set up a test database, you would not be able to run two versions of your unit tests at the same time (think of two developers executing their local copies of the unit test suite).

The device being used is called a dependency.  The object depends on the device and it cannot operate properly without the device.  To get around dependencies, we need to create a fake or mock database or a fake file I/O object to put in place of the real database or file I/O when we run our unit tests.  The problem is that we need to somehow tell the object under test to use the fake or mock instead of the real thing.  The object must also default to the real database or file I/O when not under test.
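The shape of such an abstraction is small.  Here’s a sketch of the file-system idea used throughout these examples: the real implementation simply forwards to System.IO, and the hand-written fake (my own FakeFileSystem, for illustration; the examples in this post use Moq to generate fakes instead) returns whatever the test configures:

```csharp
using System.IO;

// The dependency is hidden behind an interface.
public interface IFileSystem
{
    bool DirectoryExists(string path);
}

// The real implementation forwards to System.IO and is used in production.
public class FileSystem : IFileSystem
{
    public bool DirectoryExists(string path)
    {
        return Directory.Exists(path);
    }
}

// A hand-written fake for tests; mocking frameworks generate the
// equivalent of this class for you.
public class FakeFileSystem : IFileSystem
{
    public bool DirectoryExistsResult { get; set; }

    public bool DirectoryExists(string path)
    {
        return DirectoryExistsResult;   // canned answer, no real I/O
    }
}
```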

The current trend in breaking dependencies involves a technique called Inversion Of Control, or IOC.  What IOC does is allow us to define all object creation points at program startup time.  When unit tests are run, we substitute the objects that perform database and I/O functions with fakes.  Then we call our objects under test and the IOC system takes care of wiring the correct dependencies together.  Sounds easy.

IOC Container Basics

Here are the basics of how an IOC container works.  I’m going to cut out all the complications involved and keep this super simple.

First, there’s the container.  This is a dictionary of interfaces and classes that is used as a lookup.  Basically, you create your object and then you create a matching interface for it.  When you call one object from another, you use the interface to look up which class to call.  Here’s a tiny code sample of object A dependent on object B:

public class A
{
  public void MyMethod()
  {
    var b = new B();

    b.DependentMethod();
  }
}

public class B
{
  public void DependentMethod()
  {
    // do something here
  }
}

As you can see, class B is created inside class A.  To break the dependency we need to create an interface for each class and add them to the container:

public interface IB
{
  void DependentMethod();
}

public interface IA
{
  void MyMethod();
}

Inside Program.cs:

var serviceProvider = new ServiceCollection()
  .AddSingleton<IB, B>()
  .AddSingleton<IA, A>()
  .BuildServiceProvider();

var a = serviceProvider.GetService<IA>();
a.MyMethod();

Then modify the existing objects to use the interfaces and provide for the injection of B into object A:

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public void MyMethod()
  {
    _b.DependentMethod();
  }
}

public class B : IB
{
  public void DependentMethod()
  {
    // do something here
  }
}

The service collection object is where all the magic occurs.  This object is filled with definitions of which interface will be matched with which class.  As you can see from the insides of class A, there is no longer any reference to class B.  Only the interface is used; any object that conforms to IB (interface B) can be passed (injected) into the constructor.  The service collection will look up IB, see that it needs to create an instance of B, and pass that along.  When MyMethod() is executed in A, it just calls _b.DependentMethod() without worrying about the actual instance of _b.  What does that do for us when we are unit testing?  Plenty.

Mocking an Object

Now I’m going to use a NuGet package called Moq.  This framework is exactly what we need because it can take an interface and create a fake object whose outputs we control.  First, let’s modify our A and B class methods to return some values:

public class B : IB
{
  public int DependentMethod()
  {
    return 5;
  }
}

public interface IB
{
  int DependentMethod();
}

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public int MyMethod()
  {
    return _b.DependentMethod();
  }
}

public interface IA
{
  int MyMethod();
}

I have purposely kept this so simple that there’s nothing being done.  As you can see, DependentMethod() just returns the number 5 in real life.  Your methods might perform a calculation and return the result, or you might have a random number generator or it’s a value read from your database.  This example just returns 5 and we don’t care about that because our mock object will return any value we want for the unit test being written.

Now the unit test using Moq looks like this:

[Fact]
public void ClassATest1()
{
    var mockedB = new Mock<IB>();
    mockedB.Setup(b => b.DependentMethod()).Returns(3);

    var a = new A(mockedB.Object);

    Assert.Equal(3, a.MyMethod());
}

The first line of the test creates a mock of object B called “mockedB”.  The next line creates a fake return for any call to the DependentMethod() method.  Next, we create an instance of class A (the real class) and inject the mocked B object into it.  We’re not using the container for the unit test because we don’t need to.  Technically, we could create a container and put the mocked B object into one of the service collection items, but this is simpler.  Keep your unit tests as simple as possible.

Now that there is an instance of class A called “a”, we can assert to test whether a.MyMethod() returns 3.  If it does, then we know that the mocked object was called by “a” instead of a real instance of class B (since that always returns a 5).

Where to Get the Code

As always you can get the latest code used by this blog post at my GitHub account by clicking here.


Dot Net Core In Memory Unit Testing Using xUnit

When I started using .Net Core and xUnit I found it difficult to find information on how to mock or fake the Entity Framework database code.  So I’m going to show a minimized code sample using xUnit, Entity Framework, In Memory Database with .Net Core.  I’m only going to setup two projects: DataSource and UnitTests.

The DataSource project contains the repository, domain and context objects necessary to connect to a database using Entity Framework.  Normally you would not unit test this project.  It is supposed to be a group of pass-through objects and interfaces.  I’ll set up POCOs (Plain Old C# Objects) and their entity mappings to show how to keep your code as clean as possible.  There should be no business logic in this entire project.  In your solution, you should create one or more business projects to contain the actual logic of your program; those projects will contain the objects under unit test.

The UnitTest project speaks for itself.  It will contain the in-memory Entity Framework fake code, some test data and a sample of two unit tests.  Why two tests?  Because it’s easy to create a demonstration with one unit test.  Two tests will be used to demonstrate how to ensure that your test data initializer doesn’t accidentally get called twice (causing twice as much data to be created).

The POCO

I’ve written about Entity Framework before, and usually I’ll use data annotations, but POCOs are much cleaner.  If you look at some of my blog posts about NHibernate, you’ll see the POCO technique used.  Using POCOs means that you’ll also need to set up a separate class of mappings for each table.  This keeps your code separated into logical parts.  For my sample, I’ll put the mappings into the Repository folder and call them TablenameConfig.  Each mapping class will be static so that I can use an extension method to apply the mappings.  I’m getting ahead of myself, so let’s start with the POCO:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}

That’s it.  If you have the database defined, you can use a mapping or POCO generator to create this code and just paste each table into its own C# source file.  All the POCO objects are in the Domain folder (there’s only one here: the Product table POCO).

The Mappings

The mappings file looks like this:

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource.Repository
{
    public static class ProductConfig
    {
        public static void AddProduct(this ModelBuilder modelBuilder, string schema)
        {
            modelBuilder.Entity<Product>(entity =>
            {
                entity.ToTable("Product", schema);

                entity.HasKey(p => p.Id);

                entity.Property(e => e.Name)
                    .HasColumnName("Name")
                    .IsRequired(false);

                entity.Property(e => e.Price)
                    .HasColumnName("Price")
                    .IsRequired(false);
            });
        }
    }
}

That is the whole file, so now you know what to include in your usings.  This class provides an extension method for the ModelBuilder object.  Basically, it’s called like this:

modelBuilder.AddProduct("dbo");

I passed the schema as a parameter.  If you are only using the dbo schema, then you can just remove the parameter and hard-code it inside the ToTable() method.  You can and should expand your mappings to include relational integrity constraints.  The purpose of mirroring your database constraints in Entity Framework is to give you a heads-up at compile time if you are violating a constraint when you write your LINQ queries.  In the “good ol’ days”, when accessing a database from code meant building a string to pass directly to MS SQL Server (remember ADO?), you didn’t know if you would break a constraint until run time.  That makes testing more difficult, since you have to be aware of what constraints exist while you’re focused on creating your business code.  By creating each table as a POCO and a set of mappings, you can focus on creating your database code first.  Then, when you are focused on your business code, you can ignore constraints, because they won’t ignore you!
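As an example of such a constraint, a foreign key from a hypothetical OrderLine table back to Product might be mapped in the same fluent style (OrderLine, its properties and the relationship are illustrative only, not part of the sample project):

```csharp
// Hypothetical OrderLine table with a foreign key back to Product,
// sketched inside the same kind of extension method as AddProduct().
modelBuilder.Entity<OrderLine>(entity =>
{
    entity.ToTable("OrderLine", schema);

    entity.HasKey(e => e.Id);

    entity.HasOne(e => e.Product)          // navigation property
        .WithMany()                        // no collection on Product
        .HasForeignKey(e => e.ProductId);  // FK column enforced by EF
});
```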

The EF Context

Sometimes I start by writing my context first, then create all the POCOs and then the mappings.  Kind of a top-down approach.   In this example, I’m pretending that it’s done the other way around.  You can do it either way.  The context for this sample looks like this:

using DataSource.Domain;
using DataSource.Repository;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public class StoreAppContext : DbContext, IStoreAppContext
    {
        public StoreAppContext(DbContextOptions<StoreAppContext> options)
        : base(options)
        {

        }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.AddProduct("dbo");
        }
    }
}

You can see immediately how I put the mapping setup code inside the OnModelCreating() method.  As you add POCOs, you’ll need one of these calls for each table.  There is also an EF context interface defined (IStoreAppContext), which is never actually used in my unit tests.  The interface is meant for the actual code in your program.  For instance, if you set up an API, you’re going to end up using an IOC container to break dependencies.  In order to do that, you’ll need to reference the interface in your code and then define which object belongs to the interface in your container setup, like this:

services.AddScoped<IStoreAppContext>(provider => provider.GetService<StoreAppContext>());

If you haven’t used IOC containers before, you should know that the above code adds an entry to a dictionary of interfaces and objects for the application to use.  In this instance, the entry for IStoreAppContext is matched with the object StoreAppContext, so any object that references IStoreAppContext will get an instance of the StoreAppContext object.  But IOC containers are not what this blog post is about (I’ll create a blog post on that subject later).  So let’s move on to the unit tests, which are what this blog post is really about.
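For instance, a hypothetical API controller (not part of this sample project) would simply declare the interface in its constructor and let the container supply the registered instance:

```csharp
// Hypothetical consumer: the container sees IStoreAppContext in the
// constructor and injects the registered StoreAppContext instance.
public class ProductController
{
    private readonly IStoreAppContext _context;

    public ProductController(IStoreAppContext context)
    {
        _context = context;
    }
}
```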

The Unit Tests

As I mentioned earlier, you’re not actually going to write unit tests against your database repository.  It’s redundant.  What you’re attempting to do is write a unit test covering a feature of your business logic, and the database is getting in your way because your business object calls the database in order to make a decision.  What you need is a fake database in memory that contains the exact data you want your object to query so you can check whether it makes the correct decision.  You want to create unit tests for each tiny decision made by your objects and methods, and you want to be able to feed a different set of data to each test, or set up one large set of test data and use it for many tests.

Here’s the first unit test:

[Fact]
public void TestQueryAll()
{
    var temp = (from p in _storeAppContext.Products select p).ToList();

    Assert.Equal(2, temp.Count);
    Assert.Equal("Rice", temp[0].Name);
    Assert.Equal("Bread", temp[1].Name);
}

I’m using xUnit and this test just checks to see if there are two items in the product table, one named “Rice” and the other named “Bread”.  The _storeAppContext variable needs to be a valid Entity Framework context and it must be connected to an in memory database.  We don’t want to be changing a real database when we unit test.  The code for setting up the in-memory data looks like this:

var builder = new DbContextOptionsBuilder<StoreAppContext>()
    .UseInMemoryDatabase();
Context = new StoreAppContext(builder.Options);

Context.Products.Add(new Product
{
    Name = "Rice",
    Price = 5.99m
});
Context.Products.Add(new Product
{
    Name = "Bread",
    Price = 2.35m
});

Context.SaveChanges();

This is just a code snippet; I’ll show how it fits into your unit test class in a minute.  First, a DbContextOptionsBuilder object is built (builder).  This gets you an in-memory database with the tables defined in the mappings of the StoreAppContext.  Next, you create the context that you’ll be using for your unit tests from builder.Options.  Once the context exists, you can pretend you’re connected to a real database: just add items and save them.  I would create classes for each set of test data and put them in a directory in your unit tests (usually I call the directory TestData).

Now, you’re probably thinking: I can just call this code from each of my unit tests.  Which leads to the thought: I can just put this code in the unit test class constructor.  Which sounds good; however, the test runner constructs your test class for each test method it runs, and you end up adding to the existing in-memory database over and over.  So the first unit test executed will see two rows of Product data, and the second unit test will see four rows.  Go ahead and copy the above code into your constructor and see what happens.  You’ll see that TestQueryAll() will fail because there will be 4 records instead of the expected 2.  So how do we make sure the initializer is executed only once, before the first unit test call?  That’s where IClassFixture comes in.  This is an interface used by xUnit, and you basically add it to your unit test class like this:

public class StoreAppTests : IClassFixture<TestDataFixture>
{
    // unit test methods
}

Then you define your test fixture class like this:

using System;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace UnitTests
{
    public class TestDataFixture : IDisposable
    {
        public StoreAppContext Context { get; set; }

        public TestDataFixture()
        {
            var builder = new DbContextOptionsBuilder<StoreAppContext>()
                .UseInMemoryDatabase();
            Context = new StoreAppContext(builder.Options);

            Context.Products.Add(new Product
            {
                Name = "Rice",
                Price = 5.99m
            });
            Context.Products.Add(new Product
            {
                Name = "Bread",
                Price = 2.35m
            });

            Context.SaveChanges();
        }

        public void Dispose()
        {

        }
    }
}

Next, you’ll need to add some code to the unit test class constructor that reads the context property and assigns it to an object property that can be used by your unit tests:

private readonly StoreAppContext _storeAppContext;

public StoreAppTests(TestDataFixture fixture)
{
    _storeAppContext = fixture.Context;
}

What happens is that xUnit calls the constructor of the TestDataFixture object one time.  This creates the context and assigns it to the fixture property.  Then the constructor of the unit test class is called for each unit test; it only copies the fixture’s context property into the unit test object so that the test methods can reference it.  Now run your unit tests and you’ll see that the same data is available to each one.

One thing to keep in mind is that you’ll need to tear down and rebuild your data for each unit test if the test calls a method that inserts or updates your test data.  For that setup, use the test fixture to populate static lookup tables (not modified by any of your business logic).  Then create a data initializer and a data destroyer that fill and clear the tables that are modified by your unit tests.  The data initializer is called from the unit test class constructor, and the destroyer needs to be called from the class’s Dispose() method.
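In xUnit, that initializer/destroyer pair maps onto the test class’s own constructor and Dispose() method, which run before and after every test method.  Here is a sketch under that assumption (the “Milk” row is my own illustrative example of mutable test data):

```csharp
using System;
using System.Linq;
using DataSource;
using DataSource.Domain;
using Xunit;

namespace UnitTests
{
    // Sketch: per-test setup/teardown in xUnit. The constructor runs before
    // each test method and Dispose() runs after it.
    public class StoreAppMutatingTests : IClassFixture<TestDataFixture>, IDisposable
    {
        private readonly StoreAppContext _storeAppContext;

        public StoreAppMutatingTests(TestDataFixture fixture)
        {
            _storeAppContext = fixture.Context;

            // Data initializer: seed the rows this test class will modify.
            _storeAppContext.Products.Add(new Product { Name = "Milk", Price = 3.49m });
            _storeAppContext.SaveChanges();
        }

        // Data destroyer: remove the mutable rows so the next test starts clean.
        public void Dispose()
        {
            _storeAppContext.Products.RemoveRange(
                _storeAppContext.Products.Where(p => p.Name == "Milk"));
            _storeAppContext.SaveChanges();
        }

        // ... unit tests that insert or update data go here ...
    }
}
```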

Where to Get the Code

You can get the complete source code from my GitHub account by clicking here.


Unit Testing with Moq

Introduction

There are a lot of articles on how to use Moq, but I’m going to bring out my die roller game example to show how to use Moq to roll a sequence of predetermined results.  I’m also going to do this using .Net Core.

The Setup

My sample program is a game.  The game is actually empty, because I want to show the minimal code to demonstrate Moq itself.  So let’s pretend there is a game object and it uses a die roll object to get a random outcome.  For those who have never programmed a game before, a die roll can be used to determine offense or defense of one battle unit attacking another in a turn-based board game.  However, unit tests must be repeatable and we must make sure we test as much code as possible (maximize our code coverage).

The sample project uses a Game object that is dependent on the DieRoller object.  To break dependencies, I required an instance of the DieRoller object to be fed into the Game object’s constructor:

public class Game
{
    private IDieRoller _dieRoller;

    public Game(IDieRoller dieRoller)
    {
        _dieRoller = dieRoller;
    }

    public int Play()
    {
        return _dieRoller.DieRoll();
    }
}

Now I can feed a Moq object into the Game object and control what the die roll will be.  For the game itself, I can use the actual DieRoller object by default:

public static void Main(string[] args)
{
    var game = new Game(new DieRoller());
}

An IOC container could be used as well, and I would highly recommend it for a real project.  I’ll skip the IOC container for this blog post.

The unit test can look something like this:

[Fact]
public void test_one_die_roll()
{
    var dieRoller = new Mock<IDieRoller>();
    dieRoller.Setup(x => x.DieRoll())
        .Returns(2);

    var game = new Game(dieRoller.Object);
    var result = game.Play();
    Assert.Equal(2, result);
}

I’m using xUnit and Moq in the above example.  Here is my .Net Core project.json file:

{
    "version": "1.0.0-*",
    "testRunner": "xunit",
    "dependencies": {
        "DieRollerLibrary": "1.0.0-*",
        "GameLibrary": "1.0.0-*",
        "Microsoft.NETCore.App": {
            "type": "platform",
            "version": "1.0.1"
        },
        "Moq": "4.6.38-alpha",
        "xunit": "2.2.0-beta2-build3300",
        "xunit.core": "2.2.0-beta2-build3300",
        "dotnet-test-xunit": "2.2.0-preview2-build1029",
        "xunit.runner.visualstudio": "2.2.0-beta2-build1149"
    },
    "frameworks": {
        "netcoreapp1.0": {
            "imports": "dnxcore50"
        }
    }
}

Make sure you check the versions of these packages since they are constantly changing as of this blog post.  It’s probably best to use the NuGet package window or the console to get the latest version.

Breaking Dependencies

What does Moq do?  Moq is a quick and dirty way to create a fake object instance without writing a fake class yourself.  Moq can take an interface or object definition and create a local instance whose outputs you control.  In the xUnit sample above, Moq is told to return the number 2 when the DieRoll() method is called.

Why mock an object?  As you create code, you’ll end up with objects that call other objects.  These calls create dependencies.  In this example, the Game object is dependent on the DieRoller object.

Each object should have its own unit tests.  If we are testing two or more objects that are connected together, then technically we’re performing an integration test.  To break dependencies, we need all objects not under test to be faked or mocked.  If the Game object has multiple paths (using if/then or case statements, for example) that depend on the roll of the die, then we’ll need to create unit tests where we can fix the die roll to a known set of values and execute the Game object to check the expected results.

First, I’m going to add a method to the Game class that will determine the outcome of an attack.  If the die roll is greater than 4, then the attack is successful (unit is hit).  If the die roll is 4 or less, then it’s a miss.  I’ll use true for a hit and false for a miss.  Here is my new Game class:

public class Game
{
    private IDieRoller _dieRoller;

	public Game(IDieRoller dieRoller)
	{
		_dieRoller = dieRoller;
	}

	public int Play()
	{
		return _dieRoller.DieRoll();
	}
	 
	public bool Attack()
	{
		if (_dieRoller.DieRoll() > 4)
		{
			return true;
		}
		
		return false;
	}
}
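The Game class depends on an IDieRoller interface that the post uses but never prints.  Based on how it’s called, a minimal version would look like the sketch below (the SixSidedDie implementation is my own illustration, not part of the original sample):

```csharp
// Assumed definition of the interface the Game class depends on.
public interface IDieRoller
{
    int DieRoll();
}

// Illustrative implementation: a fair six-sided die.
public class SixSidedDie : IDieRoller
{
    private readonly System.Random _rng = new System.Random();

    public int DieRoll()
    {
        return _rng.Next(1, 7);  // returns 1 through 6
    }
}
```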


Now if we define a unit test like this:

[Theory]
[InlineData(1)]
[InlineData(2)]
[InlineData(3)]
[InlineData(4)]
public void test_attack_unsuccessful(int dieResult)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.Setup(x => x.DieRoll())
		.Returns(dieResult);

	var game = new Game(dieRoller.Object);
	var result = game.Attack();
	Assert.False(result);
}


We can test all instances where the die roll should produce a false result.  To make sure we have full coverage, we’ll need to test the other two die results (where the die is a 5 or a 6):

[Theory]
[InlineData(5)]
[InlineData(6)]
public void test_attack_successful(int dieResult)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.Setup(x => x.DieRoll())
		.Returns(dieResult);

	var game = new Game(dieRoller.Object);
	var result = game.Attack();
	Assert.True(result);
}

Another Example

Now I’m going to make it complicated.  Sometimes in board games, we use two die rolls to determine an outcome.  First, I’m going to define an enum to allow three distinct results of an attack:

public enum AttackResult
{
	Miss,
	Destroyed,
	Damaged
}


Next, I’m going to create a new method named Attack2():

public AttackResult Attack2()
{
	if (_dieRoller.DieRoll() > 4)
	{
		if (_dieRoller.DieRoll() > 3)
		{
			return AttackResult.Damaged;
		}
		return AttackResult.Destroyed;
	}
	return AttackResult.Miss;
}


As you can see, the die could be rolled up to two times.  So, in order to test your results, you’ll need to fake two rolls before calling the game object.  I’m going to use the “Theory” xUnit attribute to feed values that represent a damaged unit.  The values need to be the following:

5,4
5,5
5,6
6,4
6,5
6,6

Moq has a SetupSequence() method that allows us to stack predetermined results to return.  So every time the mock object is called, the next value will be returned.  Here’s the xUnit test to handle all die rolls that would result in an AttackResult of Damaged:

[Theory]
[InlineData(5, 4)]
[InlineData(5, 5)]
[InlineData(5, 6)]
[InlineData(6, 4)]
[InlineData(6, 5)]
[InlineData(6, 6)]
public void test_attack_damaged(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
		.Returns(dieResult1)
		.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Damaged, result);
}

Next, the unit testing for instances where the Attack2() method returns an AttackResult of Destroyed:

[Theory]
[InlineData(5, 1)]
[InlineData(5, 2)]
[InlineData(5, 3)]
[InlineData(6, 1)]
[InlineData(6, 2)]
[InlineData(6, 3)]
public void test_attack_destroyed(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
		.Returns(dieResult1)
		.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Destroyed, result);
}

And finally, the instances where the AttackResult is a miss:

[Theory]
[InlineData(1, 1)]
[InlineData(2, 2)]
[InlineData(3, 3)]
[InlineData(4, 1)]
public void test_attack_miss(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
		.Returns(dieResult1)
		.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Miss, result);
}

In the instance of the miss, the second die roll doesn’t really matter, and technically the unit test could be cut back to one input.  To test every possible case, we could feed all six values into the second die.  Why would we do that?  Unit tests are performed for more than one reason.  Initially, they are created to prove our code as we write it.  Test-driven development is centered around this concept.  However, we also have to recognize that after the code is completed and deployed, the unit tests become regression tests.  These tests should live with the code for the life of the code.  The tests should also be incorporated into your continuous integration environment and executed every time code is checked into your version control system (technically, you should execute the tests every time you build, but your build times might be too long for that).  This will prevent future code changes from accidentally breaking code that was already developed and tested.  In the Attack2() method, a developer could enhance the code to use the second die roll when the first die roll is a 1, 2, 3 or 4.  The unit test above will not necessarily catch this change.  The only thing worse than a broken unit test is one that passes when it shouldn’t.

With that said, you should not have to perform an exhaustive test on every piece of code in your program.  I would only recommend such a tactic if the input data set is small enough to be reasonable.  For the example case above, the die size is 6 and the “Theory” attribute cuts down the code you’ll need to perform multiple unit tests.  If you are using Microsoft Tests, then you can set up a loop that does the same job as the “Theory” attribute and test all iterations for one expected output in each unit test.
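For example, the loop-based approach could look something like the sketch below.  To keep it self-contained, it uses trimmed copies of the post’s Game and IDieRoller types, a hand-rolled fake in place of Moq, and a plain exception in place of a test framework’s Assert:

```csharp
using System;

// Trimmed copies of the post's types so this sketch compiles on its own.
public interface IDieRoller { int DieRoll(); }

public class Game
{
    private readonly IDieRoller _dieRoller;
    public Game(IDieRoller dieRoller) { _dieRoller = dieRoller; }
    public bool Attack() { return _dieRoller.DieRoll() > 4; }
}

// Hand-rolled fake standing in for Moq: always returns a preset roll.
public class FixedDieRoller : IDieRoller
{
    private readonly int _roll;
    public FixedDieRoller(int roll) { _roll = roll; }
    public int DieRoll() { return _roll; }
}

public static class ExhaustiveAttackTest
{
    // One loop replaces the six [InlineData] rows: a hit only on 5 or 6.
    public static void Run()
    {
        for (int die = 1; die <= 6; die++)
        {
            var game = new Game(new FixedDieRoller(die));
            bool expected = die > 4;
            if (game.Attack() != expected)
                throw new Exception("Attack() wrong for die = " + die);
        }
    }
}
```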


Where to get the Sample Code

You can download the sample code from my GitHub account by clicking here.

 

Dependency Injection and IOC Containers

Summary

I’ve done quite a few posts on unit testing in the past.  I keep a list of subjects that I would like to blog about so I have a ready list to choose from.  My list of unit testing subjects is getting large and it’s time to clear the spindle.  So in this post I’m going to do some deep diving on Dependency Injection and introduce Inversion Of Control using Autofac.

The Die Roller

I created a simple program a while back that does a die roll (you can find it by clicking here). I had hoped to write some follow up posts about other methods that can be used to get around the problem of object dependency, but other blog subjects grabbed my interest and took up my time.  So now I’m going to go back and discuss other techniques that I know in order to break or eliminate dependencies in objects.

First, I’m going to show a technique that uses a singleton.  The idea behind this design pattern is to provide a default object that will self-instantiate when it is called from the main program, but provide an entry point (a setter) that will allow the object to be overridden by a fake object in a unit test before the object under test is called.  I’ve blogged about this technique in this post where I described a technique to design a caching system. 

The base object looks like this:

public abstract class DieRollerBase
{
	private static DieRollerBase _Instance;

	public static DieRollerBase Instance
	{
		get
		{
			if (_Instance == null)
			{
				_Instance = new DieRoller();
			}

			return _Instance;
		}

		set
		{
			_Instance = value;
		}
	}

	public static int DieRoll()
	{
		return Instance.ReturnDieRoll();
	}

	public abstract int ReturnDieRoll();
}

The die roller object, which is run inside the main program looks like this:

public class DieRoller : DieRollerBase
{
	private Random RandomNumberGenerator = new Random(DateTime.Now.Millisecond);

	public override int ReturnDieRoll()
	{
		return RandomNumberGenerator.Next(1, 7);  // returns 1 through 6
	}
}

As you can see, the base class instantiates a new DieRoller() object, instead of a DieRollerBase object.  What happens is the main program will call the die roller using the following syntax:

int result = DieRoller.DieRoll();

The call to the DieRoll() method is static, but it calls the instance method ReturnDieRoll(), which is implemented inside the sub-class, not the base class.  The reason for doing this is that we can override the DieRoller class with a fake class like this:

public class FakeDieRoller : DieRollerBase
{
	private static int _NextDieRoll = 0;
	private static List<int> _SetDieRoll = new List<int>();
	public static int SetDieRoll
	{
		get
		{
			int nextDieRoll = _SetDieRoll[_NextDieRoll];
			_NextDieRoll++;
			if (_NextDieRoll >= _SetDieRoll.Count)
			{
				_NextDieRoll = 0;
			}

			return nextDieRoll;
		}
		set
		{
			_SetDieRoll.Add(value);
		}
	}

	public static void ClearDieRoll()
	{
		_SetDieRoll.Clear();
		_NextDieRoll = 0;
	}

	public override int ReturnDieRoll()
	{
		return SetDieRoll;
	}
}

Using the setter of the base class for the instance, we can do this in our unit test:

DieRoller.Instance = new FakeDieRoller();

Any method calling the die roller will end up executing the fake class instead of the default class.  The reason for doing this is so we can load the dice by stuffing “known” die roll numbers into the dice before calling our object under test.  Then we can get predictable results from objects that use the random die roll object.
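To see the pattern end to end, here’s a compact, self-contained sketch (trimmed copies of the classes above, with a plain exception standing in for a test framework’s asserts):

```csharp
using System;
using System.Collections.Generic;

// Trimmed copies of the post's classes, just enough for the sketch to compile.
public abstract class DieRollerBase
{
    public static DieRollerBase Instance { get; set; }
    public static int DieRoll() { return Instance.ReturnDieRoll(); }
    public abstract int ReturnDieRoll();
}

public class FakeDieRoller : DieRollerBase
{
    private static readonly List<int> _setDieRoll = new List<int>();
    private static int _nextDieRoll;

    public static int SetDieRoll
    {
        get
        {
            int roll = _setDieRoll[_nextDieRoll];
            _nextDieRoll = (_nextDieRoll + 1) % _setDieRoll.Count;
            return roll;
        }
        set { _setDieRoll.Add(value); }
    }

    public static void ClearDieRoll() { _setDieRoll.Clear(); _nextDieRoll = 0; }

    public override int ReturnDieRoll() { return SetDieRoll; }
}

// Loading the die with known values before exercising code under test:
public static class FakeDieDemo
{
    public static void Run()
    {
        FakeDieRoller.ClearDieRoll();
        FakeDieRoller.SetDieRoll = 3;   // first "roll"
        FakeDieRoller.SetDieRoll = 5;   // second "roll"
        DieRollerBase.Instance = new FakeDieRoller();

        if (DieRollerBase.DieRoll() != 3) throw new Exception("expected 3");
        if (DieRollerBase.DieRoll() != 5) throw new Exception("expected 5");
    }
}
```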


Analysis

In my earlier blog post I created a die roller class that looked like this:

public static class DieRoller
{
	private static Random RandomNumberGenerator = new Random(DateTime.Now.Millisecond);

	public static int DieRoll()
	{
		if (UnitTestHelpers.IsInUnitTest)
		{
			return UnitTestHelpers.SetDieRoll;
		}
		else
		{
			return RandomNumberGenerator.Next() % 6;
		}
	}
}


The injection took place using the UnitTestHelpers object.  This checks whether the startup DLL is a Microsoft test assembly and, if so, executes a built-in fake die.  This is not a clean technique for unit testing, since some test code is compiled into the distributed DLLs, mainly the UnitTestHelpers.SetDieRoll method.

The singleton method is much cleaner, because the fake object can be created inside the unit test project and not distributed with the production dlls.  Therefore the final code will not contain the fake die object or any of the test code.  The problem with singletons is that they are complicated to design.

There is a better technique.  It’s called Inversion Of Control or IOC.  The idea behind inversion of control is that objects are created independent of each other, then they are “wired” together at program initialization time.  Unit tests can link fake objects before the tests are executed, which automatically bypasses the dependent objects that are not under test.  This approach is cleaner and I’m going to show the die roller using the Autofac IOC container.

Setting up the Solution

Autofac has an object called the container.  The container is like a dictionary where all the classes and interfaces are registered when the program initializes.  The resolve command then uses the container’s registrations to match each interface to a class.  Inside your class, you call the resolve command and pass the interface, without any reference to the concrete class itself.  This lets Autofac decide which class will be used for the interface when it is needed.  By doing this, we can register a different class (like a fake class) inside a unit test, and the object under test will call the resolve command with the same interface but get the fake object that Autofac has already been told to use.

So here are the projects I used in my little demo program:

Container
DieRollerAutoFac
DieRollerLibrary
GameLibrary
DieRollerTests

The program itself will start from the DieRollerAutoFac project.  This is just a console application that initializes the IOC container and runs the game.  The IOC container is stored in a static class called IOCContainer and it’s inside the “Container” library.  The reason I structured it this way is so I can use the container for the program and for the unit tests.  I also needed the container for the game class when it performs the resolve operation.  So the container must be in a different project to keep it from being dependent on the game class or the main program.

Next, I created the die roller class and interface inside its own project.  This could be contained inside the GameLibrary project, but I’m going to pretend that we want to isolate modules (i.e. DLLs).
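The IOCContainer class itself isn’t reprinted here, but based on how it’s used below, it is presumably just a static holder for Autofac’s container, along these lines (my reconstruction, not the original source):

```csharp
using Autofac;

// Assumed shape of the Container project's class: a static holder so
// the main program, the game library and the unit tests can all reach
// the same Autofac container.
public static class IOCContainer
{
    public static IContainer Container { get; set; }
}
```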

Next we need to wire everything up.  If you download the sample code and look at the main program, you’ll see this piece of code:

var builder = new ContainerBuilder();
builder.RegisterType<DieRoller>().As<IDieRoller>();
IOCContainer.Container = builder.Build();


This is the code that builds the container.  Once the container is built, it can be used by any object in the program.

Inside the game class the container is used to resolve the die object:

public class Game
{
	public int Play()
	{
		using (var scope = IOCContainer.Container.BeginLifetimeScope())
		{
			var die = scope.Resolve<IDieRoller>();

			return die.DieRoll();
		}
	}
}

To substitute a fake die class in your unit tests, you can do this:

var builder = new ContainerBuilder();
builder.RegisterType<FakeDieRoller>().As<IDieRoller>();
IOCContainer.Container = builder.Build();


Checklist of IOC container rules:
1. Use the NuGet manager to install Autofac.  
2. Make sure you create an interface for each object you intend to use in your IOC container.
3. Setup a container creator in your unit tests with fake or mock objects.

Where to get the code

As always, I have posted all the code from this blog post.  You can download the source by clicking here.


 

Mocking HttpContext – Adding Sessions

Summary

In one of my previous posts (See: Mocking HttpContext), I created a HttpContext factory and a mocked HttpContext object that can be used to simulate the HttpContext.Current object used by methods under a unit test.  In this post, I’m going to add the Session capabilities to this object so you can unit test your methods and fake or mock your session variables.

The Mock Indexer

The session indexer can be overridden by creating a class based on the HttpSessionStateBase class.  Once this is done, it can be used as the Session object of the HttpContext.  Here’s the class to override the indexer:

public class MockHttpSession : HttpSessionStateBase
{
    public SessionStateItemCollection SessionVariables = new SessionStateItemCollection();

    public override object this[string name]
    {
        get
        {
            return SessionVariables[name];
        }
        set
        {
            SessionVariables[name] = value;
        }
    }
}

You’ll have to add a “using System.Web.SessionState;” at the top for SessionStateItemCollection to be available (HttpSessionStateBase itself comes from System.Web).

Adding to the MockHttpContext

Next, you’ll need to add another public variable to the top of the existing MockHttpContext object and then add a Setup() method to replace the Session object.  Your MockHttpContext object will look like this:

public class MockHttpContext
{
    public NameValueCollection ServerVariables = new NameValueCollection();
    public HttpCookieCollection Cookies = new HttpCookieCollection();
    public NameValueCollection HeaderVariables = new NameValueCollection();
    public MockHttpSession SessionVars = new MockHttpSession();

    public HttpContextBase Context
    {
        get
        {
            var httpRequest = new Moq.Mock<HttpRequestBase>();

            httpRequest.Setup(x => x.ServerVariables.Get(It.IsAny<string>()))
                .Returns<string>(x =>
                {
                    return ServerVariables[x];
                });

            httpRequest.SetupGet(x => x.Cookies).Returns(Cookies);

            httpRequest.Setup(x => x.Headers.Get(It.IsAny<string>()))
                .Returns<string>(x =>
                    {
                        return HeaderVariables[x];
                    }
                );

            var httpContext = (new Moq.Mock<HttpContextBase>());
            httpContext.Setup(x => x.Request).Returns(httpRequest.Object);

            httpContext.Setup(ctx => ctx.Session).Returns(SessionVars);

            return httpContext.Object;
        }
    }
}
 
You’ll need to include Moq (use NuGet to find and install), and you’ll need to include the following using statements:

using System.Collections.Specialized;
using System.Web;
using System.Web.SessionState;
using Moq;


The Unit Test

The new unit test would look roughly like this:

[TestMethod]
public void httpcontext_session()
{
    var tempContext = new MockHttpContext();
    HttpContextFactory.SetCurrentContext(tempContext.Context);

    HttpContextFactory.Current.Session["testid"] = "test data";

    //TODO: call http method under test

    Assert.AreEqual("test data", HttpContextFactory.Current.Session["testid"]);
}

This unit test uses the same HttpContextFactory as shown in my previous blog post.  The entire working code can be found on my GitHub account.

Where to Get the Code

As usual, you can go to my GitHub account and download the entire project by clicking here.

 
 
 

 

Mocking HttpContext

Summary

Anybody who has attempted writing unit tests for a website has run into the problem where you cannot mock the HttpContext object.  I usually write my code in such a way that I don’t need to mock the context by using only connection code to go between the website controller and the actual business logic.  In this blog post, I’m going to show how to mock up parts of the HttpContext so you can test things like headers, cookies and server variables.  From this, you should be able to extend the features to mock up any other part of the context.

The Problem

The HttpContext has been called the largest object on the planet.  Unfortunately, HttpContext is difficult to mock, so an object called HttpContextBase (inside System.Web) was added to allow developers to mock the context.  In order to use it, you’ll need to create a context factory that your application uses.  This factory will default to HttpContext.Current when your program runs normally, but allow your unit tests to override it with an HttpContextBase for testing purposes.  This also requires you to replace any references to HttpContext.Current with the new context factory object.

The Factory Object

The basic factory object looks like this:

public class HttpContextFactory
{
    private static HttpContextBase m_context;
    public static HttpContextBase Current
    {
        get
        {
            if (m_context != null)
            {
                return m_context;
            }

            if (HttpContext.Current == null)
            {
                throw new InvalidOperationException("HttpContext not available");
            }

            return new HttpContextWrapper(HttpContext.Current);
        }
    }

    public static void SetCurrentContext(HttpContextBase context)
    {
        m_context = context;
    }
}

Now you refactor all your code to use HttpContextFactory.Current in place of HttpContext.Current.  Once this has been accomplished, then you can create unit tests that mock the context.  This factory object was obtained from stack overflow (click here).

The Mock Context Object

Next, we’ll need a mock context object.  This will be used to contain variables that are pre-assigned to the context before a method is called.  Then the mock object can be asserted if the object under test changes any values (like setting a cookie).

Here’s the mock object:

public class MockHttpContext
{
    public NameValueCollection ServerVariables = new NameValueCollection();
    public HttpCookieCollection Cookies = new HttpCookieCollection();
    public NameValueCollection HeaderVariables = new NameValueCollection();

    public HttpContextBase Context
    {
        get
        {
            var httpRequest = new Moq.Mock<HttpRequestBase>();

            httpRequest.Setup(x => x.ServerVariables.Get(It.IsAny<string>()))
                .Returns<string>(x =>
                {
                    return ServerVariables[x];
                });

            httpRequest.SetupGet(x => x.Cookies).Returns(Cookies);

            httpRequest.Setup(x => x.Headers.Get(It.IsAny<string>()))
                .Returns<string>(x =>
                    {
                        return HeaderVariables[x];
                    }
                );

            var httpContext = (new Moq.Mock<HttpContextBase>());
            httpContext.Setup(x => x.Request).Returns(httpRequest.Object);

            return httpContext.Object;
        }
    }
}

There are variables to contain cookies, header variables and server variables.  These can be set by a unit test before executing the method under test.  The values can also be read after the method has executed to verify expected changes.

 
Writing a Unit Test

 Here’s a basic unit test with server variables:

[TestMethod]
public void httpcontext_server_variables()
{
    var tempContext = new MockHttpContext();
    tempContext.ServerVariables.Add("REMOTE_ADDR", "127.0.0.1");
    tempContext.ServerVariables.Add("HTTP_USER_AGENT", "user agent string here");
    tempContext.ServerVariables.Add("HTTP_X_FORWARDED_FOR", "12.13.14.15");

    HttpContextFactory.SetCurrentContext(tempContext.Context);

    //TODO: call http method
    //TODO: asserts

}

That’s all there is to it.  Now you’re probably wondering what’s the point?  The reason for mocking the context is to get legacy code in a unit testing harness.  If your legacy code is already in C# and you are using web forms or MVC, then you can use this technique to unit test any methods called from your web application.  The process is to create a unit test with minimal changes to existing code.  Then perform your refactoring or rewriting of code while applying the same unit tests.  This will help ensure that you are following the original spec of the legacy code.


Where to Get the Code

You can download the sample code from my GitHub account by clicking here.  This code was built with Visual Studio 2012.

 

Data Caching with Redis and C#

Summary

Caching is a very large subject and I’m only going to dive into a small concept that uses Redis, an abstract caching class, and a dummy caching class for unit testing and show how to put it all together for a simple but powerful caching system.

Caching Basics

The type of caching I’m talking about in this blog post involves the caching of data from a database that is queried repetitively.  The idea is that you would write a query to read your data and return the results to a web site, or an application that is under heavy load.  The data being queried might be something that is used as a look-up, or maybe it’s a sales advertisement that everybody visiting your website is going to see.  

The data request should check to see if the data results are already in the cache first.  If they are not, then read the data from the database, copy it to the cache, and return the results.  After the first time this data is queried, the results will be in the cache and all subsequent queries will retrieve the results from the cache.  One thing to note is that the cache key needs to be unique to the data set being cached, otherwise you’ll get a conflict.

Redis

Redis is free and powerful, and there is a lot of information available about this caching system.  Normally, you’ll install Redis on a Linux machine and then connect to that machine from your website software.  For testing purposes, you can use the Windows version of Redis by downloading this package at GitHub (click here).  Once you download the Visual Studio project, you can build it and there should be a directory named “x64”.  You can also download the MSI file from here.  Then you can install and run it directly.

Once the Redis server is up and running you can download the stack exchange redis client software for C#.  You’ll need to use “localhost:6379” for your connection string (assuming you left the default port of 6379, when you installed Redis).
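With the StackExchange.Redis client, the connection looks roughly like the sketch below.  The connection string assumes a default local install; abortConnect=false is an optional setting I’ve added so the multiplexer can be created even when the server is down, which matters for the fallback behavior this post relies on:

```csharp
using StackExchange.Redis;

// Sketch of a shared connection to a local Redis server.
public static class RedisConnection
{
    // One multiplexer for the whole app; it is designed to be shared.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379,abortConnect=false");

    public static IDatabase Database
    {
        get { return Redis.GetDatabase(); }
    }
}
```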


Caching System Considerations

First, we want to be able to unit test our code without the unit tests connecting to Redis.  So we’ll need to be able to run a dummy Redis cache when we’re unit testing any method that includes a call to caching.

Second, we’ll need to make sure that if the Redis server fails, we can still run our program.  The program will hit the database every time and everything will run slower than with Redis running (otherwise, what’s the point), but it will run.

Third, we should abstract our caching class so that we can design another class that uses a different caching system besides Redis.  An example Windows caching system we could use instead is Memcached.

Last, we should use delegates to feed the query or method call to the cache get method, then we can use our get method like it’s a wrapper around our existing query code.  This is really convenient if we are adding caching to a legacy system, since we can leave the queries in place and just add the cache get wrappers.


CacheProvider Class

The CacheProvider class will be an abstract class set up as a singleton pattern, with the instance pointing to the default caching system; in this case, the RedisCache class (which I haven’t talked about yet).  The reason for this convoluted setup is that we will use the CacheProvider class inside our program and ignore the instance creation.  This will cause the CacheProvider to use the RedisCache implementation.  For unit tests, we’ll override the CacheProvider instance in the unit test using the BlankCache class (which I also have not talked about yet).

Here’s the CacheProvider code:

public abstract class CacheProvider
{
    public static CacheProvider _instance;
    public static CacheProvider Instance
    {
        get
        {
            if (_instance == null)
            {
                _instance = new RedisCache();
            }
            return _instance;
        }
        set { _instance = value; }
    }

    public abstract T Get<T>(string keyName);
    public abstract T Get<T>(string keyName, Func<T> queryFunction);
    public abstract void Set(string keyName, object data);
    public abstract void Delete(string keyName);
}

I’ve provided methods to save data to the cache (Set), read data directly from the cache (Get) and a delete method to remove an item from the cache (Delete).  I’m only going to talk about the Get method that involves the delegate called “queryFunction”.


RedisCache Class

There is a link to download the full sample at the end of this blog post, but I’m going to show some sample snippets here.  The first is the Redis implementation of Get.  First, you’ll need to add the Stack Exchange Redis client using NuGet.  Then you can connect to the Redis server and read/write values.

The Get method looks like this:

public override T Get<T>(string keyName, Func<T> queryFunction)
{
    byte[] data = null;

    if (redis != null)
    {
        data = db.StringGet(keyName);
    }

    if (data == null)
    {
        var result = queryFunction();

        if (redis != null)
        {
            db.StringSet(keyName, Serialize(result));
        }

        return result;
    }

    return Deserialize<T>(data);
}

The first thing that happens is the StringGet() method is called.  This is the Redis client read method.  This will only occur if the redis variable is not null.  The redis variable holds the connection multiplexer created when the instance is first constructed.  If that connection fails, all calls to Redis will be skipped.

After an attempt to read from Redis is made, then the variable named data is checked for null.  If the read from Redis is successful, then there should be something in “data” and that will need to be deserialized and returned.  If this is null, then the data is not cached and we need to call the delegate function to get results from the database and save that in the cache.

The call to StringSet() is where the results of the delegate are saved to the Redis server.  In this instance, the delegate returns the results we want (already in object form), so we serialize them when we send them to Redis, but we can return the delegate’s results directly.

The last return is the return that will occur if we were able to get the results from Redis in the first place.  If both the Redis and the database servers are down, then this method will fail, but the app will probably fail anyway.  You could include try/catch blocks to handle instances where the delegate fails, assuming you can recover in your application if your data doesn’t come back from the database server and it’s not cached already.

You can look at the serialize and deserialize methods in the sample code.  In this instance, I serialized the data into a binary format.  You can also serialize to JSON if you prefer.  Just replace the serialize and deserialize methods with your own code.
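As a sketch of what that pair of helpers can look like, here is a JSON-based version (the sample itself serializes to a binary format; this variant assumes a runtime where System.Text.Json is available):

```csharp
using System.Text.Json;

// JSON-based stand-ins for the sample's Serialize/Deserialize helpers.
public static class CacheSerializer
{
    public static byte[] Serialize(object data)
    {
        // UTF-8 JSON bytes are what gets stored in the cache.
        return JsonSerializer.SerializeToUtf8Bytes(data);
    }

    public static T Deserialize<T>(byte[] data)
    {
        return JsonSerializer.Deserialize<T>(data);
    }
}
```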


Using the RedisCache Get Method

There are two general ways to use the Get method: Generic or Strict.  Here’s the Generic method:

var tempData = CacheProvider.Instance.Get("SavedQuery", () =>
{
    using (var db = new SampleDataContext())
    {
        return (from r in db.Rooms select r).ToList();
    }
});



For strict:

for (int i = 0; i < iterations; i++)
{
    List<Room> tempData = CacheProvider.Instance.Get<List<Room>>("SavedQuery2", () =>
    {
        using (var db = new SampleDataContext())
        {
            return (from r in db.Rooms select r).ToList();
        }
    });
}


In these examples you can see the LINQ query with a generic database using statement wrapping the query.  This sample was coded in Entity Framework 6 using Code-First.  The query is wrapped in a function using the “() => {}” syntax.  You can do this with any queries that you already have in place; just make sure the result set is returned from the wrapper.  The tempData variable will contain the results of your query.


Using the BlankCache Class

There are two different ways you could implement a dummy cache class.  In one method, you would provide a Get() method that skips the caching part and always returns the result of the delegate.  You would be implementing an always miss cache class.  

The other method is to simulate a caching system by using a dictionary object to store the cached data and implement the BlankCache class to mimic the Redis server cache without a connection.  In this implementation we’re making sure our code under test will behave properly if a cache system exists, and we’re not concerned about speed per se.  This method could have a negative side-effect if your results are rather large, but for unit testing purposes you should not be accessing large results.

In either BlankCache implementation, we are not testing the caching system.  The purpose is to use this class for unit testing other objects in the system.

A snippet of the BlankCache class is shown here:

public class BlankCache : CacheProvider
{
    // This is a fake caching system used to fake out unit tests
    private Dictionary<string, byte[]> _localStore = new Dictionary<string, byte[]>();

    public override T Get<T>(string keyName, Func<T> queryFunction)
    {
        if (_localStore.ContainsKey(keyName))
        {
            return Deserialize<T>(_localStore[keyName]);
        }
        else
        {
            var result = queryFunction();
            _localStore[keyName] = Serialize(result);
            return result;
        }
    }
}

As you can see, I used a dictionary to store byte[] data and used the same serialize and deserialize methods that I used with Redis.  I also simplified the Get method, since I know that I will always get a connection to the fake cache system (aka the Dictionary).

When using the CacheProvider from a unit test you can use this syntax:

CacheProvider.Instance = new BlankCache();

That will cause the singleton instance to point to the BlankCache class instead of Redis.


Getting the Sample Code

You can download the sample code from my GitHub account by clicking here.  Make sure you search for “<your data server name here>” and replace with the name of your SQL server database name (this is in the TestRedis project under the DAL folder inside the SampleDataContext.cs file).

If you didn’t create the ApiUniversity demo database from any previous blog posts, you can create an empty database, then copy the sql code from the CreateApiUniversityDatabaseObjects.sql file included in the Visual Studio code.  The sample code was written in VS2013, but it should work in VS2012 as well.