Creating POCOs in .Net Core 2.0

Summary

I’ve shown how to generate POCOs (Plain Old C# Objects) using the scaffold tool for .Net Core 1 in an earlier post.  Now I’m going to show how to do it in Visual Studio 2017 with Core 2.0.

Install NuGet Packages

First, you’ll need to install the right NuGet packages.  I prefer to use the command line because I’ve been doing this so long that my fingers type the commands without me thinking about it.  If you’re not comfortable with the NuGet command line, you can use the NuGet Package Manager window for the project you want to create your POCOs in.  Otherwise, you can copy the commands below and paste them into the NuGet Package Manager Console window.  Follow these instructions:

  1. Create a .Net Core 2.0 library project in Visual Studio 2017.
  2. Type or copy and paste the following NuGet commands into the NuGet Package Manager Console window:
install-package Microsoft.EntityFrameworkCore.SqlServer
install-package Microsoft.EntityFrameworkCore.Tools
install-package Microsoft.EntityFrameworkCore.Tools.DotNet

If you open up your NuGet Dependencies tree view, you should see the three packages listed.

Execute the Scaffold Command

In the same Package Manager Console window, use the following command to generate your POCOs:

Scaffold-DbContext "Data Source=YOURSQLINSTANCE;Initial Catalog=DATABASENAME;Integrated Security=True" Microsoft.EntityFrameworkCore.SqlServer -OutputDir POCODirectory

You’ll need to update the Data Source and Initial Catalog values to point to your database.  If the command executes without error, you’ll see a directory named “POCODirectory” that contains a .cs file for each table in the database you just converted.  There will also be a context class that contains all the model builder entity mappings.  You can use this file “as-is” or you can split the mappings into individual files.

My process consists of generating these files in a temporary project, then copying each table POCO that I want to use into my real project, along with the model builder mappings for each of those tables.
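
For reference, here’s roughly what the scaffolder produces for a hypothetical Customer table with an Id and a Name column (your output will vary with your schema):

public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

The matching model builder mapping appears inside the generated context’s OnModelCreating() method:

modelBuilder.Entity<Customer>(entity =>
{
    entity.Property(e => e.Name).HasMaxLength(50);
});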

What This Does Not Cover

Any views, stored procedures or functions that you want to access with Entity Framework will not show up with this tool.  You’ll still need to create the result POCOs for views, stored procedures and functions by hand (or find a custom tool).  Using EF with stored procedures is generally not recommended, but anyone who has to deal with legacy code and legacy databases will run into a situation where they need to interface with an existing stored procedure.
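
If you do end up calling a stored procedure through EF Core, one workable pattern is to hand-write the result POCO and query it with FromSql().  Here’s a minimal sketch, assuming a hypothetical dbo.GetTopProducts procedure and a ProductResult POCO that has been registered (with a key) in the context:

// Hypothetical hand-written result POCO; EF Core requires it to be
// registered in the DbContext with a key in order to materialize rows
public class ProductResult
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Querying the stored procedure (requires using Microsoft.EntityFrameworkCore;)
var results = context.ProductResults
    .FromSql("EXECUTE dbo.GetTopProducts @Count = {0}", 10)
    .ToList();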


XML Serialization

Summary

In this post I’m going to demonstrate the proper way to serialize XML and setup unit tests using xUnit and .Net Core.  I will also be using Visual Studio 2017.

Generating XML

JSON is rapidly taking over as the data encoding standard of choice.  Unfortunately, government agencies are decades behind the technology curve, and XML is going to be around for a long time to come.  One of the largest industries still using XML for the majority of their data transfer encoding is the medical industry.  Documents required by Meaningful Use are mostly encoded in XML.  I’m not going to jump into the gory details of generating a CCD.  Instead, I’m going to keep this really simple.

First, I’m going to show a method of generating XML that I’ve seen many times, usually coded by a programmer with little or no formal education in Computer Science.  Sometimes programmers just take a short-cut because it appears to be the simplest way to get the product out the door.  So I’ll show the technique, and then I’ll explain why this turns out to be a very poor way of designing an XML generator.

Let’s say, for instance, we wanted to generate XML representing a house.  First we’ll define the house as a record that contains square footage.  That will be the only data point assigned to the house record (I mentioned this was going to be simple, right?).  Inside the house record will be a list of walls and a list of roofs (assume a house could have two or more roofs, like a tri-level configuration).  Next, I’m going to give each wall a list of windows.  The window block will have a “Type” that is a free-form string input, and the roof block will also have a “Type” that is a free-form string.  That is the whole definition.

public class House
{
  public List<Wall> Walls = new List<Wall>();
  public List<Roof> Roofs = new List<Roof>();
  public int Size { get; set; }
}

public class Wall
{
  public List<Window> Windows { get; set; }
}

public class Window
{
  public string Type { get; set; }
}

public class Roof
{
  public string Type { get; set; }
}

The “easy” way to create XML from this is to use the StringBuilder and just build XML tags around the data in your structure.  Here’s a sample of the possible code that a programmer might use:

public class House
{
  public List<Wall> Walls = new List<Wall>();
  public List<Roof> Roofs = new List<Roof>();
  public int Size { get; set; }

  public string Serialize()
  {
    var @out = new StringBuilder();

    @out.Append("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
    @out.Append("<House xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">");

    foreach (var wall in Walls)
    {
      wall.Serialize(ref @out);
    }

    foreach (var roof in Roofs)
    {
      roof.Serialize(ref @out);
    }

    @out.Append("<size>");
    @out.Append(Size);
    @out.Append("</size>");

    @out.Append("</House>");

    return @out.ToString();
  }
}

public class Wall
{
  public List<Window> Windows { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    if (Windows == null || Windows.Count == 0)
    {
      @out.Append("<wall />");
      return;
    }

    @out.Append("<wall>");
    foreach (var window in Windows)
    {
      window.Serialize(ref @out);
    }
    @out.Append("</wall>");
  }
}

public class Window
{
  public string Type { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    @out.Append("<window>");
    @out.Append("<Type>");
    @out.Append(Type);
    @out.Append("</Type>");
    @out.Append("</window>");
  }
}

public class Roof
{
  public string Type { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    @out.Append("<roof>");
    @out.Append("<Type>");
    @out.Append(Type);
    @out.Append("</Type>");
    @out.Append("</roof>");
  }
}

The example I’ve given is a rather clean one; I have seen XML generated with much uglier code.  This is the manual method of serializing XML.  One obvious weakness is that the output produced is a single line of XML, which is not human-readable.  In order to allow human-readable output to be toggled on and off, extra logic would need to be added to append newlines and tabs for indenting.  Another problem with this method is that it contains a lot of unnecessary code.  One typo and the XML is incorrect.  Future editing is hazardous because tags might not match up if code is inserted in the middle and care is not taken to test such conditions.  Unit testing something like this is an absolute must.

The proper method is to use the XML serializer.  To produce the correct output, it is sometimes necessary to add attributes to the properties of the objects to be serialized.  Here is the object definition that produces the same output:

public class House
{
  [XmlElement(ElementName = "wall")]
  public List<Wall> Walls = new List<Wall>();

  [XmlElement(ElementName = "roof")]
  public List<Roof> Roofs = new List<Roof>();

  [XmlElement(ElementName = "size")]
  public int Size { get; set; }
}

public class Wall
{
  [XmlElement(ElementName = "window")]
  public List<Window> Windows { get; set; }

  public bool ShouldSerializeWindows()
  {
    // XmlSerializer convention: a ShouldSerialize{PropertyName} method
    // controls whether that property is emitted (skip Windows when null)
    return Windows != null;
  }
}

public class Window
{
  public string Type { get; set; }
}

public class Roof
{
  public string Type { get; set; }
}

In order to serialize the above objects into XML, you use the XmlSerializer object:

public static class CreateXMLData
{
  public static string Serialize(this House house)
  {
    var xmlSerializer = new XmlSerializer(typeof(House));

    var settings = new XmlWriterSettings
    {
      NewLineHandling = NewLineHandling.Entitize,
      IndentChars = "\t",
      Indent = true
    };

    using (var stringWriter = new Utf8StringWriter())
    {
      var writer = XmlWriter.Create(stringWriter, settings);
      xmlSerializer.Serialize(writer, house);

      return stringWriter.GetStringBuilder().ToString();
    }
  }
}

You’ll also need to create a Utf8StringWriter Class:

public class Utf8StringWriter : StringWriter
{
  public override Encoding Encoding
  {
    get { return Encoding.UTF8; }
  }
}
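
To verify the output, here’s a quick usage sketch; the XML shown in the comment is roughly what a house with one wall and no windows should produce:

var house = new House { Size = 2000 };
house.Walls.Add(new Wall());

Console.WriteLine(house.Serialize());

// Expected output (indented with tabs by the serializer):
// <?xml version="1.0" encoding="utf-8"?>
// <House xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
//   <wall />
//   <size>2000</size>
// </House>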

Unit Testing

I would recommend unit testing each section of your XML.  Test with sections empty as well as containing one or more items.  You want to make sure you capture instances of null lists or empty items that should not generate XML output.  If there are any special attributes, make sure that the generated XML matches the specification.  For my unit testing, I stripped newlines and tabs to compare with a sample XML file that is stored in my unit test project.  As a first attempt, I created a helper for my unit tests:

public static class XmlResultCompare
{
  public static string ReadExpectedXml(string expectedDataFile)
  {
    var assembly = Assembly.GetExecutingAssembly();
    using (var stream = assembly.GetManifestResourceStream(expectedDataFile))
    {
      using (var reader = new StreamReader(stream))
      {
        return reader.ReadToEnd().RemoveWhiteSpace();
      }
    }
  }

  public static string RemoveWhiteSpace(this string s)
  {
    s = s.Replace("\t", "");
    s = s.Replace("\r", "");
    s = s.Replace("\n", "");
    return s;
  }
}

If you look carefully, I’m compiling my XML test data right into the unit test dll.  Why am I doing that?  The company that I work for, like most serious companies, uses continuous integration tools such as a build server.  The problem with a build server is that your files might not end up in the same directory location on the build server as they are on your PC.  To ensure that the test files are there, compile them into the dll and reference them from the namespace using Assembly.GetExecutingAssembly().  To make this work, you’ll have to mark your XML test files as an Embedded Resource (click on the xml file and change the Build Action property to Embedded Resource).  To access the files, which are contained in a virtual directory called “TestData”, you’ll need to use the namespace, the virtual directory and the full file name:

XMLCreatorTests.TestData.XMLHouseOneWallOneWindow.xml

Now for a sample unit test:

[Fact]
public void TestOneWallNoWindow()
{
  // one wall, no windows
  var house = new House { Size = 2000 };
  house.Walls.Add(new Wall());

  Assert.Equal(XmlResultCompare.ReadExpectedXml("XMLCreatorTests.TestData.XMLHouseOneWallNoWindow.xml"), house.Serialize().RemoveWhiteSpace());
}

Notice how I filled in the house object with the size and added one wall.  The ReadExpectedXml() method removes whitespace automatically, so it’s important to strip it from the serialized version of house in order to match.

Where to Get the Code

As always, you can go to my GitHub account and download the sample application (click here).  I would recommend downloading the application and modifying it as a test to see how all the pieces work.  Add a unit test to see if you can match your expected XML with the XML serializer.


Mocking Your File System

Introduction

In this post, I’m going to talk about basic dependency injection and mocking a method that is used to access hardware.  The method I’ll be mocking is System.IO.Directory.Exists().

Mocking Methods

One of the biggest headaches with unit testing is that you have to make sure you mock any objects that your method under test is calling.  Otherwise your test results could be dependent on something you’re not really testing.  As an example for this blog post, I will show how to apply unit tests to this very simple program:

class Program
{
    static void Main(string[] args)
    {
        var myObject = new MyClass();
        Console.WriteLine(myObject.MyMethod());
        Console.ReadKey();
    }
}

The object that is used above is:

public class MyClass
{
    public int MyMethod()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

Now, we want to create two unit tests to cover all the code in the MyMethod() method.  Here’s an attempt at one unit test:

[TestMethod]
public void test_temp_directory_exists()
{
    var myObject = new MyClass();
    Assert.AreEqual(3, myObject.MyMethod());
}

The problem with this unit test is that it will pass if your computer contains the c:\temp directory.  If your computer doesn’t contain c:\temp, then it will always fail.  If you’re using a continuous integration environment, you can’t control whether the directory exists.  To compound the problem, you really need to test both possibilities to get full test coverage of your method.  Adding a unit test to cover the case where c:\temp doesn’t exist would guarantee that one test passes and the other fails.

The newcomer to unit testing might think: “I could just add code to my unit tests to create or delete that directory before the test runs!”  Except, that would be a unit test that modifies your machine.  The behavior would destroy anything you have in your c:\temp directory if you happen to use that directory for something.  Unit tests should not modify anything outside the unit test itself.  A unit test should never modify database data.  A unit test should not modify files on your system.  You should avoid creating physical files if possible, even temp files because temp file usage will make your unit tests slower.

Unfortunately, you can’t just mock System.IO.Directory.Exists().  The way to get around this is to create a wrapper object, inject it into MyClass, and then use Moq to mock your wrapper object for unit testing only.  Your program will not change; it will still call MyClass as before.  Here’s the wrapper object and an interface to go with it:

public class FileSystem : IFileSystem
{
  public bool DirectoryExists(string directoryName)
  {
    return System.IO.Directory.Exists(directoryName);
  }
}

public interface IFileSystem
{
    bool DirectoryExists(string directoryName);
}

Your next step is to provide an injection point into your existing class (MyClass).  You can do this by creating two constructors: the default constructor that initializes this object for normal use, and a constructor that expects an IFileSystem parameter.  The constructor with the IFileSystem parameter will only be used by your unit tests.  That is where you will pass along a mocked version of your file system object with known return values.  Here are the modifications to the MyClass object:

public class MyClass
{
    private readonly IFileSystem _fileSystem;

    public MyClass(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public MyClass()
    {
        _fileSystem = new FileSystem();
    }

    public int MyMethod()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

This is the point where your program should operate as normal.  Notice how I did not need to modify the original call to MyClass that occurred at the “Main()” of the program.  The MyClass() object will create an IFileSystem wrapper instance and use that object instead of calling System.IO.Directory.Exists() directly.  The result will be the same.  The difference is that now you can create two unit tests with mocked versions of IFileSystem in order to test both possible outcomes of the existence of “c:\temp”.  Here is an example of the two unit tests:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(3, myObject.MyMethod());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(5, myObject.MyMethod());
}

Make sure you include the NuGet package for Moq.  You’ll notice that in the first unit test, we’re testing MyClass with a mocked up version of a system where “c:\temp” exists.  In the second unit test, the mock returns false for the directory exists check.

One thing to note: You must provide a matching input on x.DirectoryExists() in the mock setup.  If it doesn’t match what is used in the method, then you will not get the results you expect.  In this example, the directory being checked is hard-coded in the method and we know that it is “c:\temp”, so that’s how I mocked it.  If there is a parameter that is passed into the method, then you can mock some test value, and pass the same test value into your method to make sure it matches (the actual test parameter doesn’t matter for the unit test, only the results).
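
On a related note, if the exact parameter value doesn’t matter for a particular test, Moq lets you match any string instead of a literal:

mockFileSystem.Setup(x => x.DirectoryExists(It.IsAny<string>())).Returns(true);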

Using an IOC Container

This sample is set up to be extremely simple.  I’m assuming that you have existing .Net legacy code and you’re attempting to add unit tests to it.  Normally, legacy code is hopelessly un-unit-testable.  In other words, it’s usually not worth the effort to apply unit tests because of the tightly coupled nature of legacy code.  There are situations where legacy code is not too difficult to add unit tests to.  This can occur if the code is relatively new and the developer(s) took some care in how they built it.  If you are building new code, you can use this same technique from the beginning, but you should also plan your entire project to use an IOC container.  I would not recommend refactoring an existing project to use an IOC container.  That is a level of madness that I have attempted more than once, with many man-hours wasted trying to figure out what was wrong with the scoping of my objects.

If your code is relatively new and you have refactored to use constructors as your injection points, you might be able to adapt to an IOC container.  If you are building your code from the ground up, you should use an IOC container.  Do it now and save yourself the headache of trying to figure out how to inject objects three levels deep.  What am I talking about?  Here’s an example of a program that is tightly coupled:

class Program
{
    static void Main(string[] args)
    {
        var myRootClass = new MyRootClass();

        myRootClass.Increment();

        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}
public class MyRootClass
{
  readonly ChildClass _childClass = new ChildClass();

  public bool CountExceeded()
  {
    if (_childClass.TotalNumbers() > 5)
    {
        return true;
    }
    return false;
  }

  public void Increment()
  {
    _childClass.IncrementIfTempDirectoryExists();
  }
}

public class ChildClass
{
    private int _myNumber;

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}

The example code above is very typical legacy code.  The “Main()” calls the first object, called “MyRootClass()”, and that object calls a child class that uses System.IO.Directory.Exists().  You can use the previous example to unit test the ChildClass for cases when c:\temp exists and when it doesn’t.  When you start to unit test MyRootClass, there’s a nasty surprise.  How do you inject your directory wrapper into that class?  If you have to inject class wrappers and mocked classes for every child class, the constructor of a class could become incredibly large.  This is where IOC containers come to the rescue.

As I’ve explained in other blog posts, an IOC container is like a dictionary of your objects.  When you create your objects, you must create a matching interface for the object.  The index of the IOC dictionary is the interface name that represents your object.  Then you only call other objects using the interface as your data type and ask the IOC container for the object that is in the dictionary.  I’m going to make up a simple IOC container object just for demonstration purposes.  Do not use this for your code, use something like AutoFac for your IOC container.  This sample is just to show the concept of how it all works.  Here’s the container object:

public class IOCContainer
{
  private static readonly Dictionary<string,object> ClassList = new Dictionary<string, object>();
  private static IOCContainer _instance;

  public static IOCContainer Instance => _instance ?? (_instance = new IOCContainer());

  public void AddObject<T>(string interfaceName, T theObject)
  {
    ClassList.Add(interfaceName,theObject);
  }

  public object GetObject(string interfaceName)
  {
    return ClassList[interfaceName];
  }

  public void Clear()
  {
    ClassList.Clear();
  }
}

This object is a singleton object (global object) so that it can be used by any object in your project/solution.  Basically it’s a container that holds all pointers to your object instances.  This is a very simple example, so I’m going to ignore scoping for now.  I’m going to assume that all your objects contain no special dependent initialization code.  In a real-world example, you’ll have to analyze what is initialized when your objects are created and determine how to setup the scoping in the IOC container.  AutoFac has options of when the object will be created.  This example creates all the objects before the program starts to execute.  There are many reasons why you might not want to create an object until it’s actually used.  Keep that in mind when you are looking at this simple example program.
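
For reference, here’s roughly what those creation-time options look like when registering with AutoFac (a sketch for comparison only; you would pick one per registration):

builder.RegisterType<FileSystem>().As<IFileSystem>().SingleInstance();           // one shared instance
builder.RegisterType<FileSystem>().As<IFileSystem>().InstancePerLifetimeScope(); // one instance per scope
builder.RegisterType<FileSystem>().As<IFileSystem>().InstancePerDependency();    // new instance per resolve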

In order to use the above container, we’ll need the same FileSystem object and interface from the previous program.  Then create an interface for MyRootClass and ChildClass.  Next, you’ll need to go through your program and find every location where an object is instantiated (look for the “new” keyword).  Replace those instances like this:

public class ChildClass : IChildClass
{
    private int _myNumber;
    private readonly IFileSystem _fileSystem = (IFileSystem)IOCContainer.Instance.GetObject("IFileSystem");

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}

Instead of creating a new instance of FileSystem, you’ll ask the IOC container to give you the instance that was created for the interface called IFileSystem.  Notice how there is no injection in this object.  AutoFac and other IOC containers have facilities to perform constructor injection automatically.  I don’t want to introduce that level of complexity in this example, so for now I’ll just pretend that we need to go to the IOC container object directly for the main program as well as the unit tests.  You should be able to see the pattern from this example.

Once all your classes are updated to use the IOC container, you’ll need to change your “Main()” to setup the container.  I changed the Main() method like this:

static void Main(string[] args)
{
    ContainerSetup();

    var myRootClass = (IMyRootClass)IOCContainer.Instance.GetObject("IMyRootClass");
    myRootClass.Increment();

    Console.WriteLine(myRootClass.CountExceeded());
    Console.ReadKey();
}

private static void ContainerSetup()
{
    IOCContainer.Instance.AddObject<IChildClass>("IChildClass",new ChildClass());
    IOCContainer.Instance.AddObject<IMyRootClass>("IMyRootClass",new MyRootClass());
    IOCContainer.Instance.AddObject<IFileSystem>("IFileSystem", new FileSystem());
}

Technically the MyRootClass object does not need to be included in the IOC container since no other object is dependent on it.  I included it to demonstrate that all objects should be inserted into the IOC container and referenced from the instance in the container.  This is the design pattern used by IOC containers.  Now we can write the following unit tests:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);

    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);

    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);

    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(true,myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);

    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}

In these unit tests, we put the mocked up object used by the object under test into the IOC container.  I have provided a “Clear()” method to reset the IOC container for the next test.  When you use AutoFac or other IOC containers, you will not need the container object in your unit tests.  That’s because IOC containers like the one built into .Net Core and AutoFac use the constructor of the object to perform injection automatically.  That makes your unit tests easier because you just use the constructor to inject your mocked up object and test your object.  Your program uses the IOC container to magically inject the correct object according to the interface used by your constructor.

Using AutoFac

Take the previous example and create a new constructor for each class and pass the interface as a parameter into the object like this:

private readonly IFileSystem _fileSystem;

public ChildClass(IFileSystem fileSystem)
{
    _fileSystem = fileSystem;
}

Instead of asking the IOC container for the object that matches the interface IFileSystem, I have only setup the object to expect the fileSystem object to be passed in as a parameter to the class constructor.  Make this change for each class in your project.  Next, change your main program to include AutoFac (NuGet package) and refactor your IOC container setup to look like this:

static void Main(string[] args)
{
    IOCContainer.Setup();

    using (var myLifetime = IOCContainer.Container.BeginLifetimeScope())
    {
        var myRootClass = myLifetime.Resolve<IMyRootClass>();

        myRootClass.Increment();

        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}

public static class IOCContainer
{
    public static IContainer Container { get; set; }

    public static void Setup()
    {
        var builder = new ContainerBuilder();

        builder.Register(x => new FileSystem())
            .As<IFileSystem>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new ChildClass(x.Resolve<IFileSystem>()))
            .As<IChildClass>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new MyRootClass(x.Resolve<IChildClass>()))
            .As<IMyRootClass>()
            .PropertiesAutowired()
            .SingleInstance();

        Container = builder.Build();
    }
}

I have ordered the builder.Register commands from the innermost to the outermost object classes.  This is not really necessary, since the resolve will not occur until the IOC container is called by the object to be used.  In other words, you can define MyRootClass first, followed by FileSystem and ChildClass, or in any order you want.  The Register command is just storing your definition of which physical object will be represented by each interface and which dependencies it requires.

Now you can cleanup your unit tests to look like this:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);

    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(true, myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);

    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}

Do not include the AutoFac NuGet package in your unit test project.  It’s not needed.  Each object is isolated from all other objects.  You will still need to mock any injected objects, but the injection occurs at the constructor of each object.  All dependencies have been isolated so you can unit test with ease.

Where to Get the Code

As always, I have posted the sample code up on my GitHub account.  This project contains four different sample projects.  I would encourage you to download each sample and experiment/practice with them.  You can download the samples by following the links listed here:

  1. MockingFileSystem
  2. TightlyCoupledExample
  3. SimpleIOCContainer
  4. AutoFacIOCContainer

.Net MVC Project with AutoFac, SQL and Redis Cache

Summary

In this blog post I’m going to demonstrate a simple .Net MVC project that uses MS SQL Server to access data.  Then I’m going to show how to use Redis caching to cache your results and reduce the amount of traffic hitting your database.  Finally, I’m going to show how to use the AutoFac IOC container to tie it all together and how you can leverage inversion of control to break dependencies and unit test your code.

AutoFac

The AutoFac IOC container can be added to any .Net project using the NuGet package manager.  For this project, I created an empty MVC project and added a class called AutofacBootstrapper to the App_Start directory.  The class contains one static method called Run() just to keep it simple.  This class contains the container builder setup that is described in the AutoFac Quick Start instructions.

Next, I added .Net library projects to my solution for the following purposes:

BusinessLogic – This will contain the business classes that will be unit tested.  All other projects will be nothing more than wire-up logic.

DAC – Data-tier Application.

RedisCaching – Redis backed caching service.

StoreTests – Unit testing library

I’m going to intentionally keep this solution simple and not make an attempt to break dependencies between dlls.  If you want to break dependencies between modules or dlls, you should create another project to contain your interfaces.  For this blog post, I’m just going to use the IOC container to ensure that I don’t have any dependencies between objects so I can create unit tests.  I’m also going to make this simple by only providing one controller, one business logic method and one unit test.

Each .Net project will contain one or more objects and each object that will be referenced in the IOC container must use an interface.  So there will be the following interfaces:

IDatabaseContext – The Entity Framework database context object.

IRedisConnectionManager – The Redis connection manager provides a pooled connection to a redis server.  I’ll describe how to install Redis for windows so you can use this.

IRedisCache – This is the cache object that will allow the program to perform caching without getting into the ugly details of reading and writing to Redis.

ISalesProducts – This is the business class that will contain one method for our controller to call.

Redis Cache

In the sample solution there is a project called RedisCaching.  This contains two classes: RedisConnectionManager and RedisCache.  The connection manager object will need to be set up in the IOC container first.  It needs the Redis server IP address, which would normally be read from a config file.  In the sample code, I fed the IP address into the constructor at the IOC container registration stage.  The second part of the Redis caching is the actual cache object.  This uses the connection manager object and is set up in the IOC container next, using the previously registered connection manager as a parameter like this:

builder.Register(c => new RedisConnectionManager("127.0.0.1"))
    .As<IRedisConnectionManager>()
    .PropertiesAutowired()
    .SingleInstance();

builder.Register(c => new RedisCache(c.Resolve<IRedisConnectionManager>()))
    .As<IRedisCache>()
    .PropertiesAutowired()
    .SingleInstance();

In order to use the cache, just wrap your query with syntax like this:

return _cache.Get("ProductList", 60, () =>
{
  return (from p in _db.Products select p.Name);
});

The code between the { and } represents the normal EF LINQ query.  The result must be returned by the anonymous function: () =>

The cache key name in the example above is “ProductList” and it will stay in the cache for 60 minutes.  The _cache.Get() method will check the cache first; if the data is there, then it returns the data and moves on.  If the data is not in the cache, then it calls the inner function, causing the EF query to be executed.  The result of the query is then saved to the cache server and the result is returned.  This guarantees that any repeat query within the next 60 minutes will be served directly from the cache.  If you dig into the Get() method code, you’ll notice that there are multiple try/catch blocks that catch failures if the Redis server is down.  For a situation where the server is down, the inner query will be executed and the result will be returned.  In a production situation your system would run a bit slower and you’d notice your database working harder, but the system keeps running.
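
The actual Get() method lives in the sample code, but the pattern looks roughly like this (a simplified sketch assuming StackExchange.Redis and Json.NET, and assuming the connection manager exposes a GetDatabase() helper):

public T Get<T>(string keyName, int cacheTimeMinutes, Func<T> queryFunction)
{
    try
    {
        // check the cache first
        var cachedValue = _connectionManager.GetDatabase().StringGet(keyName);
        if (cachedValue.HasValue)
        {
            return JsonConvert.DeserializeObject<T>(cachedValue);
        }
    }
    catch (Exception)
    {
        // Redis is down; fall through and run the real query
    }

    // cache miss (or Redis failure): execute the inner EF query
    var result = queryFunction();

    try
    {
        // save the result for the next caller
        _connectionManager.GetDatabase().StringSet(
            keyName,
            JsonConvert.SerializeObject(result),
            TimeSpan.FromMinutes(cacheTimeMinutes));
    }
    catch (Exception)
    {
        // saving to the cache failed; the caller still gets the data
    }

    return result;
}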

A precompiled version of Redis for Windows can be downloaded from here: Service-Stack Redis.  Download the files into a directory on your computer (I used C:\redis), then open a command window, navigate into your directory and use the following command to set up a Windows service:

redis-server --service-install

Please notice that there are two “-” characters in front of the “service-install” instruction.  Once this is set up, Redis will start every time you start your PC.

The Data-tier

The DAC project contains the POCOs, the fluent configurations and the context object.  There is one interface for the context object and that’s for AutoFac’s use:

builder.Register(c => new DatabaseContext("Server=SQL_INSTANCE_NAME;Initial Catalog=DemoData;Integrated Security=True"))
    .As<IDatabaseContext>()
    .PropertiesAutowired()
    .InstancePerLifetimeScope();

The connection string should be read from the configuration file before being injected into the constructor shown above, but I’m going to keep this simple and leave out the configuration pieces.

Business Logic

The business logic library is just one project that contains all the complex classes and methods that will be called by the API.  In a large application you might have two or more business logic projects.  Typically though, you’ll divide your application into independent APIs that will each have their own business logic project as well as all the other wire-up projects shown in this example.  By dividing your application by function, you’ll be able to scale your services according to which function uses the most resources.  In summary, you’ll put all the complicated code inside this project, and your goal is to apply unit tests that cover all combinations of features that this business logic project contains.

This project will be wired up by AutoFac as well and it needs the caching and the data tier to be established first:

builder.Register(c => new SalesProducts(c.Resolve<IDatabaseContext>(), c.Resolve<IRedisCache>()))
    .As<ISalesProducts>()
    .PropertiesAutowired()
    .InstancePerLifetimeScope();

As you can see, the database context and the Redis cache are injected into the constructor of the SalesProducts class.  Typically, each class in your business logic project will be registered with AutoFac.  That ensures that you can treat each object independently of the others for unit testing purposes.

Unit Tests

There is one sample unit test that performs a test on the SalesProducts.Top10ProductNames() method.  This test only covers the case where there are more than 10 products and the expected count is going to be 10.  For effective testing, you should also test fewer than 10, zero, and exactly 10.  The database context is mocked using Moq.  The Redis caching system is faked using the interfaces supplied by StackExchange.  I chose to set up a dictionary inside the fake object to simulate a cached data point.  There is no check for cache expiration; this is only used to fake out the caching.  Technically, I could have mocked the caching and just made it return whatever went into it.  The fake cache can be effective in testing edit scenarios to ensure that the cache is cleared when someone adds, deletes or edits a value.  The business logic should handle cache clearing, and a unit test should check for this case.

Other Tests

You can test whether the real Redis cache is working by starting up SQL Server Management Studio and running the SQL Server Profiler.  Clear the profiler, then start the MVC application.  You should see some query activity:

Then stop the MVC program and start it again.  There should be no change to the profiler because the data is coming out of the cache.

One thing to note: you cannot use IQueryable as a return type for your query.  It must be a list, because the data read from Redis is in JSON format and it’s deserialized all at once.  You can serialize and deserialize a List<T> object.  I would recommend adding a logger to the cache object to catch errors like this (since there are try/catch blocks).
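
In other words, materialize the query before it goes into the cache, like this variation of the earlier snippet:

return _cache.Get("ProductList", 60, () =>
{
  return (from p in _db.Products select p.Name).ToList();
});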

Another aspect of using an IOC container that you need to be conscious of is the scope.  This can come into play when you are deploying your application to a production environment.  Typically developers do not have the ability to easily test multi-user situations, so an object that has a scope that is too long can cause cross-over data.  If, for instance, you set your business logic to have a scope of SingleInstance() and then you required your list to be special to each user accessing your system, then you’ll end up with the data of the first person who accessed the API.  This can also happen if your API receives an ID to your data for each call.  If the object only reads the data when the API first starts up, then you’ll have a problem.  This sample is so simple that it only contains one segment of data (top 10 products).  It doesn’t matter who calls the API, they are all requesting the same data.

Other Considerations

This project is very minimalist, therefore, the solution does not cover a lot of real-world scenarios.

  • You should isolate your interfaces by creating a project just for all the interface classes.  This will break dependencies between modules or dlls in your system.
  • As I mentioned earlier, you will need to move all your configuration settings into the web.config file (or a corresponding config.json file).
  • You should think in terms of two or more instances of this API running at once (behind a load-balancer).  Will there be data contention?
  • Make sure you check for any memory leaks.  IOC containers can make your code logic less obvious.
  • Be careful of initialization code in an object that is started by an IOC container.  Your initialization might occur when you least expect it to.

Where to Get The Code

You can download the entire solution from my GitHub account by clicking here.  You’ll need to change the database instance in the code and you’ll need to set up a Redis server in order to use the caching feature.  A SQL Server script is provided so you can create a blank test database for this project.


DotNet Core vs. NHibernate vs. Dapper Smackdown!

The Contenders

Dapper

Dapper is a hybrid ORM.  This is a great ORM for those who have a lot of ADO legacy code to convert.  Dapper uses SQL queries, and parameters can be used just like ADO, but the parameters to a query can be simplified into POCOs.  Select queries in Dapper can also be translated into POCOs.  Converting legacy code can be accomplished in steps, because the initial pass of conversion from ADO is to add Dapper, followed by a step to add POCOs, then to change queries into LINQ (if desired).  The speed difference in my tests shows that Dapper is better than my implementation of ADO for select queries, but slower for inserts and updates.  I would expect ADO to perform the best, but there is probably a performance penalty for using the DataSet adapter instead of the straight SqlCommand method.
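
To give a flavor of the API, here’s a minimal Dapper select (a sketch assuming a hypothetical Product POCO and the Dapper NuGet package with using Dapper;):

using (var connection = new SqlConnection(connectionString))
{
    // Dapper maps each result row onto the Product POCO by column name
    var products = connection.Query<Product>(
        "SELECT Id, Name FROM Product WHERE Price > @MinPrice",
        new { MinPrice = 10.0m }).ToList();
}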

If you’re interested in Dapper you can find information here: Stack Exchange/Dapper.   Dapper has a NuGet package, which is the method I used for my sample program.

ADO

I rarely use ADO these days, with the exception of legacy code maintenance or if I need to perform some sort of bulk insert operation for a back-end system.  Most of my projects are done in Entity Framework, using the .Net Core or the .Net version.  This comparison doesn’t feel complete without including ADO, even though my smackdown series is about ORM comparisons.  So I assembled a .Net console application with some ADO objects and ran a speed test with the same data as all the ORM tests.

NHibernate

NHibernate is the .Net version of Hibernate.  This is an ORM that I used at a previous company I worked for.  At the time, it was faster than Entity Framework 6 by a large amount.  The .Net Core version of Entity Framework has fixed the performance issues of EF, and it no longer makes sense to use NHibernate.  I am providing the numbers in this test just for comparison purposes.  NHibernate is still faster than ADO and Dapper for everything except the select.  Both EF-7 and NHibernate are so close in performance that I would have to conclude that they are the same.  The version of NHibernate used for this test is the latest version as of this post (version 4.1.1 with Fluent 2.0.3).

Entity Framework 7 for .Net Core

I have updated the NuGet packages for .Net Core for this project and re-tested the code to make sure the performance has not changed over time.  The last time I did a smackdown with EF .Net Core I was using .Net Core version 1.0.0; now I’m using .Net Core 1.1.1.  There were no measurable changes in performance for EF .Net Core.

The Results

Here are the results side-by-side with the .ToList() method helper and without:

Test for Yourself!

First, you can download the .Net Core version by going to my GitHub account here and downloading the source.  There is a SQL script file in the source that you can run against your local MS SQL server to setup a blank database with the correct tables.  The NHibernate speed test code can also be downloaded from my GitHub account by clicking here. The ADO version is here.  Finally, the Dapper code is here.  You’ll want to open the code and change the database server name.


Dot Net Core and NuGet Packages

One of the most frustrating changes made in .Net Core is the NuGet package manager.  I like the way the new package manager works; unfortunately, it still has a lot of bugs.  I call it the tyranny of intelligent software.  The software is supposed to do all the intelligent work for you, leaving you to do your work as a developer and create your product.  Unfortunately, one or more bugs cause errors to occur, and then you have to out-think the smart software and try to figure out what it was supposed to do.  When smart software works, it’s magical.  When it doesn’t work, life as a developer can be miserable.

I’m going to show you how the NuGet manager works in .Net Core and I’ll show you some tricks you’ll need to get around problems that might arise.  I’m currently using Visual Studio 2015 Community.

The Project.json File

One of the great things about .Net Core is the new project.json file.  You can literally type or paste in the name of the NuGet package you want to include in your project and it will synchronize all the dlls that are needed for that package.  If you look closely, you’ll see a triangle next to the file.  There is another file that is automatically maintained by the package manager called the project.lock.json file.  This file is excluded from TFS check-in because it can be automatically re-generated from the project.json file.  You can open the file up and observe the thousands of lines of json data that are stuffed into it.  Sometimes this file contains old versions of dlls, especially if you created your project months ago and now you want to update your NuGet packages.  If the dependencies in your project.json file are all flagged as errors, there could be a conflict in the lock file.  You can hold your cursor over any NuGet package and see what the error is, but sometimes that is not very helpful.

To fix this issue, you can regenerate the lock file.  Just delete the file from the Solution Explorer.  Visual Studio should automatically restore the file.  If not, then open up your Package Manager Console window.  It should be at the bottom of Visual Studio, or you can go to “Tools -> NuGet Package Manager -> Package Manager Console”.  Type “dotnet restore” in the console window and wait for it to complete.

The NuGet Local Cache

When the package manager brings in packages from the Internet, it keeps a copy of each package in a cache.  This is the first place where the package manager will look for a package.  If you use the same package in another project, you’ll notice that it doesn’t take as much time to install it as it did the first time.  The directory is under your user directory.  Go to c:\Users, then find the directory with your user name (the name you’re currently logged in as, or the user name that was set up for your computer when you installed your OS).  Then you’ll see a folder named “.nuget”.  Open that folder and drill down into “packages”.  You should see about a billion folders with packages that you’ve installed since you started using .Net Core.  You can select all of these and delete them.  Then you can go back to your solution and restore packages.  It’ll take longer than normal to restore all your packages because they must be read from the Internet first.

An easier method of clearing this cache is to go to the Package Manager Console and type in:

nuget locals all -clear

If you have your .nuget/packages folder open, you’ll see all the sub directories disappear.

If the nuget command does not work in Visual Studio, you’ll have to download the NuGet.exe file from here.  Get the latest recommended version.  Then search for your NuGet execution directory.  For VS2015 it is:

C:\Program Files (x86)\NuGet\Visual Studio 2015

Drop the EXE file in that directory (there is probably already a vsix file in there).  Then make sure that your system path contains the directory.  I use Rapid Environment Editor to edit my path; you can download and install that application from here.  Once you have added the directory to your PATH, exit Visual Studio and start it back up again.  Now the “nuget” command should work in the Package Manager Console command line.

NuGet Package Sources

If you look at the Package Manager Console window, you’ll see a drop-down that normally shows “nuget.org”.  There is a gear icon next to the drop-down.  Click it and you’ll see the “Package Sources” window.  This window has the list of locations where NuGet packages will be searched for.  You can add your own local packages if you would like, then add that directory to this list.  You can also update the list with URLs that are shown at the NuGet site.  Go to www.nuget.org and look for the “NuGet Feed Locations” header.  Below that is a list of URLs that you can put into the package sources window.  As of this blog post, there are two URLs:

Sometimes you’ll get an error when the package manager attempts to update your packages.  If this occurs, it could be due to a broken URL to a package site.  There is little you can do about the NuGet site; if it’s down, you’re out of luck.  Fortunately, that’s a rare event.  For local package feeds, you can temporarily turn them off (assuming your project doesn’t use any packages from your local feed).  To turn off one feed, go to the “Package Sources” window and just uncheck the check box.  Just selecting one package feed from the drop-down does not prevent the package manager from checking, and failing on, a bad local feed.

Restart Visual Studio

One other trick that I’ve learned, is to restart Visual Studio.  Sometimes the package manager just isn’t behaving itself.  It can’t seem to find any packages and your project has about 8,000 errors consisting of missing dependencies.  In this instance, I’ll clear the local cache and then close Visual Studio.  Then re-open Visual Studio with my solution and perform a package restore.

Package Dependency Errors

Sometimes there are two or more versions of the same package in your solution.  This can cause dependency errors that are tough to find.  You’ll get a dependency error when one project references a newer version of a package than another project that your current project depends on.

To find these problems, you can right click on the solution and select “Manage NuGet Packages for Solution…” then click on each package name and look at the check boxes on the right.  If you see two different versions, update all projects to the same version:

Finally

I hope these hints save you a lot of time and effort when dealing with packages.  The problems that I’ve listed here have appeared in my projects many times.  I’m confident you’ll run into them as well.  Be sure and hit the like button if you found this article to be helpful.


DBContextOptionsBuilder does not contain a definition for ‘UseSqlServer’

Attempting to use the correct NuGet packages for your code in .Net Core can be challenging.  In this instance there is no project.json error, and yet this one method is missing:

This will happen when your EF database project contains at least these two NuGet packages:

    "dependencies": {
        "Microsoft.EntityFrameworkCore": "1.1.1",
        "NETStandard.Library": "1.6.1"
    },

What’s missing is the SQL Server package.  It took some trial and error to find the right version:

    "dependencies": {
        "Microsoft.EntityFrameworkCore": "1.1.1",
        "Microsoft.EntityFrameworkCore.SqlServer": "1.1.1",
        "NETStandard.Library": "1.6.1"
    },

The easiest way to find the latest version of your packages is to delete the version number from the first decimal point to the end, then type a “.”:

As you can see from the drop-down that appears, version 1.1.1 is the current latest version (by the time you read this, there could be a newer one).  When I was attempting to fix this problem, there were a lot of forum posts indicating that you needed to add “using Microsoft.Data.Entity;” but that’s not the solution in this instance.
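
For reference, once the SqlServer package is installed, the UseSqlServer() extension method becomes available through the Microsoft.EntityFrameworkCore namespace, something like this (the connection string shown is just a placeholder):

using Microsoft.EntityFrameworkCore;

public class DatabaseContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer("Data Source=YOURSQLINSTANCE;Initial Catalog=DATABASENAME;Integrated Security=True");
    }
}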

I’m posting this on my blog so I have a reference if I run into this problem again.  Hopefully this will help those who got stuck on this crazy minor issue and can’t find a working solution.


Dot Net Core Using the IOC Container

I’ve talked about Inversion Of Control in previous posts, but I’m going to go over it again.  If you’re new to IOC containers, breaking dependencies and unit testing, then this is the blog post you’ll want to read.  So let’s get started…

Basic Concept of Unit Testing

Developing and maintaining software is one of the most complex tasks ever performed by humans.  Software can grow to proportions that cannot be understood by any one person at a time.  To compound the issue of maintaining and enhancing code, there is the problem that one small change in code can affect the operation of something that seems unrelated.  Engineers that build something physical, like, say, a jumbo jet, can identify a problem and fix it.  They usually don’t expect a problem with the wing to affect the passenger seats.  In software, all bets are off.  So there needs to be a way to test everything when a small change is made.

The reason you want to create a unit test is to put in place a tiny automatic regression test.  This test is executed every time you change code to add an enhancement.  If you change some code, the test runs and ensures that you didn’t break a feature that you already coded and tested previously.  Each time you add one feature, you add a unit test.  Eventually, you end up with a collection of unit tests covering each combination of features used by your software.  These tests ride along with your source code forever.  Ideally, you want to always regression test every piece of logic that you’ve written.  In theory this will prevent you from breaking existing code when you add a new enhancement.

To ensure that you are unit testing properly, you need to understand coverage.  Coverage is not everything, but it’s a measurement of how much of your code is covered by your unit tests, and you should strive to maximize it.  There are tools that can measure this for you, though some are expensive.  One aspect of coverage that you need to be aware of is the combination “if” statement:

if (input == 'A' || input =='B')
{
    // do something
}

This is a really simple example, but your unit test suite might contain a test that feeds the character ‘A’ into the input, and you’ll get coverage for the inner part of the if statement.  However, you have not tested when the input is ‘B’, and that input might be used by other logic in a slightly different way.  Technically, we don’t have 100% coverage.  I just want you to be aware that this issue exists and you might need to do some analysis of your code coverage when you’re creating unit tests.
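
To close that gap, you’d want a test for each input that can satisfy the condition.  Here’s a sketch using MSTest-style attributes, assuming the if statement above lives in a hypothetical InputProcessor.ProcessInput() method that returns true when the inner branch is taken:

[TestMethod]
public void test_input_a()
{
    var processor = new InputProcessor();
    Assert.IsTrue(processor.ProcessInput('A'));
}

[TestMethod]
public void test_input_b()
{
    var processor = new InputProcessor();
    Assert.IsTrue(processor.ProcessInput('B'));
}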

One more thing about unit tests and this is very important to keep in mind.  When you deploy this software and bugs are reported, you will need to add a unit test for each bug reported.  The unit test must break your code exactly the way the bug did.  Then you fix the bug and that prevents any other developer from undoing your bug fix.  Of course, your bug fix will be followed by another unit test suite run to make sure you didn’t break any thing else.  This will help you make forward progress in your quest for bug-free or low-bug software.

Dependencies

So you’ve learned the basics of unit test writing and you’re creating objects and putting one or more unit tests on each method.  Suddenly you run into an issue: your object connects to a device for input.  An example is that you read from a text file or you connect to a database to read and write data.  Your unit test should never cause files to be written or data to be written to a real database.  It’s slow, and the data being written would need to be cleaned out when the test completed.  What if the tests fail?  Your test data might still be in the database.  Even if you set up a test database, you would not be able to run two versions of your unit tests at the same time (think of two developers executing their local copy of the unit test suite).

The device being used is called a dependency.  The object depends on the device and cannot operate properly without it.  To get around dependencies, we create a fake or mock database, or a fake file I/O object, to put in place of the real thing when we run our unit tests.  The problem is that we need to somehow tell the object under test to use the fake or mock instead of the real thing, and the object must still default to the real database or file I/O when not under test.

The current trend in breaking dependencies involves a technique called Inversion of Control, or IOC.  What IOC does is allow us to define all object creation points at program startup time.  When unit tests are run, we substitute fakes for the objects that perform database and I/O functions.  Then we call our objects under test, and the IOC system takes care of wiring the correct dependencies together.  Sounds easy.

IOC Container Basics

Here are the basics of how an IOC container works.  I’m going to cut out all the complications involved and keep this super simple.

First, there's the container.  This is a dictionary of interfaces and classes that is used as a lookup.  Basically, you create your object and then you create a matching interface for it.  When you call one object from another, you use the interface to look up which class to instantiate.  Consider an object A that depends on an object B.

Here’s a tiny code sample:

public class A
{
  public void MyMethod()
  {
    var b = new B();

    b.DependentMethod();
  }
}

public class B
{
  public void DependentMethod()
  {
    // do something here
  }
}

As you can see, class B is created inside class A.  To break the dependency we need to create an interface for each class and add them to the container:

public interface IB
{
  void DependentMethod();
}

public interface IA
{
  void MyMethod();
}

Inside Program.cs:

// requires: using Microsoft.Extensions.DependencyInjection;
var serviceProvider = new ServiceCollection()
  .AddSingleton<IB, B>()
  .AddSingleton<IA, A>()
  .BuildServiceProvider();

var a = serviceProvider.GetService<IA>();
a.MyMethod();

Then modify the existing objects to use the interfaces and provide for the injection of B into object A:

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public void MyMethod()
  {
    _b.DependentMethod();
  }
}

public class B : IB
{
  public void DependentMethod()
  {
    // do something here
  }
}

The service collection object is where all the magic occurs.  This object is filled with definitions of which interface is matched with which class.  As you can see from the insides of class A, there is no longer any reference to class B anywhere.  Only the interface is used to reference the object that is passed into the constructor (this is called injection) and conforms to IB (interface B).  The service collection looks up IB, sees that it needs to create an instance of B, and passes that along.  When MyMethod() executes in A, it just calls _b.DependentMethod() without worrying about the actual instance behind _b.  What does that do for us when we are unit testing?  Plenty.

Mocking an Object

Now I'm going to use a NuGet package called Moq.  This framework is exactly what we need because it can take an interface and create a fake object whose outputs we can simulate.  First, let's modify our A and B class methods to return some values:

public class B : IB
{
  public int DependentMethod()
  {
    return 5;
  }
}

public interface IB
{
  int DependentMethod();
}

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public int MyMethod()
  {
    return _b.DependentMethod();
  }
}

public interface IA
{
  int MyMethod();
}

I have purposely kept this so simple that there's nothing really being done.  As you can see, DependentMethod() just returns the number 5 in real life.  Your method might perform a calculation and return the result, generate a random number, or read a value from your database.  This example just returns 5, and we don't care about that value because our mock object will return any value we want for the unit test being written.

Now the unit test using Moq looks like this:

[Fact]
public void ClassATest1()
{
    var mockedB = new Mock<IB>();
    mockedB.Setup(b => b.DependentMethod()).Returns(3);

    var a = new A(mockedB.Object);

    Assert.Equal(3, a.MyMethod());
}

The first line of the test creates a mock of object B called "mockedB".  The next line sets up a fake return value for any call to the DependentMethod() method.  Next, we create an instance of class A (the real class) and inject the mocked B object into it.  We're not using the container for the unit test because we don't need to.  Technically, we could create a container and register the mocked B object as one of the service collection items, but this is simpler.  Keep your unit tests as simple as possible.

Now that there is an instance of class A called "a", we can assert that a.MyMethod() returns 3.  If it does, then we know that object "a" called the mocked object instead of a real instance of class B (since the real B always returns 5).

Where to Get the Code

As always you can get the latest code used by this blog post at my GitHub account by clicking here.


Dot Net Core In Memory Unit Testing Using xUnit

When I started using .Net Core and xUnit, I found it difficult to find information on how to mock or fake the Entity Framework database code.  So I'm going to show a minimized code sample using xUnit, Entity Framework, and an in-memory database with .Net Core.  I'm only going to set up two projects: DataSource and UnitTests.

The DataSource project contains the repository, domain and context objects necessary to connect to a database using Entity Framework.  Normally you would not unit test this project; it is supposed to be set up as a group of pass-through objects and interfaces.  I'll set up POCOs (Plain Old C# Objects) and their entity mappings to show how to keep your code as clean as possible.  There should be no business logic in this entire project.  In your solution, you should create one or more business projects to contain the actual logic of your program.  Those projects will contain the objects under unit test.

The UnitTests project speaks for itself.  It will contain the in-memory Entity Framework fake code, some test data, and a sample of two unit tests.  Why two tests?  Because it's easy to create a demonstration with one unit test.  Two tests demonstrate how to ensure that your test data initializer doesn't accidentally get called twice (creating twice as much data).

The POCO

I've written about Entity Framework before, and usually I'll use data annotations, but POCOs are much cleaner.  If you look at some of my blog posts about NHibernate, you'll see the POCO technique used.  Using POCOs means that you'll also need to set up a separate class of mappings for each table, which keeps your code separated into logical parts.  For my sample, I'll put the mappings into the Repository folder and call them TablenameConfig.  The mapping class will be a static class so that I can apply the mappings through an extension method.  I'm getting ahead of myself, so let's start with the POCO:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}

That's it.  If you have the database defined, you can use a mapping or POCO generator to create this code and just paste each table into its own C# source file.  All the POCO objects live in the Domain folder (there's only one here: the Product table POCO).

The Mappings

The mappings file looks like this:

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource.Repository
{
    public static class ProductConfig
    {
        public static void AddProduct(this ModelBuilder modelBuilder, string schema)
        {
            modelBuilder.Entity<Product>(entity =>
            {
                entity.ToTable("Product", schema);

                entity.HasKey(p => p.Id);

                entity.Property(e => e.Name)
                    .HasColumnName("Name")
                    .IsRequired(false);

                entity.Property(e => e.Price)
                    .HasColumnName("Price")
                    .IsRequired(false);
            });
        }
    }
}

That is the whole file, so now you know what to include in your usings.  This class provides an extension method for the ModelBuilder object.  Basically, it's called like this:

modelBuilder.AddProduct("dbo");

I passed the schema as a parameter.  If you are only using the dbo schema, then you can remove the parameter and hard-code "dbo" inside the ToTable() method.  You can and should expand your mappings to include relational integrity constraints (a sketch follows below).  The purpose of creating a mirror of your database constraints in Entity Framework is to give you a heads-up as early as compile time if you are violating a constraint when you write your LINQ queries.  In the "good ol' days," when accessing a database from code meant building a string to pass directly to MS SQL Server (remember ADO?), you didn't know you had broken a constraint until run time.  That makes testing more difficult, since you have to keep every constraint in mind while you're focused on creating your business code.  By creating each table as a POCO with a set of mappings, you can focus on your database code first.  Then, when you are focused on your business code, you can ignore constraints, because they won't ignore you!
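As a sketch of what a relational constraint might look like, here's a mapping for a hypothetical Order table with a foreign key to Product (neither the Order POCO nor this mapping is part of the sample code):

public static class OrderConfig
{
    public static void AddOrder(this ModelBuilder modelBuilder, string schema)
    {
        modelBuilder.Entity<Order>(entity =>
        {
            entity.ToTable("Order", schema);

            entity.HasKey(o => o.Id);

            // mirror the database foreign key so EF is aware of the relationship
            entity.HasOne(o => o.Product)
                .WithMany()
                .HasForeignKey(o => o.ProductId);
        });
    }
}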

The EF Context

Sometimes I start by writing my context first, then create all the POCOs and then the mappings, which is kind of a top-down approach.  In this example, I'm pretending it was done the other way around.  You can do it either way.  The context for this sample looks like this:

using DataSource.Domain;
using DataSource.Repository;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public class StoreAppContext : DbContext, IStoreAppContext
    {
        public StoreAppContext(DbContextOptions<StoreAppContext> options)
        : base(options)
        {

        }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.AddProduct("dbo");
        }
    }
}

You can see immediately how I put the mapping setup code inside the OnModelCreating() method.  As you add POCOs, you'll need one of these calls for each table.  There is also an EF context interface defined (IStoreAppContext), which is never actually used in my unit tests.  The interface is for the actual code of your program.  For instance, if you set up an API, you're going to end up using an IOC container to break dependencies.  In order to do that, you'll reference the interface in your code and then define which object belongs to that interface in your container setup, like this:

services.AddScoped<IStoreAppContext>(provider => provider.GetService<StoreAppContext>());
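The interface definition itself isn't shown in this post; a minimal sketch would just expose whatever context members your business code needs, something like this:

public interface IStoreAppContext
{
    DbSet<Product> Products { get; set; }

    int SaveChanges();
}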

If you haven't used IOC containers before, you should know that the AddScoped() call above adds an entry to a dictionary of interfaces and objects for the application to use.  In this instance, the entry for IStoreAppContext matches the object StoreAppContext, so any object that references IStoreAppContext will end up getting an instance of StoreAppContext.  But IOC containers are not what this blog post is about (I'll create a blog post on that subject later).  So let's move on to the unit tests, which is what this blog post is really about.

The Unit Tests

As I mentioned earlier, you're not actually going to write unit tests against your database repository; it's redundant.  What you're attempting to do is write a unit test covering a feature of your business logic, and the database is getting in your way because your business object calls the database in order to make a decision.  What you need is a fake database in memory that contains the exact data you want your object to see, so you can check whether it makes the correct decision.  You want to create unit tests for each tiny decision made by your objects and methods, and you want to be able to feed a different set of data to each test, or set up one large set of test data and use it for many tests.

Here’s the first unit test:

[Fact]
public void TestQueryAll()
{
    var temp = (from p in _storeAppContext.Products select p).ToList();

    Assert.Equal(2, temp.Count);
    Assert.Equal("Rice", temp[0].Name);
    Assert.Equal("Bread", temp[1].Name);
}

I'm using xUnit, and this test just checks that there are two items in the Product table, one named "Rice" and the other named "Bread".  The _storeAppContext variable needs to be a valid Entity Framework context, and it must be connected to an in-memory database (the in-memory provider ships in the Microsoft.EntityFrameworkCore.InMemory NuGet package); we don't want to be changing a real database when we unit test.  The code for setting up the in-memory data looks like this:

var builder = new DbContextOptionsBuilder<StoreAppContext>()
    .UseInMemoryDatabase(); // newer EF Core versions require a database name here
Context = new StoreAppContext(builder.Options);

Context.Products.Add(new Product
{
    Name = "Rice",
    Price = 5.99m
});
Context.Products.Add(new Product
{
    Name = "Bread",
    Price = 2.35m
});

Context.SaveChanges();

This is just a code snippet; I'll show how it fits into your unit test class in a minute.  First, a DbContextOptionsBuilder object is built (builder).  This gets you an in-memory database with the tables defined in the mappings of StoreAppContext.  Next, you create the context that you'll be using for your unit tests from builder.Options.  Once the context exists, you can pretend you're connected to a real database: just add items and save them.  I would create classes for each set of test data and put them in a directory in your unit test project (usually I call the directory TestData).

Now, you're probably thinking: I can just call this code from each of my unit tests.  Which leads to the thought: I can just put this code in the unit test class constructor.  That sounds good; however, the test runner constructs your test class for each test method, so you would end up adding to the existing in-memory database over and over.  Your first unit test would see two rows of Product data, the second would see four rows.  Go ahead and copy the above code into your constructor and see what happens: TestQueryAll() will fail because there will be 4 records instead of the expected 2.  So how do we make sure the initializer is executed only once for the whole test class, starting with the first test call?  That's where IClassFixture comes in.  This is an interface used by xUnit, and you basically add it to your unit test class like this:

public class StoreAppTests : IClassFixture<TestDataFixture>
{
    // unit test methods
}

Then you define your test fixture class like this:

using System;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace UnitTests
{
    public class TestDataFixture : IDisposable
    {
        public StoreAppContext Context { get; set; }

        public TestDataFixture()
        {
            var builder = new DbContextOptionsBuilder<StoreAppContext>()
                .UseInMemoryDatabase();
            Context = new StoreAppContext(builder.Options);

            Context.Products.Add(new Product
            {
                Name = "Rice",
                Price = 5.99m
            });
            Context.Products.Add(new Product
            {
                Name = "Bread",
                Price = 2.35m
            });

            Context.SaveChanges();
        }

        public void Dispose()
        {
            // nothing to tear down; the in-memory database goes away with the context
        }
    }
}

Next, you’ll need to add some code to the unit test class constructor that reads the context property and assigns it to an object property that can be used by your unit tests:

private readonly StoreAppContext _storeAppContext;

public StoreAppTests(TestDataFixture fixture)
{
    _storeAppContext = fixture.Context;
}

What happens is that xUnit calls the constructor of the TestDataFixture object one time.  This creates the context and assigns it to the fixture property.  The constructor of the unit test class is then called for each unit test, but it only copies the fixture's context property into the test class so that the test methods can reference it.  Now run your unit tests and you'll see that the same data is available for each unit test.

One thing to keep in mind: you'll need to tear down and rebuild your data for each unit test if the test calls a method that inserts or updates your test data.  For that setup, you can use the test fixture to populate static lookup tables (tables not modified by any of your business logic).  Then create a data initializer and a data destroyer that fill and clear the tables that are modified by your unit tests.  The data initializer is called inside the unit test class constructor, and the destroyer needs to be called in a disposer, as sketched below.
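Here's a sketch of that arrangement (the TestData helper methods are hypothetical placeholders for your own fill and clear code):

public class StoreAppTests : IClassFixture<TestDataFixture>, IDisposable
{
    private readonly StoreAppContext _storeAppContext;

    public StoreAppTests(TestDataFixture fixture)
    {
        _storeAppContext = fixture.Context;

        // hypothetical helper that fills the tables your tests modify
        TestData.PopulateMutableTables(_storeAppContext);
    }

    // xUnit calls Dispose() after each test method
    public void Dispose()
    {
        // hypothetical helper that clears those same tables again
        TestData.ClearMutableTables(_storeAppContext);
    }

    // unit test methods go here
}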

Where to Get the Code

You can get the complete source code from my GitHub account by clicking here.


Get ASP.Net Core Web API Up and Running Quickly

Summary

I'm going to show you how to set up your environment so you can get results from an API using ASP.Net Core quickly.  I'll also discuss ways to troubleshoot issues and get logging and troubleshooting tools working quickly.

ASP.Net Core Web API

Web API has been around for quite some time, but a lot of changes were made for .Net Core applications.  If you're new to the world of developing APIs, you'll want to get your troubleshooting tools up quickly.  As a seasoned API designer, I usually focus on getting my tools and logging working first.  I know that I'm going to need these tools to troubleshoot, and there is nothing worse than trying to install a logging system after writing a ton of code.

First, create a .Net API application using Visual Studio 2015 Community edition.  You can follow these steps:

Create a new .Net Core Web Application Project:

Next, you’ll see a screen where you can select the web application project type (select Web API):

A template project will be generated, and you'll have one controller called ValuesController.  This is a sample REST interface that you can model other controllers on.  You'll want to set up Visual Studio so you can run the project and use break-points.  To do that, change the IIS Express setting in the drop-down on your menu bar:

Select the name of the project that appears below IIS Express in the drop-down.  This will be the same as the name you gave your project when you created it.

Your next task is to create a consumer that will connect to your API, send data and receive results.  A standard .Net Console application will do; it doesn't need to be fancy.  It's just a throw-away application that you'll use for testing purposes only.  You can use the same application to test your installed API just by changing the URL parameter.  Here's how you do it:

Create a Console application:

Give it a name and hit the OK button.

Download this C# source file by clicking here.  You can create a cs file in your console application and paste this object into it (or download my GitHub example by clicking here).  This web client is not strictly necessary; you can use the plain WebClient object, but this one can handle cookies, just in case you decide you need to pass a cookie for one reason or another.

Next, you can setup a url at the top of your Program.cs source:

private static string url = "http://localhost:5000";

The default address is always this one, including the port number (the port does not rotate), unless you override it in the settings.  To change it, go into the project properties of your API project, select the Debug tab, and change it there.
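If you'd rather pin the address in code, the WebHostBuilder template from that era accepts a UseUrls() call.  A sketch, assuming the standard generated Program.cs:

var host = new WebHostBuilder()
    .UseKestrel()
    .UseUrls("http://localhost:5000") // pin the listening address here
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

host.Run();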

Back to the Console application…

Create a static method for your first API consumer.  Name it GetValues to match the method you’ll call:

private static object GetValues()
{
    using (var webClient = new CookieAwareWebClient())
    {
        webClient.Headers["Accept-Encoding"] = "UTF-8";
        webClient.Headers["Content-Type"] = "application/json";

        var arr = webClient.DownloadData(url + "/api/values");

        // decode the response as UTF-8 (not ASCII) to match the requested encoding
        return Encoding.UTF8.GetString(arr);
    }
}

Next, add a Console.WriteLine() call and a Console.ReadKey() to your Main method:

static void Main(string[] args)
{
	Console.WriteLine(GetValues());

	Console.ReadKey();
}

Now switch to your API project and hit F5.  When the blank browser window appears, switch back to your consumer console application and hit F5.  You should see something like this:

If all of this is working, you’re off to a good start.  You can put break-points into your API code and troubleshoot inputs and outputs.  You can write your remaining consumer methods to test each API that you wrote.  In this instance, there are a total of 5 APIs that you can connect to.

Logging

Your next task is to install some logging.  Why do you need logging?  Somewhere down the line you're going to install this API on a production system.  That system should not contain Visual Studio or any other tools that can be used by hackers or that drain your resources when you don't need them.  Logging is going to be your eyes on what is happening with your API.  No matter how much testing you perform on your PC, you're not going to see a fully loaded API, and there are going to be requests hitting your API that you don't expect.

Nicholas Blumhardt has an excellent article on adding a file logger to .Net Core.  Click here to read it.  You can follow his steps to insert your log code.  I changed the directory, but used the same code in the Configure method:

loggerFactory.AddFile("c:/logs/myapp-{Date}.txt");

I just ran the API project and a log file appeared.  This is easier than NLog (and NLog is easy).

Before you go live, you’ll probably want to tweak the limits of the logging so you don’t fill up your hard drive on a production machine.  One bot could make for a bad day.

Swashbuckle Swagger

The next thing you're going to need is a help interface.  This interface is not just for help; it gives interface information to developers who wish to consume your APIs, and it can be useful for troubleshooting when your system goes live.  Go to this website and follow the instructions on how to install and use Swagger (a rough sketch of the wiring follows below).  Once you have it installed, you'll need to perform a publish to use the help.  Right-click on the project and select "Publish".  Click on "Custom" and then give your publish profile a name.  Then click the "Publish" button.
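For reference, the Startup.cs wiring ends up looking roughly like this (a sketch based on the beta-era Swashbuckle package that matches the UseSwaggerUi() call shown further down; the exact method names depend on the version you install):

// in ConfigureServices()
services.AddSwaggerGen();

// in Configure()
app.UseSwagger();
app.UseSwaggerUi();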

Create an IIS website (open IIS, add a new website):

The Physical Path will point to your project's bin/Release/PublishOutput folder.  You'll also need to make sure that your project directory has IUSR and IIS_IUSRS permissions (right-click on your project directory, select the Security tab, then add full rights for IUSR and do the same for IIS_IUSRS).

You'll need to add the URL to your hosts file (in the c:\Windows\System32\drivers\etc folder):

127.0.0.1 MyDotNetWebApi.com

Next, you’ll need to adjust your application pool .Net Framework to “No Managed Code”.  Go back to IIS and select “Application Pools”:

Now if you point your browser to the URL that you created (MyDotNetWebApi.com in this example), you might get an error page instead of your API.

Epic fail!

OK, it’s not that bad.  Here’s how to troubleshoot this type of error.

Navigate to your PublishOutput folder and scroll all the way to the bottom.  Now edit the web.config file: change stdoutLogFile to "c:\logs\stdout" and make sure stdoutLogEnabled is set to "true".
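The relevant fragment of the generated web.config looks something like this (a sketch; your processPath and arguments will reflect your own project):

<aspNetCore processPath="dotnet"
            arguments=".\DotNetWebApi.dll"
            stdoutLogEnabled="true"
            stdoutLogFile="c:\logs\stdout"
            forwardWindowsAuthToken="false" />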

Refresh your browser to trigger the error again.  Then go to your c:\logs directory and check out the error log.  If you followed the instructions on installing Swagger like I did, you might have missed the fact that this line of code:

var pathToDoc = Configuration["Swagger:Path"];

Requires an entry in the appsettings.json file:

"Swagger": {
  "Path": "DotNetWebApi.xml"
}

Now go to your URL and add the following path:

www.yoururl.com/swagger/ui

Next, you might want to change the default path.  You can set the path to another path like “help”.  Just change this line of code:

app.UseSwaggerUi("help");

Now you can type in the following URL to see your API help page:

www.yoururl.com/help

To gain full use of Swagger, you'll need to comment your APIs.  Just type three slashes above a method and a summary comment block will appear.  This information is used by Swagger to form the descriptions in the help interface.  Here's an example of commented API code:
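The screenshot from the original post isn't reproduced here, but a commented method on the template's ValuesController looks something like this:

/// <summary>
/// Returns the full list of values.
/// </summary>
/// <returns>A collection of strings.</returns>
[HttpGet]
public IEnumerable<string> Get()
{
    return new[] { "value1", "value2" };
}

Remember that the Swagger:Path setting above points at the XML documentation file, so your project must be configured to generate XML documentation output for these comments to show up in the help pages.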

Update NuGet Packages

.Net Core allows you to paste NuGet package information directly into the project.json file.  This is convenient because you don't have to use the package manager to search for packages.  However, the versions of each package are updated at a rapid rate, so even the project template packages have updates.  You can open the Manage NuGet Packages window, click on the "Updates" tab, and update everything.

The downside of upgrading everything at once is that you’ll probably break something.  So be prepared to do some troubleshooting.  When I upgraded my sample code for this blog post I ran into a target framework runtime error.

Other Considerations

Before you deploy an API, be sure to understand what you need as a minimum requirement.  If your API is used by your own software and you expect to use some sort of security or authentication to keep out unwanted users, don’t deploy before you have added the security code to your API.  It’s always easier to test without using security, but this step is very important.

Also, you might want to provide an on/off setting to disable the API functions in your production environment for customers until you have fully tested your deployment.  Such a feature can be used in a canary release, where you allow some customers to use the new feature for a few days before releasing to all of your customers.  This will give you time to estimate load capabilities of your servers.

I also didn't discuss IOC container usage, unit testing, database access, where to store your configuration files, etc.  Be sure to set a standard before you go live.

One last thing to consider is the deployment of an API.  You should create an empty API container and check it into your version control system.  Then create a deployment package so you can deploy to each of your environments (development, QA, stage, production, etc.).  The sooner you get your continuous integration working, the less work it will be to get your project completed and tested.  Manual deployment, even for a test system, takes a lot of time, and human error is the number one killer of deployment efficiency.

Where to Get the Code

As always, you can download the sample code at my GitHub account by clicking here (for the api code) and here (for the console consumer code).  Please hit the “Like” button at the end of this article if this subject was helpful!