Unit Testing EF Data With Core 2.0

I’ve discussed using the in-memory database object for Entity Framework in the blog post titled Dot Net Core In Memory Unit Testing Using xUnit.  That was for .Net Core 1.1.  Now I’m going to show a larger example of the in-memory database object for .Net Core 2.0.  There is only one minor difference in the code between version 1 and version 2, and it involves the DbContextOptionsBuilder object.  The parameterless UseInMemoryDatabase() method has been deprecated, so you’ll need to pass a database name into the method.  I used my standard “DemoData” database name as the parameter for the sample code in this post.  I could have used any name, because my sample has no queries that reference a particular database name.

My .Net Core 1.1 unit test code worked for the simple query examples that I used in that demo.  Unfortunately, that sample is not “real world.”  The unit tests only exercised some Linq queries, which served no real purpose.  When I wrote that article, I wanted to show the easiest example of using the in-memory database object, and that culminated in the sample that I posted.  This time, I’m going to build a couple of business classes and unit test the methods inside those classes.  The business classes will contain the Linq queries and perform some functions that are a bit more realistic, though this is still a toy program and a contrived example.

I also added a couple of features to cover situations you’ll run into in the real world when using Entity Framework with an IOC container.  I did not provide any IOC container code or any project that would execute the EF code against MS SQL Server, so this project is still just a unit test project and a business class project.

The Inventory Class

The first class I added was an inventory class.  The purpose of this class is to get information about the inventory of a store.  I designed a simple method that takes a store name and a product name and returns true if the product is in stock:

public class Inventory
{
  private readonly IStoreAppContext _storeAppContext;

  public Inventory(IStoreAppContext storeAppContext)
  {
    _storeAppContext = storeAppContext;
  }

  public bool InStock(string storeName, string productName)
  {
    var totalItems = (from p in _storeAppContext.Products
      join s in _storeAppContext.Stores on p.Store equals s.Id
      where p.Name == productName &&
            s.Name == storeName
      select p.Quantity).Sum();

    return totalItems > 0;
  }
}

As you can see from the class constructor, I am using a pattern that allows me to use an IOC container.  This class needs a context to read data from the database, and I inject that context into the constructor using an interface.  If you’re following the practice of TDD (Test Driven Development), then you’ll create this object with an empty stub for the InStock method and create the unit test(s) first.  For my example, I’ll just show the code that I already created and tested.  There are a couple of unit tests needed for this method.  You could probably create some extra edge tests, but I’m just going to create the two obvious ones: the query returns true when the item is in stock and false when it is not.  You can add tests for cases like the store does not exist or the item does not exist (I’ll sketch one of those after the two tests below).  Here are the two unit tests:

[Fact]
public void ItemIsInStock()
{
  // Arrange
  ResetRecords();
  var inventory = new Inventory(_storeAppContext);

  // Act
  var result = inventory.InStock("J-Foods", "Rice");

  // Assert
  Assert.True(result);
}

[Fact]
public void ItemIsNotInStock()
{
  // Arrange
  ResetRecords();
  var inventory = new Inventory(_storeAppContext);

  // Act
  var result = inventory.InStock("J-Foods", "Crackers");

  // Assert
  Assert.False(result);
}
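
As I mentioned, you can add edge tests beyond these two.  Here’s a sketch of a store-does-not-exist test (“NoSuchStore” is a hypothetical name that is not in the test data, so the join produces no rows and the sum is zero):

[Fact]
public void ItemInStoreThatDoesNotExist()
{
  // Arrange
  ResetRecords();
  var inventory = new Inventory(_storeAppContext);

  // Act - "NoSuchStore" is not part of the test data
  var result = inventory.InStock("NoSuchStore", "Rice");

  // Assert
  Assert.False(result);
}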

Now you’re probably wondering about that method named “ResetRecords()”.  Oh that!  One of the problems with my previous blog post sample was that I set up the test database data one time in the constructor of the unit test class.  Then I ran some unit tests with queries that were not destructive to the data.  In this sample, I’m going to show how you can test methods that delete data.  Such a test will interfere with other unit tests if the data is not properly restored before each test is run.

Here’s the top part of the unit test class showing the ResetRecords() method:

public class ProductTests : IClassFixture<TestDataFixture>
{
  private readonly StoreAppContext _storeAppContext;
  private readonly TestDataFixture _fixture;

  public ProductTests(TestDataFixture fixture)
  {
    _fixture = fixture;
    _storeAppContext = fixture.Context;
  }

  private void ResetRecords()
  {
    _fixture.ResetData();
  }

As you can see, I had to keep track of the fixture object as well as the context.  The fixture object is needed in order to access the ResetData() method that is located inside the fixture class:

public class TestDataFixture : IDisposable
{
  public StoreAppContext Context { get; set; }

  public TestDataFixture()
  {
    var builder = new DbContextOptionsBuilder()
      .UseInMemoryDatabase("DemoData");
    Context = new StoreAppContext(builder.Options);
  }

  public void ResetData()
  {
    var allProducts = from p in Context.Products select p;
    Context.Products.RemoveRange(allProducts);

    var allStores = from s in Context.Stores select s;
    Context.Stores.RemoveRange(allStores);

    var store = new Store
    {
      Name = "J-Foods"
    };

    Context.Stores.Add(store);
    Context.Products.Add(new Product
    {
      Name = "Rice",
      Price = 5.99m,
      Quantity = 5,
      Store = store.Id
    });
    Context.Products.Add(new Product
    {
      Name = "Bread",
      Price = 2.35m,
      Quantity = 3,
      Store = store.Id
    });

    var store2 = new Store
    {
      Name = "ToyMart"
    };
    Context.Stores.Add(store2);

    ((DbContext)Context).SaveChanges();
  }

  public void Dispose()
  {

  }
}

Notice how I added lines of code to remove ranges of records from both tables before repopulating them.  The first pass won’t actually delete anything because there is no data yet.  Every call to the ResetData() method in the TestDataFixture class after the first needs to guarantee that the tables are clean.  In your final implementation, I would recommend creating a .cs file for each data set you plan to use and following something similar to the method above to clean and populate your data (a sketch follows below).
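
As a sketch of that recommendation (the class and method names below are hypothetical, not part of the sample project), a per-data-set file could look something like this:

public static class JFoodsDataSet
{
  // hypothetical per-data-set loader: clean both tables, then load this set's records
  public static void Load(StoreAppContext context)
  {
    context.Products.RemoveRange(context.Products);
    context.Stores.RemoveRange(context.Stores);

    var store = new Store { Name = "J-Foods" };
    context.Stores.Add(store);
    context.Products.Add(new Product { Name = "Rice", Price = 5.99m, Quantity = 5, Store = store.Id });

    ((DbContext)context).SaveChanges();
  }
}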

The next method I wanted to implement involved a computation of the total cost of inventory of a store:

public decimal InventoryTotal(string storeName)
{
  var totalCostOfInventory = (from p in _storeAppContext.Products
    join s in _storeAppContext.Stores on p.Store equals s.Id
    where s.Name == storeName
    select p.Price * p.Quantity).Sum();

  return totalCostOfInventory ?? 0;
}

I came up with two simple unit tests for this method:

[Fact]
public void InventoryTotalTest1()
{
  // Arrange
  ResetRecords();
  var inventory = new Inventory(_storeAppContext);

  // Act
  var result = inventory.InventoryTotal("J-Foods");

  // Assert
  Assert.Equal(37m, result);
}

[Fact]
public void InventoryTotalEmptyStore()
{
  // Arrange
  ResetRecords();
  var inventory = new Inventory(_storeAppContext);

  // Act
  var result = inventory.InventoryTotal("ToyMart");

  // Assert
  Assert.Equal(0m, result);
}

The first unit test is to check to see if the numbers add up correctly.  You might want to add tests to check edge case computations.  I also added a unit test for an empty store.  This is a situation where a store has no inventory and I wanted to make sure that an empty query result didn’t blow up the method.  The second purpose of this unit test is to make other developers aware that any changes that they perform to the inventory computations don’t blow up when the store is empty.  Always remember that unit tests protect you from your own programming mistakes but they also protect you from other developers who might not know what assumptions you made when you designed and built this method.

The Store Class

Now it’s time to test a method that is data destructive.  I decided that there should be a class that is used by an administrator to add, edit and delete stores in the inventory.  The basic CRUD operations.  For my sample, I’m only going to implement the store delete function:

public class StoreMaintenance
{
  private readonly IStoreAppContext _storeAppContext;

  public StoreMaintenance(IStoreAppContext storeAppContext)
  {
    _storeAppContext = storeAppContext;
  }

  public void DeleteStore(string storeName)
  {
    var storeList = (from s in _storeAppContext.Stores
      where s.Name == storeName
      select s).FirstOrDefault();

    if (storeList != null)
    {
      _storeAppContext.Stores.Remove(storeList);
      _storeAppContext.SaveChanges();
    }
  }
}

There was one problem that I ran into when I tried to use the SaveChanges() method.  The problem occurred because _storeAppContext is an interface, not the object itself, so the SaveChanges() method did not exist on the interface.  I had to add it to the interface.  Then I needed to implement the SaveChanges() method in the StoreAppContext object by calling the base class version inside the method.  That creates an issue because my method has the same name as the method in the DbContext class and therefore hides the base method.  So I had to add the “new” keyword to notify the compiler that I was hiding the base method intentionally.  Here’s the final context object:

public class StoreAppContext : DbContext, IStoreAppContext
{
  public StoreAppContext(DbContextOptions options)
  : base(options)
  {

  }

  public DbSet<Product> Products { get; set; }
  public DbSet<Store> Stores { get; set; }

  public new void SaveChanges()
  {
    base.SaveChanges();
  }

  protected override void OnModelCreating(ModelBuilder modelBuilder)
  {
    modelBuilder.AddProduct("dbo");
    modelBuilder.AddStore("dbo");
  }
}

Here is the final interface to match:

public interface IStoreAppContext : IDisposable
{
  DbSet<Product> Products { get; set; }
  DbSet<Store> Stores { get; set; }
  void SaveChanges();
}

When deleting a store, I want to make sure all the product child records are deleted with it.  In the database, I can set up a foreign key constraint between the product and store tables and turn on cascade delete.  In the model builder for the product, I need to make sure that the foreign key is defined and that it is set up as a cascade delete:

public static class ProductConfig
{
  public static void AddProduct(this ModelBuilder modelBuilder, string schema)
  {
    modelBuilder.Entity<Product>(entity =>
    {
      entity.ToTable("Product", schema);
      entity.Property(e => e.Name).HasColumnType("varchar(50)");

      entity.Property(e => e.Price).HasColumnType("money");

      entity.Property(e => e.Quantity).HasColumnType("int");

      entity.HasOne(d => d.StoreNavigation)
        .WithMany(p => p.Product)
        .HasForeignKey(d => d.Store)
        .OnDelete(DeleteBehavior.Cascade)
        .HasConstraintName("FK_store_product");
      
    });
  }
}

You can see in the code above that the OnDelete method is set up as a cascade.  Keep in mind that your database must be configured with the same foreign key constraint.  If you fail to set it up in the database, you’ll get a passing unit test and a program that fails at run-time.

I’m now going to show the unit test for deleting a store:

[Fact]
public void InventoryDeleteStoreViolationTest()
{
  // Arrange
  ResetRecords();
  var storeMaintenance = new StoreMaintenance(_storeAppContext);

  // Act
  storeMaintenance.DeleteStore("J-Foods");

  // Assert
  Assert.Empty(from s in _storeAppContext.Stores where s.Name == "J-Foods" select s);

  // test for empty product list
  var productResults = from p in _storeAppContext.Products
    join s in _storeAppContext.Stores on p.Store equals s.Id
    where s.Name == "J-Foods"
    select p;

  Assert.Empty(productResults);
}

As you can see, I performed two asserts in one unit test.  Both asserts verify that the one delete operation was correct.  It’s not wise to perform too many asserts in a unit test, because the difficulty of troubleshooting a failing unit test increases.  In some instances it doesn’t make sense to break the unit test into multiple tests; here, it would also be OK to write two unit tests and verify the removal of the store record independently from the removal of the product records.  What I’m trying to say is: don’t let this example of two asserts give you license to pack as many asserts as you can into a unit test.  Keep unit tests as small and simple as possible.  It’s preferable to have a large quantity of tiny unit tests over a small quantity of large and complicated ones.

If you analyze the InventoryDeleteStoreViolationTest closely, you’ll realize that it is really testing three things: Does the cascade work, did the store record get deleted and did the child records get deleted.  I would suggest you go back to the OnDelete method in the model builder for the product table and remove it (the entire OnDelete method line).  Then run your unit tests and see what you get.

There is also a check in the DeleteStore method for a null search result.  This is just in case someone tries to delete a store that was already deleted.  This should also have a unit test (I’ll sketch one below, though I’d still encourage you to write your own).  For a normal CRUD design, you’ll list the stores and probably put a delete button (or trash can icon) next to each store name.  The administrator clicks the delete button and a pop-up confirms that the store will be deleted if they continue.  The administrator continues, and you’re thinking “how could the query to get the store ever come up empty?”  If only one person had access to that screen and there was no other place in the user interface that allowed the deletion of stores… plus, nobody had access to the database back-end and couldn’t possibly delete the record by hand… you might think it was safe to assume a record will exist before deleting it.  However, any large system with multiple administrators and other unforeseen activities going on in parallel creates the possibility that the store record exists when the screen is first rendered, but is deleted just before the delete operation starts.  Protect yourself.  Never assume the record will be there.
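
Here’s a sketch of that unit test (the store name is hypothetical): deleting a store that isn’t there should simply do nothing, and the existing data should remain untouched.

[Fact]
public void DeleteStoreThatDoesNotExist()
{
  // Arrange
  ResetRecords();
  var storeMaintenance = new StoreMaintenance(_storeAppContext);

  // Act - deleting a store that is not in the table should not throw
  storeMaintenance.DeleteStore("NoSuchStore");

  // Assert - the known store is still present
  Assert.Single(from s in _storeAppContext.Stores where s.Name == "J-Foods" select s);
}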

Also, you are programming for how the code will work now.  Several years down the road, as the program grows, another programmer could add a feature to the system allowing a different method of deleting a store.  Suddenly, this screen, which has been “bug free” for years is throwing exceptions.  My rule of thumb is this: If it can be null, it will be null!

One final thing to note: I threw all the unit tests into one unit test class, even though there are two business classes under test.  There should really be two unit test classes, one for each business class, to keep everything clean and simple.  Unit test methods inside a class don’t execute in the order that they are listed.  Don’t be surprised if the last unit test method executes before the first one.  That means each unit test must be designed to be independent of the other unit tests.

Where to Get the Code

You can go to my GitHub account and download the source by clicking here.  I would recommend you download and add a few unit tests of your own to get familiar with the in-memory unit test database feature.


JWT Tokens

Summary

In this blog post, I’m going to introduce the JWT (pronounced “jot”) token concept.  I’m going to use the code presented in this article (click here) to demonstrate how a token is created and then I’ll discuss the parts of the token and how it is used.

Sample Token

Using the article code, I created a quick and dirty console app to generate a token with this final code:

var token = new JwtTokenBuilder()
  .AddSecurityKey(JwtSecurityKey.Create("fiver-secret-key"))
  .AddSubject("james bond")
  .AddIssuer("Fiver.Security.Bearer")
  .AddAudience("Fiver.Security.Bearer")
  .AddClaim("MembershipId", "111")
  .AddExpiry(1)
  .Build();

For a real system, you’ll need to store your secret key securely and you’ll need to make it much longer.  The other values will differ according to your use of the token.  Before I go into any details on how you would generate and consume the token, I’m going to discuss what the token looks like and how it works.  First, I generated a token from the code above and obtained this result:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJqYW1lcyBib25kIiwianRpIjoiNDQ2MDU1MmItNWU5MS00MjcyLWIyMDItMmFlYzFmNWFhNGY3IiwiTWVtYmVyc2hpcElkIjoiMTExIiwiZXhwIjoxNTE5NTkwMjY4LCJpc3MiOiJGaXZlci5TZWN1cml0eS5CZWFyZXIiLCJhdWQiOiJGaXZlci5TZWN1cml0eS5CZWFyZXIifQ.C_Z_fLGhlvbhywwGn727mVSxJM8JlQpw8dZmysTgr1w

If you go to the wiki page on the JWT token (JSON Web Token), you’ll see a description of the three parts of the token.  If you look closely, you can see that there are two dots “.” separating the three parts of the token.  Each part is a base 64 encoded string, so it is safe to transmit over the web.  The first part is the header.  If you cut out the first part you’ll get this:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9

Paste that code into a decoder (click here for an on-line base 64 string decoder), and you’ll get this result:

{"alg":"HS256","typ":"JWT"}

If you dig into the JWT token build method, you’ll see that HS256 hashing is used for the signature and that the header marks this as a JWT type token.
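
If you’d rather decode the parts in code, here’s a minimal C# sketch (my own helper, not part of the referenced article).  JWT segments use the base64url alphabet, so two characters and the padding have to be restored before decoding:

using System;
using System.Text;

public static class JwtPartDecoder
{
  // decodes one base64url-encoded JWT segment into a JSON string
  public static string Decode(string part)
  {
    var s = part.Replace('-', '+').Replace('_', '/');  // restore the standard base64 alphabet
    switch (s.Length % 4)                              // re-add the stripped padding
    {
      case 2: s += "=="; break;
      case 3: s += "="; break;
    }
    return Encoding.UTF8.GetString(Convert.FromBase64String(s));
  }
}

Calling JwtPartDecoder.Decode on the first segment of the sample token returns the header JSON shown above.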

Take the next block of text (which is long) and copy it (without the dots).  Then put it in the decoder and click on “Decode”.  You should see this:

{"sub":"james bond","jti":"4460552b-5e91-4272-b202-2aec1f5aa4f7","MembershipId":"111","exp":1519590268,"iss":"Fiver.Security.Bearer","aud":"Fiver.Security.Bearer"}

This is the payload section, and you can see all the information that was placed into the C# code earlier.

If you’ve ever used tokens for accessing APIs, you’re probably thinking to yourself: what prevents someone from getting a copy of this token, changing the information, and stealing admin rights or unauthorized data from the API?  That’s where the signature comes in.  The signature is the last part of the token.  It is a hash of everything in the header and payload, computed with a secret key.  The receiver of this token must know the secret key in order to verify the signature.  If anything in the header or payload is altered, recalculating the hash will produce a different signature value.
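
To make that concrete, here’s a sketch of how an HS256 signature is computed; the receiver recomputes this value over the first two segments and compares it to the third:

using System;
using System.Security.Cryptography;
using System.Text;

public static class JwtSignature
{
  // computes base64url(HMACSHA256(encodedHeader + "." + encodedPayload)) using the shared secret
  public static string Compute(string encodedHeader, string encodedPayload, string secret)
  {
    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
    {
      var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(encodedHeader + "." + encodedPayload));
      return Convert.ToBase64String(hash).TrimEnd('=').Replace('+', '-').Replace('/', '_');
    }
  }
}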

Keep in mind that the JWT token should not be used to store anything you want to keep secret.  Don’t put your password into the token payload and expect it to be safe.  Technically, it will be safe from packet sniffers because you should be accessing the API using SSL, but don’t assume that the payload section is encrypted.

If you were to generate another token after the expiry time (in my example the expiry was set to 1 minute), the signature would be different.  That’s because the token is time-sensitive.  This improves security in case the token is intercepted by a third party.  It also prevents a user from reusing the token forever instead of re-authenticating.  Finally, it prevents a valid token from being altered to obtain data not specified by the original token.

Re-authenticating?  Oh yes.  There needs to be some sort of authentication scheme that allows the user to obtain a token by logging in someplace.  This is up to the system designer to decide how it is accomplished.  There may be a login id and password that the user uses to get a temporary key and access an API by hand (usually using an interface like Swashbuckle/Swagger).  For machine-to-machine communications other protocols can be used.  A token can also be issued for a longer time period for client use.

The token use can be narrowed using the issuer and audience.  If your company has many APIs that are used for different purposes, you might issue a token that is restricted for use on one API.  You can setup your API to reject any token that is not issued by your security API and you can reject tokens where the audience in the token is not serviced by the API.

Let’s say that you want to communicate from one API to another, how would you use the JWT token?

To use the token, you need to put it in the authorization header of the API call:

Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
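
In C#, that header can be set like this (a sketch; the URL is a placeholder):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ApiCaller
{
  // attach the bearer token and call a remote API
  public static async Task<string> CallApiAsync(string token)
  {
    using (var client = new HttpClient())
    {
      client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
      return await client.GetStringAsync("https://api.example.com/v1.0/values");  // placeholder URL
    }
  }
}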

When you generate your token, you should set the expire time to something like one to four hours.  Once you obtain a token from your source (either you generate the token or you get it from a security API), then you save the token in a cache.  Many articles talk about using a cookie, but an API will typically not use cookies.  If you have only one API instance, you can get away with using a singleton class to save your token.  If you plan to load-balance two or more instances of your API (plan ahead, assume you’ll need to scale up in the future), then you’ll want to store the token in a cache system (such as Redis or Memcached).

When you grab the token from your cache, check the expiry and replace the token if it only has a couple of minutes left.  There can be network delays when sending your token to the remote API, even if it’s in the same data center.  If you wait until the last second to renew the token, you’ll run into issues where the consuming API rejects your token.  This problem gets worse when your network is experiencing unexpected latency.
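
Here’s a sketch of that expiry check using the JwtSecurityTokenHandler from the System.IdentityModel.Tokens.Jwt package (the two-minute buffer is my own choice):

using System;
using System.IdentityModel.Tokens.Jwt;

public static class TokenCacheHelper
{
  // returns true when the cached token is within two minutes of expiring
  public static bool NeedsRenewal(string cachedToken)
  {
    var jwt = new JwtSecurityTokenHandler().ReadJwtToken(cachedToken);  // parses without validating
    return jwt.ValidTo < DateTime.UtcNow.AddMinutes(2);
  }
}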

For your consuming API, you must check the signature first.  There is no need to accept a JWT token that has a bad signature; that just indicates that someone changed the payload data manually.  Once the signature has been verified, check the expiration date/time and make sure the token is still valid.  Next, check any other restrictions placed on the token (audience and issuer).
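
In an ASP.NET Core consuming API, those checks map onto the JwtBearer middleware’s TokenValidationParameters.  Here’s a sketch (reusing the JwtSecurityKey helper from the referenced article):

// in Startup.ConfigureServices
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
  .AddJwtBearer(options =>
  {
    options.TokenValidationParameters = new TokenValidationParameters
    {
      ValidateIssuerSigningKey = true,                               // signature first
      IssuerSigningKey = JwtSecurityKey.Create("fiver-secret-key"),
      ValidateLifetime = true,                                       // then the expiration
      ValidateIssuer = true,
      ValidIssuer = "Fiver.Security.Bearer",
      ValidateAudience = true,
      ValidAudience = "Fiver.Security.Bearer"
    };
  });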

What Else is in the Token?

You can store data inside the token.  There is no hard limit, but you should keep the token as small as possible.  Don’t pass around a 10 megabyte token and expect a fast network.  You should pass some authorization information about the person or machine that authenticated (i.e. user id, claims).  By doing this, you can form a state-less website.  Your front-end might keep the token in a cookie after the user logs in.  That token is then used in the bearer authorization header for every call to your website back-end.  This eliminates the need for a website session (i.e. session-less or state-less).  For this use case, you can have the authenticated user’s id or some sort of temporary key contained in the payload (i.e. generate a new temporary key when the user logs in and store it in the user table, then throw it away when the user logs out).

The data that you store in the payload should be something that is used repeatedly by the APIs or website you are accessing.  Otherwise, if you try to store something like the current page number in the JWT token, then a new token will have to be issued every time you click the “next” button to get a page of data.  If you’re looking for a place to store temporary state variables, I would recommend putting them in a cache and keying the cache to the user id.  Once the token has been authenticated, the user id can be used to grab the data from your cache and you can continue as though you had a session.

Why Use a Token?

When I create a new API, I have a check-list of what goes into the shell of the API:

  1. JWT Token authentication management for all calls
  2. Logger
  3. Help screen (swashbuckle/swagger)
  4. IOC container

No Web API should be without these items.  The help screen allows me to document my API and gives me a method of testing the API directly.  The logger is necessary to troubleshoot problems and track exceptions once the API is deployed to an environment in the data center.  The IOC container is the framework that allows developers to use unit tests with ease.  Last, but not least, never deploy an API without some sort of security, unless you want the entire world to be able to access it.  Install and test your JWT token security first; then you can add controllers and business logic without worrying about a hacker accessing your data before your API goes live (I’m assuming that you might deploy an API for a feature that is turned off).

You might be thinking that your data is open to the public anyway, so why secure your API?  Because you might want to limit access to your data via your website or mobile device interface.  If your API is open to the world, some entity could tap into your API without your knowledge and put a load on your system that you don’t expect.  Don’t assume anything.  A random Facebook developer could create an application that accesses your API, and then a half-million Facebook users unwittingly hit your API when they use that app.  Yikes!

If you want the world to have access to your API, then you better be prepared to scale.


Versioning Your APIs

Introduction

If you’ve ever written an API and used it in a real-world application, you’ll discover the need to make changes to enhance your software.  The problem with changing an API is that once the interface has been published and used by other applications, the endpoints cannot be changed.  The best way to deal with this problem is to version your API so that a new version can have different endpoints.  This gives consumers time to change their code to match the new version.

One sensitive consumer of API data is the mobile application.  If your company produces mobile applications, then you’ll need to deploy the new mobile app to the app store after the API has been deployed and is able to consume requests.  This creates a chicken or the egg problem of which should go first.  If the API is versioned, then it can be deployed in advance of the mobile application being available for download.  The new mobile app can use the new version of the API while the older mobile app versions still consume data from the previous versions of your API.  This can also avoid the problem of forcing your end users to upgrade ASAP.

Versioning Method 1

To version your API, there is an obvious but painful method of versioning.  This involves using a feature in IIS that allows multiple applications to be created under one website.  The process is to make a copy of the previous version of your API, then make changes to the code to represent the next version of the API.  Next, create a new application in IIS, say “V2.0”.  Then the path to your API will be something like “myapi.com/V2.0/controllername/method”.

Here is a list of the drawbacks to this method:

  • Deployment involves the creation of a new application every time a new version is deployed.
  • Any web.config file in the root directory of IIS would be inherited by all applications.
  • Keeping multiple versions of code becomes difficult to track.
  • Continuous integration becomes a headache because each version will need a deployment.

Several of these issues can be “fixed” by using creative merging/splitting of branches.  The inheritance problem can be fixed by leaving the root directory empty.  The creation of new applications under IIS can be automated through Powershell scripting (deployment process can check if app exists and create it if it doesn’t).

Versioning Method 2

There is a NuGet package that can be added to your solution to allow automatic versioning of controllers (here’s the package for .Net Core).  There is a great blog post on this package here.  I looked over the blog post and decided to do a little testing of my own.  I tested a sample Web API project for .Net Core to see if I could get the same results as duplicating projects and installing them as applications under IIS.  This is what I came up with for my version 2 controller:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace WebApiVersionedSample.Controllers
{
  [ApiVersion("2.0")]
  [ApiVersion("2.1")]
  [Route("v{version:apiVersion}/Values")]
    public class Values2Controller : Controller
    {
      [HttpGet, MapToApiVersion("2.0")]
      public IEnumerable GetV20()
      {
          return new string[] { "value1 - version 2", "value2 - version 2" };
      }

      [HttpGet, MapToApiVersion("2.1")]
      public IEnumerable GetV21()
      {
        return new string[] { "value1 - version 2.1", "value2 - version 2.1" };
      }

      [HttpGet("{id}", Name = "Get"), MapToApiVersion("2.0")]
      public string Get20(int id)
      {
          return $"id={id} version 2.0";
      }

      [HttpGet("{id}", Name = "Get"), MapToApiVersion("2.1")]
      public string Get21(int id)
      {
        return $"id={id} version 2.1";
      }
    }
}

As you can see, I set this up to accept version 2.0 or version 2.1, and I removed the “api” default path.  If you specify version 2.0, your consumer application will only see the GetV20 method for a get operation, and it will see Get20(int id) for any get that passes an integer id.  In my sample code, I only printed the version number to show which code was executed when I selected a particular version.  Normally, you’ll call a business class from your method, and that business class can be shared between two or more versions if the functionality didn’t change.  If, however, you have different logic in your business class between version 2.0 and version 2.1, then you’ll need to create another business class to call from your get method for version 2.1 and leave your version 2.0 business class untouched.
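
One detail the controller code doesn’t show: the versioning package has to be registered at startup.  Here’s a sketch of that registration (assuming the Microsoft.AspNetCore.Mvc.Versioning package):

// in Startup.ConfigureServices
services.AddApiVersioning(options =>
{
  options.DefaultApiVersion = new ApiVersion(1, 0);
  options.AssumeDefaultVersionWhenUnspecified = true;
  options.ReportApiVersions = true;  // adds an api-supported-versions response header
});
services.AddMvc();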

If you want to keep things simple, you can start a new version by creating new controllers for each endpoint and just add the version number to the end of the name.  Then you can change any one or more get, post, put or delete to conform to your new version.  Just be aware that this logic will need to continue into your business classes if necessary.

For my version 1.0 controller, as an example, I used the ValuesController object with an attribute at the top like so:

[ApiVersion("1.0")]
[Route("v{version:apiVersion}/Values")]
public class ValuesController : Controller

The “Route” attribute shows how the version number precedes the “/Values” controller name.  To access this in your browser, hit F5 and change the version number in the URL.

Example: “http://localhost:64622/v1.0/values” or “http://localhost:64622/v2.1/values”.

To change your startup browser location, expand your properties and double-click on the launchSettings.json file:

Now you can change the application url and the launchUrl:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:64622/",
      "sslPort": 0
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "v1.0/values",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "WebApiVersionedSample": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "v1.0/values",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "applicationUrl": "http://localhost:64623/"
    }
  }
}

This method of versioning your API has a few advantages over the previous method:

  • Deployments are simple: add new version code and re-deploy.
  • No need to keep multiple copies of source code.
  • No IIS changes are necessary.

There are some potential pitfalls that developers will need to be aware of.  One complex situation is the possibility of a database change.  This can ripple down to a change to your Entity Framework POCOs, and possibly your context object.  If you are cautious and add non-breaking changes (like adding a new field), then you can change your database repository code without breaking your previous versions.  If you have breaking changes (such as a change to a stored procedure), then you’ll need to get creative and design it so both the old version and new version of your code still work together.

Where to Find the Code

You can download the sample code used in this blog post by going to my GitHub account (click here).


Dynamic or Extended Field Data

Summary

SQL databases are very powerful mechanisms for storing and retrieving data.  With careful design, a SQL database can store large quantities of records and retrieve them at lightning speeds.  The downside to a SQL database is that it is a very rigid structure and, if not carefully designed, it can become slow and unwieldy.  There is a common mechanism that allows a database to be dynamically extended.  This mechanism can allow the customer to add a field to a table that represents a holding place for data that the system was never designed to hold.  Such a mechanism is used in SaaS (Software as a Service) systems where there are multiple customers with different data needs that cannot all be accommodated by enhancements from the SaaS company.  I’m going to show several techniques that I’ve seen used, as well as their pros and cons, and then I’m going to show an effective technique for performing this function.

Fixed Fields

I’m going to start with one of the worst techniques I’ve seen used: the fixed field technique.  The gist of this technique is that each table that allows extended data will have multiple extended fields.  Here’s a sample table (called productextended):

If you’re not cringing after looking at the sample table design above, then you need to read on.  The table above is only a small sample of systems that I’ve seen in production.  Production systems I’ve worked on have more than 10 text, 10 datetime, 10 shortdate, 10 int, 10 float, etc.  In order to make such a system work there is usually some sort of dictionary of information stored that tells what each field is used for.  Then there is a configuration screen that allows the customer to choose what each of those fixed fields can be used as.

Here is an example meta data lookup table with fields matching the sample data in the extended table above:

If you examine the lookup table above, you’ll see that the Money1 field represents a sale price and the Bit1 field represents a flag indicating that the product is sold out.  There is no relational integrity between these tables because they are not actually related; normalization between the two does not exist.  If you delete an entry in the meta data lookup table and re-use the original field to represent other data, then whatever data already exists in that column becomes the “new” field’s data.  You’ll need to write special code to handle the situation where a field is re-used for another purpose: your code would need to clear that field for the entire productextended table.

I’m going to list some obvious disadvantages to using this technique:

  • There are a limited number of extended fields available per table
  • Tables are extra wide and slow
  • First normal form is broken, no relational integrity constraints can be used.

Using XML Field

The second example that I’ve seen is the use of one extended field of XML data type.  This was a very clever idea involving one field on each table called “extended” and it was setup to contain XML data.  The data stored in this field is serialized and de-serialized by the software and then the actual data is read from a POCO object.  Here’s a sample table:

This is less cringe-worthy than the previous example.  The advantage to this setup is that the table width is still manageable and first normal form has not been broken (although, there is no relation to the extended data).  If the extended field is being serialized and de-serialized into a POCO, then the customer will not be able to change field data on the fly, unless the POCO contains some clever setup, like an array of data fields that can be used at run-time (my example will show this ability).

Here is a sample POCO for the ProductXml table:

public class ProductXml
{
    public XElement XmlValueWrapper
    {
        get { return XElement.Parse(Extended); }
        set { Extended = value.ToString(); }
    }

    public int Id { get; set; }
    public int Store { get; set; }
    public string Name { get; set; } 
    public decimal? Price { get; set; }
    public string Extended { get; set; }

    public virtual Store StoreNavigation { get; set; }
}

The POCO for the xml data looks like this:

public class ExtendedXml
{
    public decimal SalePrice { get; set; }
    public bool OutOfStock { get; set; }
    public DataRecord ExtendedData = new DataRecord();
}

public class DataRecord
{
    public string Key { get; set; }
    public string Value { get; set; }
}

To make this work, you’ll need to tweak the EF configuration to look something like this:

public static class ProductXmlConfig
{
    public static void ProductMapping(this ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<ProductXml>(entity =>
        {
            entity.ToTable("ProductXml");

            entity.HasKey(e => e.Id);
            entity.Property(e => e.Id).HasColumnName("Id");
            entity.Property(e => e.Name).HasColumnType("varchar(50)");
            entity.Property(e => e.Price).HasColumnType("money");
            entity.Property(c => c.Extended).HasColumnType("xml");
            entity.Ignore(c => c.XmlValueWrapper);

            entity.HasOne(d => d.StoreNavigation)
                .WithMany(p => p.ProductXml)
                .HasForeignKey(d => d.Store)
                .OnDelete(DeleteBehavior.Restrict)
                .HasConstraintName("FK_store_product_xml");
        });
    }
}

Now the Linq code to insert a value could look like this:

using (var db = new DatabaseContext())
{
    var extendedXml = new ExtendedXml
    {
        SalePrice = 3.99m,
        OutOfStock = false,
        ExtendedData = new DataRecord
        {
            Key = "QuantityInWarehouse",
            Value = "5"
        }
    };

    var productXml = new ProductXml
    {
        Store = 1,
        Name = "Stuffed Animal",
        Price = 5.95m,
        Extended = extendedXml.Serialize()
    };

    db.ProductXmls.Add(productXml);
    db.SaveChanges();
}
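
The Serialize() call on the ExtendedXml object is an extension method that isn’t shown in this post.  Here’s a minimal sketch of what it could look like (the version actually used is in the GitHub project):

using System.IO;
using System.Xml;
using System.Xml.Serialization;

public static class ExtendedXmlSerializer
{
  // serializes the ExtendedXml POCO into the string stored in the Extended column
  public static string Serialize(this ExtendedXml extended)
  {
    var serializer = new XmlSerializer(typeof(ExtendedXml));
    var settings = new XmlWriterSettings { OmitXmlDeclaration = true };

    using (var stringWriter = new StringWriter())
    using (var writer = XmlWriter.Create(stringWriter, settings))
    {
      serializer.Serialize(writer, extended);
      return stringWriter.ToString();
    }
  }
}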

The results of executing the Linq code from above will produce a new record like this:

If you click on the XML link in SQL, you should see something like this:

As you can see, there are two hard-coded POCO fields for the sale price and out-of-stock values.  These are not customer-controlled fields.  These fields demonstrate that an enhancement could use the extended field to store new data without modifying the table.  The dictionary data contains one item called QuantityInWarehouse.  This is a customer-designed field; such fields can be added through a data entry screen and perhaps a meta data table that contains the names of the extra data fields stored in the extended field.

XML allows flexible serialization, so if you add a field to the xml POCO, it will still de-serialize xml that does not contain that data (just make sure the POCO field is nullable).

To see a working example, go to my GitHub account (see end of this article) and download the sample code.

You can use the following SQL query to extract data from the xml:

SELECT
    Extended.value('(/ExtendedXml/SalePrice)[1]', 'nvarchar(max)') as 'SalePrice',
    Extended.value('(/ExtendedXml/OutOfStock)[1]', 'nvarchar(max)') as 'OutOfStock', 
    Extended.value('(/ExtendedXml/ExtendedData/Key)[1]', 'nvarchar(max)') as 'Key',
    Extended.value('(/ExtendedXml/ExtendedData/Value)[1]', 'nvarchar(max)') as 'Value'
FROM 
    ProductXml

The query above should produce the following output:

Here are some disadvantages to using this technique:

  • It is difficult to query one extended field
  • Loading data is slow because the entire field of XML is loaded

Using Extended Table

This is the preferred example of designing an extended field system.  With this technique, data of any type can be added to any table without adding fields to the existing tables.  This technique does not break first normal form and forming a query is easy and powerful.  The idea behind this technique is to create two tables: The first table contains metadata describing the table and field to extend and the second table contains the actual value of the data stored.  Here’s an example of MetaDataDictionary table:

Here’s an example of the ExtendedData table:


A custom query can be formed to output all of the extended data for each record.  Here’s an example for the above data for the product table:

SELECT
	p.*,
	(SELECT e.value FROM ExtendedData e WHERE e.RecordId = p.Id AND e.MetaDataDictionaryId=1) AS 'SalePrice',
	(SELECT e.value FROM ExtendedData e WHERE e.RecordId = p.Id AND e.MetaDataDictionaryId=2) AS 'SoldOut'
FROM	
	Product p

This will produce:

To obtain data from one extended field, a simple query can be formed to look up the value.  This leads to another bonus: Entity Framework and Linq can be used to query data that is organized in this fashion.  Why is this so important?  Because the use of EF and Linq allows all of the business logic to reside in code, where it is executed by the front-end and can be unit tested.  If a significant amount of code lives in a stored procedure, that code cannot be unit tested.
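
As a sketch of such a Linq query (assuming ExtendedData and MetaDataDictionary POCOs that mirror the tables above), pulling the sale price for every product could look like this:

// dictionary id 1 = SalePrice, matching the sample SQL query above
var salePrices = from p in db.Products
                 join e in db.ExtendedData on p.Id equals e.RecordId
                 where e.MetaDataDictionaryId == 1
                 select new { p.Name, SalePrice = e.Value };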

I’m going to list a few advantages of this method over the previous two methods:

  • Your implementation can have any number of extended fields
  • Any table in the system can be extended without modification to the database
  • Forming a query to grab one extended field value is simple

One thing to note about this method is that I’m storing the value in a varchar field.  You can change the size to accommodate any data stored.  You will need to perform some sort of data translation between the varchar and the actual data type you expect to store.  For example: if you are storing a date data type, then you’ll want some type checking when converting from varchar to the expected date (see the sketch below).  The conversion might occur at the Linq level, or you might do it with triggers on the extended value table (though I would avoid such a design, since it will probably chew up a lot of SQL CPU resources).
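
For example, a date conversion guarded with TryParse might look like this (a sketch; what you do with a rejected value is up to your design):

// the extended value arrives as a varchar/string
if (DateTime.TryParse(extendedValue, out var expirationDate))
{
  // use the strongly-typed date
}
else
{
  // reject or log the bad value rather than letting a hard cast throw
}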

Where to Get the Code

You can find the sample code used by the xml extended field example at my GitHub account (click here).  The project contains a file named “SqlServerScripts.sql”.  You can copy this script to your local SQL server to generate the tables in a demo database and populate the data used by this blog post (saving you a lot of time).


Unit Testing EF Data With Moq

Introduction

I’ve discussed using the in-memory Entity Framework unit tests in a previous post (here).  In this post, I’m going to demonstrate a simple way to use Moq to unit test a method that uses Entity Framework Core.

Setup

For this sample, I used the POCOs, context and config files from this project (here).  You can copy the cs files from that project, or you can just download the sample project from GitHub at the end of this article.

You’ll need several parts to make your unit tests work:

  1. IOC container – Not in this post
  2. List object to DbSet Moq method
  3. Test data
  4. Context Interface

I found a method on stack overflow (here) that I use everywhere.  I created a unit test helper static object and placed it in my unit test project:

public class UnitTestHelpers
{
  public static DbSet<T> GetQueryableMockDbSet<T>(List<T> sourceList) where T : class
  {
    var queryable = sourceList.AsQueryable();

    var dbSet = new Mock<DbSet<T>>();
    dbSet.As<IQueryable<T>>().Setup(m => m.Provider).Returns(queryable.Provider);
    dbSet.As<IQueryable<T>>().Setup(m => m.Expression).Returns(queryable.Expression);
    dbSet.As<IQueryable<T>>().Setup(m => m.ElementType).Returns(queryable.ElementType);
    dbSet.As<IQueryable<T>>().Setup(m => m.GetEnumerator()).Returns(() => queryable.GetEnumerator());
    dbSet.Setup(d => d.Add(It.IsAny<T>())).Callback<T>(sourceList.Add);

    return dbSet.Object;
  }
}

The next piece is the pretend data that you will use to test your method.  You’ll want to keep this as simple as possible.  In my implementation, I allow for multiple data sets.

public static class ProductTestData
{
  public static List<Product> Get(int dataSetNumber)
  {
    switch (dataSetNumber)
    {
      case 1:
      return new List<Product>
      {
        new Product
        {
          Id=0,
          Store = 1,
          Name = "Cheese",
          Price = 2.5m
        },
        ...

      };
    }
    return null;
  }
}

Now you can setup a unit test and use Moq to create a mock up of your data and then call your method under test.  First, let’s take a look at the method and see what we want to test:

public class ProductList
{
  private readonly IDatabaseContext _databaseContext;

  public ProductList(IDatabaseContext databaseContext)
  {
    _databaseContext = databaseContext;
  }

  public List<Product> GetTopTen()
  {
    var result = (from p in _databaseContext.Products select p).Take(10).ToList();

    return result;
  }
}

The ProductList class will be setup from an IOC container.  It has a dependency on the databaseContext object.  That object will be injected by the IOC container using the class constructor.  In my sample code, I set up the class for this standard pattern.  For unit testing purposes, we don’t need the IOC container, we’ll just inject our mocked up context into the class when we create an instance of the object.

Let’s mock the context:

[Fact]
public void TopTenProductList()
{
  var demoDataContext = new Mock<IDatabaseContext>();

}

As you can see, Moq uses interfaces to create a mocked object.  This is the only line of code we need for the context mocking.  Next, we’ll mock some data.  We’re going to tell Moq to return data set 1 if the Products getter is called:

[Fact]
public void TopTenProductList()
{
  var demoDataContext = new Mock<IDatabaseContext>();
  demoDataContext.Setup(x => x.Products).Returns(UnitTestHelpers.GetQueryableMockDbSet(ProductTestData.Get(1)));

}

I’m using the GetQueryableMockDbSet unit test helper method to convert my list into the required DbSet object.  Any time the method tries to read Products from the context, data set 1 will be returned.  This data set contains 12 items.  As you can see from the method under test, only ten items should be returned.  Let’s add the method under test setup:

[Fact]
public void TopTenProductList()
{
  var demoDataContext = new Mock<IDatabaseContext>();
  demoDataContext.Setup(x => x.Products).Returns(UnitTestHelpers.GetQueryableMockDbSet(ProductTestData.Get(1)));

  var productList = new ProductList(demoDataContext.Object);

  var result = productList.GetTopTen();
  Assert.Equal(10,result.Count);
}

The object under test is very basic: just create an instance and pass in the mocked context (you have to use .Object to get the mocked object).  Next, call the method to test.  Finally, perform an assert to conclude your unit test.  If the GetTopTen() method returns a count that is not ten, then there is an issue (for this data set).  Now we should test an empty set.  Add this to the test data switch statement:

case 2:
  return new List<Product>
  {
  };

Now the unit test:

[Fact]
public void TopTenProductListEmpty()
{
  var demoDataContext = new Mock<IDatabaseContext>();
  demoDataContext.Setup(x => x.Products).Returns(UnitTestHelpers.GetQueryableMockDbSet(ProductTestData.Get(2)));

  var productList = new ProductList(demoDataContext.Object);

  var result = productList.GetTopTen();
  Assert.Empty(result);
}

All the work has been done to set up the static test data object, so I only had to add one case to it.  Then the unit test is nearly identical to the previous one, except that it has a unique name (two methods in the same class can’t share a name) and ProductTestData.Get() takes a parameter of 2 instead of 1, representing the data set number.  Finally, I changed the assert to test for an empty set instead of ten items.  Execute the tests:

Now you can continue to add unit tests to test for different scenarios.
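
For instance, since the helper wires DbSet.Add back to the source list, you could also verify that code which adds a product actually lands in the list.  Here’s a sketch using the empty data set:

[Fact]
public void AddProductAppendsToSourceList()
{
  var data = ProductTestData.Get(2);  // the empty data set
  var demoDataContext = new Mock<IDatabaseContext>();
  demoDataContext.Setup(x => x.Products).Returns(UnitTestHelpers.GetQueryableMockDbSet(data));

  demoDataContext.Object.Products.Add(new Product { Id = 0, Store = 1, Name = "Milk", Price = 1.99m });

  Assert.Single(data);
}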

Where to Get the Code

You can go to my GitHub account and download the sample code (click here).  If you would like to create the sample tables to make this program work (you’ll need to add your own console app to call the GetTopTen() method), you can use the following MS SQL Server script:

CREATE TABLE [dbo].[Store](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[Name] [varchar](50) NULL,
	[Address] [varchar](50) NULL,
	[State] [varchar](50) NULL,
	[Zip] [varchar](50) NULL,
 CONSTRAINT [PK_Store] PRIMARY KEY CLUSTERED 
(
	[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[Product](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[Store] [int] NOT NULL,
	[Name] [varchar](50) NULL,
	[Price] [money] NULL,
 CONSTRAINT [PK_Product] PRIMARY KEY NONCLUSTERED 
(
	[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

SET ANSI_PADDING OFF
GO

ALTER TABLE [dbo].[Product]  WITH CHECK ADD  CONSTRAINT [FK_store_product] FOREIGN KEY([Store])
REFERENCES [dbo].[Store] ([Id])
GO

ALTER TABLE [dbo].[Product] CHECK CONSTRAINT [FK_store_product]
GO

Creating POCOs in .Net Core 2.0

Summary

I’ve shown how to generate POCOs (Plain Old C# Objects) using the scaffold tool for .Net Core 1 in an earlier post.  Now I’m going to show how to do it in Visual Studio 2017 with Core 2.0.

Install NuGet Packages

First, you’ll need to install the right NuGet packages.  I prefer to use the command line because I’ve been doing this so long that my fingers type the commands without me thinking about it.  If you’re not comfortable with the command line NuGet window, you can use the NuGet Package Manager window for the project you want to create your POCOs in.  If you want, you can copy the commands here and paste them into the NuGet Package Manager Console window.  Follow these instructions:

  1. Create a .Net Core 2.0 library project in Visual Studio 2017.
  2. Type or copy and paste the following NuGet commands into the Nuget Package Manager Console window:
install-package Microsoft.EntityFrameworkCore.SqlServer
install-package Microsoft.EntityFrameworkCore.Tools
install-package Microsoft.EntityFrameworkCore.Tools.DotNet

If you open up your NuGet Dependencies treeview, you should see the following:

Execute the Scaffold Command

In the same package manager console window use the following command to generate your POCOs:

Scaffold-DbContext "Data Source=YOURSQLINSTANCE;Initial Catalog=DATABASENAME;Integrated Security=True" Microsoft.EntityFrameworkCore.SqlServer -OutputDir POCODirectory

You’ll need to update the data source and initial catalog to point to your database.  If the command executes without error, you’ll see a directory named “POCODirectory” that contains a .cs file for each table in the database you just scaffolded.  There will also be a context file that contains all the model builder entity mappings.  You can use this file as-is, or you can split the mappings into individual files.

My process consists of generating these files in a temporary project, followed by copying each table POCO that I want to use in my project.  Then I copy the model builder mappings for each table that I use in my project.

What This Does not Cover

Any views, stored procedures or functions that you want to access with Entity Framework will not show up with this tool.  You’ll still need to create the result POCOs for views, stored procedures and functions by hand (or find a custom tool).  Not that I recommend using EF with stored procedures, but anyone who has to deal with legacy code and a legacy database will run into a situation where they need to interface with an existing stored procedure.
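
If you do end up calling one, EF Core can map a stored procedure’s result onto an existing entity with FromSql.  A sketch (the procedure name is hypothetical, and its result columns must match the Product POCO):

// requires: using Microsoft.EntityFrameworkCore;
var expensiveProducts = db.Products
  .FromSql("EXECUTE dbo.GetExpensiveProducts @MinPrice = {0}", 10m)
  .ToList();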


XML Serialization

Summary

In this post I’m going to demonstrate the proper way to serialize XML and setup unit tests using xUnit and .Net Core.  I will also be using Visual Studio 2017.

Generating XML

JSON is rapidly taking over as the data encoding standard of choice.  Unfortunately, government agencies are decades behind the technology curve, and XML is going to be around for a long time to come.  One of the largest industries still using XML for a majority of their data transfer encoding is the medical industry.  Documents required by Meaningful Use are mostly encoded in XML.  I’m not going to jump into the gory details of generating a CCD.  Instead, I’m going to keep this really simple.

First, I’m going to show a method of generating XML that I’ve seen many times.  Usually coded by a programmer with little or no formal education in Computer Science.  Sometimes programmers just take a short-cut because it appears to be the simplest way to get the product out the door.  So I’ll show the technique and then I’ll explain why it turns out that this is a very poor way of designing an XML generator.

Let’s say, for instance, we wanted to generate XML representing a house.  First, we’ll define the house as a record that can contain square footage.  That will be the only data point assigned to the house record (I mentioned this was going to be simple, right?).  Inside of the house record will be a list of walls and a list of roofs (assume a house could have two or more roofs, like a tri-level configuration).  Next, I’m going to give each wall a list of windows.  The window block will have a “Type” that is a free-form string input, and the roof block will also have a “Type” that is a free-form string.  That is the whole definition.

public class House
{
  public List<Wall> Walls = new List<Wall>();
  public List<Roof> Roofs = new List<Roof>();
  public int Size { get; set; }
}

public class Wall
{
  public List<Window> Windows { get; set; }
}

public class Window
{
  public string Type { get; set; }
}

public class Roof
{
  public string Type { get; set; }
}

The “easy” way to create XML from this is to use the StringBuilder and just build XML tags around the data in your structure.  Here’s a sample of the possible code that a programmer might use:

public class House
{
  public List<Wall> Walls = new List<Wall>();
  public List<Roof> Roofs = new List<Roof>();
  public int Size { get; set; }

  public string Serialize()
  {
    var @out = new StringBuilder();

    @out.Append("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
    @out.Append("<House xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">");

    foreach (var wall in Walls)
    {
      wall.Serialize(ref @out);
    }

    foreach (var roof in Roofs)
    {
      roof.Serialize(ref @out);
    }

    @out.Append("<size>");
    @out.Append(Size);
    @out.Append("</size>");

    @out.Append("</House>");

    return @out.ToString();
  }
}

public class Wall
{
  public List<Window> Windows { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    if (Windows == null || Windows.Count == 0)
    {
      @out.Append("<wall />");
      return;
    }

    @out.Append("<wall>");
    foreach (var window in Windows)
    {
      window.Serialize(ref @out);
    }
    @out.Append("</wall>");
  }
}

public class Window
{
  public string Type { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    @out.Append("<window>");
    @out.Append("<Type>");
    @out.Append(Type);
    @out.Append("</Type>");
    @out.Append("</window>");
  }
}

public class Roof
{
  public string Type { get; set; }

  public void Serialize(ref StringBuilder @out)
  {
    @out.Append("<roof>");
    @out.Append("<Type>");
    @out.Append(Type);
    @out.Append("</Type>");
    @out.Append("</roof>");
  }
}

The example I’ve given is a rather clean one; I have seen XML generated with much uglier code.  This is the manual method of serializing XML.  One obvious weakness is that the output produced is a single line of XML, which is not human-readable.  In order to produce human-readable XML output with an on/off switch, extra logic would need to be incorporated to append newlines and add tabs for indents.  Another problem with this method is that it contains a lot of unnecessary code.  One typo and the XML is incorrect.  Future editing is hazardous because tags might not match up if code is inserted in the middle and care is not taken to test such conditions.  Unit testing something like this is an absolute must.

The proper method is to use the XML serializer.  To produce the correct output, it is sometimes necessary to add attributes to the properties of the objects being serialized.  Here is the object definition that produces the same output:

public class House
{
  [XmlElement(ElementName = "wall")]
  public List<Wall> Walls = new List<Wall>();

  [XmlElement(ElementName = "roof")]
  public List<Roof> Roofs = new List<Roof>();

  [XmlElement(ElementName = "size")]
  public int Size { get; set; }
}

public class Wall
{
  [XmlElement(ElementName = "window")]
  public List<Window> Windows { get; set; }

  public bool ShouldSerializeWindows()
  {
    return Windows != null;
  }
}

public class Window
{
  public string Type { get; set; }
}

public class Roof
{
  public string Type { get; set; }
}

In order to serialize the above objects into XML, you use the XmlSerializer object:

public static class CreateXMLData
{
  public static string Serialize(this House house)
  {
    var xmlSerializer = new XmlSerializer(typeof(House));

    var settings = new XmlWriterSettings
    {
      NewLineHandling = NewLineHandling.Entitize,
      IndentChars = "\t",
      Indent = true
    };

    using (var stringWriter = new Utf8StringWriter())
    {
      // Dispose the XmlWriter before reading the result so its buffer is flushed.
      using (var writer = XmlWriter.Create(stringWriter, settings))
      {
        xmlSerializer.Serialize(writer, house);
      }

      return stringWriter.GetStringBuilder().ToString();
    }
  }
}

You’ll also need to create a Utf8StringWriter class.  The base StringWriter reports UTF-16 as its encoding, which would put “utf-16” in the XML declaration, so the Encoding property is overridden to report UTF-8:

public class Utf8StringWriter : StringWriter
{
  public override Encoding Encoding
  {
    get { return Encoding.UTF8; }
  }
}

Unit Testing

I would recommend unit testing each section of your XML.  Test with sections empty as well as containing one or more items.  You want to make sure you capture instances of null lists or empty items that should not generate XML output.  If there are any special attributes, make sure that the XML generated matches the specification.  For my unit testing, I stripped newlines and tabs to compare with a sample XML file that is stored in my unit test project.  As a first attempt, I created a helper for my unit tests:

public static class XmlResultCompare
{
  public static string ReadExpectedXml(string expectedDataFile)
  {
    var assembly = Assembly.GetExecutingAssembly();
    using (var stream = assembly.GetManifestResourceStream(expectedDataFile))
    {
      using (var reader = new StreamReader(stream))
      {
        return reader.ReadToEnd().RemoveWhiteSpace();
      }
    }
  }

  public static string RemoveWhiteSpace(this string s)
  {
    s = s.Replace("\t", "");
    s = s.Replace("\r", "");
    s = s.Replace("\n", "");
    return s;
  }
}

If you look carefully, I’m compiling my XML test data right into the unit test dll.  Why am I doing that?  The company I work for, like most serious companies, uses continuous integration tools such as a build server.  The problem with a build server is that your test files might not land in the same directory location on the build server as they do on your PC.  To ensure that the test files are always available, compile them into the dll and reference them from the namespace using Assembly.GetExecutingAssembly().  To make this work, you’ll have to mark your xml test files as an Embedded Resource (click on the xml file and change the Build Action property to Embedded Resource).  To access the files, which are contained in a virtual directory called “TestData”, you’ll need to use the namespace, the virtual directory and the full file name:

XMLCreatorTests.TestData.XMLHouseOneWallOneWindow.xml

Now for a sample unit test:

[Fact]
public void TestOneWallNoWindow()
{
  // one wall, no windows
  var house = new House { Size = 2000 };
  house.Walls.Add(new Wall());

  Assert.Equal(XmlResultCompare.ReadExpectedXml("XMLCreatorTests.TestData.XMLHouseOneWallNoWindow.xml"), house.Serialize().RemoveWhiteSpace());
}

Notice how I filled in the house object with the size and added one wall.  The ReadExpectedXml() method removes whitespace automatically, so it’s important to strip it from the serialized version of house as well so the two strings match.
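
For reference, the embedded XMLHouseOneWallNoWindow.xml file would contain something like the following (my reconstruction based on the serializer settings above, so treat it as approximate):

<?xml version="1.0" encoding="utf-8"?>
<House xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
	<wall />
	<size>2000</size>
</House>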

Where to Get the Code

As always, you can go to my GitHub account and download the sample application (click here).  I would recommend downloading the application and modifying it as a test to see how all the pieces work.  Add a unit test to see if you can match your expected XML with what the XML serializer produces.


Mocking Your File System

Introduction

In this post, I’m going to talk about basic dependency injection and mocking a method that is used to access hardware.  The method I’ll be mocking is System.IO.Directory.Exists().

Mocking Methods

One of the biggest headaches with unit testing is that you have to make sure you mock any objects that your method under test is calling.  Otherwise your test results could be dependent on something you’re not really testing.  As an example for this blog post, I will show how to apply unit tests to this very simple program:

class Program
{
    static void Main(string[] args)
    {
        var myObject = new MyClass();
        Console.WriteLine(myObject.MyMethod());
        Console.ReadKey();
    }
}

The object that is used above is:

public class MyClass
{
    public int MyMethod()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

Now, we want to create two unit tests to cover all the code in the MyMethod() method.  Here’s an attempt at one unit test:

[TestMethod]
public void test_temp_directory_exists()
{
    var myObject = new MyClass();
    Assert.AreEqual(3, myObject.MyMethod());
}

The problem with this unit test is that it will pass if your computer contains the c:\temp directory.  If your computer doesn’t contain c:\temp, then it will always fail.  If you’re using a continuous integration environment, you can’t control whether the directory exists.  To compound the problem, you really need to test both possibilities to get full test coverage of your method.  Adding a unit test for the case where c:\temp does not exist would guarantee that one test passes and the other fails.

The newcomer to unit testing might think: “I could just add code to my unit tests to create or delete that directory before the test runs!”  Except, that would be a unit test that modifies your machine.  The behavior would destroy anything you have in your c:\temp directory if you happen to use that directory for something.  Unit tests should not modify anything outside the unit test itself.  A unit test should never modify database data.  A unit test should not modify files on your system.  You should avoid creating physical files if possible, even temp files because temp file usage will make your unit tests slower.

Unfortunately, you can’t just mock System.IO.Directory.Exists().  The way around this is to create a wrapper object, inject it into MyClass, and then use Moq to mock your wrapper object for unit testing only.  Your program will not change; it will still call MyClass as before.  Here’s the wrapper object and an interface to go with it:

public class FileSystem : IFileSystem
{
  public bool DirectoryExists(string directoryName)
  {
    return System.IO.Directory.Exists(directoryName);
  }
}

public interface IFileSystem
{
    bool DirectoryExists(string directoryName);
}

Your next step is to provide an injection point into your existing class (MyClass).  You can do this by creating two constructors, the default constructor that initializes this object for use by your method and a constructor that expects a parameter of IFileSystem.  The constructor with the IFileSystem parameter will only be used by your unit test.  That is where you will pass along a mocked version of your filesystem object with known return values.  Here are the modifications to the MyClass object:

public class MyClass
{
    private readonly IFileSystem _fileSystem;

    public MyClass(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public MyClass()
    {
        _fileSystem = new FileSystem();
    }

    public int MyMethod()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}

This is the point where your program should operate as normal.  Notice how I did not need to modify the original call to MyClass that occurred at the “Main()” of the program.  The MyClass() default constructor will create a FileSystem wrapper instance and use that object instead of calling System.IO.Directory.Exists() directly.  The result will be the same.  The difference is that now you can create two unit tests with mocked versions of IFileSystem in order to test both possible outcomes of the existence of “c:\temp”.  Here is an example of the two unit tests:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(3, myObject.MyMethod());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(5, myObject.MyMethod());
}

Make sure you include the NuGet package for Moq.  You’ll notice that in the first unit test, we’re testing MyClass with a mocked up version of a system where “c:\temp” exists.  In the second unit test, the mock returns false for the directory exists check.

One thing to note: you must provide a matching input on x.DirectoryExists() in the mock setup.  If it doesn’t match what is used in the method, then you will not get the results you expect.  In this example, the directory being checked is hard-coded in the method and we know that it is “c:\temp”, so that’s how I mocked it.  If a parameter is passed into the method, then you can mock some test value and pass that same value into your method so they match (the actual test value doesn’t matter for the unit test, only the results), as sketched below.
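
As a hypothetical illustration (this variation is not in the sample code), suppose MyMethod() took the directory name as a parameter.  The mock setup and the call just need to agree on the value:

public int MyMethod(string directoryName)
{
    return _fileSystem.DirectoryExists(directoryName) ? 3 : 5;
}

[TestMethod]
public void test_directory_parameter()
{
    var mockFileSystem = new Mock<IFileSystem>();
    // The argument here must match the one passed to MyMethod() below.
    mockFileSystem.Setup(x => x.DirectoryExists("d:\\testdir")).Returns(true);

    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(3, myObject.MyMethod("d:\\testdir"));
}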

Using an IOC Container

This sample is setup to be extremely simple.  I’m assuming that you have existing .Net legacy code and you’re attempting to add unit tests to it.  Normally, legacy code is hopelessly un-unit testable.  In other words, it’s usually not worth the effort to apply unit tests because of the tightly coupled nature of legacy code.  There are situations where adding unit tests to legacy code is not too difficult.  This can occur if the code is relatively new and the developer(s) took some care in how they built it.  If you are building new code, you can use this same technique from the beginning, but you should also plan your entire project to use an IOC container.  I would not recommend refactoring an existing project to use an IOC container.  That is a level of madness that I have attempted more than once, with many man-hours wasted trying to figure out what was wrong with the scoping of my objects.

If your code is relatively new and you have refactored to use constructors as your injection points, you might be able to adapt to an IOC container.  If you are building your code from the ground up, you need to use an IOC container.  Do it now and save yourself the headache of trying to figure out how to inject objects three levels deep.  What am I talking about?  Here’s an example of a program that is tightly coupled:

class Program
{
    static void Main(string[] args)
    {
        var myRootClass = new MyRootClass();

        myRootClass.Increment();

        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}

public class MyRootClass
{
  readonly ChildClass _childClass = new ChildClass();

  public bool CountExceeded()
  {
    if (_childClass.TotalNumbers() > 5)
    {
        return true;
    }
    return false;
  }

  public void Increment()
  {
    _childClass.IncrementIfTempDirectoryExists();
  }
}

public class ChildClass
{
    private int _myNumber;

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}

The example code above is very typical legacy code.  The “Main()” calls the first object, “MyRootClass()”, which then calls a child class that uses System.IO.Directory.Exists().  You can use the previous technique to unit test ChildClass for the cases where c:\temp exists and where it doesn’t.  When you start to unit test MyRootClass, there’s a nasty surprise: how do you inject your directory wrapper into that class?  If you have to inject wrappers and mocked versions of every child class into its parent, class constructors could become incredibly large.  This is where IOC containers come to the rescue.

As I’ve explained in other blog posts, an IOC container is like a dictionary of your objects.  When you create your objects, you must create a matching interface for each object.  The index of the IOC dictionary is the interface name that represents your object.  Then you reference other objects using the interface as the data type and ask the IOC container for the instance stored in the dictionary.  I’m going to make up a simple IOC container object just for demonstration purposes.  Do not use this for your own code; use something like AutoFac for your IOC container.  This sample is just to show the concept of how it all works.  Here’s the container object:

public class IOCContainer
{
  private static readonly Dictionary<string,object> ClassList = new Dictionary<string, object>();
  private static IOCContainer _instance;

  public static IOCContainer Instance => _instance ?? (_instance = new IOCContainer());

  public void AddObject<T>(string interfaceName, T theObject)
  {
    ClassList.Add(interfaceName,theObject);
  }

  public object GetObject(string interfaceName)
  {
    return ClassList[interfaceName];
  }

  public void Clear()
  {
    ClassList.Clear();
  }
}

This object is a singleton (global object) so that it can be used by any object in your project/solution.  Basically, it’s a container that holds references to all of your object instances.  This is a very simple example, so I’m going to ignore scoping for now.  I’m going to assume that all your objects contain no special dependent initialization code.  In a real-world example, you’ll have to analyze what is initialized when your objects are created and determine how to set up the scoping in the IOC container.  AutoFac has options for when each object will be created.  This example creates all the objects before the program starts to execute.  There are many reasons why you might not want to create an object until it’s actually used.  Keep that in mind when you are looking at this simple example program.

In order to use the above container, we’ll need the same FileSystem object and interface from the previous program.  Then create an interface for MyRootClass and ChildClass.  Next, go through your program and find every location where an object is instantiated (look for the “new” keyword).  Replace those instances like this:

public class ChildClass : IChildClass
{
    private int _myNumber;
    private readonly IFileSystem _fileSystem = (IFileSystem)IOCContainer.Instance.GetObject("IFileSystem");

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}

Instead of creating a new instance of FileSystem, you ask the IOC container for the instance that was registered for the interface called IFileSystem.  Notice how there is no constructor injection in this object.  AutoFac and other IOC containers have facilities to perform constructor injection automatically.  I don’t want to introduce that level of complexity in this example, so for now I’ll just pretend that we need to go to the IOC container object directly for the main program as well as the unit tests.  You should be able to see the pattern from this example.
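
Following the same pattern, MyRootClass would look something like this (a sketch; it matches the unit tests below, which register a mocked IChildClass before constructing MyRootClass):

public class MyRootClass : IMyRootClass
{
  private readonly IChildClass _childClass = (IChildClass)IOCContainer.Instance.GetObject("IChildClass");

  public bool CountExceeded()
  {
    return _childClass.TotalNumbers() > 5;
  }

  public void Increment()
  {
    _childClass.IncrementIfTempDirectoryExists();
  }
}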

Once all your classes are updated to use the IOC container, you’ll need to change your “Main()” to setup the container.  I changed the Main() method like this:

static void Main(string[] args)
{
    ContainerSetup();

    var myRootClass = (IMyRootClass)IOCContainer.Instance.GetObject("IMyRootClass");
    myRootClass.Increment();

    Console.WriteLine(myRootClass.CountExceeded());
    Console.ReadKey();
}

private static void ContainerSetup()
{
    IOCContainer.Instance.AddObject<IChildClass>("IChildClass",new ChildClass());
    IOCContainer.Instance.AddObject<IMyRootClass>("IMyRootClass",new MyRootClass());
    IOCContainer.Instance.AddObject<IFileSystem>("IFileSystem", new FileSystem());
}

Technically the MyRootClass object does not need to be included in the IOC container since no other object is dependent on it.  I included it to demonstrate that all objects should be inserted into the IOC container and referenced from the instance in the container.  This is the design pattern used by IOC containers.  Now we can write the following unit tests:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);

    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);

    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);

    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(true,myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);

    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);

    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}

In these unit tests, we put the mocked up object used by the object under test into the IOC container.  I have provided a “Clear()” method to reset the IOC container for the next test.  When you use AutoFac or other IOC containers, you will not need the container object in your unit tests.  That’s because IOC containers like the one built into .Net Core and AutoFac use the constructor of the object to perform injection automatically.  That makes your unit tests easier because you just use the constructor to inject your mocked up object and test your object.  Your program uses the IOC container to magically inject the correct object according to the interface used by your constructor.

Using AutoFac

Take the previous example and create a new constructor for each class and pass the interface as a parameter into the object like this:

private readonly IFileSystem _fileSystem;

public ChildClass(IFileSystem fileSystem)
{
    _fileSystem = fileSystem;
}
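
MyRootClass needs the same treatment.  Here’s a sketch of its constructor, which lines up with the AutoFac registration shown below:

private readonly IChildClass _childClass;

public MyRootClass(IChildClass childClass)
{
    _childClass = childClass;
}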

Instead of asking the IOC container for the object that matches an interface, each class now expects its dependencies to be passed in as parameters to the class constructor.  Make this change for each class in your project.  Next, change your main program to include AutoFac (NuGet package) and refactor your IOC container setup to look like this:

static void Main(string[] args)
{
    IOCContainer.Setup();

    using (var myLifetime = IOCContainer.Container.BeginLifetimeScope())
    {
        var myRootClass = myLifetime.Resolve<IMyRootClass>();

        myRootClass.Increment();

        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}

public static class IOCContainer
{
    public static IContainer Container { get; set; }

    public static void Setup()
    {
        var builder = new ContainerBuilder();

        builder.Register(x => new FileSystem())
            .As<IFileSystem>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new ChildClass(x.Resolve<IFileSystem>()))
            .As<IChildClass>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new MyRootClass(x.Resolve<IChildClass>()))
            .As<IMyRootClass>()
            .PropertiesAutowired()
            .SingleInstance();

        Container = builder.Build();
    }
}

I have ordered the builder.Register commands from the innermost to the outermost object classes.  This is not really necessary, since nothing is resolved until an object is requested from the container.  In other words, you can define MyRootClass first, followed by FileSystem and ChildClass, or any order you want.  The Register command just stores your definition of which physical object will be represented by each interface and which dependencies it requires.

Now you can cleanup your unit tests to look like this:

[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);

    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);

    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);

    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(true, myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);

    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}

Do not include the AutoFac NuGet package in your unit test project.  It’s not needed.  Each object is isolated from all other objects.  You will still need to mock any injected objects, but the injection occurs at the constructor of each object.  All dependencies have been isolated so you can unit test with ease.

Where to Get the Code

As always, I have posted the sample code up on my GitHub account.  This project contains four different sample projects.  I would encourage you to download each sample and experiment/practice with them.  You can download the samples by following the links listed here:

  1. MockingFileSystem
  2. TightlyCoupledExample
  3. SimpleIOCContainer
  4. AutoFacIOCContainer

.Net MVC Project with AutoFac, SQL and Redis Cache

Summary

In this blog post I’m going to demonstrate a simple .Net MVC project that uses MS SQL server to access data.  Then I’m going to show how to use Redis caching to cache your results and reduce the amount of traffic hitting your database.  Finally, I’m going to show how to use the AutoFac IOC container to tie it all together and how you can leverage inversion of control to break dependencies and unit test your code.

AutoFac

The AutoFac IOC container can be added to any .Net project using the NuGet manager.  For this project I created an empty MVC project and added a class called AutofacBootstrapper to the App_Start directory.  The class contains one static method called Run() just to keep it simple.  This class contains the container builder setup described in AutoFac’s Quick Start instructions.
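
As a rough sketch (assuming the stock MvcApplication class from Global.asax and the Autofac.Integration.Mvc package), the Run() method looks something like this:

public static class AutofacBootstrapper
{
    public static void Run()
    {
        var builder = new ContainerBuilder();

        // Register every MVC controller in this assembly.
        builder.RegisterControllers(typeof(MvcApplication).Assembly);

        // Service registrations (shown later in this post) go here.

        var container = builder.Build();

        // Hand the container to MVC so controller dependencies resolve automatically.
        DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
    }
}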

Next, I added .Net library projects to my solution for the following purposes:

BusinessLogic – This will contain the business classes that will be unit tested.  All other projects will be nothing more than wire-up logic.

DAC – Data-tier Application.

RedisCaching – Redis backed caching service.

StoreTests – Unit testing library.

I’m going to intentionally keep this solution simple and not make an attempt to break dependencies between dlls.  If you want to break dependencies between modules or dlls, you should create another project to contain your interfaces.  For this blog post, I’m just going to use the IOC container to ensure that I don’t have any dependencies between objects so I can create unit tests.  I’m also going to make this simple by only providing one controller, one business logic method and one unit test.

Each .Net project will contain one or more objects and each object that will be referenced in the IOC container must use an interface.  So there will be the following interfaces:

IDatabaseContext – The Entity Framework database context object.

IRedisConnectionManager – The Redis connection manager provides a pooled connection to a Redis server.  I’ll describe how to install Redis for Windows so you can use this.

IRedisCache – This is the cache object that will allow the program to perform caching without getting into the ugly details of reading and writing to Redis.

ISalesProducts – This is the business class that will contain one method for our controller to call.

Redis Cache

In the sample solution there is a project called RedisCaching.  This contains two classes: RedisConnectionManager and RedisCache.  The connection manager object needs to be setup in the IOC container first.  It needs the Redis server IP address, which would normally be read from a config file.  In the sample code, I fed the IP address into the constructor at the IOC container registration stage.  The second part of the Redis caching is the actual cache object.  This uses the connection manager object and is setup in the IOC container next, using the previously registered connection manager as a parameter, like this:

builder.Register(c => new RedisConnectionManager("127.0.0.1"))
    .As<IRedisConnectionManager>()
    .PropertiesAutowired()
    .SingleInstance();

builder.Register(c => new RedisCache(c.Resolve<IRedisConnectionManager>()))
    .As<IRedisCache>()
    .PropertiesAutowired()
    .SingleInstance();

In order to use the cache, just wrap your query with syntax like this:

return _cache.Get("ProductList", 60, () =>
{
  return (from p in _db.Products select p.Name);
});

The code between the { and } is the normal EF LINQ query.  Its result must be returned by the anonymous function: () =>

The cache key name in the example above is “ProductList” and it will stay in the cache for 60 minutes.  The _cache.Get() method checks the cache first; if the data is there, it returns the data and moves on.  If the data is not in the cache, it calls the inner function, causing the EF query to be executed.  The result of the query is then saved to the cache server and returned.  This guarantees that any identical request within the next 60 minutes will be served directly from the cache.  If you dig into the Get() method code, you’ll notice that there are multiple try/catch blocks that handle the case where the Redis server is down.  In that situation, the inner query is executed and the result is returned anyway.  In production your system would run a bit slower and your database would work harder, but the system keeps running.
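
The Get() method in the sample project is the authoritative version, but a minimal sketch of the idea (assuming the connection manager exposes a StackExchange.Redis ConnectionMultiplexer through a Connection property, and Newtonsoft.Json handles serialization) looks like this:

public T Get<T>(string keyName, int expireMinutes, Func<T> queryFunction)
{
  try
  {
    var db = _connectionManager.Connection.GetDatabase();
    var cached = db.StringGet(keyName);
    if (cached.HasValue)
    {
      // Cache hit: skip the database entirely.
      return JsonConvert.DeserializeObject<T>(cached);
    }
  }
  catch
  {
    // Redis is down; fall through and run the query directly.
  }

  var result = queryFunction();

  try
  {
    var db = _connectionManager.Connection.GetDatabase();
    db.StringSet(keyName, JsonConvert.SerializeObject(result), TimeSpan.FromMinutes(expireMinutes));
  }
  catch
  {
    // A failed cache write is not fatal; the caller still gets the data.
  }

  return result;
}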

A precompiled version of Redis for Windows can be downloaded from here: Service-Stack Redis.  Download the files into a directory on your computer (I used C:\redis), then open a command window, navigate into that directory, and use the following command to setup a Windows service:

redis-server --service-install

Please notice that there are two “-” characters in front of the “service-install” instruction.  Once this is setup, Redis will start every time you start your PC.

The Data-tier

The DAC project contains the POCOs, the fluent configurations and the context object.  There is one interface for the context object and that’s for AutoFac’s use:

builder.Register(c => new DatabaseContext("Server=SQL_INSTANCE_NAME;Initial Catalog=DemoData;Integrated Security=True"))
    .As<IDatabaseContext>()
    .PropertiesAutowired()
    .InstancePerLifetimeScope();

The connection string should be read from a configuration file before being injected into the constructor shown above, but I’m going to keep this simple and leave out the configuration pieces.
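
A minimal sketch of that configuration step (assuming a connection string named “DemoData” in web.config) would look like this:

// Hypothetical: read the connection string from web.config instead of hard-coding it.
var connectionString = ConfigurationManager.ConnectionStrings["DemoData"].ConnectionString;

builder.Register(c => new DatabaseContext(connectionString))
    .As<IDatabaseContext>()
    .PropertiesAutowired()
    .InstancePerLifetimeScope();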

Business Logic

The business logic library is just one project that contains all the complex classes and methods that will be called by the API.  In a large application you might have two or more business logic projects.  Typically though, you’ll divide your application into independent APIs, each with its own business logic project as well as all the other wire-up projects shown in this example.  By dividing your application by function you’ll be able to scale your services according to which function uses the most resources.  In summary, you’ll put all the complicated code inside this project, and your goal is to apply unit tests to cover all combinations of features that this business logic project contains.

This project will be wired up by AutoFac as well and it needs the caching and the data tier to be established first:

builder.Register(c => new SalesProducts(c.Resolve<IDatabaseContext>(), c.Resolve<IRedisCache>()))
    .As<ISalesProducts>()
    .PropertiesAutowired()
    .InstancePerLifetimeScope();

As you can see, the database context and the Redis cache are injected into the constructor of the SalesProducts class.  Typically, each class in your business logic project will be registered with AutoFac.  That ensures that you can treat each object independently for unit testing purposes.

Unit Tests

There is one sample unit test that performs a test on the SalesProducts.Top10ProductNames() method.  This test only covers the case where there are more than 10 products and the expected count is 10.  For effective testing, you should also test fewer than 10, zero, and exactly 10.  The database context is mocked using Moq.  The Redis caching system is faked using the interfaces supplied by StackExchange.  I chose to setup a dictionary inside the fake object to simulate a cached data point (a sketch of the idea is shown below).  There is no check for cache expiration; this is only used to fake out the caching.  Technically, I could have mocked the caching and just made it return whatever went into it.  The fake cache can be effective in testing edit scenarios to ensure that the cache is cleared when someone adds, deletes or edits a value.  The business logic should handle cache clearing and a unit test should check for this case.
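
Here’s the shape of such a fake (a sketch; the Get() signature is assumed to match the cache usage shown earlier):

public class FakeRedisCache : IRedisCache
{
    // Values live in a plain dictionary; expiration is intentionally ignored.
    private readonly Dictionary<string, object> _cache = new Dictionary<string, object>();

    public T Get<T>(string keyName, int expireMinutes, Func<T> queryFunction)
    {
        if (_cache.ContainsKey(keyName))
        {
            return (T)_cache[keyName];
        }

        var result = queryFunction();
        _cache.Add(keyName, result);
        return result;
    }
}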

Other Tests

You can test whether the real Redis cache is working by starting up SQL Server Management Studio and running the SQL Server Profiler.  Clear the profiler and start the MVC application.  You should see some query activity in the profiler.

Then stop the MVC program and start it again.  There should be no change to the profiler because the data is coming out of the cache.

One thing to note: you cannot use IQueryable as a return type for your query.  It must be a list, because the data read from Redis is in JSON format and it’s de-serialized all at once.  You can serialize and de-serialize a List() object without any trouble.  I would recommend adding a logger to the cache object to catch errors like this (since there are try/catch blocks).
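
In other words, the earlier caching example needs a .ToList() on the query before the result can be cached:

return _cache.Get("ProductList", 60, () =>
{
  // Materialize the query; an IQueryable cannot be round-tripped through Redis as JSON.
  return (from p in _db.Products select p.Name).ToList();
});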

Another aspect of using an IOC container that you need to be conscious of is scope.  This comes into play when you deploy your application to a production environment.  Typically, developers do not have the ability to easily test multi-user situations, so an object with a scope that lives too long can cause cross-over data between users.  If, for instance, you gave your business logic a scope of SingleInstance() but required the list to be specific to each user accessing your system, then every user would end up with the data of the first person who accessed the API.  This can also happen if your API receives an ID for its data on each call: if the object only reads the data when the API first starts up, you’ll have a problem.  This sample is so simple that it only contains one segment of data (top 10 products).  It doesn’t matter who calls the API; they are all requesting the same data.
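
If each caller did need its own data, a shorter lifetime would be the fix.  With the Autofac MVC integration, you could register the business class per request instead (a hypothetical change, not what this sample does):

builder.Register(c => new SalesProducts(c.Resolve<IDatabaseContext>(), c.Resolve<IRedisCache>()))
    .As<ISalesProducts>()
    .PropertiesAutowired()
    .InstancePerRequest();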

Other Considerations

This project is very minimalist, therefore, the solution does not cover a lot of real-world scenarios.

  • You should isolate your interfaces by creating a project just for all the interface classes.  This will break dependencies between modules or dlls in your system.
  • As I mentioned earlier, you will need to move all your configuration settings into the web.config file (or a corresponding config.json file).
  • You should think in terms of two or more instances of this API running at once (behind a load-balancer).  Will there be data contention?
  • Make sure you check for any memory leaks.  IOC containers can make your code logic less obvious.
  • Be careful of initialization code in an object that is started by an IOC container.  Your initialization might occur when you least expect it to.

Where to Get The Code

You can download the entire solution from my GitHub account by clicking here.  You’ll need to change the database instance in the code and you’ll need to setup a Redis server in order to use the caching feature.  A SQL Server script is provided so you can create a blank test database for this project.


DotNet Core vs. NHibernate vs. Dapper Smackdown!

The Contenders

Dapper

Dapper is a hybrid ORM.  This is a great ORM for those who have a lot of ADO legacy code to convert.  Dapper uses SQL queries and parameters can be used just like ADO, but the parameters to a query can be simplified into POCOs.  Select queries in Dapper can also be translated into POCOs.  Converting legacy code can be accomplished in steps, because the initial pass of conversion from ADO is to add Dapper, followed by a step to add POCOs, then to change queries into LINQ (if desired).  The speed difference in my tests shows that Dapper is better than my implementation of ADO for select queries, but slower for inserts and updates.  I would expect ADO to perform the best, but there is probably a performance penalty for using the data set adapter instead of the straight SqlCommand method.
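
As a taste of the style (a sketch with a hypothetical Product table and connection string, not code from the test projects):

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Quantity { get; set; }
}

public static List<Product> GetProducts(int storeId)
{
    using (var connection = new SqlConnection("Server=.;Initial Catalog=DemoData;Integrated Security=True"))
    {
        // The anonymous object supplies @StoreId; result rows map straight onto the POCO.
        return connection.Query<Product>(
            "SELECT Id, Name, Quantity FROM Product WHERE Store = @StoreId",
            new { StoreId = storeId }).ToList();
    }
}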

If you’re interested in Dapper, you can find information here: Stack Exchange/Dapper.  Dapper has a NuGet package, which is how I added it to my sample program.

ADO

I rarely use ADO these days, with the exception of legacy code maintenance or if I need to perform some sort of bulk insert operation for a back-end system.  Most of my projects are done in Entity Framework, using the .Net Core or the .Net version.  This comparison doesn’t feel complete without including ADO, even though my smackdown series is about ORM comparisons.  So I assembled a .Net console application with some ADO objects and ran a speed test with the same data as all the ORM tests.

NHibernate

NHibernate is the .Net version of Hibernate.  This is an ORM that I used at a previous company I worked for.  At the time, it was faster than Entity Framework 6 by a large amount.  The .Net Core version of Entity Framework has fixed the performance issues of EF and it no longer makes sense to use NHibernate.  I am providing the numbers in this test just for comparison purposes.  NHibernate is still faster than ADO and Dapper for everything except the select.  Both EF-7 and NHibernate are so close in performance that I would have to conclude that they are the same.  The version of NHibernate used for this test is the latest version as of this post (version 4.1.1 with fluent 2.0.3).

Entity Framework 7 for .Net Core

I have updated the NuGet packages for .Net Core for this project and re-tested the code to make sure the performance has not changed over time.  The last time I did a smackdown with EF .Net Core I was using .Net Core version 1.0.0; now I’m using .Net Core 1.1.1.  There were no measurable changes in performance for EF .Net Core.

The Results

Here are the results side-by-side with the .ToList() method helper and without:

Test for Yourself!

First, you can download the .Net Core version by going to my GitHub account here and downloading the source.  There is a SQL script file in the source that you can run against your local MS SQL server to setup a blank database with the correct tables.  The NHibernate speed test code can also be downloaded from my GitHub account by clicking here. The ADO version is here.  Finally, the Dapper code is here.  You’ll want to open the code and change the database server name.