Dot Net Core In Memory Unit Testing Using xUnit

When I started using .Net Core and xUnit, I found it difficult to find information on how to mock or fake the Entity Framework database code.  So I’m going to show a minimal code sample using xUnit, Entity Framework, and an in-memory database with .Net Core.  I’m only going to set up two projects: DataSource and UnitTests.

The DataSource project contains the repository, domain, and context objects necessary to connect to a database using Entity Framework.  Normally you would not unit test this project.  It is supposed to be set up as a group of pass-through objects and interfaces.  I’ll set up POCOs (Plain Old C# Objects) and their entity mappings to show how to keep your code as clean as possible.  There should be no business logic in this entire project.  In your solution, you should create one or more business projects to contain the actual logic of your program.  Those projects will contain the objects under unit test.

The UnitTests project speaks for itself.  It will contain the in-memory Entity Framework fake code with some test data and a sample of two unit tests.  Why two tests?  Because it’s easy to create a demonstration with one unit test.  Two tests will be used to demonstrate how to ensure that your test data initializer doesn’t accidentally get called twice (causing twice as much data to be created).

The POCO

I’ve written about Entity Framework before, and usually I’ll use data annotations, but POCOs are much cleaner.  If you look at some of my blog posts about NHibernate, you’ll see the POCO technique used.  Using POCOs means that you’ll also need to set up a separate class of mappings for each table.  This keeps your code separated into logical parts.  For my sample, I’ll put the mappings into the Repository folder and call them TablenameConfig.  Each mapping class will be a static class so that I can use an extension method to apply the mappings.  I’m getting ahead of myself, so let’s start with the POCO:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}

That’s it.  If you have the database defined, you can use a mapping or POCO generator to create this code and just paste each table into its own C# source file.  All the POCO objects are in the Domain folder (there’s only one here, and that’s the Product table POCO).

The Mappings

The mappings file looks like this:

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource.Repository
{
    public static class ProductConfig
    {
        public static void AddProduct(this ModelBuilder modelBuilder, string schema)
        {
            modelBuilder.Entity<Product>(entity =>
            {
                entity.ToTable("Product", schema);

                entity.HasKey(p => p.Id);

                entity.Property(e => e.Name)
                    .HasColumnName("Name")
                    .IsRequired(false);

                entity.Property(e => e.Price)
                    .HasColumnName("Price")
                    .IsRequired(false);
            });
        }
    }
}

That is the whole file, so now you know what to include in your usings.  This class defines an extension method on the ModelBuilder object.  Basically, it’s called like this:

modelBuilder.AddProduct("dbo");

I passed the schema as a parameter.  If you are only using the dbo schema, then you can just remove the parameter and hard-code “dbo” inside the ToTable() method.  You can and should expand your mappings to include relational integrity constraints.  The purpose of mirroring your database constraints in Entity Framework is to give you a heads-up at compile time if you are violating a constraint on the database when you write your LINQ queries.  In the “good ol’ days,” when accessing a database from code meant building a string to pass directly to MS SQL Server (remember ADO?), you didn’t know if you would break a constraint until run time.  That makes testing more difficult, since you have to be aware of what constraints exist while you’re focused on creating your business code.  By creating each table as a POCO and a set of mappings, you can focus on getting your database code right first.  Then, when you are focused on your business code, you can ignore constraints, because they won’t ignore you!
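
As an example of what an expanded mapping might look like, here’s a sketch of a relationship constraint.  It assumes a hypothetical Order POCO (with Id, ProductId, and a Product navigation property), which is not part of the sample project:

public static void AddOrder(this ModelBuilder modelBuilder, string schema)
{
    modelBuilder.Entity<Order>(entity =>
    {
        entity.ToTable("Order", schema);

        entity.HasKey(o => o.Id);

        // Mirror the database foreign key: every Order must reference an existing Product.
        entity.HasOne(o => o.Product)
            .WithMany()
            .HasForeignKey(o => o.ProductId)
            .IsRequired();
    });
}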

The EF Context

Sometimes I start by writing my context first, then create all the POCOs, and then the mappings.  It’s kind of a top-down approach.  In this example, I’m pretending that it’s done the other way around.  You can do it either way.  The context for this sample looks like this:

using DataSource.Domain;
using DataSource.Repository;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public class StoreAppContext : DbContext, IStoreAppContext
    {
        public StoreAppContext(DbContextOptions<StoreAppContext> options)
        : base(options)
        {

        }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.AddProduct("dbo");
        }
    }
}
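
The IStoreAppContext interface itself isn’t shown here, but since the context implements it, a minimal version might look something like this (my own sketch, exposing only the members the sample context actually uses):

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public interface IStoreAppContext
    {
        // Expose the Products table and a way to persist changes;
        // DbContext already provides a matching SaveChanges().
        DbSet<Product> Products { get; }
        int SaveChanges();
    }
}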

In the context above, you can see how I put the mapping setup code inside the OnModelCreating() method.  As you add POCOs, you’ll need one of these calls for each table.  There is also an EF context interface defined, which is never actually used in my unit tests.  The interface will be used in the actual code of your program.  For instance, if you set up an API, you’re going to end up using an IOC container to break dependencies.  In order to do that, you’ll need to reference the interface in your code, and then you’ll need to define which object belongs to the interface in your container setup, like this:

services.AddScoped<IStoreAppContext>(provider => provider.GetService<StoreAppContext>());

If you haven’t used IOC containers before, you should know that the above code will add an entry to a dictionary of interfaces and objects for the application to use.  In this instance, the entry for IStoreAppContext will match the object StoreAppContext, so any object that references IStoreAppContext will end up getting an instance of the StoreAppContext object.
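
To make that concrete, a business class in one of your logic projects might take the interface through its constructor.  This is a hypothetical sketch (ProductService and the Business namespace are made-up names, not something from this post):

using System.Linq;
using DataSource;

namespace Business
{
    public class ProductService
    {
        private readonly IStoreAppContext _context;

        // The IOC container injects StoreAppContext here because it was
        // registered against IStoreAppContext.
        public ProductService(IStoreAppContext context)
        {
            _context = context;
        }

        public bool IsInCatalog(string name)
        {
            return _context.Products.Any(p => p.Name == name);
        }
    }
}

But IOC containers are not what this blog post is about (I’ll create a blog post on that subject later), so let’s move on to the unit tests, which are what this blog post is really about.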

The Unit Tests

As I mentioned earlier, you’re not actually going to write unit tests against your database repository; that’s redundant.  What you’re attempting to do is write a unit test covering a feature of your business logic, and the database is getting in your way because your business object calls the database in order to make a decision.  What you need is a fake database in memory that contains the exact data you want your object to query, so you can check whether it makes the correct decision.  You want to create unit tests for each tiny little decision made by your objects and methods, and you want to be able to feed different sets of data to each test, or you can set up one large set of test data and use it for many tests.

Here’s the first unit test:

[Fact]
public void TestQueryAll()
{
    var temp = (from p in _storeAppContext.Products select p).ToList();

    Assert.Equal(2, temp.Count);
    Assert.Equal("Rice", temp[0].Name);
    Assert.Equal("Bread", temp[1].Name);
}

I’m using xUnit, and this test just checks to see if there are two items in the Product table, one named “Rice” and the other named “Bread”.  The _storeAppContext variable needs to be a valid Entity Framework context, and it must be connected to an in-memory database.  We don’t want to be changing a real database when we unit test.  The code for setting up the in-memory data looks like this:

var builder = new DbContextOptionsBuilder<StoreAppContext>()
    .UseInMemoryDatabase(databaseName: "Test Database");
Context = new StoreAppContext(builder.Options);

Context.Products.Add(new Product
{
    Name = "Rice",
    Price = 5.99m
});
Context.Products.Add(new Product
{
    Name = "Bread",
    Price = 2.35m
});

Context.SaveChanges();

This is just a code snippet; I’ll show how it fits into your unit test class in a minute.  First, a DbContextOptionsBuilder object is built (builder).  This gets you an in-memory database with the tables defined in the mappings of the StoreAppContext.  Next, you define the context that you’ll be using for your unit tests, using builder.Options.  Once the context exists, you can pretend you’re connected to a real database: just add items and save them.  I would create classes for each set of test data and put them in a directory in your unit test project (usually I call the directory TestData).

Now, you’re probably thinking: I can just call this code from each of my unit tests.  Which leads to the thought: I can just put this code in the unit test class constructor.  That sounds good; however, the test runner constructs your test class once for each test method, so you end up adding to the same in-memory database over and over.  Your first unit test executed will see two rows of Product data, the second will see four rows, and so on.  Go ahead and copy the above code into your test class constructor and see what happens.  You’ll see that TestQueryAll() will fail because there will be 4 records instead of the expected 2.  So how do we make sure the initializer is executed only once, before the first unit test runs?  That’s where IClassFixture comes in.  This is an interface used by xUnit, and you basically add it to your unit test class like this:

public class StoreAppTests : IClassFixture<TestDataFixture>
{
    // unit test methods
}

Then you define your test fixture class like this:

using System;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace UnitTests
{
    public class TestDataFixture : IDisposable
    {
        public StoreAppContext Context { get; set; }

        public TestDataFixture()
        {
            var builder = new DbContextOptionsBuilder<StoreAppContext>()
                .UseInMemoryDatabase(databaseName: "Test Database");
            Context = new StoreAppContext(builder.Options);

            Context.Products.Add(new Product
            {
                Name = "Rice",
                Price = 5.99m
            });
            Context.Products.Add(new Product
            {
                Name = "Bread",
                Price = 2.35m
            });

            Context.SaveChanges();
        }

        public void Dispose()
        {
            // Dispose the shared in-memory context when xUnit tears down the fixture.
            Context.Dispose();
        }
    }
}

Next, you’ll need to add some code to the unit test class constructor that reads the fixture’s context property and assigns it to a field that can be used by your unit tests:

private readonly StoreAppContext _storeAppContext;

public StoreAppTests(TestDataFixture fixture)
{
    _storeAppContext = fixture.Context;
}

What happens is that xUnit will call the constructor of the TestDataFixture object one time.  This creates the context and assigns it to the fixture’s Context property.  Then the constructor of the unit test class will be called for each unit test; it only copies the fixture’s context into the test class field so that the unit test methods can reference it.  Now run your unit tests and you’ll see that the same data is available for each unit test.
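
For example, a second test along these lines would see the same two seeded rows.  The post only walks through the first test, so this is just a sketch of what a second one might look like:

[Fact]
public void TestQueryOneProduct()
{
    // The fixture data is shared, so "Rice" is still here with its original price.
    var rice = (from p in _storeAppContext.Products
                where p.Name == "Rice"
                select p).FirstOrDefault();

    Assert.NotNull(rice);
    Assert.Equal(5.99m, rice.Price);
}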

One thing to keep in mind is that you’ll need to tear down and rebuild your data for each unit test if your tests call methods that insert or update the test data.  For that setup, you can use the test fixture to populate static lookup tables (tables not modified by any of your business logic).  Then create a data initializer and a data destroyer that fill and clear the tables that are modified by your unit tests.  The data initializer gets called in the unit test class constructor, and the destroyer gets called in the test class’s Dispose() method, as sketched below.
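
Here’s a rough sketch of that pattern.  The OrderProcessingTests class and the TestDataInitializer helper are hypothetical; they are not part of the sample project:

public class OrderProcessingTests : IClassFixture<TestDataFixture>, IDisposable
{
    private readonly StoreAppContext _storeAppContext;

    public OrderProcessingTests(TestDataFixture fixture)
    {
        _storeAppContext = fixture.Context;

        // Hypothetical helper: fill the tables these tests will modify.
        TestDataInitializer.Populate(_storeAppContext);
    }

    public void Dispose()
    {
        // Hypothetical helper: clear whatever the test inserted or updated
        // so the next test starts from a known state.
        TestDataInitializer.Clear(_storeAppContext);
    }

    // unit test methods that insert or update data go here
}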

Where to Get the Code

You can get the complete source code from my GitHub account by clicking here.
