EntityFramework .Net Core Basics

In this post I'm going to describe the cleanest way to set up your Entity Framework code in .Net Core.  For this example, I'm going to create the database first, then run a POCO generator to produce the Plain Old C# Objects and their mappings.  Finally, I'll put it all together in a clean format.

The Database

Here is the SQL script you'll need for the sample data.  I'm keeping this as simple as possible: there are only two tables with one foreign key constraint between them.  The purpose of the tables is obvious from their names; this sample works with stores and the products in those stores.  Before you start, you'll need to create a SQL database named DemoData with the Store and Product tables already in place (the columns and types can be read from the generated mappings shown later).  Then execute this script to create the necessary test data:

delete from Product
delete from Store
go
DBCC CHECKIDENT ('[Store]', RESEED, 0);
DBCC CHECKIDENT ('[Product]', RESEED, 0);
go
insert into store (name) values ('ToysNStuff')
insert into store (name) values ('FoodBasket')
insert into store (name) values ('FurnaturePlus')

go
insert into product (Store,Name,price) values (1,'Doll',5.99)
insert into product (Store,Name,price) values (1,'Tonka Truck',18.99)
insert into product (Store,Name,price) values (1,'Nerf Gun',12.19)
insert into product (Store,Name,price) values (2,'Bread',2.39)
insert into product (Store,Name,price) values (1,'Cake Mix',4.59)
insert into product (Store,Name,price) values (3,'Couch',235.97)
insert into product (Store,Name,price) values (3,'Bed',340.99)
insert into product (Store,Name,price) values (3,'Table',87.99)
insert into product (Store,Name,price) values (3,'Chair',45.99)

POCO Generator

Next, you can follow the instructions at this blog post: Scaffolding ASP.Net Core MVC

There are a few tricks to making this work:

  1. Make sure you’re using Visual Studio 2015.  This example does not work for Visual Studio 2017.
  2. Create a throw-away project as a .Net Core Web Application.  Make sure you use “Web Application” and not “Empty” or “Web API”.

Navigate to the project directory and open a command window there (copy the path from the top of the Explorer window, type "cd " in the command window, paste the path, and hit Enter).  You need to be inside the directory that contains the project.json file where you copied the NuGet package text (see the article at the link above); the dotnet ef command will not work outside that directory.  You'll end up with a command line like this:

dotnet ef dbcontext scaffold "Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer --output-dir Models

Once you have generated your POCOs, you'll have a directory named Models inside the .Net Core MVC project containing the generated classes and context.

If you look at Product.cs or Store.cs, you’ll see the POCO objects, like this:

public partial class Product
{
    public int Id { get; set; }
    public int Store { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }

    public virtual Store StoreNavigation { get; set; }
}
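
Store.cs isn't shown above, but based on the Store mappings below, the generated class would look roughly like this (a sketch; the scaffolder also adds the collection navigation property used by the WithMany() call):

public partial class Store
{
    public Store()
    {
        // Collection navigation back to Product, generated by the scaffolder.
        Product = new HashSet<Product>();
    }

    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }

    public virtual ICollection<Product> Product { get; set; }
}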

Next, you’ll find a context that is generated for you.  That context contains all your mappings:

public partial class DemoDataContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(@"Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>(entity =>
        {
            entity.Property(e => e.Name).HasColumnType("varchar(50)");
            entity.Property(e => e.Price).HasColumnType("money");
            entity.HasOne(d => d.StoreNavigation)
                .WithMany(p => p.Product)
                .HasForeignKey(d => d.Store)
                .OnDelete(DeleteBehavior.Restrict)
                .HasConstraintName("FK_store_product");
        });

        modelBuilder.Entity<Store>(entity =>
        {
            entity.Property(e => e.Address).HasColumnType("varchar(50)");
            entity.Property(e => e.Name).HasColumnType("varchar(50)");
            entity.Property(e => e.State).HasColumnType("varchar(50)");
            entity.Property(e => e.Zip).HasColumnType("varchar(50)");
        });
    }

    public virtual DbSet<Product> Product { get; set; }
    public virtual DbSet<Store> Store { get; set; }
}

At this point, you can copy this code into your desired project and it'll work as is.  It's OK to leave it in this format for a small project, but it becomes cumbersome if your project grows beyond a few tables.  I would recommend manually breaking up your mappings into individual source files.  Here is an example of two mapping files, one for each table defined in this sample:

public static class StoreConfig
{
  public static void StoreMapping(this ModelBuilder modelBuilder)
  {
    modelBuilder.Entity<Store>(entity =>
    {
        entity.ToTable("Store");
        entity.Property(e => e.Address).HasColumnType("varchar(50)");
        entity.Property(e => e.Name).HasColumnType("varchar(50)");
        entity.Property(e => e.State).HasColumnType("varchar(50)");
        entity.Property(e => e.Zip).HasColumnType("varchar(50)");
    });
  }
}

And the product config file:

public static class ProductConfig
{
    public static void ProductMapping(this ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>(entity =>
        {
            entity.ToTable("Product");
            entity.Property(e => e.Name).HasColumnType("varchar(50)");

            entity.Property(e => e.Price).HasColumnType("money");

            entity.HasOne(d => d.StoreNavigation)
                .WithMany(p => p.Product)
                .HasForeignKey(d => d.Store)
                .OnDelete(DeleteBehavior.Restrict)
                .HasConstraintName("FK_store_product");
        });
    }
}

Then change your context to this:

public partial class DemoDataContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(@"Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.StoreMapping();
        modelBuilder.ProductMapping();
    }

    public virtual DbSet<Product> Product { get; set; }
    public virtual DbSet<Store> Store { get; set; }
}

Notice how the complexity of the mappings moves out of the context definition and into individual configuration files.  It's best to create a configuration directory to hold those files so your config files stay separate from your POCO source files.  I also like to move my context source file out to another directory or project.  An example of that directory structure is described below.

You can name your directories anything that suits your preferences.  I've seen DAC used (for Data Access Control); sometimes Repository makes more sense.  In my structure, I grouped all of the database source files under the DAC directory: the Configuration sub-directory contains the mappings, the Domain directory holds the POCOs, and the context stays in the DAC directory itself.  Another technique is to split these into different projects and use an IOC container to glue it all together; that approach breaks dependencies between the dlls generated by your project.
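
As a quick sanity check of the refactored context, a query like the following should work (a sketch; it assumes the connection string in OnConfiguring points at your DemoData database):

using (var context = new DemoDataContext())
{
    // Products belonging to the first store inserted by the script (ToysNStuff).
    var toyProducts = (from p in context.Product
                       where p.Store == 1
                       select p.Name).ToList();
}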

 

Dot Net Core and NuGet Packages

One of the most frustrating changes made in .Net Core is the NuGet package manager.  I like the way the new package manager works; unfortunately, it still has a lot of bugs.  I call it the tyranny of intelligent software: the software is supposed to do all the intelligent work for you, leaving you to do your work as a developer and create your product.  Unfortunately, when a bug causes an error, you have to out-think the smart software and figure out what it was supposed to do.  When smart software works, it's magical.  When it doesn't, life as a developer can be miserable.

I'm going to show you how the NuGet manager works in .Net Core, along with some tricks you'll need to get around problems that might arise.  I'm currently using Visual Studio 2015 Community.

The Project.json File

One of the great things about .Net Core is the new project.json file.  You can literally type or paste the name of the NuGet package you want into this file and it will synchronize all the dlls that are needed for that package.  If you look closely, you'll see a triangle next to the file in Solution Explorer: there is another file, automatically maintained by the package manager, called project.lock.json.  This file is excluded from TFS check-in because it can be regenerated from the project.json file.  Open it up and you'll see the thousands of lines of json data stuffed into it.  Sometimes this file contains old versions of dlls, especially if you created your project months ago and now you want to update your NuGet packages.  If the packages in the dependencies section of your project.json file are all flagged as errors, there could be a conflict in the lock file.  You can hover your cursor over any NuGet package to see what the error is, but sometimes that is not very helpful.

To fix this issue, regenerate the lock file.  Just delete the file from Solution Explorer and Visual Studio should automatically restore it.  If not, open your Package Manager Console window.  It should be at the bottom of Visual Studio, or you can go to "Tools -> NuGet Package Manager -> Package Manager Console".  Type "dotnet restore" in the console window and wait for it to complete.

The NuGet Local Cache

When the package manager brings in packages from the Internet, it keeps a copy of each package in a local cache, which is the first place the package manager looks for a package.  If you use the same package in another project, you'll notice that it doesn't take as long to install as it did the first time.  The cache directory is under your user directory: go to C:\Users, find the directory with your user name (the name you're currently logged in as, or the user name that was set up when you installed your OS), and you'll see a folder named ".nuget".  Open that folder and drill down into "packages".  You should see about a billion folders with packages that you've installed since you started using .Net Core.  You can select all of these and delete them, then go back to your solution and restore packages.  The restore will take longer than normal because all the packages must be downloaded from the Internet again.

An easier way to clear this cache is to go to the package manager console and type:

nuget locals all -clear

If you have your .nuget/packages folder open, you'll see all the subdirectories disappear.

If the nuget command does not work in Visual Studio, you'll have to download the NuGet.exe file from here (get the recommended latest version).  Then find your NuGet execution directory.  For VS2015 it is:

C:\Program Files (x86)\NuGet\Visual Studio 2015

Drop the EXE file in that directory (there is probably already a vsix file in there), then make sure that your system path contains the directory.  I use Rapid Environment Editor to edit my path; you can download and install that application from here.  Once you have added the directory to your PATH, exit Visual Studio and start it back up again.  Now the "nuget" command should work in the package manager console command line.

NuGet Package Sources

If you look at the package console window you'll see a drop-down that normally shows "nuget.org".  There is a gear icon next to the drop-down; click it and you'll see the "Package Sources" window.  This window holds the list of locations that will be searched for NuGet packages.  You can add your own local package directory to this list, and you can also add the feed urls shown at the NuGet site.  Go to www.nuget.org and look for the "NuGet Feed Locations" header; below it is a list of urls that you can put into the Package Sources window.  As of this blog post, there are two feed urls listed there.

Sometimes you'll get an error when the package manager attempts to update your packages.  If this occurs, it could be due to a broken url to a package source.  There is little you can do about the NuGet site itself: if it's down, you're out of luck.  Fortunately, that's a rare event.  For local package feeds, you can temporarily turn them off (assuming your project doesn't use any packages from your local feed).  To turn off a feed, go to the "Package Sources" window and uncheck its check box.  Just selecting one package feed from the drop-down does not prevent the package manager from checking, and failing on, a bad local feed.

Restart Visual Studio

One other trick that I've learned is to restart Visual Studio.  Sometimes the package manager just isn't behaving itself: it can't seem to find any packages and your project has about 8,000 errors consisting of missing dependencies.  In this instance, I'll clear the local cache, close Visual Studio, then re-open Visual Studio with my solution and perform a package restore.

Package Dependency Errors

Sometimes there are two or more versions of the same package in your solution.  This can cause dependency errors that are tough to find: you'll get a dependency error in one project because it references a newer version of a package than another project that your current project depends on.

To find these problems, you can right-click on the solution and select "Manage NuGet Packages for Solution…", then click on each package name and look at the check boxes on the right.  If you see two different versions, update all projects to the same version.

Finally

I hope these hints save you a lot of time and effort when dealing with packages.  The problems that I’ve listed here have appeared in my projects many times.  I’m confident you’ll run into them as well.  Be sure and hit the like button if you found this article to be helpful.

 

DbContextOptionsBuilder does not contain a definition for ‘UseSqlServer’

Attempting to use the correct NuGet packages for your code in .Net Core can be challenging.  In this instance there is no project.json error, yet the UseSqlServer() extension method is missing from DbContextOptionsBuilder.

This will happen when your EF database project contains these two NuGet packages but not the SQL Server provider:

    "dependencies": {
        "Microsoft.EntityFrameworkCore": "1.1.1",
        "NETStandard.Library": "1.6.1"
    },

What's missing is the SQL Server provider package.  It took some trial and error to find the right version:

    "dependencies": {
        "Microsoft.EntityFrameworkCore": "1.1.1",
        "Microsoft.EntityFrameworkCore.SqlServer": "1.1.1",
        "NETStandard.Library": "1.6.1"
    },

The easiest way to find the latest version of a package in project.json is to delete everything from the first decimal point to the end of the version number and then type a "."; IntelliSense shows a drop-down of the available versions.

In that drop-down, version 1.1.1 is the latest current version (by the time you read this, there could be a newer version).  When I was attempting to fix this problem, a lot of forum posts indicated that the person needed to add "using Microsoft.Data.Entity;", but that's not the solution in this instance.

I’m posting this on my blog so I have a reference if I run into this problem again.  Hopefully this will help those who got stuck on this crazy minor issue and can’t find a working solution.

 

Dot Net Core Using the IOC Container

I’ve talked about Inversion Of Control in previous posts, but I’m going to go over it again.  If you’re new to IOC containers, breaking dependencies and unit testing, then this is the blog post you’ll want to read.  So let’s get started…

Basic Concept of Unit Testing

Developing and maintaining software is one of the most complex tasks ever performed by humans.  Software can grow to proportions that cannot be understood by any one person at a time.  Compounding the problem of maintaining and enhancing code, one small change can affect the operation of something that seems unrelated.  Engineers who build something physical, say a jumbo jet, can identify a problem and fix it; they usually don't expect a problem with the wing to affect the passenger seats.  In software, all bets are off, so there needs to be a way to test everything when a small change is made.

The reason you want to create a unit test is to put in place a tiny automatic regression test.  This test is executed every time you change code to add an enhancement.  If you change some code, the test runs and ensures that you didn’t break a feature that you already coded and tested previously.  Each time you add one feature, you add a unit test.  Eventually, you end up with a collection of unit tests covering each combination of features used by your software.  These tests ride along with your source code forever.  Ideally, you want to always regression test every piece of logic that you’ve written.  In theory this will prevent you from breaking existing code when you add a new enhancement.

To ensure that you are unit testing properly, you need to understand coverage.  Coverage is not everything, but it’s a measurement of how much of your code is covered by your unit tests and you should strive to maximize this.  There are tools that can measure this for you, though some are expensive.  One aspect of coverage that you need to be aware of is the combination “if” statement:

if (input == 'A' || input =='B')
{
    // do something
}

This is a really simple example, but your unit test suite might contain a test that feeds the character A into the input, which gives you coverage of the inner part of the if statement.  However, you have not tested the case where the input is B, and that input might be used by other logic in a slightly different way.  Technically, you don't have 100% coverage.  I just want you to be aware that this issue exists; you might need to do some analysis of your code coverage when you're creating unit tests.
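
Covering both branches requires a test case for each input.  Here's a minimal sketch using an xUnit [Theory] (InputHandler.Accepts is a made-up method that wraps the if statement above):

[Theory]
[InlineData('A')]
[InlineData('B')]
public void AcceptsEitherInput(char input)
{
    // InputHandler.Accepts is hypothetical; it returns true when the body of the if statement runs.
    Assert.True(InputHandler.Accepts(input));
}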

One more thing about unit tests, and this is very important to keep in mind: when you deploy this software and bugs are reported, you will need to add a unit test for each reported bug.  The unit test must break your code exactly the way the bug did.  Then you fix the bug, and that test prevents any other developer from undoing your bug fix.  Of course, your bug fix will be followed by another unit test suite run to make sure you didn't break anything else.  This will help you make forward progress in your quest for bug-free or low-bug software.

Dependencies

So you've learned the basics of writing unit tests, and you're creating objects and putting one or more unit tests on each method.  Suddenly you run into an issue: your object connects to a device for input.  For example, it reads from a text file or connects to a database to read and write data.  Your unit test should never cause files to be written or data to be written to a real database: it's slow, and the data being written would need to be cleaned out when the test completes.  What if the test fails?  Your test data might still be in the database.  Even if you set up a test database, you would not be able to run two copies of your unit tests at the same time (think of two developers executing their local copies of the unit test suite).

The device being used is called a dependency.  The object depends on the device and cannot operate properly without it.  To get around dependencies, we create a fake or mock database or a fake file I/O object to put in place of the real thing when we run our unit tests.  The problem is that we need to somehow tell the object under test to use the fake or mock instead of the real thing, and the object must default to the real database or file I/O when not under test.

The current trend in breaking dependencies involves a technique called Inversion Of Control, or IOC.  IOC allows us to define all object creation points at program startup time.  When unit tests are run, we substitute the objects that perform database and I/O functions with fakes.  Then we call our objects under test and the IOC system takes care of wiring the correct dependencies together.  Sounds easy.

IOC Container Basics

Here are the basics of how an IOC container works.  I’m going to cut out all the complications involved and keep this super simple.

First, there's the container.  This is a dictionary of interfaces and classes that is used as a lookup.  Basically, you create your class and then you create a matching interface for it.  When you call one object from another, you use the interface to look up which class to call.  Consider object A depending on object B.

Here’s a tiny code sample:

public class A
{
  public void MyMethod()
  {
    var b = new B();

    b.DependentMethod();
  }
}

public class B
{
  public void DependentMethod()
  {
    // do something here
  }
}

As you can see, class B is created inside class A.  To break the dependency we need to create an interface for each class and add them to the container:

public interface IB
{
  void DependentMethod();
}

public interface IA
{
  void MyMethod();
}

Inside Program.cs (you'll need a using for Microsoft.Extensions.DependencyInjection, which provides ServiceCollection):

var serviceProvider = new ServiceCollection()
  .AddSingleton<IB, B>()
  .AddSingleton<IA, A>()
  .BuildServiceProvider();

var a = serviceProvider.GetService<IA>();
a.MyMethod();

Then modify the existing objects to use the interfaces and provide for the injection of B into object A:

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public void MyMethod()
  {
    _b.DependentMethod();
  }
}

public class B : IB
{
  public void DependentMethod()
  {
    // do something here
  }
}

The service collection object is where all the magic occurs.  This object is filled with definitions of which interface will be matched with which class.  As you can see from the inside of class A, there is no longer a reference to class B anywhere.  Only the interface is used to reference the object that is passed (injected) into the constructor as an IB (interface of B).  The service collection will look up IB, see that it needs to create an instance of B, and pass that along.  When MyMethod() is executed in A, it just calls _b.DependentMethod() without worrying about the actual instance behind _b.  What does that do for us when we are unit testing?  Plenty.

Mocking an Object

Now I'm going to use a NuGet package called Moq.  This framework is exactly what we need because it can take an interface and create a fake object that we can apply simulated outputs to.  First, let's modify our A and B class methods to return some values:

public class B : IB
{
  public int DependentMethod()
  {
    return 5;
  }
}

public interface IB
{
  int DependentMethod();
}

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public int MyMethod()
  {
    return _b.DependentMethod();
  }
}

public interface IA
{
  int MyMethod();
}

I have purposely kept this so simple that there's nothing really being done.  As you can see, DependentMethod() just returns the number 5 in real life.  Your methods might perform a calculation and return the result, generate a random number, or return a value read from your database.  This example just returns 5, and we don't care about that because our mock object will return any value we want for the unit test being written.

Now the unit test using Moq looks like this:

[Fact]
public void ClassATest1()
{
    var mockedB = new Mock<IB>();
    mockedB.Setup(b => b.DependentMethod()).Returns(3);

    var a = new A(mockedB.Object);

    Assert.Equal(3, a.MyMethod());
}

The first line of the test creates a mock of interface IB called "mockedB".  The next line sets up a fake return value for any call to the DependentMethod() method.  Next, we create an instance of class A (the real class) and inject the mocked B object into it.  We're not using the container for the unit test because we don't need to.  Technically, we could create a container and register the mocked B object in the service collection, but this is simpler.  Keep your unit tests as simple as possible.

Now that there is an instance of class A called "a", we can assert that a.MyMethod() returns 3.  If it does, then we know the mocked object was called by "a" instead of a real instance of class B (since the real B always returns 5).
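
If you'd rather assert the call explicitly instead of inferring it from the return value, Moq can also verify it; this is an optional one-line addition to the same test:

mockedB.Verify(b => b.DependentMethod(), Times.Once());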

Where to Get the Code

As always you can get the latest code used by this blog post at my GitHub account by clicking here.

 

Dot Net Core In Memory Unit Testing Using xUnit

When I started using .Net Core and xUnit, I found it difficult to find information on how to mock or fake the Entity Framework database code.  So I'm going to show a minimized code sample using xUnit, Entity Framework and an in-memory database with .Net Core.  I'm only going to set up two projects: DataSource and UnitTests.

The DataSource project contains the repository, domain and context objects necessary to connect to a database using Entity Framework.  Normally you would not unit test this project; it is supposed to be a group of pass-through objects and interfaces.  I'll set up POCOs (Plain Old C# Objects) and their entity mappings to show how to keep your code as clean as possible.  There should be no business logic in this entire project.  In your solution, you should create one or more business projects to contain the actual logic of your program; those projects will contain the objects under unit test.

The UnitTests project speaks for itself.  It contains the in-memory Entity Framework fake code with some test data and a sample of two unit tests.  Why two tests?  Because it's easy to create a demonstration with one unit test; two tests are needed to demonstrate how to ensure that your test data initializer doesn't accidentally get called twice (creating twice as much data).

The POCO

I've written about Entity Framework before, and usually I'll use data annotations, but POCOs are much cleaner.  If you look at some of my blog posts about NHibernate, you'll see the POCO technique used.  Using POCOs means that you'll also need to set up a separate class of mappings for each table, which keeps your code separated into logical parts.  For my sample, I'll put the mappings into the Repository folder and call them TablenameConfig.  Each mapping class is static so that I can apply the mappings with an extension method.  I'm getting ahead of myself, so let's start with the POCO:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}

That's it.  If you have the database defined, you can use a mapping or POCO generator to create this code and just paste each table into its own C# source file.  All the POCO objects are in the Domain folder (there's only one here: the Product table POCO).

The Mappings

The mappings file looks like this:

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource.Repository
{
    public static class ProductConfig
    {
        public static void AddProduct(this ModelBuilder modelBuilder, string schema)
        {
            modelBuilder.Entity<Product>(entity =>
            {
                entity.ToTable("Product", schema);

                entity.HasKey(p => p.Id);

                entity.Property(e => e.Name)
                    .HasColumnName("Name")
                    .IsRequired(false);

                entity.Property(e => e.Price)
                    .HasColumnName("Price")
                    .IsRequired(false);
            });
        }
    }
}

That is the whole file, so now you know what to include in your usings.  This class adds an extension method to the ModelBuilder object.  Basically, it's called like this:

modelBuilder.AddProduct("dbo");

I passed the schema as a parameter.  If you are only using the dbo schema, you can remove the parameter and hard-code it inside the ToTable() method.  You can and should expand your mappings to include relational integrity constraints.  The purpose of mirroring your database constraints in Entity Framework is to give you a heads-up at compile time if you are violating a constraint when you write your LINQ queries.  In the "good ol' days", when accessing a database from code meant building a string to pass directly to MS SQL Server (remember ADO?), you didn't know you had broken a constraint until run time.  That makes testing more difficult, since you have to be aware of which constraints exist while you're focused on creating your business code.  By creating each table as a POCO and a set of mappings, you can focus on your database code first.  Then, when you're focused on your business code, you can ignore constraints, because they won't ignore you!

The EF Context

Sometimes I start by writing my context first, then create all the POCOs and then the mappings, a kind of top-down approach.  In this example, I'm pretending it was done the other way around; you can do it either way.  The context for this sample looks like this:

using DataSource.Domain;
using DataSource.Repository;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public class StoreAppContext : DbContext, IStoreAppContext
    {
        public StoreAppContext(DbContextOptions<StoreAppContext> options)
        : base(options)
        {

        }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.AddProduct("dbo");
        }
    }
}
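
The IStoreAppContext interface that the context implements isn't shown here; a minimal version matching this context might look like this (a sketch; the real interface may expose more members):

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public interface IStoreAppContext
    {
        // Only the members the rest of the application needs from the context.
        DbSet<Product> Products { get; set; }

        int SaveChanges();
    }
}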

Looking at the context, you can see how the mapping setup code is called inside the OnModelCreating() method; as you add POCOs, you'll add one of these calls for each table.  There is also an EF context interface defined (sketched above), which is never actually used in my unit tests.  The interface comes into play in the actual code of your program.  For instance, if you set up an API, you're going to end up using an IOC container to break dependencies.  In order to do that, you'll need to reference the interface in your code and then define which object belongs to the interface in your container setup, like this:

services.AddScoped<IStoreAppContext>(provider => provider.GetService<StoreAppContext>());
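
A business class would then take the interface in its constructor rather than creating the context itself; here's a rough sketch (ProductService is a made-up name for illustration):

public class ProductService
{
    private readonly IStoreAppContext _context;

    public ProductService(IStoreAppContext context)
    {
        // The container injects StoreAppContext at run time; a unit test can pass a fake instead.
        _context = context;
    }
}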

If you haven't used IOC containers before, you should know that the AddScoped registration above adds an entry to a dictionary of interfaces and objects for the application to use.  In this instance the entry for IStoreAppContext matches the object StoreAppContext, so any object that references IStoreAppContext will end up getting an instance of StoreAppContext.  But IOC containers are not what this blog post is about (I'll create a blog post on that subject later).  So let's move on to the unit tests, which is what this blog post is really about.

The Unit Tests

As I mentioned earlier, you're not actually going to write unit tests against your database repository; that would be redundant.  What you're attempting to do is write a unit test covering a feature of your business logic, and the database gets in your way because your business object calls the database in order to make a decision.  What you need is a fake database in memory that contains the exact data you want your object to query, so you can check whether it makes the correct decision.  You want to create unit tests for each tiny decision made by your objects and methods, and you want to be able to feed a different set of data to each test, or set up one large set of test data and use it for many tests.

Here’s the first unit test:

[Fact]
public void TestQueryAll()
{
    var temp = (from p in _storeAppContext.Products select p).ToList();

    Assert.Equal(2, temp.Count);
    Assert.Equal("Rice", temp[0].Name);
    Assert.Equal("Bread", temp[1].Name);
}

I'm using xUnit, and this test just checks that there are two items in the product table, one named "Rice" and the other named "Bread".  The _storeAppContext variable needs to be a valid Entity Framework context and it must be connected to an in-memory database; we don't want to change a real database when we unit test.  The code for setting up the in-memory data looks like this:

var builder = new DbContextOptionsBuilder<StoreAppContext>()
    .UseInMemoryDatabase();
Context = new StoreAppContext(builder.Options);

Context.Products.Add(new Product
{
    Name = "Rice",
    Price = 5.99m
});
Context.Products.Add(new Product
{
    Name = "Bread",
    Price = 2.35m
});

Context.SaveChanges();

This is just a code snippet; I'll show how it fits into your unit test class in a minute.  First, a DbContextOptionsBuilder object is built (builder).  This gets you an in-memory database with the tables defined in the mappings of StoreAppContext.  Next, you create the context your unit tests will use from builder.Options.  Once the context exists, you can pretend you're connected to a real database: just add items and save them.  I would create classes for each set of test data and put them in a directory in your unit test project (usually I call the directory TestData).

Now, you're probably thinking: I can just call this code from each of my unit tests.  Which leads to the thought: I can just put this code in the unit test class constructor.  That sounds good; however, the test runner constructs a new instance of your test class for each test method it runs, so you end up adding to the same in-memory database over and over.  The first unit test executed will see two rows of Product data, the second unit test will see four rows.  Go ahead and copy the above code into your constructor and see what happens: TestQueryAll() will fail because there will be 4 records instead of the expected 2.  So how do we make sure the initializer is executed only once, on the first unit test call?  That's where IClassFixture comes in.  It's an interface used by xUnit, and you basically add it to your unit test class like this:

public class StoreAppTests : IClassFixture<TestDataFixture>
{
    // unit test methods
}

Then you define your test fixture class like this:

using System;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace UnitTests
{
    public class TestDataFixture : IDisposable
    {
        public StoreAppContext Context { get; set; }

        public TestDataFixture()
        {
            var builder = new DbContextOptionsBuilder<StoreAppContext>()
                .UseInMemoryDatabase();
            Context = new StoreAppContext(builder.Options);

            Context.Products.Add(new Product
            {
                Name = "Rice",
                Price = 5.99m
            });
            Context.Products.Add(new Product
            {
                Name = "Bread",
                Price = 2.35m
            });

            Context.SaveChanges();
        }

        public void Dispose()
        {

        }
    }
}

Next, you’ll need to add some code to the unit test class constructor that reads the context property and assigns it to an object property that can be used by your unit tests:

private readonly StoreAppContext _storeAppContext;

public StoreAppTests(TestDataFixture fixture)
{
    _storeAppContext = fixture.Context;
}

What happens is that xUnit calls the constructor of the TestDataFixture object one time.  That creates the context and assigns it to the fixture's Context property.  The constructor of the unit test class is then called for each unit test, but it only copies the fixture's context into the test class so that the unit test methods can reference it.  Now run your unit tests and you'll see that the same data is available for each unit test.
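
The second unit test mentioned at the start of this post isn't shown here; a sketch of what it might look like, reading a single row from the same fixture data:

[Fact]
public void TestQueryOne()
{
    var product = (from p in _storeAppContext.Products
                   where p.Name == "Bread"
                   select p).FirstOrDefault();

    Assert.NotNull(product);
    Assert.Equal(2.35m, product.Price);
}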

One thing to keep in mind: you'll need to tear down and rebuild your data for each unit test if the method under test inserts or updates your test data.  For that setup, use the test fixture to populate tables that are static lookup tables (not modified by any of your business logic).  Then create a data initializer and a data destroyer that fill and clear the tables that are modified by your unit tests.  The data initializer is called in the unit test class constructor and the destroyer is called in the test class's Dispose() method.
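
A rough sketch of that pattern might look like this (TestDataBuilder and its methods are made-up names for illustration):

public class StoreAppTests : IClassFixture<TestDataFixture>, IDisposable
{
    private readonly StoreAppContext _storeAppContext;

    public StoreAppTests(TestDataFixture fixture)
    {
        _storeAppContext = fixture.Context;

        // Hypothetical helper that inserts the rows the tests will modify.
        TestDataBuilder.FillMutableTables(_storeAppContext);
    }

    public void Dispose()
    {
        // Hypothetical helper that removes those rows so the next test starts clean.
        TestDataBuilder.ClearMutableTables(_storeAppContext);
    }

    // unit test methods go here
}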

Where to Get the Code

You can get the complete source code from my GitHub account by clicking here.