DotNet Core vs. NHibernate vs. Dapper Smackdown!

The Contenders

Dapper

Dapper is a hybrid ORM.  It’s a great choice for anyone who has a lot of legacy ADO code to convert.  Dapper uses SQL queries and parameters just like ADO, but the parameters to a query can be simplified into POCOs, and the results of select queries can be mapped into POCOs as well.  Converting legacy code can be accomplished in steps: the initial pass converts the ADO calls to Dapper, a later pass adds POCOs, and a final pass changes queries into LINQ (if desired).  In my tests, Dapper is faster than my implementation of ADO for select queries, but slower for inserts and updates.  I would expect ADO to perform the best, but there is probably a performance penalty for using the data set adapter instead of the straight SqlCommand method.
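To give you an idea of what this looks like, here is a rough sketch of a Dapper select and insert.  The Product table, the POCO and the connection string are placeholders I made up for illustration, not the code from my benchmark:

using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class Product
{
    public int Id { get; set; }
    public int Store { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}

public class DapperProductRepository
{
    // Placeholder connection string; change the server name for your environment.
    private const string ConnectionString = "Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;";

    // Select: Dapper maps the result columns onto the Product POCO.
    public IEnumerable<Product> GetProductsForStore(int storeId)
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            return connection.Query<Product>(
                "SELECT Id, Store, Name, Price FROM Product WHERE Store = @StoreId",
                new { StoreId = storeId });
        }
    }

    // Insert: the POCO itself supplies the parameter values.
    public void AddProduct(Product product)
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Execute(
                "INSERT INTO Product (Store, Name, Price) VALUES (@Store, @Name, @Price)",
                product);
        }
    }
}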

If you’re interested in Dapper, you can find more information here: Stack Exchange/Dapper.  Dapper is available as a NuGet package, which is how I added it to my sample program.

ADO

I rarely use ADO these days, except for legacy code maintenance or when I need to perform some sort of bulk insert operation for a back-end system.  Most of my projects are done in Entity Framework, using either the .Net Core or the .Net version.  Even though my smackdown series is about ORM comparisons, this comparison doesn’t feel complete without ADO.  So I assembled a .Net console application with some ADO objects and ran a speed test with the same data as all the ORM tests.
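As a point of reference, a straight parameterized SqlCommand insert (the style I would expect to be fastest) looks roughly like this.  The connection string and table are placeholders rather than the actual benchmark code:

using System.Data.SqlClient;

public static class AdoProductWriter
{
    // A plain parameterized SqlCommand insert, with no DataSet or SqlDataAdapter involved.
    public static void AddProduct(string connectionString, int store, string name, decimal price)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO Product (Store, Name, Price) VALUES (@Store, @Name, @Price)",
            connection))
        {
            command.Parameters.AddWithValue("@Store", store);
            command.Parameters.AddWithValue("@Name", name);
            command.Parameters.AddWithValue("@Price", price);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}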

NHibernate

NHibernate is the .Net version of Hibernate.  I used this ORM at a previous company where I worked.  At the time it was significantly faster than Entity Framework 6.  The .Net Core version of Entity Framework has fixed EF’s performance issues, so it no longer makes sense to use NHibernate; I am providing the numbers in this test just for comparison purposes.  NHibernate is still faster than ADO and Dapper for everything except the select.  Both EF 7 and NHibernate are so close in performance that I would have to conclude that they are the same.  The version of NHibernate used for this test is the latest as of this post (version 4.1.1 with Fluent NHibernate 2.0.3).

Entity Framework 7 for .Net Core

I have updated the NuGet packages for .Net Core for this project and re-tested the code to make sure the performance has not changed over time.  The last time I did a smackdown with EF .Net Core I was using .Net Core version 1.0.0; now I’m using .Net Core 1.1.1.  There were no measurable changes in performance for EF .Net Core.

The Results

Here are the results side-by-side with the .ToList() method helper and without:

Test for Yourself!

First, you can download the .Net Core version by going to my GitHub account here and downloading the source.  There is a SQL script file in the source that you can run against your local MS SQL server to set up a blank database with the correct tables.  The NHibernate speed test code can also be downloaded from my GitHub account by clicking here.  The ADO version is here.  Finally, the Dapper code is here.  You’ll want to open the code and change the database server name.

 

EntityFramework .Net Core Basics

In this post I’m going to describe the cleanest way to set up your Entity Framework code in .Net Core.  For this example, I’m going to create the database first, then I’ll run a POCO generator to generate the Plain Old C# Objects and their mappings.  Finally, I’ll put it all together in a clean format.

The Database

Here is the SQL script you’ll need for the sample data.  I’m keeping this as simple as possible.  There are only two tables with one foreign key constraint between the two.  The purpose of the tables becomes obvious when you look at the table names: this sample will be working with stores and the products that are in those stores.  Before you start, you’ll need to create a SQL database named DemoData containing the Store and Product tables.  Then execute this script to create the necessary test data:

delete from Product
delete from Store
go
DBCC CHECKIDENT ('[Store]', RESEED, 0);
DBCC CHECKIDENT ('[Product]', RESEED, 0);
go
insert into store (name) values ('ToysNStuff')
insert into store (name) values ('FoodBasket')
insert into store (name) values ('FurnaturePlus')

go
insert into product (Store,Name,price) values (1,'Doll',5.99)
insert into product (Store,Name,price) values (1,'Tonka Truck',18.99)
insert into product (Store,Name,price) values (1,'Nerf Gun',12.19)
insert into product (Store,Name,price) values (2,'Bread',2.39)
insert into product (Store,Name,price) values (1,'Cake Mix',4.59)
insert into product (Store,Name,price) values (3,'Couch',235.97)
insert into product (Store,Name,price) values (3,'Bed',340.99)
insert into product (Store,Name,price) values (3,'Table',87.99)
insert into product (Store,Name,price) values (3,'Chair',45.99)

POCO Generator

Next, you can follow the instructions at this blog post: Scaffolding ASP.Net Core MVC

There are a few tricks to making this work:

  1. Make sure you’re using Visual Studio 2015.  This example does not work for Visual Studio 2017.
  2. Create a throw-away project as a .Net Core Web Application.  Make sure you use “Web Application” and not “Empty” or “Web API”.

You’ll need to be inside the directory that contains the project.json file where you pasted the NuGet package text (see the article at the link above); the dotnet ef command will not work outside that directory.  An easy way to get there is to copy the directory path from the top of the explorer window, open a command window, type “cd ”, paste the path, and hit enter.  You’ll end up with a command line like this:

dotnet ef dbcontext scaffold "Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer --output-dir Models

Once you have generated your POCOs, you’ll have a directory named Models inside the .Net Core MVC project:

 

 

If you look at Product.cs or Store.cs, you’ll see the POCO objects, like this:

public partial class Product
{
    public int Id { get; set; }
    public int Store { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }

    public virtual Store StoreNavigation { get; set; }
}

Next, you’ll find a context that is generated for you.  That context contains all your mappings:

public partial class DemoDataContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(@"Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>(entity =>
        {
            entity.Property(e => e.Name).HasColumnType("varchar(50)");
            entity.Property(e => e.Price).HasColumnType("money");
            entity.HasOne(d => d.StoreNavigation)
                .WithMany(p => p.Product)
                .HasForeignKey(d => d.Store)
                .OnDelete(DeleteBehavior.Restrict)
                .HasConstraintName("FK_store_product");
        });

        modelBuilder.Entity<Store>(entity =>
        {
            entity.Property(e => e.Address).HasColumnType("varchar(50)");
            entity.Property(e => e.Name).HasColumnType("varchar(50)");
            entity.Property(e => e.State).HasColumnType("varchar(50)");
            entity.Property(e => e.Zip).HasColumnType("varchar(50)");
        });
    }

    public virtual DbSet<Product> Product { get; set; }
    public virtual DbSet<Store> Store { get; set; }
}

At this point, you can copy this code into your desired project and it’ll work as-is.  It’s OK to leave it in this format for a small project, but it will become cumbersome if your project grows beyond a few tables.  I would recommend manually breaking up your mappings into individual source files.  Here is an example of two mapping files, one for each table defined in this sample:

public static class StoreConfig
{
  public static void StoreMapping(this ModelBuilder modelBuilder)
  {
    modelBuilder.Entity<Store>(entity =>
    {
        entity.ToTable("Store");
        entity.Property(e => e.Address).HasColumnType("varchar(50)");
        entity.Property(e => e.Name).HasColumnType("varchar(50)");
        entity.Property(e => e.State).HasColumnType("varchar(50)");
        entity.Property(e => e.Zip).HasColumnType("varchar(50)");
    });
  }
}

And the product config file:

public static class ProductConfig
{
    public static void ProductMapping(this ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>(entity =>
        {
            entity.ToTable("Product");
            entity.Property(e => e.Name).HasColumnType("varchar(50)");

            entity.Property(e => e.Price).HasColumnType("money");

            entity.HasOne(d => d.StoreNavigation)
                .WithMany(p => p.Product)
                .HasForeignKey(d => d.Store)
                .OnDelete(DeleteBehavior.Restrict)
                .HasConstraintName("FK_store_product");
        });
    }
}

Then change your context to this:

public partial class DemoDataContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(@"Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.StoreMapping();
        modelBuilder.ProductMapping();
    }

    public virtual DbSet<Product> Product { get; set; }
    public virtual DbSet<Store> Store { get; set; }
}

Notice how the complexity of the mappings is moved out of the context definition and into individual configuration files.  It’s best to create a Configuration directory to put those files in, to further separate your config files from your POCO source files.  I also like to move my context source file out to another directory or project.  An example of a directory structure is this:

 

 

 

You can name your directories any name that suits your preferences.  I’ve seen DAC used for Data Access Control; sometimes Repository makes more sense.  As you can see in my structure above, I grouped all of my database source files in the DAC directory: the Configuration sub-directory contains the mappings, the Domain directory has all of my POCOs, and I left the context in the DAC directory itself.  Another technique is to split these into different projects and use an IOC container to glue it all together (see the sketch below).  You can use that technique to break dependencies between the dlls that are generated by your project.
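If you do go that route, a minimal registration in an ASP.Net Core Startup class could look something like this.  This is only a sketch, assuming the scaffolded context keeps its connection setup in OnConfiguring:

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register the EF context with the built-in IOC container.  Because the
        // scaffolded context configures its own connection in OnConfiguring, no
        // options are passed here; otherwise you would supply options.UseSqlServer(...).
        services.AddDbContext<DemoDataContext>();

        services.AddMvc();
    }
}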

 

Dot Net Core and NuGet Packages

One of the most frustrating changes made in .Net Core is the NuGet package manager.  I like the way the new package manager works; unfortunately, it still has a lot of bugs.  I call it the tyranny of intelligent software.  The software is supposed to do all the intelligent work for you, leaving you to do your work as a developer and create your product.  Unfortunately, one or more bugs cause errors to occur, and then you have to out-think the smart software and try to figure out what it was supposed to do.  When smart software works, it’s magical.  When it doesn’t work, life as a developer can be miserable.

I’m going to show you how the NuGet manager works in .Net Core and show you some tricks you’ll need to get around problems that might arise.  I’m currently using Visual Studio 2015 Community.

The Project.json File

One of the great things about .Net Core is the new project.json file.  You can literally type or paste in the name of the NuGet package you want to include in your project and it will synchronize all the dlls that are needed for that package.  If you look closely, you’ll see a triangle next to the file.  There is another file, automatically maintained by the package manager, called project.lock.json.  This file is excluded from TFS check-in because it can be automatically re-generated from the project.json file.  You can open it up and observe the thousands of lines of json data that are stuffed into it.  Sometimes this file contains old versions of dlls, especially if you created your project months ago and now you want to update your NuGet packages.  If the dependencies section of your project.json file is entirely flagged with errors, there could be a conflict in the lock file.  You can hold your cursor over any NuGet package to see what the error is, but sometimes that is not very helpful.

To fix this issue, you can regenerate the lock file.  Just delete the file from the solution explorer.  Visual Studio should automatically restore it.  If not, then open up your package manager console window.  It should be at the bottom of Visual Studio, or you can go to “Tools -> NuGet Package Manager -> Package Manager Console”.  Type “dotnet restore” in the console window and wait for it to complete.

The NuGet Local Cache

When the package manager brings in packages from the Internet, it keeps a copy of each package in a cache.  This is the first place the package manager will look for a package.  If you use the same package in another project, you’ll notice that it doesn’t take as much time to install it as it did the first time.  The cache directory is under your user directory.  Go to c:\Users, then find the directory with your user name (the name you’re currently logged in as, or the user name that was set up for your computer when you installed your OS).  There you’ll see a folder named “.nuget”; open that folder and drill down into “packages”.  You should see about a billion folders with packages that you’ve installed since you started using .Net Core.  You can select all of these and delete them.  Then you can go back to your solution and restore packages.  It’ll take longer than normal to restore all your packages because they must be downloaded from the Internet again.

An easier way to clear this cache is to go to the package manager console and type in:

nuget locals all -clear

If you have your .nuget/packages folder open, you’ll see all the sub-directories disappear.

If the nuget command does not work in Visual Studio, you’ll have to download the NuGet.exe file from here.  Get the latest recommended version.  Then search for your NuGet execution directory.  For VS2015 it is:

C:\Program Files (x86)\NuGet\Visual Studio 2015

Drop the EXE file in that directory (there is probably already a vsix file in there).  Then make sure that your system path contains the directory.  I use Rapid Environment Editor to edit my path; you can download and install that application from here.  Once you have added the directory to your PATH, exit Visual Studio and start it back up again.  Now the “nuget” command should work in the package manager console command line.

NuGet Package Sources

If you look at the package console window you’ll see a drop-down that normally shows “nuget.org”.  There is a gear icon next to the drop-down.  Click it and you’ll see the “Package Sources” window.  This window has the list of locations that will be searched for NuGet packages.  You can host your own local packages if you would like, and add their directory to this list.  You can also update the list with the urls that are shown at the NuGet site.  Go to www.nuget.org and look for the “NuGet Feed Locations” header.  Below that is a list of urls that you can put into the package sources window.  As of this blog post, there are two URLs:

Sometimes you’ll get an error when the package manager attempts to update your packages.  If this occurs, it could be due to a broken url to a package site.  There is little you can do about the NuGet site itself.  If it’s down, you’re out of luck.  Fortunately, that’s a rare event.  For local package feeds, you can temporarily turn them off (assuming your project doesn’t use any packages from your local feed).  To turn off a feed, go to the “Package Sources” window and just uncheck its check box.  Merely selecting one package feed from the drop-down does not prevent the package manager from checking, and failing on, a bad local feed.

Restart Visual Studio

One other trick that I’ve learned is to restart Visual Studio.  Sometimes the package manager just isn’t behaving itself.  It can’t seem to find any packages and your project has about 8,000 errors consisting of missing dependencies.  In this instance, I’ll clear the local cache, close Visual Studio, then re-open Visual Studio with my solution and perform a package restore.

Package Dependency Errors

Sometimes there are two or more versions of the same package in your solution.  This can cause dependency errors that are tough to find.  You’ll get a dependency error in one project because it uses a newer version of a package than another project that it depends on.

To find these problems, you can right click on the solution and select “Manage NuGet Packages for Solution…” then click on each package name and look at the check boxes on the right.  If you see two different versions, update all projects to the same version:

Finally

I hope these hints save you a lot of time and effort when dealing with packages.  The problems that I’ve listed here have appeared in my projects many times, and I’m confident you’ll run into them as well.  Be sure to hit the like button if you found this article helpful.

 

DbContextOptionsBuilder does not contain a definition for ‘UseSqlServer’

Finding the correct NuGet packages for your code in .Net Core can be challenging.  In this instance there is no project.json error, and yet this one extension method is missing:

This will happen when your EF database project contains at least these two NuGet packages:

    "dependencies": {
        "Microsoft.EntityFrameworkCore": "1.1.1",
        "NETStandard.Library": "1.6.1"
    },

What’s missing is the SQL Server package.  It took some trial and error to find the right version:

    "dependencies": {
        "Microsoft.EntityFrameworkCore": "1.1.1",
        "Microsoft.EntityFrameworkCore.SqlServer": "1.1.1",
        "NETStandard.Library": "1.6.1"
    },

The easiest way to find the latest version of a package is to delete the version number back to the first decimal point and then re-type the “.”, which brings up the version list:

As you can see from the drop-down that appears, version 1.1.1 is the latest current version (by the time you read this, there could be a newer one).  When I was attempting to fix this problem, a lot of forum posts indicated that you needed to add “using Microsoft.Data.Entity;”, but that’s not the solution in this instance.

I’m posting this on my blog so I have a reference if I run into this problem again.  Hopefully it will help those who got stuck on this crazy minor issue and couldn’t find a working solution.

 

Dot Net Core Using the IOC Container

I’ve talked about Inversion Of Control in previous posts, but I’m going to go over it again.  If you’re new to IOC containers, breaking dependencies and unit testing, then this is the blog post you’ll want to read.  So let’s get started…

Basic Concept of Unit Testing

Developing and maintaining software is one of the most complex tasks ever performed by humans.  Software can grow to proportions that cannot be understood by any one person at a time.  To compound the difficulty of maintaining and enhancing code, one small change can affect the operation of something that seems unrelated.  Engineers who build something physical, say a jumbo jet, can identify a problem and fix it; they usually don’t expect a problem with the wing to affect the passenger seats.  In software, all bets are off.  So there needs to be a way to test everything when a small change is made.

The reason you want to create a unit test is to put in place a tiny automatic regression test.  This test is executed every time you change code to add an enhancement.  If you change some code, the test runs and ensures that you didn’t break a feature that you already coded and tested previously.  Each time you add one feature, you add a unit test.  Eventually, you end up with a collection of unit tests covering each combination of features used by your software.  These tests ride along with your source code forever.  Ideally, you want to always regression test every piece of logic that you’ve written.  In theory this will prevent you from breaking existing code when you add a new enhancement.

To ensure that you are unit testing properly, you need to understand coverage.  Coverage is not everything, but it’s a measurement of how much of your code is covered by your unit tests and you should strive to maximize this.  There are tools that can measure this for you, though some are expensive.  One aspect of coverage that you need to be aware of is the combination “if” statement:

if (input == 'A' || input =='B')
{
    // do something
}

This is a really simple example, but your unit test suite might contain a test that feeds the character A into the input, and you’ll get coverage for the inner part of the if statement.  However, you have not tested the case when the input is B, and that input might be used by other logic in a slightly different way.  Technically, we don’t have 100% coverage.  I just want you to be aware that this issue exists and that you might need to do some analysis of your code coverage when you’re creating unit tests.
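One cheap way to close that gap is to use an xUnit Theory so the same test runs once per input.  This is only a sketch against a hypothetical method that wraps the if statement above:

using Xunit;

public class InputProcessor
{
    // Wraps the combination "if" statement from the example above.
    public bool IsSpecialInput(char input)
    {
        if (input == 'A' || input == 'B')
        {
            return true;
        }
        return false;
    }
}

public class InputProcessorTests
{
    [Theory]
    [InlineData('A')]
    [InlineData('B')]
    public void SpecialInputsAreAccepted(char input)
    {
        var processor = new InputProcessor();

        // Each InlineData row runs as its own test, so both halves of the
        // condition get covered.
        Assert.True(processor.IsSpecialInput(input));
    }
}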

One more thing about unit tests, and this is very important to keep in mind: when you deploy this software and bugs are reported, you will need to add a unit test for each bug reported.  The unit test must break your code exactly the way the bug did.  Then you fix the bug, and that prevents any other developer from undoing your bug fix.  Of course, your bug fix will be followed by another unit test suite run to make sure you didn’t break anything else.  This will help you make forward progress in your quest for bug-free or low-bug software.

Dependencies

So you’ve learned the basics of unit test writing, you’re creating objects, and you’re putting one or more unit tests on each method.  Suddenly you run into an issue: your object connects to a device for input.  For example, you read from a text file or you connect to a database to read and write data.  Your unit test should never cause files to be written or data to be written to a real database.  It’s slow, and the data being written would need to be cleaned out when the test completes.  What if the tests fail?  Your test data might still be in the database.  Even if you set up a test database, you would not be able to run two versions of your unit tests at the same time (think of two developers executing their local copies of the unit test suite).

The device being used is called a dependency.  The object depends on the device and it cannot operate properly without the device.  To get around dependencies, we need to create a fake or mock database or a fake file I/O object to put in place of the real database or file I/O when we run our unit tests.  The problem is that we need to somehow tell the object under test to use the fake or mock instead of the real thing.  The object must also default to the real database or file I/O when not under test.

The current trend in breaking dependencies involves a technique called Inversion Of Control, or IOC.  IOC allows us to define all object creation points at program startup time.  When unit tests are run, we substitute fakes for the objects that perform database and I/O functions.  Then we call our objects under test and the IOC system takes care of wiring the correct dependencies together.  Sounds easy.

IOC Container Basics

Here are the basics of how an IOC container works.  I’m going to cut out all the complications involved and keep this super simple.

First, there’s the container.  This is a dictionary of interfaces and classes that is used as a lookup.  Basically, you create your class and then you create a matching interface for it.  When you call one object from another, you use the interface to look up which class to call.  Here’s a diagram of object A depending on object B:

Here’s a tiny code sample:

public class A
{
  public void MyMethod()
  {
    var b = new B();

    b.DependentMethod();
  }
}

public class B
{
  public void DependentMethod()
  {
    // do something here
  }
}

As you can see, class B is created inside class A.  To break the dependency we need to create an interface for each class and add them to the container:

public interface IB
{
  void DependentMethod();
}

public interface IA
{
  void MyMethod();
}

Inside Program.cs:

var serviceProvider = new ServiceCollection()
  .AddSingleton<IB, B>()
  .AddSingleton<IA, A>()
  .BuildServiceProvider();

var a = serviceProvider.GetService<IA>();
a.MyMethod();

Then modify the existing objects to use the interfaces and provide for the injection of B into object A:

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public void MyMethod()
  {
    _b.DependentMethod();
  }
}

public class B : IB
{
  public void DependentMethod()
  {
    // do something here
  }
}

The service collection object is where all the magic occurs.  This object is filled with definitions of which interface will be matched with which class.  As you can see from the insides of class A, there is no longer a reference to class B anywhere.  Only the interface IB is used to reference the object that is passed (injected) into the constructor.  The service collection will look up IB, see that it needs to create an instance of B, and pass that along.  When MyMethod() is executed in A, it just calls the _b.DependentMethod() method without worrying about the actual instance of _b.  What does that do for us when we are unit testing?  Plenty.

Mocking an Object

Now I’m going to use a NuGet package called Moq.  This framework is exactly what we need because it can take an interface and create a fake object that we can apply simulated outputs to.  First, let’s modify our A and B class methods to return some values:

public class B : IB
{
  public int DependentMethod()
  {
    return 5;
  }
}

public interface IB
{
  int DependentMethod();
}

public class A : IA
{
  private readonly IB _b;

  public A(IB b)
  {
    _b = b;
  }

  public int MyMethod()
  {
    return _b.DependentMethod();
  }
}

public interface IA
{
  int MyMethod();
}

I have purposely kept this so simple that there’s nothing being done.  As you can see, DependentMethod() just returns the number 5 in real life.  Your methods might perform a calculation and return the result, or you might have a random number generator or it’s a value read from your database.  This example just returns 5 and we don’t care about that because our mock object will return any value we want for the unit test being written.

Now the unit test using Moq looks like this:

[Fact]
public void ClassATest1()
{
    var mockedB = new Mock<IB>();
    mockedB.Setup(b => b.DependentMethod()).Returns(3);

    var a = new A(mockedB.Object);

    Assert.Equal(3, a.MyMethod());
}

The first line of the test creates a mock of interface IB called “mockedB”.  The next line creates a fake return value for any call to the DependentMethod() method.  Next, we create an instance of class A (the real class) and inject the mocked B object into it.  We’re not using the container for the unit test because we don’t need to.  Technically, we could create a container and put the mocked B object into one of the service collection items, but this is simpler.  Keep your unit tests as simple as possible.

Now that there is an instance of class A called “a”, we can assert to test whether a.MyMethod() returns 3.  If it does, then we know that the mocked object was called by “a” instead of a real instance of class B (since that always returns a 5).

Where to Get the Code

As always you can get the latest code used by this blog post at my GitHub account by clicking here.

 

Dot Net Core In Memory Unit Testing Using xUnit

When I started using .Net Core and xUnit, I found it difficult to find information on how to mock or fake the Entity Framework database code.  So I’m going to show a minimized code sample using xUnit, Entity Framework and an in-memory database with .Net Core.  I’m only going to set up two projects: DataSource and UnitTests.

The DataSource project contains the repository, domain and context objects necessary to connect to a database using Entity Framework.  Normally you would not unit test this project.  It is supposed to be set up as a group of pass-through objects and interfaces.  I’ll set up POCOs (Plain Old C# Objects) and their entity mappings to show how to keep your code as clean as possible.  There should be no business logic in this entire project.  In your solution, you should create one or more business projects to contain the actual logic of your program.  Those projects will contain the objects under unit test.

The UnitTests project speaks for itself.  It will contain the in-memory Entity Framework fake code with some test data and a sample of two unit tests.  Why two tests?  Because it’s easy to create a demonstration with one unit test.  Two tests will be used to demonstrate how to ensure that your test data initializer doesn’t accidentally get called twice (causing twice as much data to be created).

The POCO

I’ve written about Entity Framework before, and usually I’ll use data annotations, but POCOs are much cleaner.  If you look at some of my blog posts about NHibernate, you’ll see the POCO technique used.  Using POCOs means that you’ll also need to set up a separate class of mappings for each table.  This keeps your code separated into logical parts.  For my sample, I’ll put the mappings into the Repository folder and call them TablenameConfig.  Each mapping class will be a static class so that I can use an extension method to apply the mappings.  I’m getting ahead of myself, so let’s start with the POCO:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}

That’s it.  If you have the database defined, you can use a mapping or POCO generator to create this code and just paste each table into its own C# source file.  All the POCO objects are in the Domain folder (there’s only one, and that’s the Product table POCO).

The Mappings

The mappings file looks like this:

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource.Repository
{
    public static class ProductConfig
    {
        public static void AddProduct(this ModelBuilder modelBuilder, string schema)
        {
            modelBuilder.Entity<Product>(entity =>
            {
                entity.ToTable("Product", schema);

                entity.HasKey(p => p.Id);

                entity.Property(e => e.Name)
                    .HasColumnName("Name")
                    .IsRequired(false);

                entity.Property(e => e.Price)
                    .HasColumnName("Price")
                    .IsRequired(false);
            });
        }
    }
}

That is the whole file, so now you know what to include in your usings.  This class defines an extension method for the ModelBuilder object.  Basically, it’s called like this:

modelBuilder.AddProduct("dbo");

I passed the schema as a parameter.  If you are only using the DBO schema, then you can just remove the parameter and force it to be DBO inside the ToTable() method.  You can and should expand your mappings to include relational integrity constraints.  The purpose in creating a mirror of your database constraints in Entity Framework is to give you a heads-up at compile-time if you are violating a constraint on the database when you write your LINQ queries.  In the “good ol’ days” when accessing a database from code meant you created a string to pass directly to MS SQL server (remember ADO?), you didn’t know if you would break a constraint until run time.  This makes it more difficult to test since you have to be aware of what constraints exist when you’re focused on creating your business code.  By creating each table as a POCO and a set of mappings, you can focus on creating your database code first.  Then when you are focused on your business code, you can ignore constraints, because they won’t ignore you!

The EF Context

Sometimes I start by writing my context first, then create all the POCOs and then the mappings.  Kind of a top-down approach.   In this example, I’m pretending that it’s done the other way around.  You can do it either way.  The context for this sample looks like this:

using DataSource.Domain;
using DataSource.Repository;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public class StoreAppContext : DbContext, IStoreAppContext
    {
        public StoreAppContext(DbContextOptions<StoreAppContext> options)
        : base(options)
        {

        }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.AddProduct("dbo");
        }
    }
}

You can see immediately how I put the mapping setup code inside the OnModelCreating() method.  As you add POCOs, you’ll need one of these lines for each table.  There is also an EF context interface defined, which is never actually used in my unit tests.  The interface is intended for the actual code in your program.  For instance, if you set up an API, you’re going to end up using an IOC container to break dependencies.  In order to do that, you’ll need to reference the interface in your code and then define which object belongs to that interface in your container setup, like this:

services.AddScoped<IStoreAppContext>(provider => provider.GetService<StoreAppContext>());

If you haven’t used IOC containers before, you should know that the above code will add an entry to a dictionary of interfaces and objects for the application to use.  In this instance the entry for IStoreAppContext is matched with the object StoreAppContext.  So any object that references IStoreAppContext will end up getting an instance of the StoreAppContext object.  But IOC containers are not what this blog post is about (I’ll create a blog post on that subject later).  So let’s move on to the unit tests, which are what this blog post is really about.

The Unit Tests

As I mentioned earlier, you’re not actually going to write unit tests against your database repository.  It’s redundant.  What you’re attempting to do is write a unit test covering a feature of your business logic, and the database is getting in your way because your business object calls the database in order to make a decision.  What you need is a fake database in memory that contains the exact data you want your object to work with, so you can check whether it makes the correct decision.  You want to create unit tests for each tiny little decision made by your objects and methods, and you want to be able to feed different sets of data to each test, or you can set up one large set of test data and use it for many tests.

Here’s the first unit test:

[Fact]
public void TestQueryAll()
{
    var temp = (from p in _storeAppContext.Products select p).ToList();

    Assert.Equal(2, temp.Count);
    Assert.Equal("Rice", temp[0].Name);
    Assert.Equal("Bread", temp[1].Name);
}

I’m using xUnit, and this test just checks to see if there are two items in the product table, one named “Rice” and the other named “Bread”.  The _storeAppContext variable needs to be a valid Entity Framework context, and it must be connected to an in-memory database.  We don’t want to be changing a real database when we unit test.  The code for setting up the in-memory data looks like this:

var builder = new DbContextOptionsBuilder<StoreAppContext>()
    .UseInMemoryDatabase();
Context = new StoreAppContext(builder.Options);

Context.Products.Add(new Product
{
    Name = "Rice",
    Price = 5.99m
});
Context.Products.Add(new Product
{
    Name = "Bread",
    Price = 2.35m
});

Context.SaveChanges();

This is just a code snippet; I’ll show how it fits into your unit test class in a minute.  First, a DbContextOptionsBuilder object is built (builder).  This gets you an in-memory database with the tables defined in the mappings of the StoreAppContext.  Next, you define the context that you’ll be using for your unit tests with builder.Options.  Once the context exists, you can pretend you’re connected to a real database: just add items and save them.  I would create classes for each set of test data and put them in a directory in your unit test project (usually I call the directory TestData).

Now, you’re probably thinking: I can just call this code from each of my unit tests.  Which leads to the thought: I can just put this code in the unit test class constructor.  That sounds good; however, the test runner constructs your test class for each test method, so you end up adding to the same in-memory database over and over.  Your first unit test will see two rows of Product data, the second unit test will see four rows, and so on.  Go ahead and copy the above code into your constructor and see what happens.  You’ll see that TestQueryAll() fails because there will be 4 records instead of the expected 2.  So how do we make sure the initializer is executed only once, before the first unit test runs, instead of once per test?  That’s where IClassFixture comes in.  This is an interface used by xUnit, and you basically add it to your unit test class like this:

public class StoreAppTests : IClassFixture<TestDataFixture>
{
    // unit test methods
}

Then you define your test fixture class like this:

using System;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace UnitTests
{
    public class TestDataFixture : IDisposable
    {
        public StoreAppContext Context { get; set; }

        public TestDataFixture()
        {
            var builder = new DbContextOptionsBuilder<StoreAppContext>()
                .UseInMemoryDatabase();
            Context = new StoreAppContext(builder.Options);

            Context.Products.Add(new Product
            {
                Name = "Rice",
                Price = 5.99m
            });
            Context.Products.Add(new Product
            {
                Name = "Bread",
                Price = 2.35m
            });

            Context.SaveChanges();
        }

        public void Dispose()
        {

        }
    }
}

Next, you’ll need to add some code to the unit test class constructor that reads the context property and assigns it to an object property that can be used by your unit tests:

private readonly StoreAppContext _storeAppContext;

public StoreAppTests(TestDataFixture fixture)
{
    _storeAppContext = fixture.Context;
}

What happens is that xUnit will call the constructor of the TestDataFixture object one time.  This creates the context and assigns it to the fixture property.  Then the constructor of the unit test class is called for each unit test, but it only copies the fixture’s context property to the unit test class so that the test methods can reference it.  Now run your unit tests and you’ll see that the same data is available for each unit test.

One thing to keep in mind is that you’ll need to tear down and rebuild your data for each unit test if your unit tests call methods that insert or update your test data.  For that setup, you can use the test fixture to populate static lookup tables (tables not modified by any of your business logic).  Then create a data initializer and a data destroyer that fill and clear the tables that are modified by your unit tests.  The data initializer is called in the unit test class constructor and the destroyer in its Dispose() method.  A rough sketch of that pattern follows.
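Here is a sketch of what that could look like; the fixture for static lookup data is left out for brevity, and the test and data are made up, but the shape is what matters: the constructor plays the role of the data initializer and Dispose() plays the destroyer:

using System;
using System.Linq;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class ProductUpdateTests : IDisposable
{
    private readonly StoreAppContext _storeAppContext;

    public ProductUpdateTests()
    {
        // xUnit runs this constructor before every test, so it acts as the
        // per-test data initializer for the tables the tests modify.
        var builder = new DbContextOptionsBuilder<StoreAppContext>()
            .UseInMemoryDatabase();
        _storeAppContext = new StoreAppContext(builder.Options);

        _storeAppContext.Products.Add(new Product { Name = "Rice", Price = 5.99m });
        _storeAppContext.SaveChanges();
    }

    [Fact]
    public void AddProductInsertsARow()
    {
        _storeAppContext.Products.Add(new Product { Name = "Milk", Price = 3.49m });
        _storeAppContext.SaveChanges();

        Assert.True(_storeAppContext.Products.Any(p => p.Name == "Milk"));
    }

    public void Dispose()
    {
        // Per-test data destroyer: clear the modified table so the next test
        // starts from a known state (the in-memory store is shared).
        _storeAppContext.Products.RemoveRange(_storeAppContext.Products);
        _storeAppContext.SaveChanges();
    }
}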

Where to Get the Code

You can get the complete source code from my GitHub account by clicking here.

 

Get ASP.Net Core Web API Up and Running Quickly

Summary

I’m going to show you how to set up your environment so you can get results from an API using ASP.Net Core quickly.  I’ll also discuss ways to troubleshoot issues and get logging and troubleshooting tools working quickly.

ASP.Net Core Web API

Web API has been around for quite some time, but a lot of changes were made for .Net Core applications.  If you’re new to the world of developing APIs, you’ll want to get your troubleshooting tools up quickly.  As a seasoned API designer, I usually focus on getting my tools and logging up and working first.  I know that I’m going to need these tools to troubleshoot, and there is nothing worse than trying to install a logging system after writing a ton of code.

First, create a .Net API application using Visual Studio 2015 Community edition.  You can follow these steps:

Create a new .Net Core Web Application Project:

Next, you’ll see a screen where you can select the web application project type (select Web API):

A template project will be generated and you’ll have one Controller called ValuesController.  This is a sample REST interface that you can model other controllers from.  You’ll want to setup Visual Studio so you can run the project and use break-points.  You’ll have to change your IIS Express setting in the drop-down in your menu bar:

Select the name of the project that is below IIS Express (as shown in yellow above).  This will be the same as the name of your project when you created it.

Your next task is to create a consumer that will connect to your API, send data and receive results.  For that you can create a standard .Net Console application.  This does not need to be fancy; it’s just a throw-away application that you’ll use for testing purposes only.  You can use the same application to test your installed API just by changing the URL parameter.  Here’s how you do it:

Create a Console application:

Give it a name and hit the OK button.

Download this C# source file by clicking here.  You can create a cs file in your console application and paste this object into it (download my GitHub example by clicking here).  This web client is not strictly necessary; you can use the plain WebClient object, but this one can handle cookies, just in case you decide you need to pass a cookie for one reason or another.

Next, you can setup a url at the top of your Program.cs source:

private static string url = "http://localhost:5000";

The default URL address is always this address, including the port number (the port does not rotate), unless you override it in the settings.  To change this information you can go into the project properties of your API project and select the Debug tab and change it.

Back to the Console application…

Create a static method for your first API consumer.  Name it GetValues to match the method you’ll call:

private static object GetValues()
{
	using (var webClient = new CookieAwareWebClient())
	{
		webClient.Headers["Accept-Encoding"] = "UTF-8";
		webClient.Headers["Content-Type"] = "application/json";

		var arr = webClient.DownloadData(url + "/api/values");
		return Encoding.ASCII.GetString(arr);
	}
}

Next, add a Console.WriteLine() command and a Console.ReadKey() to your Main:

static void Main(string[] args)
{
	Console.WriteLine(GetValues());

	Console.ReadKey();
}

Now switch to your API project and hit F5.  When the blank window appears, switch back to your consumer console application and hit F5.  You should see something like this:

If all of this is working, you’re off to a good start.  You can put break-points into your API code and troubleshoot inputs and outputs.  You can write your remaining consumer methods to test each API that you wrote.  In this instance, there are a total of 5 APIs that you can connect to.
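For example, a consumer for the template’s POST method might look roughly like this; it reuses the same CookieAwareWebClient and url variable from above, and the body is just a sample JSON string rather than code from my project:

private static string PostValue(string value)
{
    using (var webClient = new CookieAwareWebClient())
    {
        webClient.Headers["Content-Type"] = "application/json";

        // The ValuesController template binds a JSON string from the request body.
        var body = Encoding.ASCII.GetBytes("\"" + value + "\"");
        var result = webClient.UploadData(url + "/api/values", "POST", body);

        return Encoding.ASCII.GetString(result);
    }
}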

Logging

Your next task is to install some logging.  Why do you need logging?  Somewhere down the line you’re going to want to install this API on a production system.  Your system should not contain Visual Studio or any other tools that can be used by hackers or drain your resources when you don’t need them.  Logging is going to be your eyes on what is happening with your API.  No matter how much testing you perform on your PC, you’re not going to get a fully loaded API and there are going to be requests that are going to hit your API that you don’t expect.

Nicholas Blumhardt has an excellent article on adding a file logger to .Net Core.  Click here to read it.  You can follow his steps to insert your log code.  I changed the directory, but used the same code in the Configure method:

loggerFactory.AddFile("c:/logs/myapp-{Date}.txt");

I just ran the API project and a log file appeared:

This is easier than NLog (and NLog is easy).

Before you go live, you’ll probably want to tweak the limits of the logging so you don’t fill up your hard drive on a production machine.  One bot could make for a bad day.

Swashbuckle Swagger

The next thing you’re going to need is a help interface.  This interface is not just for help; it will give interface information to developers who wish to consume your APIs.  It can also be useful for troubleshooting when your system goes live.  Go to this website and follow the instructions on how to install and use Swagger.  Once you have it installed, you’ll need to perform a publish to use the help.  Right-click on the project and select “Publish”.  Click on “Custom” and then give your publish profile a name.  Then click the “Publish” button.

Create an IIS website (open IIS, add a new website):

The Physical Path will link to your project directory in the bin/Release/PublishOutput folder.  You’ll need to make sure that your project has IUSR and IIS_IUSRS permissions (right-click on your project directory, select the Security tab, then add full rights for IUSR and do the same for IIS_IUSRS).

You’ll need to add the url to your hosts file (in the c:\Windows\System32\drivers\etc folder):

127.0.0.1 MyDotNetWebApi.com

Next, you’ll need to adjust your application pool .Net Framework to “No Managed Code”.  Go back to IIS and select “Application Pools”:

Now if you point your browser to the URL that you created (MyDotNetWebApi.com in this example), then you might get this:

Epic fail!

OK, it’s not that bad.  Here’s how to troubleshoot this type of error.

Navigate to your PublishOutput folder and scroll all the way to the bottom.  Now edit the web.config file.  Make sure stdoutLogEnabled is set to “true” and change your stdoutLogFile to “c:\logs\stdout”.

Refresh your browser to make it trigger the error again.  Then go to your c:\logs directory and check out the error log.  If you followed the instructions on installing Swagger like I did, you might have missed the fact that this line of code:

var pathToDoc = Configuration["Swagger:Path"];

requires an entry in the appsettings.json file:

"Swagger": {
  "Path": "DotNetWebApi.xml"
}

Now go to your URL and add the following path:

www.yoururl.com/swagger/ui

Next, you might want to change the default path.  You can set it to another path, like “help”.  Just change this line of code:

app.UseSwaggerUi("help");

Now you can type in the following URL to see your API help page:

www.yoururl.com/help

To gain full use of Swagger, you’ll need to comment your APIs.  Just type three slashes above a method and a summary comment block will appear.  This information is used by Swagger to form the descriptions in the help interface.  Here’s an example of commented API code:
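The summary text below is just an illustration on the template’s Get method, not the code from my project, but it shows the shape Swagger reads:

/// <summary>
/// Returns a single value by its id.
/// </summary>
/// <param name="id">The id of the value to look up.</param>
/// <returns>The value as a string.</returns>
[HttpGet("{id}")]
public string Get(int id)
{
    return "value";
}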

Update NuGet Packages

.Net Core allows you to paste NuGet package information directly into the project.json file.  This is convenient because you don’t have to use the package manager to search for packages.  However, the versions of each package are being updated at a rapid rate, so even the project template packages have updates.  You can start up your Manage NuGet Packages window, click on the “Updates” tab, and then update everything.

The downside of upgrading everything at once is that you’ll probably break something.  So be prepared to do some troubleshooting.  When I upgraded my sample code for this blog post I ran into a target framework runtime error.

Other Considerations

Before you deploy an API, be sure to understand what you need as a minimum requirement.  If your API is used by your own software and you expect to use some sort of security or authentication to keep out unwanted users, don’t deploy before you have added the security code to your API.  It’s always easier to test without using security, but this step is very important.

Also, you might want to provide an on/off setting to disable the API functions in your production environment for customers until you have fully tested your deployment.  Such a feature can be used in a canary release, where you allow some customers to use the new feature for a few days before releasing to all of your customers.  This will give you time to estimate load capabilities of your servers.
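A very simple version of that on/off switch is a configuration value checked at the top of each API method.  This is only a sketch: the “ApiEnabled” setting name is hypothetical, and it assumes the configuration object was registered with the IOC container in Startup:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

[Route("api/[controller]")]
public class ValuesController : Controller
{
    private readonly IConfiguration _configuration;

    public ValuesController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // Hypothetical "ApiEnabled" flag read from appsettings.json.
        if (_configuration["ApiEnabled"] != "true")
        {
            return StatusCode(503);
        }

        return Ok(new[] { "value1", "value2" });
    }
}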

I also didn’t discuss IOC container usage, unit testing, database access, where to store your configuration files, etc.  Be sure to set a standard before you go live.

One last thing to consider is the deployment of an API.  You should create an empty API container and check it into your version control system.  Then create a deployment package to be able to deploy to each of your environments (development, QA, stage, production, etc.).  The sooner you get your continuous integration working, the less work it will be to get your project completed and tested.  Manual deployment, even for a test system, takes a lot of time, and human error is the number one killer of deployment efficiency.

Where to Get the Code

As always, you can download the sample code at my GitHub account by clicking here (for the api code) and here (for the console consumer code).  Please hit the “Like” button at the end of this article if this subject was helpful!

 

DotNet Core Target Framework Runtime Error

One of the common events in the new .Net Core is the crazy, somewhat obscure errors that occur.  I was recently working with a Web API in Core, and when I created a publish profile for the API, I got this error:

So it looks like everything is falling apart.  Next I copied the following into Google to see if I could stumble onto a quick fix:

Can not find runtime target for framework '.NETCoreApp,Version=v1.0' compatible with one of the target run times:

After reading several posts on Stack Overflow, I discovered that the key to fixing this error is that it’s looking for a specific run-time environment; in my case it was looking for “win7-x64”.  There was no run-time environment in my config.json file, and I had tried one of the Stack Overflow suggestions of adding this:

But the right run-time was this (which is listed in step 2 of the error message):

Which is exactly as it’s spelled in the error message.  I think steps 1, 2 and 3 just add confusion to the error message, but it’s probably a generic message for many possible problems and the compiler isn’t sophisticated enough to narrow it down.  Anyway, here’s the Stack Overflow article describing how to fix this error:

Can not find runtime target for framework .NETCoreApp=v1 compatible with one of the target runtimes

 

Dot Net Core Project Renaming Issue

Summary

I’m going to quickly demonstrate a bug that can occur in .Net Core and how to fix it.  The error produced is:

The dependency LibraryName >= 1.0.0-* could not be resolved.

Where “LibraryName” is a project in your solution that you have another project linked to.

Setup

Create a new .Net Core project and add a library to it named “SampleLibrary”.  I named my solution DotNetCoreIssue01.  Now add a .Net Core console project to the solution and name it “SampleConsole”.  Next, right-click on the “References” node of the console application and select Add Reference.  Click the check box next to “SampleLibrary” and click the OK button.  Now your project should build.

Next, rename your library to “SampleLibraryRenamed” and go into your project.json file for your console and change the dependencies to “SampleLibraryRenamed”.  Now rebuild.  The project is now broken.

Your project.json will look like this:



And your Error List box will look like this:




How To Fix This

First, you’ll need to close Visual Studio.  Then navigate to the src directory of your solution and rename the SampleLibrary directory to SampleLibraryRenamed.  

Next, you’ll need to edit the sln file.  This file is located in the root solution directory (the same directory where the src directory is located).  It should be named “DotNetCoreIssue01.sln” if you named your solution the same name I mentioned above.  Look for a line containing the directory that you just renamed.  It should look something like this:

Project("{8BB2217D-0F2D-49D1-97BC-3654ED321F3B}") = "SampleLibraryRenamed", "src\SampleLibrary\SampleLibraryRenamed.xproj", "{EEB3F210-4933-425F-8775-F702192E8988}"

As you can see, the path to the SampleLibraryRenamed project is still src\SampleLibrary, which is the directory you just renamed.  Make it match the renamed directory: src\SampleLibraryRenamed

Now open your solution in Visual Studio and all will be well.

 

Dot Net Core

I’ve been spending a lot of time trying to get up to speed on the new .Net Core product.  The product is at version 1.0.1 but everything is constantly changing.  Many NuGet packages are not compatible with .Net Core and the packages that are compatible are still marked as pre-release.  This phase of a software product is called the bleeding edge.  Normally, I like to avoid the bleeding edge, and wait for a product to at least make it to version 1.  However, the advantages of the new .Net Core make the pain and suffering worth it.

The Good

Let’s start with some of the good features.  First, the dlls are redesigned to allow better dependency injection.  This is a major feature that is long overdue.  Even the MVC controllers can be unit tested with ease.

Next up is the fact that dlls are no longer added to projects by themselves; the NuGet package manager determines what your project needs.  I have long viewed NuGet as an extra hassle, but Microsoft has finally made it a pleasure to work with.  In the past, NuGet made version control hard because you had to remember to exclude the NuGet packages from your check-in.  This has not changed (not in TFS anyway), but the way that NuGet works with projects in .Net Core has changed.  Each time your project loads, the NuGet packages are loaded.  Which packages are used is determined by the project.json file in each project (instead of the old NuGet packages.config file).  Typing in a package name and saving the project.json will cause the package to load.  This cuts your development time if you need a package loaded into multiple projects: just copy the line from one project.json file and paste it into the others.

It appears that Microsoft is leaning more toward xUnit for unit testing.  I haven’t used xUnit much in the past, but I’m starting to really warm up to it.  I like the simplicity: no attribute is needed on the unit test class, and there is a “Theory” attribute that can feed inline data into a unit test multiple times, turning one unit test into one test per input set.

The new IOC container is very simple.  In an MVC controller class, you can specify a constructor with parameters typed as your interfaces, and the built-in IOC container will automatically match each interface with the instance set up in the startup source, as sketched below.
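For example, a controller can just ask for an interface in its constructor.  IProductRepository here is a hypothetical interface that would be registered in Startup.ConfigureServices:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public interface IProductRepository
{
    IEnumerable<string> GetAll();
}

public class ProductController : Controller
{
    private readonly IProductRepository _productRepository;

    // The built-in IOC container sees the IProductRepository parameter and
    // injects whatever class was registered for that interface at startup.
    public ProductController(IProductRepository productRepository)
    {
        _productRepository = productRepository;
    }

    public IActionResult Index()
    {
        return View(_productRepository.GetAll());
    }
}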

The documentation produced by Microsoft is very nice: https://docs.asp.net/en/latest/intro.html.  It’s clean, simple and explains all the main topics.

The new command line commands are simple to use.  The “dotnet” command can be used to restore NuGet packages with “dotnet restore”.  The build is “dotnet build”, and “dotnet test” is used to execute the unit tests.  The dotnet command uses the config files to determine what to restore, build or test.  This feature is most important for people who have to set up and deal with continuous integration systems such as Jenkins or Team City.

The Bad

OK, nothing is perfect, and this is a very new product.  Microsoft and many third-party vendors are scrambling to get everything up to speed, but .Net Core is still in the early stages of development.  So here is a list of hopefully temporary problems with .Net Core.

The NuGet package manager is very fussy.  Many times I just use the user interface to add NuGet packages, because I’m unsure of the version that is available.  Using a wild-card can cause a package version to be brought in that I don’t really want.  I seem to spend a lot more time trying to make the project.json files work without error.  Hopefully, this problem will be diminished after the NuGet packages catch up to .Net Core.

If you change the name of a project that another project is dependent on, you’ll get a build error.  In order to fix this issue you need to exit Visual Studio, rename the project directory to match, and then fix the sln file to recognize the same directory change.

Many 3rd party products do not support .Net Core yet.  I’m using Resharper Ultimate and the unit tests do not work with this product; therefore, the code coverage tool does not work either.  I’m confident that JetBrains will fix this issue within the next month or two, but it’s frustrating to have a tool I rely on that doesn’t work.

Many of the 3rd party NuGet packages don’t work with .Net Core either.  FakeItEasy is one such package; there is no .Net Core compatible version as of this blog post.  Eventually, these packages will be updated to work with Core, but it’s going to take time.

What to do

I’m old enough to remember when .Net was introduced.  It took me a long time to get used to the new paradigm.  Now there’s a new paradigm, and I intend to get on the band-wagon as quickly as I can.  So I’ve done a lot of tests to see how .Net Core works and what has changed.  I’m also reading a couple of books.  The first book I bought was the .Net Core book:


This is a good book if you want to browse through and learn everything that is new in .Net Core.  The information in this book is an inch deep and a mile wide.  So you can use it to learn what technologies are available, zero in on a subject that you want to explore, and then go to the Internet and search for more material.

The other book I bought was this one:


This book is thicker than the former and its subject is narrower.  I originally ordered it as an MVC 6 book, but they delayed the book and renamed it for Core.  I’m very impressed by this book because each chapter shows a different technology to be used with MVC, with unit tests and explanations for each.  There is an application that the author builds throughout the book; each chapter builds on the previous program and adds some sort of functionality, like site navigation or filtering.  Then the author explains how to write the unit tests for those features in the chapter that introduces them.  Most books go through the features chapter by chapter and then have a separate chapter on how to use the unit test features of a product, so this is a refreshing change.

I am currently working through this book to get up to speed as quickly as possible.  I would recommend that any .Net programmer get up to speed on Core as soon as possible.