What .Net Developers Should Know about MS SQL and IIS

Summary

In this post, I'm going to explain a couple of techniques and tools that every developer should know.  If you are just graduating from college and you are looking for a .Net job, learn these simple techniques and advance your career.  If you're a mid-level to advanced .Net developer and you don't know these tools and techniques yet, learn them.  These tools and techniques will save you time and give you an edge in building better software.

SQL Profiler

Let's assume you have built this outstanding program.  It's a work of art.  It uses .Net Core 2.0 with IOC containers, Entity Framework Core 2.0 and other advanced technologies.  You're testing your web-based software and you notice a pause when you click on a button or load a page.  The first questions that pop into your mind are: What is causing the slowdown?  Is it the database or the IIS server?  And what can I do about it?

Let's eliminate or confirm the database first.  If you installed the profiler tool with SQL Server (I'm going to assume you did; otherwise you'll need to re-run the installer and add it), open SQL Server Management Studio, go to the Tools menu and select SQL Server Profiler.  A new window will open and you'll need to connect to your database instance just as you do when opening the management studio itself.  Once the profiler is open, it's time to exercise the page you are having issues with.  Click the stop button and use the eraser button to clear any records that have already appeared in the window.  Get to the point where you are about to click the button on your web page, hit the run button in the profiler, then hit the button on the page.  Once the page loads, hit the stop button in the profiler so nothing else is recorded.  Now you have records to analyze.  You'll be surprised at the number of database calls EF will perform.

I used this project as a demo for the screenshot coming up:

https://github.com/fdecaire/MVCWithAutoFacIOC

Download the project, run the database create script (store_product_database_setup.sql) and run the application.  You should see something like this:

As you can see, there is one call to the database, showing the "select" command.  Click on the "BatchStarting" line and notice the detail in the window at the bottom:

Now you can copy that query and paste it into SQL Server Management Studio to test the exact query that Entity Framework is sending to SQL Server:

This will reveal whether you are querying too many fields or returning far more records than you need.  If you discover that your query returns a million records but your interface only shows the top 10, then you need to tweak your LINQ query to ask for only 10 records.  In this tiny example we have three records.  Let's make it ask for 2 records.  Here's my original LINQ query:

(from s in _db.Stores select s).ToList();

I changed it to:

(from s in _db.Stores select s).Take(2).ToList();

Re-run the program, capture the data in profiler and this is what I get:

Notice the "TOP(2)" difference in the query.  This is the kind of performance tuning you should be aware of.  It's very easy to create C# code and LINQ queries and never understand what is really going on behind the scenes.  Entity Framework takes your LINQ query, turns it into a SELECT statement and transmits it to MS SQL.  MS SQL then executes the query and returns the results so that EF can turn them back into a list of objects.  With SQL Profiler, you can get into the nuts and bolts of what is really going on, and I would recommend you run the profiler at least once after you have built your software and think it is ready for prime-time.  If you see a query pause, copy the profiled SQL query into management studio and see if you can speed up the query while still getting the results that you need.

Another tool you can use is the estimated execution plan tool.  The toolbar button looks like this:

This tool will break your query down into the pieces that will be executed to form the results.  In the case of my previous query, there is only one piece:

That piece of the query costs 100% of the execution time.  If your query included a union and maybe some sub-queries, this tool is very useful in determining which part of the query is costing you the most processing cycles.  Use this tool to decide which part of your query you want to focus your energy on.  Don’t waste time trying to optimize the portion of your query that only takes 2% of the execution time.  Maybe you can get that to go twice as fast, but the overall query will only be about 1% faster than before.  If you focus your energy on a section that takes 98% of your execution time, then you’ll be able to boost the performance in a noticeable manner.

Web Server Troubleshooting

If you're using IIS, some day you'll run into a problem where you don't get any logs and your website or API crashes immediately (usually with a 500 error).  This is always a difficult problem to troubleshoot, until you realize that there are only a handful of problems that cause it.  The most common is an issue with the XML formatting in your web.config file.  I can't tell you how many times I have been bitten by this problem!  The easiest way to test and troubleshoot this error is to open the IIS manager control panel, select your website and then click on one of the icons that displays a section of your web.config file, like "Modules":

If there is an error, the line number in the web.config file will be shown.  You'll be able to look at the XML in the web.config and spot the missing tag, extra quote or stray symbol (often it's an "&", ">" or "<" inside a database connection string password, which must be escaped as "&amp;", "&gt;" or "&lt;" in XML).  Fix the web.config issue and go back to Modules again.  If there is another error, fix it and return again, until it works.

On .Net Core, there is an error log that can report startup errors.  This logging happens before your logging framework starts and is very handy for situations where you don't get any logging at all.  When you publish your site to a folder in .Net Core (or Core 2.0), you'll end up with a bunch of dlls, some config files and a web.config file.  The web.config file is mostly empty and might look like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
 <system.webServer>
 <handlers>
 <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
 </handlers>
 <aspNetCore processPath="dotnet" arguments=".\Website.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
 </system.webServer>
</configuration>

Change the "stdoutLogFile" parameter to point to a file location that you can find, and set "stdoutLogEnabled" to "true" so the module actually writes the file.  I usually set mine to "C:\logs\myapplication_logging_error.txt" or something like that.  Then I run the program until it crashes and check the c:\logs directory to see if the file exists.  If it does, it usually contains information about the crash that can be used to troubleshoot what is going on.  I'm assuming at this point in your troubleshooting that the website or API works from Visual Studio and you are having issues with the deployed application.  If you are having issues executing your application in Visual Studio, you should be able to zero in on the error in VS using breakpoints and other tools.

For NLog there is an error log for the error log.  This is where errors go when a problem is detected in the NLog code itself, usually caused by a configuration error.  At the top of your nlog.config file should be something like this:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 autoReload="true"
 internalLogLevel="Warn"
 internalLogFile="C:\logs\myapplication_nlog.txt">

As you can see, there is an “internalLogFile” parameter.  Set this to a location and filename that you can find.  Then execute your program and see if that log file appears.  If it does, then open it up and examine the contents.  Usually it’s an exception error involving the NLog logger.

Application Pools

The application pool type must match the .Net version that you are using.  For .Net Core and Core 2.0, you’ll need to use “no managed code”.  For .Net, you’ll set your pool to “.Net Framework v4.0”.  If you are unfortunate enough to be using really obsolete legacy code, you can set it to “.Net Framework v2.0”.

When you deploy your Core 2.0 application, you may need to recycle the application pool.  You'll run into a problem where you deploy new code and the "bug fix" or "new feature" does not show up.  Recycle the app pool and see if that fixes your problem.  The application pool might still be serving a copy of the previous program from memory.

Sites

Inside your site settings, there is an ASP settings section:

You can turn on debugging flags inside here.  For your local development system, turn it all on:

Leave these settings off for any production system.  You do not want to advertise your code (and its line numbers) when you have a crash.  For development purposes, you want all the data you can get from an exception.  If these settings are on, you'll get the full dump of the exception message in your browser and then you can return to Visual Studio to find the exact line of code where the error occurred.

I hope this saves you some time when a frustrating deployment problem occurs.  The trick is to learn the most common causes of deployment problems so you don’t spend all day troubleshooting a tiny error instead of working on that next big chunk of code.


Creating POCOs in .Net Core 2.0

Summary

I’ve shown how to generate POCOs (Plain Old C# Objects) using the scaffold tool for .Net Core 1 in an earlier post.  Now I’m going to show how to do it in Visual Studio 2017 with Core 2.0.

Install NuGet Packages

First, you'll need to install the right NuGet packages.  I prefer to use the command line because I've been doing this so long that my fingers type the commands without me thinking about it.  If you're not comfortable with the Package Manager Console, you can use the NuGet Package Manager window for the project you want to create your POCOs in.  Otherwise, you can copy the commands here and paste them into the NuGet Package Manager Console window.  Follow these instructions:

  1. Create a .Net Core 2.0 library project in Visual Studio 2017.
  2. Type or copy and paste the following NuGet commands into the Nuget Package Manager Console window:
install-package Microsoft.EntityFrameworkCore.SqlServer
install-package Microsoft.EntityFrameworkCore.Tools
install-package Microsoft.EntityFrameworkCore.Tools.DotNet

If you open up your NuGet Dependencies treeview, you should see the following:

Execute the Scaffold Command

In the same package manager console window use the following command to generate your POCOs:

Scaffold-DbContext "Data Source=YOURSQLINSTANCE;Initial Catalog=DATABASENAME;Integrated Security=True" Microsoft.EntityFrameworkCore.SqlServer -OutputDir POCODirectory

You'll need to update the data source and initial catalog to point to your database.  If the command executes without error, you'll see a directory named "POCODirectory" that contains a .cs file for each table in the database you just scaffolded.  There will also be a context file that contains all the model builder entity mappings.  You can use this file "as-is" or you can split the mappings into individual files.
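For reference, each generated POCO is a plain class with one property per column.  A Product table, for example, would produce something roughly like this (the table, columns and related Store entity here are illustrative, not taken from your database):

public partial class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }

    // Foreign keys show up as navigation properties to the related entity.
    public virtual Store StoreNavigation { get; set; }
}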

My process consists of generating these files in a temporary project, then copying each table POCO that I want into my real project, along with the model builder mappings for each of those tables.

What This Does not Cover

Any views, stored procedures or functions that you want to access with Entity Framework will not show up with this tool.  You'll still need to create the result POCOs for views, stored procedures and functions by hand (or find a custom tool).  Using EF to call stored procedures is awkward and generally not recommended, but anyone who deals with legacy code and a legacy database will eventually need to interface with an existing stored procedure.
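If you do need to call an existing stored procedure, one option is to run it through FromSql against an entity whose shape matches the result set.  This is only a sketch: the context, DbSet and stored procedure names are assumptions for illustration, and the columns returned by the procedure must line up with the entity's properties.

// Requires: using Microsoft.EntityFrameworkCore; using System.Linq;
using (var context = new DemoDataContext())
{
    // EF Core materializes each row into the mapped Product entity.
    var discounted = context.Product
        .FromSql("EXECUTE dbo.GetDiscountedProducts {0}", 10.00m)
        .ToList();
}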

 

.Net MVC Project with AutoFac, SQL and Redis Cache

Summary

In this blog post I'm going to demonstrate a simple .Net MVC project that uses MS SQL Server to access data.  Then I'm going to show how to use Redis caching to cache your results and reduce the amount of traffic hitting your database.  Finally, I'm going to show how to use the AutoFac IOC container to tie it all together and how you can leverage inversion of control to break dependencies and unit test your code.

AutoFac

The AutoFac IOC container can be added to any .Net project using the NuGet manager.  For this project I created an empty MVC project and added a class called AutofacBootstrapper to the App_Start directory.  The class contains one static method called Run() just to keep it simple.  This class contains the container builder setup that is described in the instructions for AutoFac Quick Start: Quick Start.
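Here is a minimal sketch of what that Run() method typically looks like, following the Autofac MVC quick start (this is illustrative, not the exact code in the sample project):

using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;

public static class AutofacBootstrapper
{
    public static void Run()
    {
        var builder = new ContainerBuilder();

        // Register the MVC controllers so Autofac can inject their constructor dependencies.
        builder.RegisterControllers(typeof(AutofacBootstrapper).Assembly);

        // ... the object registrations shown in the sections below go here ...

        var container = builder.Build();
        DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
    }
}

In this setup, Run() would typically be called from Application_Start() in Global.asax.cs.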

Next, I added .Net library projects to my solution for the following purposes:

BusinessLogic – This will contain the business classes that will be unit tested.  All other projects will be nothing more than wire-up logic.

DAC – Data-tier Application.

RedisCaching – Redis backed caching service.

StoreTests – Unit testing library

I’m going to intentionally keep this solution simple and not make an attempt to break dependencies between dlls.  If you want to break dependencies between modules or dlls, you should create another project to contain your interfaces.  For this blog post, I’m just going to use the IOC container to ensure that I don’t have any dependencies between objects so I can create unit tests.  I’m also going to make this simple by only providing one controller, one business logic method and one unit test.

Each .Net project will contain one or more objects and each object that will be referenced in the IOC container must use an interface.  So there will be the following interfaces:

IDatabaseContext – The Entity Framework database context object.

IRedisConnectionManager – The Redis connection manager provides a pooled connection to a Redis server.  I'll describe how to install Redis for Windows so you can use this.

IRedisCache – This is the cache object that will allow the program to perform caching without getting into the ugly details of reading and writing to Redis.

ISalesProducts – This is the business class that will contain one method for our controller to call.

Redis Cache

In the sample solution there is a project called RedisCaching.  This contains two classes: RedisConnectionManager and RedisCache.  The connection manager object needs to be set up in the IOC container first.  It takes the Redis server IP address, which would normally be read from a config file.  In the sample code, I fed the IP address into the constructor at the IOC container registration stage.  The second part of the Redis caching is the actual cache object.  This uses the connection manager object and is set up in the IOC container next, using the previously registered connection manager as a parameter like this:

builder.Register(c => new RedisConnectionManager("127.0.0.1"))
    .As<IRedisConnectionManager>()
    .PropertiesAutowired()
    .SingleInstance();

builder.Register(c => new RedisCache(c.Resolve<IRedisConnectionManager>()))
    .As<IRedisCache>()
    .PropertiesAutowired()
    .SingleInstance();

In order to use the cache, just wrap your query with syntax like this:

return _cache.Get("ProductList", 60, () =>
{
  return (from p in _db.Products select p.Name);
});

The code between the braces is a normal EF LINQ query.  Its result is returned from the anonymous function passed as the last parameter: () =>

The cache key name in the example above is "ProductList" and it will stay in the cache for 60 minutes.  The _cache.Get() method checks the cache first; if the data is there, it returns the data and moves on.  If the data is not in the cache, it calls the inner function, causing the EF query to be executed.  The result of the query is then saved to the cache server and returned to the caller.  This guarantees that any identical request within the next 60 minutes will be served directly from the cache.  If you dig into the Get() method code you'll notice that there are multiple try/catch blocks that handle the case where the Redis server is down.  In that situation, the inner query is executed and the result is returned anyway.  In production your system would run a bit slower and you'd notice your database working harder, but the system keeps running.
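To make that behavior concrete, here is a minimal sketch of what such a Get() method might look like using StackExchange.Redis and Json.NET.  The GetDatabase() helper on the connection manager and the exact method signature are assumptions; the repository's real implementation differs in details such as logging and error handling.

using System;
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json;

public class RedisCache : IRedisCache
{
    private readonly IRedisConnectionManager _connectionManager;

    public RedisCache(IRedisConnectionManager connectionManager)
    {
        _connectionManager = connectionManager;
    }

    public List<T> Get<T>(string key, int minutes, Func<IEnumerable<T>> getData)
    {
        try
        {
            // Assumed helper: returns a StackExchange.Redis IDatabase from the pooled connection.
            var db = _connectionManager.GetDatabase();
            var cached = db.StringGet(key);
            if (cached.HasValue)
            {
                return JsonConvert.DeserializeObject<List<T>>(cached);
            }
        }
        catch
        {
            // Redis is unreachable - fall through and read from the database instead.
        }

        var result = getData().ToList();

        try
        {
            var db = _connectionManager.GetDatabase();
            db.StringSet(key, JsonConvert.SerializeObject(result), TimeSpan.FromMinutes(minutes));
        }
        catch
        {
            // Ignore cache write failures; the caller still gets its data.
        }

        return result;
    }
}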

A precompiled version of Redis for Windows can be downloaded from here: Service-Stack Redis.  Download the files into a directory on your computer (I used C:\redis) then you can open a command window and navigate into your directory and use the following command to setup a windows service:

redis-server --service-install

Please notice that there are two "-" characters in front of the "service-install" instruction.  Once this is set up, Redis will start every time you start your PC.

The Data-tier

The DAC project contains the POCOs, the fluent configurations and the context object.  There is one interface for the context object and that’s for AutoFac’s use:

builder.Register(c => new DatabaseContext("Server=SQL_INSTANCE_NAME;Initial Catalog=DemoData;Integrated Security=True"))
    .As<IDatabaseContext>()
    .PropertiesAutowired()
    .InstancePerLifetimeScope();

The connection string should be read from the configuration file before being injected into the constructor shown above, but I'm going to keep this simple and leave out the configuration pieces.
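For example, in a classic MVC project the string could come from the connectionStrings section of web.config (the key name "DemoData" here is an assumption):

// Requires a reference to System.Configuration.
var connectionString = System.Configuration.ConfigurationManager
    .ConnectionStrings["DemoData"].ConnectionString;

builder.Register(c => new DatabaseContext(connectionString))
    .As<IDatabaseContext>()
    .PropertiesAutowired()
    .InstancePerLifetimeScope();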

Business Logic

The business logic library is just one project that contains all the complex classes and methods that will be called by the API.  In a large application you might have two or more business logic projects.  Typically though, you'll divide your application into independent APIs that will each have their own business logic project as well as all the other wire-up projects shown in this example.  By dividing your application by function you'll be able to scale your services according to which function uses the most resources.  In summary, you'll put all the complicated code inside this project and your goal is to apply unit tests to cover all combinations of features that this business logic project will contain.

This project will be wired up by AutoFac as well and it needs the caching and the data tier to be established first:

builder.Register(c => new SalesProducts(c.Resolve<IDatabaseContext>(), c.Resolve<IRedisCache>()))
    .As<ISalesProducts>()
    .PropertiesAutowired()
    .InstancePerLifetimeScope();

As you can see, the database context and the Redis cache are injected into the constructor of the SalesProducts class.  Typically, each class in your business logic project will be registered with AutoFac.  That ensures that you can treat each object independently of the others for unit testing purposes.

Unit Tests

There is one sample unit test that exercises the SalesProducts.Top10ProductNames() method.  This test only covers the case where there are more than 10 products, so the expected count is 10.  For more thorough testing, you should also test fewer than 10, zero, and exactly 10.  The database context is mocked using moq.  The Redis caching system is faked using the interfaces supplied by StackExchange.  I chose to set up a dictionary inside the fake object to simulate a cached data point.  There is no check for cache expiration; this fake only stands in for the real cache.  Technically, I could have mocked the caching and just made it return whatever went into it.  The fake cache can be effective in testing edit scenarios to ensure that the cache is cleared when someone adds, deletes or edits a value.  The business logic should handle cache clearing and a unit test should check for this case.
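For illustration, a test along those lines might look like the sketch below (written with xUnit and Moq; the member names Products and Top10ProductNames, and the FakeRedisCache helper, are assumptions based on the description above rather than the repository's exact code):

[Fact]
public void Top10ProductNames_ReturnsTenRows_WhenMoreThanTenExist()
{
    // Build 15 in-memory products and wrap them in a mocked DbSet so that
    // LINQ queries against the context work without a real database.
    var data = Enumerable.Range(1, 15)
        .Select(i => new Product { Name = "Product " + i })
        .AsQueryable();

    var productSet = new Mock<DbSet<Product>>();
    productSet.As<IQueryable<Product>>().Setup(m => m.Provider).Returns(data.Provider);
    productSet.As<IQueryable<Product>>().Setup(m => m.Expression).Returns(data.Expression);
    productSet.As<IQueryable<Product>>().Setup(m => m.ElementType).Returns(data.ElementType);
    productSet.As<IQueryable<Product>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

    var context = new Mock<IDatabaseContext>();
    context.Setup(c => c.Products).Returns(productSet.Object);

    // FakeRedisCache is the dictionary-backed fake described above.
    var salesProducts = new SalesProducts(context.Object, new FakeRedisCache());

    var result = salesProducts.Top10ProductNames();

    Assert.Equal(10, result.Count);
}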

Other Tests

You can test to see if the real Redis cache is working by starting up SQL Server Management Studio and running the SQL Server Profiler.  Clear the profiler, start the MVC application.  You should see some activity:

Then stop the MVC program and start it again.  There should be no change to the profiler because the data is coming out of the cache.

One thing to note: you cannot use IQueryable as the return type for your query.  It must be a list, because the data read from Redis is in JSON format and it's deserialized all at once.  You can serialize and deserialize a List<T> object.  I would also recommend adding a logger to the cache object to catch errors like this (since the try/catch blocks would otherwise swallow them).

Another aspect of using an IOC container that you need to be conscious of is scope.  This can come into play when you deploy your application to a production environment.  Developers typically can't easily test multi-user situations, so an object with a scope that is too long can cause data to cross over between users.  If, for instance, you register your business logic with a scope of SingleInstance() and the list it holds is supposed to be specific to each user accessing your system, then every user will end up with the data of the first person who accessed the API.  This can also happen if your API receives an ID for its data on each call: if the object only reads the data when the API first starts up, then you'll have a problem.  This sample is so simple that it only contains one segment of data (top 10 products).  It doesn't matter who calls the API, they are all requesting the same data.

Other Considerations

This project is very minimalist, therefore, the solution does not cover a lot of real-world scenarios.

  • You should isolate your interfaces by creating a project just for all the interface classes.  This will break dependencies between modules or dlls in your system.
  • As I mentioned earlier, you will need to move all your configuration settings into the web.config file (or a corresponding config.json file).
  • You should think in terms of two or more instances of this API running at once (behind a load-balancer).  Will there be data contention?
  • Make sure you check for any memory leaks.  IOC containers can make your code logic less obvious.
  • Be careful of initialization code in an object that is started by an IOC container.  Your initialization might occur when you least expect it to.

Where to Get The Code

You can download the entire solution from my GitHub account by clicking here.  You’ll need to change the database instance in the code and you’ll need to setup a redis server in order to use the caching feature.  A sql server script is provided so you can create a blank test database for this project.

 

DotNet Core vs. NHibernate vs. Dapper Smackdown!

The Contenders

Dapper

Dapper is a hybrid ORM.  This is a great ORM for those who have a lot of ADO legacy code to convert.  Dapper uses SQL queries and parameters just like ADO, but the parameters to a query can be simplified into POCOs, and the results of select queries can be mapped into POCOs as well.  Converting legacy code can be accomplished in steps: the initial pass replaces ADO with Dapper, a second step adds POCOs, then queries can be changed into LINQ (if desired).  The speed difference in my tests shows that Dapper is better than my implementation of ADO for select queries, but slower for inserts and updates.  I would expect ADO to perform the best, but there is probably a performance penalty for using the data set adapter instead of the straight SqlCommand method.
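A typical Dapper call looks like the snippet below (the table, columns and connection string are assumptions for illustration): the anonymous object replaces hand-built parameter lists, and the result rows map straight into a POCO.

// Requires: using Dapper; using System.Data.SqlClient; using System.Linq;
using (var connection = new SqlConnection(connectionString))
{
    // Query<T> maps each result row onto a Product object by column name.
    List<Product> products = connection
        .Query<Product>(
            "SELECT Id, Name, Price FROM Product WHERE StoreId = @StoreId",
            new { StoreId = 1 })
        .ToList();
}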

If you’re interested in Dapper you can find information here: Stack Exchange/Dapper.   Dapper has a NuGet package, which is the method I used for my sample program.

ADO

I rarely use ADO these days, with the exception of legacy code maintenance or if I need to perform some sort of bulk insert operation for a back-end system.  Most of my projects are done in Entity Framework, using the .Net Core or the .Net version.  This comparison doesn’t feel complete without including ADO, even though my smackdown series is about ORM comparisons.  So I assembled a .Net console application with some ADO objects and ran a speed test with the same data as all the ORM tests.

NHibernate

NHibernate is the .Net version of Hibernate.  This is an ORM that I used at a previous company I worked for.  At the time it was faster than Entity Framework 6 by a large amount.  The .Net Core version of Entity Framework has fixed the performance issues of EF and it no longer makes sense to use NHibernate.  I am providing the numbers in this test just for comparison purposes.  NHibernate is still faster than ADO and Dapper for everything except the select.  Both EF-7 and NHibernate are so close in performance that I would have to conclude that they are the same.  The version of NHibernate used for this test is the latest version as of this post (version 4.1.1 with Fluent NHibernate 2.0.3).

Entity Framework 7 for .Net Core

I have updated the NuGet packages for .Net Core for this project and re-tested the code to make sure the performance has not changed over time.  The last time I did a smackdown with EF .Net Core I was using .Net Core version 1.0.0; now I'm using .Net Core 1.1.1.  There were no measurable changes in performance for EF .Net Core.

The Results

Here are the results side-by-side with the .ToList() method helper and without:

Test for Yourself!

First, you can download the .Net Core version by going to my GitHub account here and downloading the source.  There is a SQL script file in the source that you can run against your local MS SQL server to setup a blank database with the correct tables.  The NHibernate speed test code can also be downloaded from my GitHub account by clicking here. The ADO version is here.  Finally, the Dapper code is here.  You’ll want to open the code and change the database server name.

 

EntityFramework .Net Core Basics

In this post I’m going to describe the cleanest way to setup your Entity Framework code in .Net Core.  For this example, I’m going to create the database first, then I’ll run a POCO generator to generate the Plain Old C# Objects and their mappings.  Finally, I’ll put it together in a clean format.

The Database

Here is the SQL script you’ll need for the sample database.  I’m keeping this as simple as possible.  There are only two tables with one foreign key constraint between the two.  The purpose of the tables becomes obvious when you look at the table names.  This sample will be working with stores and the products that are in the stores.  Before you start, you’ll need to create a SQL database named DemoData.  Then execute this script to create the necessary test data:

delete from Product
delete from Store
go
DBCC CHECKIDENT ('[Store]', RESEED, 0);
DBCC CHECKIDENT ('[Product]', RESEED, 0);
go
insert into store (name) values ('ToysNStuff')
insert into store (name) values ('FoodBasket')
insert into store (name) values ('FurnaturePlus')

go
insert into product (Store,Name,price) values (1,'Doll',5.99)
insert into product (Store,Name,price) values (1,'Tonka Truck',18.99)
insert into product (Store,Name,price) values (1,'Nerf Gun',12.19)
insert into product (Store,Name,price) values (2,'Bread',2.39)
insert into product (Store,Name,price) values (1,'Cake Mix',4.59)
insert into product (Store,Name,price) values (3,'Couch',235.97)
insert into product (Store,Name,price) values (3,'Bed',340.99)
insert into product (Store,Name,price) values (3,'Table',87.99)
insert into product (Store,Name,price) values (3,'Chair',45.99)

POCO Generator

Next, you can follow the instructions at this blog post: Scaffolding ASP.Net Core MVC

There are a few tricks to making this work:

  1. Make sure you’re using Visual Studio 2015.  This example does not work for Visual Studio 2017.
  2. Create a throw-away project as a .Net Core Web Application.  Make sure you use “Web Application” and not “Empty” or “Web API”.

Navigate to the project directory in Windows Explorer, copy the path from the top of the Explorer window, open a command window, type "cd ", paste the path and hit enter; you'll be in the correct directory.  You'll need to be inside the directory that contains the project.json file where you copied the NuGet package text (see article at link).  The dotnet ef command will not work outside that directory.  You'll end up with a command line like this:

dotnet ef dbcontext scaffold "Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer --output-dir Models

Once you have generated your POCOs, you’ll have a directory named Models inside the .Net Core MVC project:


If you look at Product.cs or Store.cs, you’ll see the POCO objects, like this:

public partial class Product
{
    public int Id { get; set; }
    public int Store { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }

    public virtual Store StoreNavigation { get; set; }
}

Next, you’ll find a context that is generated for you.  That context contains all your mappings:

public partial class DemoDataContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(@"Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>(entity =>
        {
            entity.Property(e => e.Name).HasColumnType("varchar(50)");
            entity.Property(e => e.Price).HasColumnType("money");
            entity.HasOne(d => d.StoreNavigation)
                .WithMany(p => p.Product)
                .HasForeignKey(d => d.Store)
                .OnDelete(DeleteBehavior.Restrict)
                .HasConstraintName("FK_store_product");
        });

        modelBuilder.Entity<Store>(entity =>
        {
            entity.Property(e => e.Address).HasColumnType("varchar(50)");
            entity.Property(e => e.Name).HasColumnType("varchar(50)");
            entity.Property(e => e.State).HasColumnType("varchar(50)");
            entity.Property(e => e.Zip).HasColumnType("varchar(50)");
        });
    }

    public virtual DbSet<Product> Product { get; set; }
    public virtual DbSet<Store> Store { get; set; }
}

At this point, you can copy this code into your desired project and it’ll work as is.  It’s OK to leave it in this format for a small project, but this format will become cumbersome if your project grows beyond a few tables.  I would recommend manually breaking up your mappings to individual source files.  Here is an example of two mapping files for the mappings of each table defined in this sample:

public static class StoreConfig
{
  public static void StoreMapping(this ModelBuilder modelBuilder)
  {
    modelBuilder.Entity<Store>(entity =>
    {
        entity.ToTable("Store");
        entity.Property(e => e.Address).HasColumnType("varchar(50)");
        entity.Property(e => e.Name).HasColumnType("varchar(50)");
        entity.Property(e => e.State).HasColumnType("varchar(50)");
        entity.Property(e => e.Zip).HasColumnType("varchar(50)");
    });
  }
}

And the product config file:

public static class ProductConfig
{
    public static void ProductMapping(this ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>(entity =>
        {
            entity.ToTable("Product");
            entity.Property(e => e.Name).HasColumnType("varchar(50)");

            entity.Property(e => e.Price).HasColumnType("money");

            entity.HasOne(d => d.StoreNavigation)
                .WithMany(p => p.Product)
                .HasForeignKey(d => d.Store)
                .OnDelete(DeleteBehavior.Restrict)
                .HasConstraintName("FK_store_product");
        });
    }
}

Then change your context to this:

public partial class DemoDataContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(@"Server=YOURSQLNAME;Database=DemoData;Trusted_Connection=True;");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.StoreMapping();
        modelBuilder.ProductMapping();
    }

    public virtual DbSet<Product> Product { get; set; }
    public virtual DbSet<Store> Store { get; set; }
}

Notice how the complexity of the mappings is moved out of the context definition and into individual configuration files.  It’s best to create a configuration directory to put those files in and further divide your config files from your POCO source files.  I also like to move my context source file out to another directory or project.  An example of a directory structure is this:


You can name your directories any name that suits your preferences.  I've seen DAC used to mean Data Access Control; sometimes Repository makes more sense.  As you can see in my structure above, I grouped all of my database source files in the DAC directory, the Configuration sub-directory contains the mappings, the Domain directory has all of my POCOs, and I left the context in the DAC directory itself.  Another technique is to split these into different projects and use an IOC container to glue it all together.  You can use that technique to break dependencies between the dlls that are generated by your project.

 

Dot Net Core In Memory Unit Testing Using xUnit

When I started using .Net Core and xUnit I found it difficult to find information on how to mock or fake the Entity Framework database code.  So I'm going to show a minimal code sample using xUnit, Entity Framework and the in-memory database provider with .Net Core.  I'm only going to set up two projects: DataSource and UnitTests.

The DataSource project contains the repository, domain and context objects necessary to connect to a database using Entity Framework.  Normally you would not unit test this project.  It is supposed to be set up as a group of pass-through objects and interfaces.  I'll set up POCOs (Plain Old C# Objects) and their entity mappings to show how to keep your code as clean as possible.  There should be no business logic in this entire project.  In your solution, you should create one or more business projects to contain the actual logic of your program.  Those projects contain the objects under unit test.

The UnitTest project speaks for itself.  It will contain the in-memory Entity Framework fake code with some test data and a sample of two unit tests.  Why two tests?  Because it's easy to create a demonstration with one unit test.  Two tests will be used to demonstrate how to ensure that your test data initializer doesn't accidentally get called twice (causing twice as much data to be created).

The POCO

I've written about Entity Framework before and usually I'll use data annotations, but POCOs are much cleaner.  If you look at some of my blog posts about NHibernate, you'll see the POCO technique used.  Using POCOs means that you'll also need to set up a separate class of mappings for each table.  This keeps your code separated into logical parts.  For my sample, I'll put the mappings into the Repository folder and call them TablenameConfig.  Each mapping class will be a static class so that I can use an extension method to apply the mappings.  I'm getting ahead of myself, so let's start with the POCO:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}

That's it.  If you have the database defined, you can use a mapping or POCO generator to create this code and just paste each table into its own C# source file.  All the POCO objects are in the Domain folder (there's only one here, and that's the Product table POCO).

The Mappings

The mappings file looks like this:

using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource.Repository
{
    public static class ProductConfig
    {
        public static void AddProduct(this ModelBuilder modelBuilder, string schema)
        {
            modelBuilder.Entity<Product>(entity =>
            {
                entity.ToTable("Product", schema);

                entity.HasKey(p => p.Id);

                entity.Property(e => e.Name)
                    .HasColumnName("Name")
                    .IsRequired(false);

                entity.Property(e => e.Price)
                    .HasColumnName("Price")
                    .IsRequired(false);
            });
        }
    }
}

That is the whole file, so now you know what to include in your usings.  This class provides an extension method on the ModelBuilder object.  Basically, it's called like this:

modelBuilder.AddProduct("dbo");

I passed the schema as a parameter.  If you are only using the DBO schema, then you can just remove the parameter and force it to be DBO inside the ToTable() method.  You can and should expand your mappings to include relational integrity constraints.  The purpose in creating a mirror of your database constraints in Entity Framework is to give you a heads-up at compile-time if you are violating a constraint on the database when you write your LINQ queries.  In the “good ol’ days” when accessing a database from code meant you created a string to pass directly to MS SQL server (remember ADO?), you didn’t know if you would break a constraint until run time.  This makes it more difficult to test since you have to be aware of what constraints exist when you’re focused on creating your business code.  By creating each table as a POCO and a set of mappings, you can focus on creating your database code first.  Then when you are focused on your business code, you can ignore constraints, because they won’t ignore you!

The EF Context

Sometimes I start by writing my context first, then create all the POCOs and then the mappings.  Kind of a top-down approach.   In this example, I’m pretending that it’s done the other way around.  You can do it either way.  The context for this sample looks like this:

using DataSource.Domain;
using DataSource.Repository;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public class StoreAppContext : DbContext, IStoreAppContext
    {
        public StoreAppContext(DbContextOptions<StoreAppContext> options)
        : base(options)
        {

        }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.AddProduct("dbo");
        }
    }
}

You can see immediately how I put the mapping setup code inside the OnModelCreating() method.  As you add POCOs, you'll need one of these calls for each table.  There is also an EF context interface defined, which is never actually used in my unit tests.  The interface is there for the actual code in your program.  For instance, if you set up an API you're going to end up using an IOC container to break dependencies.  In order to do that, you'll need to reference the interface in your code and then define which object belongs to the interface in your container setup, like this:

services.AddScoped<IStoreAppContext>(provider => provider.GetService<StoreAppContext>());

If you haven't used IOC containers before, you should know that the above code adds an entry to a dictionary of interfaces and objects for the application to use.  In this instance the entry for IStoreAppContext maps to the object StoreAppContext.  So any object that references IStoreAppContext will end up getting an instance of the StoreAppContext object.  But IOC containers are not what this blog post is about (I'll create a blog post on that subject later).  So let's move on to the unit tests, which is what this blog post is really about.

The Unit Tests

As I mentioned earlier, you're not actually going to write unit tests against your database repository.  It's redundant.  What you're attempting to do is write a unit test covering a feature of your business logic, and the database is getting in your way because your business object calls the database in order to make a decision.  What you need is a fake database in memory that contains the exact data you want your object to read, so you can check whether it makes the correct decision.  You want to create unit tests for each tiny decision made by your objects and methods, and you want to be able to feed different sets of data to each test, or set up one large set of test data and use it for many tests.

Here’s the first unit test:

[Fact]
public void TestQueryAll()
{
    var temp = (from p in _storeAppContext.Products select p).ToList();

    Assert.Equal(2, temp.Count);
    Assert.Equal("Rice", temp[0].Name);
    Assert.Equal("Bread", temp[1].Name);
}

I’m using xUnit and this test just checks to see if there are two items in the product table, one named “Rice” and the other named “Bread”.  The _storeAppContext variable needs to be a valid Entity Framework context and it must be connected to an in memory database.  We don’t want to be changing a real database when we unit test.  The code for setting up the in-memory data looks like this:

var builder = new DbContextOptionsBuilder<StoreAppContext>()
    .UseInMemoryDatabase();
Context = new StoreAppContext(builder.Options);

Context.Products.Add(new Product
{
    Name = "Rice",
    Price = 5.99m
});
Context.Products.Add(new Product
{
    Name = "Bread",
    Price = 2.35m
});

Context.SaveChanges();

This is just a code snippet; I'll show how it fits into your unit test class in a minute.  First, a DbContextOptionsBuilder object is built (builder).  This gets you an in-memory database with the tables defined in the mappings of the StoreAppContext.  Next, you create the context that you'll be using for your unit tests from builder.Options.  Once the context exists, you can pretend you're connected to a real database: just add items and save them.  I would create classes for each set of test data and put them in a directory in your unit test project (usually I call the directory TestData).

Now, you're probably thinking: I can just call this code from each of my unit tests.  Which leads to the thought: I can just put this code in the unit test class constructor.  Which sounds good, except that the unit test runner constructs your test class for every test method, so you end up adding to the same in-memory database over and over.  Your first unit test would see two rows of Product data, the second would see four rows.  Go ahead and copy the above code into your constructor and see what happens.  You'll see that TestQueryAll() fails because there are 4 records instead of the expected 2.  So how do we make sure the initialization code is executed only once, before the first unit test runs?  That's where IClassFixture comes in.  This is an interface used by xUnit and you basically add it to your unit test class like this:

public class StoreAppTests : IClassFixture<TestDataFixture>
{
    // unit test methods
}

Then you define your test fixture class like this:

using System;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace UnitTests
{
    public class TestDataFixture : IDisposable
    {
        public StoreAppContext Context { get; set; }

        public TestDataFixture()
        {
            var builder = new DbContextOptionsBuilder<StoreAppContext>()
                .UseInMemoryDatabase();
            Context = new StoreAppContext(builder.Options);

            Context.Products.Add(new Product
            {
                Name = "Rice",
                Price = 5.99m
            });
            Context.Products.Add(new Product
            {
                Name = "Bread",
                Price = 2.35m
            });

            Context.SaveChanges();
        }

        public void Dispose()
        {

        }
    }
}

Next, you’ll need to add some code to the unit test class constructor that reads the context property and assigns it to an object property that can be used by your unit tests:

private readonly StoreAppContext _storeAppContext;

public StoreAppTests(TestDataFixture fixture)
{
    _storeAppContext = fixture.Context;
}

What happens is that xUnit calls the constructor of the TestDataFixture object one time.  This creates the context, seeds the data and assigns the context to the fixture's property.  Then the unit test class constructor is called for each unit test; it only copies the fixture's context property into the test class so that the test methods can reference it.  Now run your unit tests and you'll see that the same data is available for each unit test.
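For completeness, a second test of the same shape might look like this (a sketch, not necessarily the repository's exact second test); with the class fixture in place it sees the same two seeded rows as TestQueryAll():

[Fact]
public void TestQueryByName()
{
    var rice = (from p in _storeAppContext.Products
                where p.Name == "Rice"
                select p).FirstOrDefault();

    Assert.NotNull(rice);
    Assert.Equal(5.99m, rice.Price);
}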

One thing to keep in mind is that you'll need to tear down and rebuild your data for each unit test if your unit test calls a method that inserts or updates your test data.  For that setup, you can use the test fixture to populate static lookup tables (ones not modified by any of your business logic), then create a data initializer and a data destroyer that fill and clear the tables that are modified by your unit tests.  The data initializer is called in the unit test class constructor and the destroyer is called in the class's Dispose() method.
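A sketch of that per-test setup and teardown is below (the class name and the seeded row are hypothetical; since xUnit constructs the test class for every test method, the constructor and Dispose() bracket each individual test):

public class StoreAppEditTests : IClassFixture<TestDataFixture>, IDisposable
{
    private readonly StoreAppContext _context;

    public StoreAppEditTests(TestDataFixture fixture)
    {
        _context = fixture.Context;

        // Data initializer: seed the rows that these tests are allowed to modify.
        _context.Products.Add(new Product { Name = "Milk", Price = 3.49m });
        _context.SaveChanges();
    }

    public void Dispose()
    {
        // Data destroyer: remove anything the test may have inserted or changed.
        _context.Products.RemoveRange(_context.Products.Where(p => p.Name == "Milk"));
        _context.SaveChanges();
    }

    // ... [Fact] methods that insert or update data go here ...
}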

Where to Get the Code

You can get the complete source code from my GitHub account by clicking here.

 

Entity Framework Core 1.0 Smackdown!

Summary

In this post I’m going to compare EF 7 or EF Core 1.0 with the previous version of Entity Framework.  I’m also going to throw in the scores for Fluent NHibernate just for kicks.

I've done a few of these head-to-head comparisons, mostly for my own curiosity, but also to give myself a chance to learn a new technology that I know I'll be using for years to come.  Core is the beginning of a whole new paradigm for Microsoft and I'm happy with their decisions.  First, I have to commend them on their choice to build dependency injection right into their new .Net objects.  In addition to that, there is a new in-memory Entity Framework provider that makes it much quicker and easier to design unit tests.

Setup

To get started, I have the free version of Visual Studio 2015 (2015 Community).  Previous Visual Studio releases came with free versions, but they were always limited to a web edition or a desktop edition.  Community 2015 seems to have all the features of 2013 Professional.  The Professional version of 2015 has a lot of nice features, but the Community version is a great way to get students "hooked" on Visual Studio.

You’ll need to make sure you have all the updates for VS 2015 before you start.  You might also need to install PowerShell 3.0.  You can search Google (or Bing) for PowerShell and download the latest version for your OS.  I’m currently using Windows 7 on this machine.

To create a Core 1.0 application, your best resource is Microsoft’s on-line documentation.  I would start here, and go to the .Net Core download site.  Follow the directions from there.  I’m currently using the Tooling Preview 2.

EF 6.0 vs. EF 7.0 Core

I used my previous smackdown code to test the performance of EF 6.  I have a new machine, so I knew the timing would be different to start with and I needed a baseline.  Then I copied the performance object and all the POCO and context code I used in the EF 6 project to my new .Net Core project.  That's when I discovered something fishy.  EF 7 was performing very badly on selects.  Inserts, updates and deletes were faster under EF 7, but a basic select was hundreds of times slower.  As I dug around with SQL Profiler, I discovered that there was no EF performance problem.  The problem was in the .ToList() conversion.  I'm not sure why this has become such a slug, but I don't want the .ToList() to skew my performance measurements.

Another problem I discovered was in my logic for taking the best of 5 runs.  I had coded it incorrectly, like this:

double smallest = -1;
for (int i = 0; i < 5; i++)
{
    double result = TestUpdate();

    if (smallest == -1)
    {
        smallest = result;
    }
    else
    {
        if (result < smallest)
        {
            result = smallest;
        }
    }
}
WriteLine("UPDATE:" + smallest);

Currently, I'm using the ReSharper Ultimate add-on by JetBrains.  This tool quickly identified that "result = smallest;" was unused.  That's because it's backwards; it should have been "smallest = result;".  This bug doesn't throw the results off by much, since the runs only vary by a small amount anyway, but it was not intended, so I fixed it in the old version as well as the new.  I also dragged out the old NHibernate test code, refactored it to mimic the Core 1.0 code as closely as possible, and ran that to get a baseline for NHibernate.
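For reference, here is the corrected loop with the assignment the right way around (condensed slightly):

double smallest = -1;
for (int i = 0; i < 5; i++)
{
    double result = TestUpdate();

    if (smallest == -1 || result < smallest)
    {
        smallest = result;   // keep the fastest of the five runs
    }
}
WriteLine("UPDATE:" + smallest);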

Here are the results:

As you can see, Microsoft has done a good job of improving the performance of Entity Framework.  I’m impressed.  NHibernate still beats the overall performance, but it’s very tiny and real-world differences could go either way.

Update

I have revisited this problem and re-tested all three scenarios (EF6, Core, NHibernate) with the .ToList() method added to the select section.  One issue I ran into was a bug in my update, select and delete tests.  The first test was an insert test and I looped 5 times to get the best insert times.  That was inserting 10,000 records each time.  Therefore, the update, select and delete times were based on 50,000 records.  Not what I intended.  So I inserted code to delete the records for each loop of the insert and I also deleted and inserted 10,000 records in-between each test (in case I ever run one test without the others).  The above list was updated to reflect the times without using the .ToList() on the select statement.  The below list is with the .ToList():

This indicates that Entity Framework 6 selects and brings in the list of data faster than Core or NHibernate.  I have scrutinized the code endlessly.  If anyone can find anything that looks odd in my code, be sure to leave a message and I’ll look into it and update this post to reflect any bug fixes.

Test for yourself!

I would recommend downloading my code and trying this on your own system.  First, you can download the .Net Core solution by going to GitHub here and downloading the source.  There is a SQL script file in the source that you can run against your local MS SQL server.  The NHibernate speed test code can also be downloaded from my GitHub account by clicking here.  The Entity Framework 6 code can be found by clicking here.  You can ignore the MySQL part, I didn’t use that for this test.

 

MySQL

Summary

Let's do something different.  I'm going to install the latest version of MySQL, version 5.7, which is advertised to be 3x faster.  I'm going to compare some basic database operations between the 32-bit version of MySQL and the 64-bit version of Microsoft SQL Server 2014 (unfortunately, the MySQL msi is only available in a 32-bit version).  I'll be using Entity Framework to perform the basic CRUD operations.

Database Servers

Choosing a database server to build your software around is not very critical when you start a small project.  There are free versions of MS SQL, MySQL and Oracle that you can use.  There are also cheap versions that you can use with commercial projects until your software grows to enterprise level.  At the point where you need high availability, the licensing costs of these servers can be very critical to the cost of doing business.  Unfortunately, there is no easy way to convert between these database engines.  I have converted between Oracle and MS SQL Server in a rare setup involving no stored procedures.  Once a critical mass of stored procedures has been written, the game is over.  You're locked in.  So keep this in mind when you choose which database you use and how you construct your system.  You may be forced to change technologies in the future.

With that in mind, I want to test MySQL against MS SQL and see how they compare.  I will not be doing any stored procedures.

The Test Data

I’m going to do a very simple test to get a feel for the differences between these two databases.  First, I’m not going to use any indexes.  Second, I’m going to use only two simple tables: Department, with one field and Person, with three fields.  I’ll do one test with a foreign key constraint and one test without.  I’ll add an index to the foreign key of the person table in each database.  Both sets of tests will use EF 6.0 and I’ll use the same table objects for both databases.  Only the context will be different.


 
The first set of numbers shows the fastest timing for each operation across 5 runs with no foreign key applied to the person table.  The second set shows the same tests with a foreign key applied.  There is definitely an advantage to using MS SQL Server, especially for select operations.  I want to also note that I turned change tracking off for the select operations for both SQL Server and MySQL.

To create the tables in MySQL use the following SQL statement:
 
CREATE TABLE `department` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=30 DEFAULT CHARSET=utf8;

CREATE TABLE `person` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `first` varchar(50) DEFAULT NULL,
  `last` varchar(50) DEFAULT NULL,
  `department` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `ix_department` (`department`),
  CONSTRAINT `fk_department` FOREIGN KEY (`department`) REFERENCES `department` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=436451 DEFAULT CHARSET=utf8;

To Create the tables in MS SQL:

 CREATE TABLE [dbo].[Department](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [name] [varchar](50) NULL,
 CONSTRAINT [PK_department] PRIMARY KEY CLUSTERED
(
    [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [dbo].[Person](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [first] [varchar](50) NULL,
    [last] [varchar](50) NULL,
    [department] [int] NOT NULL,
 CONSTRAINT [PK_person] PRIMARY KEY CLUSTERED
(
    [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

ALTER TABLE [dbo].[Person]  WITH CHECK ADD  CONSTRAINT [fk_department] FOREIGN KEY([department])
REFERENCES [dbo].[Department] ([id])
GO

ALTER TABLE [dbo].[Person] CHECK CONSTRAINT [fk_department]
GO

Conclusions

This is a very small test and should not be used as evidence for choosing one database server over another.  If you are making a decision on which server to use, I would recommend expanding the tables used in this sample to include other field types.  I would also recommend attempting other query types, such as outer joins and sub-queries.  If you have a small application written to use one database server or the other (and you don't have any stored procedures), then you can copy your app and tweak the context to point to the other database.  Then generate a new database, insert some test data for comparison and see how your application performs.

One other test I would perform is to setup an Oracle instance and see how it compares with these in a side-by-side test.  The performance difference could be huge, but I would prefer something real-world over any claims made by the companies promoting their products.


Where to Get the Source Code

As always, I have uploaded the source code to my GitHub account.  You can download it by clicking here.  Make sure you fix up any context strings before running the tests.  The MySQL section was setup to require a password.  You’ll have to match that with the password you set in your data server.


Lazy Loading Data

Summary

In this blog post I’m going to demonstrate how to lazy load your data using Entity Framework and SQL Server.  The purpose is to write an object that returns program settings stored in a table, but read from the table only once during the lifetime of the program.

The Data

In a database named "sampledata" I'm going to create a table that contains true/false settings for any number of configuration items.  This is a simple example for the purposes of showing how to lazy load, so I will be "pretending" that there is a purpose for these settings even though the demo program doesn't use any of them.

You can use the following SQL script to generate a table to run this program:

CREATE TABLE [dbo].[ProgramSetting](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Name] [varchar](50) NOT NULL,
    [Setting] [bit] NOT NULL,
 CONSTRAINT [PK_ProgramSetting] PRIMARY KEY CLUSTERED
(
    [Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

INSERT INTO ProgramSetting (Name,Setting) values ('RequirePassword',1)
INSERT INTO ProgramSetting (Name,Setting) values ('ShowUserName',0)
INSERT INTO ProgramSetting (Name,Setting) values ('ShowUserPicture',0)
INSERT INTO ProgramSetting (Name,Setting) values ('AllowUserToChangePicture',1)

The Class
 
Now, I’m going to design an object that will be used throughout a program to read settings from the database and return the true/false value.  Here’s the basic class:

public class ConfigurationSettings
{
    private static List<string> _Data = null;
    private static List<string> Data
    {
        get
        {
            if (_Data == null)
            {
                using (var db = new ConfigDataContext())
                {
                    _Data = (from ps in db.ProgramSettings
                             where ps.Setting
                             select ps.Name).ToList();
                }
            }
            return _Data;
        }
    }

    public static bool RequirePassword
    {
        get
        {
            return Data.Contains("RequirePassword");
        }
    }

    public static bool ShowUserName
    {
        get
        {
            return Data.Contains("ShowUserName");
        }
    }

    public static bool ShowUserPicture
    {
        get
        {
            return Data.Contains("ShowUserPicture");
        }
    }

    public static bool AllowUserToChangePicture
    {
        get
        {
            return Data.Contains("AllowUserToChangePicture");
        }
    }
}

Now, if you look at the code carefully, you'll see that there is a property that gets the raw data using LINQ and Entity Framework.  You can download the sample code to see the code for the context and the table.  For this blog post, all you need to know is that all the data from the ProgramSettings table is read at one time.  It happens the first time a configuration parameter is read.  Why?  Because Data.Contains() calls the Data getter, which checks whether the private static field named _Data is null.  The first time a parameter is read, the field is null, so the data is read into _Data.  Every parameter read after that returns from _Data without touching the database.

The list maintained in _Data is just a list of the names of configuration parameters that are set to true.  If a parameter is missing or set to false, the property returns false to the calling program.  This can be convenient for new features that are added to the code first: they'll read as false until you insert the parameters into the database, so you won't be stuck with a chicken-or-egg problem of when to insert the data into the ProgramSetting table.
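Calling code then reads the settings as simple properties; the first read triggers the single database query and every read after that uses the cached list.  For example (hypothetical calling code):

if (ConfigurationSettings.RequirePassword)
{
    // prompt the user for a password...
}

if (ConfigurationSettings.ShowUserName)
{
    // display the user name; no additional database call happens here
}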

Obviously, this works for data that is mostly static.  If you include this code in a stand-alone program and you change a configuration setting in your database, you'll need to restart any previously running program before the change takes effect.

Another possible problem involves the fact that your configuration parameter names are hard-coded as well as stored in the database.  That means that changing a name in the database, adding a name or deleting a name requires matching code changes.  This can get tedious if the list of configuration parameters is large.  You could write a program to generate the properties and just regenerate the parameter list every time you make changes to the database, then update your software.  You could also opt for some sort of read-parameter method that takes the parameter name as a string.  Most likely, you'll be making changes to your code anyway if you're adding, deleting or modifying a configuration parameter, so it makes little real difference.

How to Get the Code

You can download the sample code at my GitHub account by clicking here.  Be sure to go to the context source file and change the database connection string to match your own SQL server.


Entity Framework 6.1.3 vs. Linq2SQL vs. NHibernate Smackdown!

Summary

I’ve done these ad-hoc ORM performance tests in the past.  In this test I’m going to re-test the latest Entity Framework (version 6.1.3) against the latest Linq-to-SQL version.  I’ll be using EF code-first this time to make sure there isn’t any extra overhead.  I’ll also be using Visual Studio 2013.

The Hardware and Software

This test was executed on an older PC with an Intel Core2 Duo CPU running at 3Ghz (E8400).  I’m using 8 Gigabytes of Ram and a solid-state hard drive.  This machine has Windows 10 (64 bit) and I’m using Visual Studio 2013 and Microsoft SQL Server 2014.

The Tests

I ran the tests 5 times each and chose the shortest time for each section (insert, update, delete and select).  I also added the NHibernate ORM just to make this a 3-way smack-down.  I performed the basic 4 CRUD operations (insert, update, select, delete) and compared results for each operation.

The Results

It appears that EF has been improved since the last time I performed this test.  Entity Framework seems to be a bit faster than Linq-to-SQL this time.  However, NHibernate still outperforms both Entity Framework and Linq-to-SQL by a large margin (between 2x and almost 3x).

Here are the results:



Usual Disclaimer

I have to warn you that this experiment is unscientific.  I’m using a PC that I use for general purpose day-to-day operations.  What this means is that I have a virus checker and a backup program that can kick in at any time.  Not to mention the fact that I have automatic updates set to on.  To make this experiment more accurate, I would need to setup a stripped down PC and run comparison tests with no other programs running.  In other words, your results may vary.

So my advice is to download the source from this blog post, compile and run this test on your machine or the expected destination setup (i.e. against your company SQL server).  You can also increase the size of the data set to simulate your expected data volume.


Download the Code

You can go to my GitHub account by clicking here.  Download the zip file and unzip it to your visual studio project directory.  Don’t forget to search the code for the connection string (“My Connection String”) and replace with your own connection string.  This occurs in 3 files: SampleDataContext.cs, App.config and NHibernateSessionFactory.cs.