One of my previous posts described how to set up a lightweight SQL Server instance called LocalDB. My little demo project worked really well except for one detail: it was slow. The primary problem was that I was starting the instance and creating the database in the unit test initialize method (marked [TestInitialize]), and then dropping the database and stopping the instance in the test cleanup method. That caused a database create and destroy for each unit test. Now I’m going to show how to set up your unit tests to create the database once and destroy it once. I’m also going to describe how to clean out your data for the next test.
The Truncate Table Command
The truncate table command has a couple of nice features: it deletes data fast, and it resets the seed value used for the identity column. So I set up a list of tables to be truncated and a method that performs the actual truncation. Here’s how I implemented the list (which should be contained in your unit test module):
public static class SampleDatabase
{
    // make sure these are in the right order: truncate child tables
    // before the tables they reference (names from the sample project)
    public static string[] TableList
    {
        get
        {
            return new string[] { "product", "store" };
        }
    }
}
The code that handles the truncation looks like this:
public static void TruncateData(string databaseName, string[] tableList)
{
    // requires: using System.Data.SqlClient;
    using (SqlConnection db = new SqlConnection(
        @"server=(localdb)\" + databaseName + "testinstance;" +
        "database=" + databaseName + ";" +
        "Integrated Security=true;"))
    {
        db.Open();
        foreach (var item in tableList)
        {
            SqlCommand myCommand = new SqlCommand(
                "TRUNCATE TABLE " + databaseName + ".." + item, db);
            myCommand.ExecuteNonQuery();
        }
    }
}
You can look at the sample code to see the details of how this fits together. Basically, I call my TruncateData method inside the TestCleanup method to truncate the tables after every unit test. When you create your list of tables to truncate, be aware that if you have constraints on your test database, you’ll have to truncate your tables in the right order. Also note that TRUNCATE TABLE will not run at all against a table that is referenced by a foreign key constraint; for such a table you would need to drop the constraint first, or fall back to DELETE plus DBCC CHECKIDENT to reset the identity seed.
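For reference, the wiring inside a test class is small. Here’s a sketch of the cleanup side; the class name and the database name "sampledata" are placeholders for whatever your project uses:

```csharp
[TestClass]
public class ProductUnitTests
{
    // runs after every [TestMethod]; wipes the data but keeps the schema,
    // so the expensive create/destroy only happens once per test run
    [TestCleanup]
    public void Cleanup()
    {
        SampleDatabase.TruncateData("sampledata", SampleDatabase.TableList);
    }
}
```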
Now, there are AssemblyInitialize and AssemblyCleanup attributes that can each be used once inside the unit test project. They are applied to static methods, which are called before your unit tests start and after your unit tests end. I created a cs file named “AssemblyUnitTestShared.cs” that contains this code:
[TestClass]
public sealed class AssemblyUnitTestShared
{
    // called once, before any unit test in the assembly runs:
    // start the LocalDB instance and create the database here
    // (see the sample code for the actual helper calls)
    [AssemblyInitialize]
    public static void ClassStartInitialize(TestContext context)
    {
    }

    // called once, after all unit tests in the assembly have run:
    // drop the database and stop the LocalDB instance here
    [AssemblyCleanup]
    public static void ClassEndCleanup()
    {
    }
}
At this point your unit tests would look something like this:
[TestMethod]
public void test_method_2()
{
    // the session creation and the ProductObject constructor are
    // abbreviated here; see the sample code for the full versions
    using (ISession unitTestSession = SessionFactory.OpenSession())
    {
        Product product = new Product
        {
            Name = "P 1",
            ProductId = 1,
            store = 1
        };
        unitTestSession.Save(product);
        unitTestSession.Flush();

        ProductObject productObject = new ProductObject();
        var productList = productObject.ReadAll();

        Assert.AreEqual("P 1", productList.Name);
    }
}
When you execute these unit tests there will be an initial delay while the database is created, then all the tests will run in a few seconds, and then there is a final delay while the database is torn down. Adding more tests will not significantly increase your total unit test execution time.
Here’s the source code, which I recommend downloading and getting to work. It can serve as a model for your project unit tests.
At this point, I have shown how you can use LocalDB to create a temporary database and run unit tests against methods that contain database operations. There are still some issues here that should be resolved.
First, what about integrity constraints? One approach is to apply all the constraints to the database after the tables are created (which currently occurs on the first call to the NHibernate context). The problem with this approach is that each unit test would need to incorporate all the seed data required to satisfy the constraints on any tables involved in the test. That is a lot of unnecessary work.
Another method is to generate the constraints only at the unit test level and destroy them all when the test ends. I have not tried this yet, but there will be some sort of performance penalty. If the constraints are restricted to just the tables that are necessary, the performance loss will be minimized and the unit tests can stay within the scope of what is being tested.
So now we would need code to generate and destroy these constraints. One of my older methods is to keep a list of all the constraints in the database and write a method that receives a list of tables. A foreign key constraint would be added only if both of its tables were present in the list; a unique constraint would be added if the table carrying the constraint was in the list, and so on. The tear-down can then be handled by scanning the master database for foreign key and unique constraints and dropping them. All of this code can be encapsulated in a common project that you reuse across your unit test projects (except for the table list and the list of actual constraints, because those are database dependent).
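As a sketch of that filtering idea (the class and all the names here are hypothetical, not code from the sample project): keep a master list of foreign key definitions, and emit ADD CONSTRAINT statements only when both ends of the relationship appear in the requested table list:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ForeignKeyDef
{
    public string Name;       // constraint name
    public string Table;      // table holding the foreign key
    public string Column;
    public string RefTable;   // table being referenced
    public string RefColumn;
}

public static class ConstraintBuilder
{
    // return ADD CONSTRAINT statements only for foreign keys whose two
    // tables are both in the requested list; everything else is skipped,
    // so a test only pays for the constraints it actually exercises
    public static List<string> BuildForeignKeys(
        IEnumerable<ForeignKeyDef> masterList,
        IEnumerable<string> tables)
    {
        var wanted = new HashSet<string>(tables, StringComparer.OrdinalIgnoreCase);
        return masterList
            .Where(fk => wanted.Contains(fk.Table) && wanted.Contains(fk.RefTable))
            .Select(fk => string.Format(
                "ALTER TABLE [{0}] ADD CONSTRAINT [{1}] FOREIGN KEY ([{2}]) REFERENCES [{3}] ([{4}])",
                fk.Table, fk.Name, fk.Column, fk.RefTable, fk.RefColumn))
            .ToList();
    }
}
```

The same shape works for unique constraints, except the filter only needs the one table to be present in the list.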
The next issue surrounds the fact that your database will be updated at some point in the future, and any constraints changed in the database have to be changed in your test package as well. There is no simple solution to this problem. In the past I have created software that reads the tables, fields, constraints, indexes, etc. from an Excel file. This Excel file was the master definition of everything. Any database changes would be made to this file, and a tool I developed would sync the changes to the actual database (when the developer was ready to apply them). This Excel file was then used by my unit tests, and out-of-sync issues were minimized in this fashion.
Another issue surrounds the fact that you need to seed your data for a unit test, and this can get very tedious in a hurry. So I have written classes and methods to read data from an xml file (embedded in the unit test project) to pre-populate the data in a unit test before it runs. Then I name my xml file with the unit test source name and the method name that calls it. I use individual files for each method because data can be altered in the future (which could mess up multiple tests if they share one xml file). Copy and paste is my friend. The layout of such a file would look something like this:
<tablename>
  <row>
    <field1>data in field 1</field1>
    <field2>data in field 2</field2>
    <field3>data in field 3</field3>
  </row>
</tablename>
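A minimal reader for such a file might look like the following sketch. The row wrapper element and the SeedDataReader name are my assumptions about the layout, not code from the sample project; the caller would turn each row dictionary into an INSERT statement or an entity object:

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Linq;

public static class SeedDataReader
{
    // parse each <row> element under the root into a
    // field-name -> value dictionary, one per table row
    public static List<Dictionary<string, string>> ReadRows(string xml)
    {
        var rows = new List<Dictionary<string, string>>();
        var doc = XDocument.Parse(xml);
        foreach (var row in doc.Root.Elements("row"))
        {
            var fields = new Dictionary<string, string>();
            foreach (var field in row.Elements())
            {
                fields[field.Name.LocalName] = field.Value;
            }
            rows.Add(fields);
        }
        return rows;
    }
}
```

In the unit test project the XML files would be embedded resources, loaded by the convention described above (source name plus method name).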
One last thing to be aware of: it seems that LocalDB does not honor the FILEGROWTH parameter. I have not dug too deeply into this issue (I just increased my starting database size to 8k), but you might run into an insufficient-size problem if you insert too much data into your unit test database. You can either bump up the size or determine if there is a mechanism to auto-expand.
If you’re looking for the SQLLocalDB package to install:
x64/SQLLocalDB.MSI for 64 bit systems (33.0MB)
x86/SQLLocalDB.MSI for 32 bit systems (27.7 MB)