ORM vs Non-ORM


This blog post is overdue, but I’d like to talk about my database experiences with and without ORMs.  

Before ORMs

I have been writing database-driven software for a long time.  My early days include Borland C++ with Paradox and FoxPro databases.  I have also written Borland C++ and PHP code against Oracle 8i, and C++, then C#, against MS SQL Server.  I've used ODBC and ADO to access database servers.  They all have their pros and cons, but they are all the same when it gets down to the actual coding.  The basic pattern is: open a database connection, perform a select or execute query, and deal with the results, usually with some sort of forward-only reader.  PHP had the ability to select into an array, and I have used the Enterprise Library with C# to select into a DataSet (which can be treated as an array).  These are convenient because you can go forward and backward through the result set, but they suffer from the fact that all records must be read before you can access the data.

There are two big headaches with non-ORM data access.  First, the query is a string that is passed to the database, which is one of the easiest places to pick up a syntax bug.  The second headache is the result set, which also uses strings to name the fields being read (unless you use field numbers and rely on ordering…OMG).  This is where the promise of ORMs caught my attention.
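To make the two headaches concrete, here's a minimal ADO.NET sketch.  The connection string, table and field names are made up for illustration; the point is that both the query and the field lookup are plain strings that the compiler can't check.

```csharp
using System;
using System.Data.SqlClient;

class StringlyTypedAccess
{
    static void Main()
    {
        // Headache #1: the query is just a string -- misspell "SELECT" and
        // it still compiles, failing only at runtime.
        using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand("SELECT FirstName FROM Users WHERE Age < 34", conn);
            using (var reader = cmd.ExecuteReader())  // forward-only reader
            {
                while (reader.Read())
                {
                    // Headache #2: the field name is a string, too.
                    Console.WriteLine(reader["FirstName"]);
                }
            }
        }
    }
}
```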


Object-Relational Mappers looked so cool when they first came out.  The LINQ language didn't match SQL, but it had built-in syntax checking.  Of course, you can get a query syntactically correct and still get the wrong results, but at least you didn't misspell "SELECT" or "INSERT" or something trivial like that (I once debugged a program for an hour only to discover that I had spelled "UPDATE" as "UDPATE").  The really exciting part of the ORM to me was the fact that the result set came back as an object or a list of objects.  That makes things really easy on the result end of the query.  With context-sensitive programming tools (i.e. Visual Studio), hitting the dot causes a list of the fields to pop up and voila!  You don't need to memorize the exact spelling of each field.
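As a hedged contrast to the ADO example above (the `context` object and `Users` table are hypothetical), this is what the typed-result experience looks like in LINQ:

```csharp
// The query is checked by the compiler and the results come back as
// objects, so IntelliSense can list the fields after the dot.
var users = from u in context.Users
            where u.Age < 34
            select u;

foreach (var user in users)
{
    // Misspell FirstName here and the program won't even compile.
    Console.WriteLine(user.FirstName);
}
```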

My first experience with ORMs was Linq-to-SQL.  That ORM suffers from the fact that you can't easily regenerate the mappings when you add a field in the database, so I switched to Entity Framework.  Entity Framework turned out to have some serious performance issues, not to mention the massive XML file that stores the data configuration; it contains unique keys that are renumbered every time the edmx file is regenerated, which causes big problems with version control systems.

So I moved on to NHibernate, specifically Fluent NHibernate.  I wasn't too crazy about plain NHibernate because of the XML mapping files.  Too much work!  The Fluent add-on makes it nice because all the mappings are in C#.  I did a lot of research up-front and decided that NHibernate was a winner.  After using this product for the past 8 or 9 months, I have to admit it has its own set of flaws.  There is a lot of LINQ support that is missing from NHibernate, such as outer joins.  Some of the field mappings with SQL Server have quirks, like the 4000-character max problem that crops up everywhere.  Fortunately there are workarounds for this stuff, but there shouldn't be such glaring problems.

The next issue with ORMs is that I'm not convinced that I have gained any productivity from the use of an ORM (any ORM).  It seems that I spend a ton of time trying to translate the SQL that I want to write (which I've done for so many years that I can do complex queries in my head) into LINQ.  You also have to be careful about the way you write your LINQ code, because ultimately there is a translation layer that converts the LINQ expression into the SQL that is sent to the server, and this may end up as really bad (read: inefficient) SQL.  Before I used ORMs I would work out my complex queries inside Microsoft SQL Server Management Studio and then just copy the text into my code.  Can't do that with LINQ, because it's not the same syntax.


Am I willing to give up on ORMs and go back to ADO?  Not really.  I like the object-list result sets too much to throw in the towel.  I'm also smart enough to know how to profile and test queries that are generated by an ORM.  Finally, I can revert to sending a raw query to SQL Server and get an object list back, making NHibernate behave like a micro-ORM (until NHibernate or another ORM comes along and fixes the issues).  I think Entity Framework could be THE killer product if Microsoft focused on its two major flaws: performance and the massive edmx file.  The Entity Framework development team blog indicates to me that they are oblivious to these weaknesses or that these are low on their priority list.  Microsoft suffers from a priority scheme that puts features first, performance and reliability last.
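For reference, this is roughly how NHibernate can take a raw SQL string and still hand back a typed list, micro-ORM style.  The `Product` entity and the SQL itself are illustrative; the `CreateSQLQuery`/`AddEntity` calls are standard NHibernate API:

```csharp
using (ISession session = SessionFactory.OpenSession())
{
    // Raw SQL in, mapped objects out -- best of both worlds when the
    // LINQ provider can't express the query you want.
    IList<Product> products = session
        .CreateSQLQuery("SELECT * FROM Product WHERE Price > :minPrice")
        .AddEntity(typeof(Product))
        .SetDecimal("minPrice", 10m)
        .List<Product>();
}
```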

OK, I’m going to stop ranting now!


Visual Studio Project Files


In this blog post I’m going to talk about Visual Studio’s project (.csproj) file, which is nothing more than an XML file used to keep track of the resources inside a project.  

The Setup

Open your Visual Studio (I’m currently using VS 2012) and create a console application.  If you have your solution explorer window open you should see something like this:

Now, let’s create a cs file in the solution and make an empty class.  I just right-clicked on the project (ProjectFileBlogPost in this example), then selected “Add”, then “Class”.  Now click on your project to make sure it’s selected, then go to the Project menu (on the main menu bar) and select “Show All Files”.  Right-click on your TestClass.cs file and select “Exclude From Project”.  The file will turn white, indicating that it is not part of the project and will not be compiled with your solution.

If you’ve been working with Visual Studio for any length of time, you probably already know how all this works because you can copy a cs file from another project from Windows Explorer and then include it in your project to make it appear in the solution explorer.  Now I’m going to show how the XML project file works.

The Project File

Now we’re going to use Visual Studio to edit the XML file.  First, you need to close your console application’s solution (because Visual Studio locks the project file).  Next you’ll need to go to the File menu and open the .csproj file (as a plain file, not as a project).  Once you open the file, VS will recognize it as an XML file and color-code the text accordingly.  

If you scroll through the XML, you’ll notice that there are a bunch of ItemGroup sections.  The one we’re interested in is the one that contains a bunch of “Compile” nodes:
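In case the accompanying screenshot doesn't come through, that ItemGroup looks something like this (taken from a fresh VS 2012 console project; the exact file list will vary):

```xml
<ItemGroup>
  <Compile Include="Program.cs" />
  <Compile Include="Properties\AssemblyInfo.cs" />
</ItemGroup>
```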

As you can see, only the Program.cs and AssemblyInfo.cs files are in this group.  If you copy the Program.cs node, change the file name to TestClass.cs and save this file, you’ll be able to open this project in VS and the TestClass.cs file will be included in the project.

If you delete any of these “Compile” nodes, then the file will no longer be included in the project by VS.


If your cs file is in a folder then the full path can be spelled out in the “Compile” node and the folder will be included in your project as well as the file.  However, if you want an empty folder to be included in your project, you’ll need to add it to a different ItemGroup node set.  These nodes are named Folder nodes:
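If the screenshot is missing, a Folder ItemGroup looks something like this (the folder name is illustrative; note the trailing backslash):

```xml
<ItemGroup>
  <Folder Include="TestFolder\" />
</ItemGroup>
```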

If a Folder node exists for a folder that is already referenced by Compile nodes, VS will be OK with that, but it’s not necessary.

Manipulating The Project File

At this point, you probably have a program that generates some sort of C# files and you want those files to be automatically added to some utility project.  It’s best to use the XmlDocument object to manipulate the data in the project file to add any of your C# files and/or directories; text parsing can be a nightmare.  You’ll need to use a namespace manager to make it work:

XmlDocument doc = new XmlDocument();
doc.Load("ProjectFileBlogPost.csproj");

var nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("a", "http://schemas.microsoft.com/developer/msbuild/2003");

XmlNodeList itemGroupNodes = doc.SelectNodes(
        "//a:Project/a:ItemGroup", nsmgr);

// add and delete nodes here


You can spin through all the ItemGroup nodes and look for child nodes that have the name "Folder" or "Compile", then add Folder or Compile nodes as desired.  You’ll have to account for the possibility that there are no ItemGroup nodes with Folder or Compile nodes in them.  You can use something like this to find the proper ItemGroup nodes:

foreach (XmlNode itemGroupNode in itemGroupNodes)
{
    XmlNodeList childNodes = itemGroupNode.ChildNodes;

    foreach (XmlNode childNode in childNodes)
    {
        if (childNode.Name == "Folder")
        {
            // your code here
        }
    }
}

In order to append a new node to your Folder node, you can do something like this:

XmlNode folderNode = doc.CreateNode(XmlNodeType.Element,
        "Folder", "http://schemas.microsoft.com/developer/msbuild/2003");
XmlAttribute xKey = doc.CreateAttribute("Include");
xKey.Value = "TestFolder";
folderNode.Attributes.Append(xKey);

Another thing to note: you should check the namespace name at the top of your project file.  I copied that text into these sample snippets, but I’m using VS 2012 and there is no guarantee that the namespace URL won’t change for newer versions of VS.
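Putting the pieces together, here is a minimal sketch that loads a project file, adds a Compile node and saves the result.  The file and class names match this post's example; the namespace URL is the VS 2012 value discussed above:

```csharp
using System.Xml;

class AddCompileNode
{
    const string MsBuildNs = "http://schemas.microsoft.com/developer/msbuild/2003";

    static void Main()
    {
        var doc = new XmlDocument();
        doc.Load("ProjectFileBlogPost.csproj");

        var nsmgr = new XmlNamespaceManager(doc.NameTable);
        nsmgr.AddNamespace("a", MsBuildNs);

        // Find the first ItemGroup that already contains Compile nodes.
        XmlNode compileGroup = doc.SelectSingleNode(
            "//a:Project/a:ItemGroup[a:Compile]", nsmgr);

        // Create <Compile Include="TestClass.cs" /> and append it.
        XmlNode compileNode = doc.CreateNode(XmlNodeType.Element, "Compile", MsBuildNs);
        XmlAttribute include = doc.CreateAttribute("Include");
        include.Value = "TestClass.cs";
        compileNode.Attributes.Append(include);
        compileGroup.AppendChild(compileNode);

        doc.Save("ProjectFileBlogPost.csproj");
    }
}
```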


Software Deployment Process


In this blog post I’m going to talk about the technical details of deploying software.  I have worked at a few companies that started out small and outgrew their deployment strategy; many of them didn’t really have a deployment strategy at all.  In this post, I’m going to describe the process from development to version control to deployment and testing.

The Repository

The first thing you need to realize is that your company’s source code is an expensive resource.  The price of that source code is measured in the man-hours of labor it takes to develop the software.  Loss of such software has the potential of bankrupting a company, and there are such simple and cheap ways to insure against that catastrophe that I am dumbfounded when I see a company setup that doesn’t take these steps.

The first step is to create a repository (aka repo).  This is a place where your developers check in and out changes.  This is a major step up from just backing up a common source code directory.  By having a version controlled repository, you can look at any version and roll back or undo changes made by a programmer gone rogue (intentionally or not).  It also aids in troubleshooting bugs that suddenly appear.

If you’re a small company or you have limited funds, you can set up a GitHub or BitBucket account.  Both services have free and cheap tiers.  For GitHub you’ll have to pay in order to keep your repository private; BitBucket advertises free unlimited private repos.  My company uses BitBucket, and I also have my own BitBucket account for personal use, as well as a GitHub account.  GitHub currently charges $7 a month for 5 private repositories.  BitBucket prices by number of developers: you can get up to 5 users/developers for free, and the paid plans start at 10 users for $10 per month.

You’ll have to do some research on each of these repositories before you make your choice.  There are other products that can be used with these repositories and I’m going to talk about those next.

Version Control

You absolutely have to have version control.  As soon as you add your second developer you will begin to lose productivity if you are not using a version control system.  There are plenty of free systems available and they all have their quirks and difficulties.  My company uses Mercurial for version control, and it has a steep learning curve.  Mercurial works with BitBucket but not with GitHub.  Git is another version control system that I have not had the time to use yet; Git works with both GitHub and BitBucket.  If you’d like to know more about the differences between GitHub and BitBucket, there is an excellent article here: Bitbucket vs. GitHub: Which project host has the most?  I would recommend reading this article in addition to my blog post.

Mercurial is a distributed version control system, which is far superior to the old clunky version control systems such as Visual SourceSafe, Microsoft Team Foundation Server and CVS.  These older systems usually allow a user to check out a file and keep it locked until they check it back in.  The problem with this method of version control is that only one developer can work on a file at a time, and if they check it out at one location (say at home) while they are at another location, they can’t unlock the file.  Mercurial allows users to clone the whole repository to their local computer.  They work on their changes, then they can either merge their changes into the master repo or create a branch.  If the merge has conflicts, the developer must resolve them right away.  Mercurial has tools for handling complex situations, and the developer can roll back versions to undo major snafus.


Branching is the best invention since sliced bread was introduced to the toaster.  Let’s pretend for a moment that you have a dozen developers working on the same software.  They are working on different enhancements that could cause them to change the same classes and methods for different purposes.  Now let’s also pretend that there is a deployed version.  If there is only one branch, then the deployment must be carefully controlled so that a partially completed project doesn’t get deployed before it is ready.  Also, every project being worked on must be completed and debugged before the next release can go out.  To make matters worse, a bug fix would be next to impossible once the first enhancement changes have been checked in.  It is also recommended that developers check in their changes at least every night (in case of an overnight disaster).  

Branches are created for each new enhancement and are separated from each other, so each enhancement can be completed at any time and then re-merged with the default or master branch.  The default branch is the branch that is always available for deployment.  To keep things very clean, the default branch should never contain un-reviewed code.  The reason is that an emergency can crop up where a bug must be fixed immediately; if fixed-but-unreviewed bugs remain on their own branches, developers can go home for the weekend knowing that an emergency fix will not require someone to hurry up and review the other bug fixes that are not ready for release.


The Staging Server

A staging server is a server that is identical to the production server or system.  You can also point your staging system at the production database if your staging tests are read-only.  With a staging web server and a staging database you can run comparison tests between stage and production web sites.  Simple web-crawling software can match web pages between your stage and production web sites (assuming you’re comparing against the same database server).  You can also build software to test features of your system, like form data-entry web pages.  These tests might be scripted and take hours to run, but you can run those scripts at night and deploy first thing in the morning (after reviewing your test results).
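A crude version of that stage-versus-production page comparison takes only a few lines of C#.  The URLs are placeholders, and this assumes both sites point at the same database so the rendered pages should match:

```csharp
using System;
using System.Net;

class StageProdCompare
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Download the same page from stage and production and diff the HTML.
            string stagePage = client.DownloadString("http://stage.example.com/products");
            string prodPage = client.DownloadString("http://www.example.com/products");

            if (stagePage != prodPage)
            {
                Console.WriteLine("Mismatch on /products; investigate before deploying.");
            }
        }
    }
}
```

A real comparison script would crawl a list of URLs and strip out legitimately dynamic content (timestamps, session tokens) before diffing.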

Code Reviews

My company performs code reviews.  That means that we have another developer double-check the code written by the first developer.  The feature or bug is tested and the code is looked over and then it is checked off.  We use Jira to track the progress of a task and one of the steps is the code review.  Jira integrates into BitBucket and our version control system is linked right to the ticket so the code reviewer can quickly find the branch and changes that were made.  Once the code is reviewed and the ticket is dragged into the ready for release swim lane, then the branch is merged.  Sometimes the reviewer merges the branch, sometimes the developer, that’s entirely up to you.  The branch gets merged into default and is ready for release.  Many times our company will merge several changes into the default branch and then a release of several versions will go out at once.

Releasing and Deploying

We use TeamCity for our deployment server.  This server uses PowerShell scripts to check out versions of software from the repository, compile the software, run unit tests and prepare the package for deployment.  Then the developer can run testing scripts against the software in a staging or test environment.  TeamCity is a very powerful product; it integrates with BitBucket and provides roll-back capabilities.  No matter how well the software is tested in staging, it might utterly fail when it is deployed on the production system.  When that occurs, you need to roll back as quickly as possible to reduce your customer down-time.  At that point the version that was unsuccessfully deployed must be analyzed and properly fixed.  There is also the possibility that you need to add testing to catch such problems in the future.

Atlassian (the company behind BitBucket) also has a product called Bamboo.  If I were starting a new system, I would investigate Bamboo and see whether it fits better than TeamCity.  With that said, TeamCity is a really good product.  You can use the deployment capabilities of Visual Studio if your system is only used internally or your website isn’t used by many people.  However, if you have a busy web site or a web farm, you’ll need a proper deployment server.  The deployment server can deploy to multiple servers, and it can be configured to signal servers (i.e. IIS servers) to shut down before switching to a new version and then signal them to start back up.  We currently deploy each new version of software into a new directory on the web server (naming the directory with the version number); then IIS is stopped, pointed to the new directory and started back up.  If something happens and we need to go back, we can manually point IIS to the previous directory and not wait for the previous version to be re-copied to the web server.  If you have a web farm, it is too tedious to deploy to each server without an automated or scripted method.  TeamCity has the ability to deploy to multiple servers, making it a one-click operation.

One other aspect of development to be aware of: logging.  Use a logging system such as Log4Net.  This software allows your developers to create log files and email error messages to a central inbox.  Once this is set up and logging is added to every try/catch block (as well as anything in your code you wish to monitor), you will be able to open the inbox and watch errors come in when you deploy.  If you suddenly begin to get log errors about web page crashes, then you need to roll back quickly.  Without logging, you might not even realize that your customers are experiencing page crashes.
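A minimal log4net sketch looks like the following.  The class name and message are illustrative; the file and SMTP appenders that route errors to the inbox are configured separately in app.config, and `XmlConfigurator.Configure()` must be called once at startup:

```csharp
using System;
using log4net;
using log4net.Config;

public class OrderService
{
    // One static logger per class is the conventional log4net pattern.
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderService));

    public void PlaceOrder(int orderId)
    {
        try
        {
            // ... real work here ...
        }
        catch (Exception ex)
        {
            // Goes to the log file and, with an SmtpAppender configured,
            // straight to the central error inbox.
            Log.Error("PlaceOrder failed for order " + orderId, ex);
            throw;
        }
    }
}
```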

Development System

One aspect of software development that I see neglected too often is an isolated development system that matches your production system.  I’m not advocating that you spend a lot of money on this system, but it needs to be usable by your development team to test software as they are developing it.  No developer should ever test software against a production system; such an act is inviting disaster.  At my company we have two MS SQL servers and a development web server (as well as other special-purpose servers, like FTP).  Most of our development is done on our PCs (such as the web application, which we normally test with our local IIS server).  Keeping a development database in sync with a production database is a massive undertaking.  Our system uses scripts to copy backup files from the production database server and restore them to the development servers.  These servers are alternated, giving us an alternate database in case the script fails during the night.  If you are designing your first system, give some thought to the size of your customer databases and how you are going to handle your development database.

Another process we have in place is to use deployment scripts.  These are MS SQL query scripts that can modify tables, stored procedures, views, etc. to match changes on a branch.  When the branch is merged with default and deployed, the database update scripts must be run (usually before the deployment).  Some deployments require a clean-up script that must be run after deployment.  These same scripts are checked into a separate repository with its own version control and branches (though we try to name the branch to match the software that they belong to).  The scripts are tested on the test servers and we usually try to provide a roll-back script if the deployment is hazardous (like changes to stored procedures).


Everything in this blog post can be done on the cheap.  If you’re a small start-up company or you have a limited development budget, you can set up an entire version control and deployment system to protect your investment and reduce your down-time.  By setting up such a system early on (like the day you hire your first programmer), you will avoid the headaches I have seen over the years: scores of programmers performing dangerous tasks on a production system, software versions getting lost because there is no proper repository, and downtime and debug time due to the lack of testing tools or versions to compare against.


Introduction to Mongo DB and C#


So I wanted to try out MongoDB and see what it was all about.  In this blog post I’m going to give a quick run-down of how to get things up and running just to show how easy it is to install and connect to a MongoDB database.

Where to Get MongoDB

The official MongoDB website is here, but you can skip right to the download page (http://www.mongodb.org/downloads) and download the Windows version of MongoDB.  MongoDB is very easy to install and set up.  Just run the installer like any other Windows install program, then you can go straight to the “Windows Quickstart” page (here) to get things set up.

I set up the default configuration by creating a “C:\data\db” directory, then I navigated into my “C:\Program Files\MongoDB 2.6 Standard\bin” directory and double-clicked on the mongod.exe file.  I spent some time with the mongo.exe program to manually create “collections” (mongo’s equivalent of a table), but you can skip right to the C# code if you’d like.  The MongoDB website has a great page for getting started with the MongoDB commands if you’re familiar with SQL.  You can go here for this page.

The C# Code

I created a console application in Visual Studio 2012 and then I went straight to NuGet and typed in MongoDB in the search box.  The “Official MongoDB C# driver” showed up on top and I installed that one.  Next, I went to the Getting Started With the C# Driver page (here) and started typing in some sample code.

One of the first things I discovered with MongoDB is that an insert will create a collection.  Also any field that is misspelled can cause a record with a different field name to be created in that collection.  

To get started, include the following using statements:

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Linq;
using MongoDB.Driver.Builders;
using MongoDB.Driver.GridFS;

Then setup a connection and open your database using these commands:

var connectionString = "mongodb://localhost";
var client = new MongoClient(connectionString);
var server = client.GetServer();
var database = server.GetDatabase("test");
var collection = database.GetCollection<Users>("users");

These are for the standard MongoDB installation.  

Next, you’ll need an object to represent your data, similar to the way that NHibernate declares records:

public class Users
{
    public ObjectId Id { get; set; }
    public string FirstName { get; set; }
    public int Age { get; set; }
}

Now let’s insert some data just to populate a few records:

var entity = new Users { FirstName = "Joe", Age = 27 };
collection.Insert(entity);
var id = entity.Id;

I ran the program as is and then changed the FirstName and Age and ran the program again to insert another record.  You can set up a program to insert a bunch of data if you like, I only needed a couple of records to test the LINQ capabilities.

The last part of my program looks like this:

var query =
    from e in collection.AsQueryable<Users>()
    where e.Age < 34
    select e;

foreach (var item in query)
{
    Console.WriteLine(item.FirstName);
}

My result, as expected, is a list of the first names that were inserted into the sample database.  In a nutshell, this was a very simple example just to demonstrate the basics of getting things up and running.  In order to turn this technology into something serious, we would need to set up MongoDB on a server and make sure it runs reliably (i.e. auto-starts when the machine is rebooted).  We would also need to make sure it is secure and that we can make a connection from the machine running the C# program.  After that, it’s a matter of designing software pretty much the same as for any SQL Server or Oracle program.  Obviously the mappings are simpler than Fluent NHibernate’s.

My next test will be to create two collections and join them.  Then I need to test performance differences between MongoDB and NHibernate CRUD operations using a long list of names.  After that, I hope to move on to what makes MongoDB “different” and more powerful than a standard SQL server.


Using Oracle with Fluent NHibernate


So far I’ve written a lot of posts about Fluent NHibernate using MS SQL Server.  I wanted to demonstrate how to connect to Oracle and what the differences were between SQL and Oracle.  Getting the connection to work is probably the most difficult part.  After that, the mappings are a bit different, but not complicated.

Installing Oracle

Oracle has a free version of its database for development purposes.  This is an excellent strategy for a company to get its product into smaller companies.  One of the strongest influences on which database engine is used at a company is what the developers are familiar with.  Once a database server has been established, it will be there for a long time (it’s very difficult and expensive to change database servers once a system goes live).  As a developer, I like to keep my options open, so I’ve experimented with many databases in my career.  I’ve also used Oracle and SQL Server on major projects, and I converted a major project from Oracle to SQL Server.

Back to the free version of Oracle that I was talking about…  If you go to Oracle’s website and create an account, you can download the windows version of Oracle XE (Express Edition).  This is an easy to use version of Oracle that you can install on your PC and experiment with.  I’m currently working with 11g Express:

Just download the x64 version (I’m assuming you are on a 64-bit machine) and install it.  Once Oracle is installed you’ll have to create your own workspace and create some tables.  Go to the “Get Started” application.  It should start up a browser and show an interface like this:

Next, you’ll need to click on “Application Express”.  This will allow you to setup a new workspace.  You’ll need to use your “SYSTEM” id and password to get to the express application (which you have to set when you install the application).  You should now see this screen:

This is where you’ll set up your workspace.  You can choose a name for your “Database Username”, which is not used in the connection string for this sample.  Your Application Express username and password are important; you’ll need these for the NHibernate connection string.  Once you have created a workspace you will have access to a lot of tools to administer your database.  

The easiest way to create your tables is to go to the Object Browser in the SQL Workshop section.  There is a “Create” button that can be used to create new tables.  Click that button and select “Table”:

Now you can create one table at a time.  For this demo, you’ll need three tables: store, product and producttype.  Start with producttype and use this:

Make sure you set the primary key on each of these tables to populate from new sequence.  This is the same as setting an identity on a primary key in MS SQL:

Then add product:

Then add store:

If you’re feeling brave, you can add in the relational integrity constraints (foreign keys) while you create your tables.  Otherwise, you can dig around for the raw queries to create such constraints.

The Oracle Database Connection

You’ll need the same 3 DLLs used in any of my previous NHibernate samples: FluentNHibernate.dll, Iesi.Collections.dll and NHibernate.dll.  You can get these using the NuGet package manager (or just download my sample and grab them from the 3rdPartyDLLs directory).

I created a session factory to connect to my default Oracle instance (XE).  Here’s the code:

public class OracleSessionFactory
{
  private static ISessionFactory _sessionFactory;
  private static ISessionFactory SessionFactory
  {
    get
    {
      if (_sessionFactory == null)
      {
        _sessionFactory = Fluently.Configure()
          .Database(OracleClientConfiguration.Oracle10.ConnectionString(
            "DATA SOURCE=XE;USER ID=username;PASSWORD=pass;"))
          .Mappings(m => m.FluentMappings.Add<ProductTypeMap>())
          .Mappings(m => m.FluentMappings.Add<ProductMap>())
          .Mappings(m => m.FluentMappings.Add<StoreMap>())
          .ExposeConfiguration(config =>
          {
            SchemaExport schemaExport = new SchemaExport(config);
          })
          .BuildSessionFactory();
      }
      return _sessionFactory;
    }
  }
  public static ISession OpenSession()
  {
    return SessionFactory.OpenSession();
  }
}

You’ll need to modify your mappings to something like this:

public class ProductType
{
  public virtual int Id { get; set; }
  public virtual string Description { get; set; }
}

public class ProductTypeMap : ClassMap<ProductType>
{
  public ProductTypeMap()
  {
    Id(u => u.Id).GeneratedBy.Sequence("PRODUCTTYPE_SEQ").Not.Nullable();
    Map(u => u.Description).Length(50).Nullable();
  }
}

The “Sequence” attribute is necessary to declare that the “Id” field is a sequence field.  This is similar to the SQL Server identity declaration.

If you were to build this program in a console application, set your user name and password correctly and run the program, you’d get an error like this:

This is an annoying side effect that can be solved by setting the bitness of the project itself.  Right-click on your project and select “Properties”.  Choose “Build” and change the “Platform Target” from “Any CPU” to “x64” (if you’re using the 64-bit version of Oracle).  Now you can successfully run your application.

From this point on, everything about Fluent NHibernate is the same.  I have included a select, insert, delete and update query in my sample application so you can see all the CRUD operations against an Oracle server.
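As a quick sketch of what those CRUD calls look like (the sample download has the full set; `session.Query<T>()` requires the NHibernate.Linq namespace, and the entity values here are illustrative):

```csharp
using (ISession session = OracleSessionFactory.OpenSession())
{
    // Insert -- the sequence generator fills in the Id.
    var productType = new ProductType { Description = "Hardware" };
    session.Save(productType);
    session.Flush();

    // Select via LINQ.
    var types = session.Query<ProductType>()
        .Where(t => t.Description == "Hardware")
        .ToList();

    // Update.
    types[0].Description = "Tools";
    session.Update(types[0]);
    session.Flush();

    // Delete.
    session.Delete(types[0]);
    session.Flush();
}
```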

Testing the Connection Outside Visual Studio

If you’re having difficulty getting your Oracle connection to work you can use the “Data Link Properties” window to test your connection parameters.  Create a text file on your desktop and name it “conn.udl”, then double-click on that file and you’ll see this window:

Select “Oracle Provider for OLE DB” and click the “Next >>” button.  Now fill in your connection information in this window.  Oracle uses “XE” as the default for the Data Source:

Once you click the “Test Connection” button you should get confirmation of a good connection.  Otherwise, you’ll get an error.  If you can get your settings to work in this window, then you’re one step closer to making your Oracle session work.

Sample Application

You can download this sample application and run it on your PC (after you install Oracle XE).  Be sure to set your user name and password in the SessionFactory.cs file.