ORM vs Non-ORM

Summary

This blog post is overdue, but I’d like to talk about my database experiences with and without ORMs.  


Before ORMs

I have been writing database-driven software for a long time.  My early days include the use of Borland C++ with Paradox and FoxPro databases.  I have also written Borland C++ and PHP code with Oracle 8i.  I’ve also written C++, then C#, to access MS SQL Server.  I’ve used ODBC and ADO to access database servers.  They all have their pros and cons, but they are all the same when it gets down to the actual coding.  The basic pattern involves opening and closing a database connection, performing a select or execute query, and dealing with the results, usually with some sort of forward-only reader.  PHP had the ability to select into an array, and I have used the Enterprise Library with C# to select into a DataSet (which can be treated as an array).  These are convenient because you can go forward and backward through the result set, but they suffer from the fact that all records must be read before you can access the data.
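To make that concrete, here’s a minimal sketch of the classic ADO.NET pattern described above (the connection string, table, and field names are placeholders, not from any particular project):

using System;
using System.Data.SqlClient;

class AdoExample
{
    static void Main()
    {
        // placeholder connection string; point it at your own server
        using (var db = new SqlConnection(
            "server=localhost;database=sampledata;Integrated Security=true"))
        {
            db.Open();

            // the query is just a string; a typo here won't surface until runtime
            using (var command = new SqlCommand("SELECT Name, Address FROM Store", db))
            using (SqlDataReader reader = command.ExecuteReader())
            {
                // forward-only reader: each row is visited once, in order
                while (reader.Read())
                {
                    // fields are also referenced by string name (or by ordinal)
                    Console.WriteLine("{0} - {1}", reader["Name"], reader["Address"]);
                }
            }
        }
    }
}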

There are two big headaches with non-ORM data access.  First, the query is a string that is passed to the database, which is one of the easiest places to get a syntax bug.  The second headache is the result set, which also uses strings to name the fields being read (unless you use the field number and rely on ordering…OMG).  This is where the promise of ORMs caught my attention.

ORMs

Object-relational mappers looked so cool when they first came out.  The LINQ language didn’t match SQL, but it had built-in syntax checking.  Of course, you can get a query syntactically correct and still get the wrong results, but at least you didn’t misspell “SELECT” or “INSERT” or something trivial like that (I once debugged a program for an hour only to discover that I had spelled “UPDATE” as “UDPATE”).  The really exciting part of the ORM to me was the fact that the result set came back as an object or a list of objects.  That makes things really easy on the result end of the query.  With context-sensitive programming tools (i.e. Visual Studio), hitting the dot causes a list of the fields to pop up and voila!  You don’t need to memorize the exact spelling of each field.
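Here’s a hedged sketch of what that buys you (the Store entity and the in-memory source below are stand-ins for a real ORM context):

using System;
using System.Collections.Generic;
using System.Linq;

public class Store
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
}

class LinqExample
{
    static void Main()
    {
        // stand-in for an ORM-backed IQueryable; a real context would hit the database
        IQueryable<Store> stores = new List<Store>
        {
            new Store { Id = 1, Name = "Corner Store", Address = "1 Main St" }
        }.AsQueryable();

        // the compiler checks this query; no chance of misspelling SELECT here
        var result = (from s in stores
                      where s.Id == 1
                      select s).ToList();

        foreach (var store in result)
        {
            // results are typed objects: store.Name is a real property, not a string key
            Console.WriteLine("{0} - {1}", store.Name, store.Address);
        }
    }
}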

My first experience with ORMs was LINQ-to-SQL.  This ORM suffers from the fact that you can’t easily regenerate the tables when you add a field in the database.  I switched to Entity Framework because of this headache.  Entity Framework turned out to have some serious performance issues, not to mention the massive XML file that stores the data configuration; it contains unique keys that are renumbered every time the edmx file is regenerated, which causes big problems with version control systems.

So I moved on to NHibernate, specifically Fluent NHibernate.  I wasn’t too crazy about plain NHibernate because of the XML files.  Too much work!  The Fluent add-on makes it nice because all the mappings are in C#.  I did a lot of research up-front and decided that NHibernate was a winner.  After using this product for the past 8 or 9 months, I have to admit it has its own set of flaws.  There is a lot of LINQ support that is missing from NHibernate, such as outer joins.  Some of the field mappings with SQL Server have quirks, like the 4000-character max problem that crops up everywhere.  Fortunately there are workarounds for this stuff, but there shouldn’t be such glaring problems.

The next issue with ORMs is that I’m not convinced that I have gained any productivity from the use of an ORM (any ORM).  It seems that I spend a ton of time trying to translate the SQL that I want to write (which I’ve done for so many years that I can do complex queries in my head) into LINQ.  You also have to be careful about the way you write your LINQ code, because ultimately there is a translator that converts the LINQ expression into the SQL that is sent to the server, and it may produce really bad (read: inefficient) code.  Before I used ORMs I would work out my complex queries inside Microsoft SQL Server Management Studio and then just copy the text into my code.  You can’t do that with LINQ, because it’s not the same syntax.


Conclusion

Am I willing to give up on ORMs and go back to ADO?  Not really.  I like the object list result sets too much to throw in the towel.  I’m also smart enough to know how to profile and test queries that are generated by an ORM.  Finally, I can revert to sending a raw query to SQL and get an object list back, making NHibernate behave like a micro-ORM (until NHibernate or another ORM comes by and fixes the issues).  I think Entity Framework could be THE killer product if Microsoft focused on its two major flaws: performance and the massive edmx file.  The Entity Framework development team blog indicates to me that they are oblivious to these weaknesses or that these are low on their priority list.  Microsoft suffers from a priority scheme involving features first, performance and reliability last.

OK, I’m going to stop ranting now!

 

SQL Server is Crazy!

If you’ve ever used SQL Server for a serious project, you’re already nodding your head at the title of this blog post.  There are many aspects of SQL Server that are a bit quirky.  Overall I think SQL Server is a very good but expensive product.  I have experience with Oracle as well; Oracle has its own quirks and is also an expensive product.  But I’m not here to rate products, or decide which product I think you should use.  I’m going to talk about one of the quirks of SQL Server that you might run into if you’re attempting to work with a legacy product that was built around SQL Server.

Field Names

One of the problems with using an ORM with MS SQL Server is that SQL allows characters in its field names that C# cannot handle.  For example: ‘#’ and ‘@’ are allowed as characters in a field name (‘#’ is just not allowed as the first character).  Double-quotes can appear in a field name (even as the first character).  Fields can start with a number (Yikes!).  Don’t believe me?  Check this out:


Now you have to wonder, what would Entity Framework do with this mess?  Let’s find out…


It looks like EF puts a “C” in front to prevent numbers from being the first character, and then it turns all unacceptable symbols into underscores.  That could cause a problem if a field with the resulting name already existed… let’s see what happens if another field is already named the name that EF wants to use…

  
Now, let’s see what EF does…


OK, it appears that EF will make the second occurrence of “field_one” unique by adding a number to the end of the field name, keeping the names unique.
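Putting the renaming rules together, a table with columns named 123Field, field#one and field_one would come out looking something like this hypothetical class (my illustration of the behavior described above, not actual EF output):

// hypothetical illustration of EF's renaming rules -- not actual generated code
public partial class WeirdTable
{
    // "123Field": a leading digit gets a "C" prefix
    public int C123Field { get; set; }

    // "field#one": '#' becomes an underscore, which collides with the real
    // "field_one" column, so a number gets appended to one of the two
    public string field_one { get; set; }
    public string field_one1 { get; set; }
}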


Plural Table Names

I have a table in my example database named “person”.  When you create a LINQ query of this table you end up with this:

using (var db = new sampledataEntities())
{
    var query = (from p in db.people
                             select p).ToList();
}

The name “people” is the DbSet of the “person” object that represents the table “person”.  Now, what if we named the table “persons”?  EF will name it “persons”.  Now, let’s create a table named “people”, just to mess up EF.

Yikes!  It named the record object “person” and it uses “people” as the table name.  That would be confusing if you were working with the person table and your query is actually looking at the people table.  Now let’s add “person”, “persons” and “people” into EF all at once.

The first thing that happened to me is that none of the tables show in the edmx model diagram.  It did, however, create three objects for the tables:

using (var db = new sampledataEntities())
{
    var query = (from p in db.people
                             select p).ToList();

    var query2 = (from p2 in db.people1
                                select p2).ToList();

    var query3 = (from p3 in db.persons
                                select p3).ToList();
}

One of the table objects is named “people”, one is named “people1” and the last is named “persons”.  In order to figure out which object points to which table, you’ll need to open your edmx file with an xml editor (right-click on the edmx file, select “open with”, then choose “automatic editor selector (XML)”).  Then you can search for people, people1 or persons and reverse engineer which table is being accessed.


Summary

Keep this little exercise in mind when you design a database and choose field and table names.  Avoid any symbols that are not acceptable in a C# variable name.  Also, avoid using a plural name for a table.  Your goal is to choose a table name that will translate into an object name very similar to your table name.  Otherwise, any developer that maintains your code will have to deal with an extra layer of confusion.

 

Fluent NHibernate Custom SQL Types

When mapping data types from SQL Server into Fluent NHibernate, the generic mappings work just fine (as long as the data in the field fits the type declared).  In other words, a char(50), a text or a varchar(max) field in SQL Server will fit in a string type without an issue.  There is one problem that can occur if you’re attempting to unit (i.e. integration) test a database and you use the SchemaExport.Create() method to generate your tables.  In this instance the schema generator has to guess at the SQL type behind each string property.  To get the type that matches your database, you can declare the SQL Server field type in your mappings.

Here’s an example table for storing information about a store (I’ve used this table in previous blog posts):

public class StoreMap : ClassMap<Store>
{
    public StoreMap()
    {
        Table("sampledata..Store");
        Id(u => u.id);
        Map(u => u.Name).CustomSqlType("char(50)").Length(50).Nullable();
        Map(u => u.Address).CustomSqlType("char(50)").Length(50).Nullable();
        Map(u => u.City).CustomSqlType("char(50)").Length(50).Nullable();
        Map(u => u.State).CustomSqlType("char(50)").Length(50).Nullable();
        Map(u => u.Zip).CustomSqlType("char(10)").Length(10).Nullable();
    }
}

You can use any of the mapping types listed here.  I have also successfully used many of the data types that SQL uses such as “money”, “text” and “varchar”.
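For example, here’s a hedged sketch using a few of those SQL types (the Price, Notes and Code properties are made up for illustration):

        Map(u => u.Price).CustomSqlType("money").Nullable();       // hypothetical money column
        Map(u => u.Notes).CustomSqlType("text").Nullable();        // hypothetical text column
        Map(u => u.Code).CustomSqlType("varchar(20)").Length(20).Nullable();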

 

SQLLocalDB Fast Unit Tests

Summary

One of my previous posts described how to set up a lightweight MS SQL server called LocalDB.  My little demo project worked really well except for one detail.  It was slow.  The primary problem was that I was starting the instance and creating the database in the unit test initialize (called [TestInitialize]), and then dropping the database and stopping the instance in the test cleanup method.  That caused a database create and destroy for each unit test.  Now I’m going to show how to set up your unit tests to create the database once and destroy it once.  I’m also going to describe how to clean out your data for the next test.


The Truncate Table Command

The truncate table command has a couple of nice features: it can delete data fast, and it resets the seed value used for the identity column.  So I set up a list of tables to be truncated and a method that will perform the actual truncation.  Here’s how I implemented the list (which should be contained in your unit test module):

public static class SampleDatabase
{
    public static string[] TableList
    {
        get
        {
            // make sure these are in the right order.
            return new string[]
                {
                "product", "producttype", "store"
                };
        }
    }

}

The code that handles the truncation looks like this:

public static void TruncateData(string databaseName, string[] tableList)
{
    SqlConnection db = new SqlConnection(
        "server=(localdb)\\" + databaseName + "testinstance;" +
        "Trusted_Connection=yes;" +
        "database=" + databaseName + "; " +
        "Integrated Security=true; " +
        "connection timeout=30");

    db.Open();

    foreach (var item in tableList)
    {
        SqlCommand myCommand = new SqlCommand(
           "TRUNCATE TABLE " + databaseName + ".." + item, db);

        myCommand.ExecuteNonQuery();
    }
    db.Close();
}

You can look at the sample code to see the details of how this fits together.  Basically, I call my TruncateData method inside the TestCleanup method to truncate the tables after every unit test.  When you create your list of tables to truncate, be aware that if you have constraints on your test database, you’ll have to truncate your tables in the right order.
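As a rough sketch of that wiring (assuming the TruncateData method above lives in a static helper class, here called DatabaseUtility, a name I’m assuming):

[TestCleanup]
public void Cleanup()
{
    // wipe all test data (and reseed the identity columns) after each test;
    // DatabaseUtility is an assumed home for the TruncateData method shown above
    DatabaseUtility.TruncateData("sampledata", SampleDatabase.TableList);
}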

Now, there is an AssemblyInitialize and an AssemblyCleanup attribute that can be used once inside the unit test project.  These are used with static methods and are called before your unit tests start and after your unit tests end.  I created a cs file named “AssemblyUnitTestShared.cs” that contains this code:

[TestClass]
public sealed class AssemblyUnitTestShared
{
    [AssemblyInitialize]
    public static void ClassStartInitialize(TestContext 
       testContext)
    {
        UnitTestStartAndEnd.Start("sampledata");
    }

    [AssemblyCleanup]
    public static void ClassEndCleanup()
    {
        UnitTestStartAndEnd.End("sampledata");
    }

}

At this point your unit tests would look something like this:

[TestMethod]
public void test_method_2()
{
    using (ISession unitTestSession = 
           SampleDataTestContext.OpenSession())
    {
        Product product = new Product
        {
            Name = "P 1",
            ProductId = 1,
            store = 1
        };
        unitTestSession.Save(product);
        unitTestSession.Flush();

        ProductObject productObject = new 
              ProductObject(unitTestSession);

        var productList = productObject.ReadAll();

        Assert.AreEqual("P 1", productList[0].Name);
    }

}

When executing these unit tests there will be an initial delay while the database is being created, then all the tests will run in a few seconds, then there is a delay while the database is being torn down.  Adding more tests will not significantly increase your total unit test execution time.

Here’s the source code, which I would recommend downloading and getting to work.  This can serve as a model for your project unit tests.

SQLLocalDBBlogPost2.zip


Other Considerations

At this point, I have shown how you can use LocalDB to create a temporary database and run unit tests against methods that contain database operations.  There are still some issues here that should be resolved.

First, what about integrity constraints?  One method is to apply all the constraints to the database after the tables are created (which currently occurs on the first call to the NHibernate context).  The problem with this method is that each unit test would need to incorporate all the required seed data that constrains any tables that are needed for the actual test.  This is a lot of unnecessary work.

Another method is to only generate the constraints at the unit test level and destroy all the constraints at the end of the unit tests.  I have not tried this yet, but there will be some sort of performance penalty.  If the number of constraints is restricted to just the tables that are necessary, then the performance loss will be minimized and the unit tests can be contained within the scope of what is being tested.  

So now we would need to come up with code to generate and destroy these constraints.  One of my older methods of doing this is to create a list of all the constraints in the database and write a method that receives a list of tables.  Any foreign constraints would be added only if both tables were present in the list, unique constraints would be added if the table with the constraint was in the list, and so on.  Then the tear-down can be accomplished by scanning the master database for foreign and unique constraints.  This code can all be encapsulated in a common project that you would use with your unit tests (except the table list and the list of actual constraints, which are database-dependent).

The next issue surrounds the fact that your database will be updated at some time in the future, and you’ll need to be aware that any constraints changed in the database have to be changed in your test package.  There is no simple solution for this problem.  In the past I have created software that reads the tables, fields, constraints, indexes, etc. from an Excel file.  This Excel file was the master definition of everything.  Any database changes would be made to this file, and a tool I developed would sync changes with the actual database (when the developer was ready to apply changes).  This Excel file was then used by my unit tests, and out-of-sync issues were minimized in this fashion.

Another issue surrounds the fact that you need to seed your data for a unit test, and this can get very tedious in a hurry.  So I have written classes and methods to read data from an XML file (embedded in the unit test project) to pre-populate the data in a unit test before it runs.  Then I name my XML file with the unit test source name and the method name that calls it.  I use individual files for each method because data can be altered in the future (which could mess up multiple tests if they share one XML file).  Copy and paste is my friend.  The layout of such a file would look something like this (a loader sketch follows the layout):

<data>
  <tablename>
    <field1>data in field 1</field1>
    <field2>data in field 2</field2>
    <field3>data in field 3</field3>
  </tablename>
</data>
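Here’s a minimal sketch of a loader for that layout (the class name and the string-concatenated table name are my assumptions, matching the style of the truncate code above):

using System.Data.SqlClient;
using System.Linq;
using System.Xml.Linq;

public static class XmlSeedLoader   // assumed name, not from the original post
{
    public static void Seed(string connectionString, string xmlPath)
    {
        XDocument doc = XDocument.Load(xmlPath);

        using (var db = new SqlConnection(connectionString))
        {
            db.Open();

            // each child of <data> is a table name; its children are field/value pairs
            foreach (XElement row in doc.Root.Elements())
            {
                var fields = row.Elements().ToList();

                string sql = "INSERT INTO " + row.Name.LocalName + " (" +
                    string.Join(", ", fields.Select(f => f.Name.LocalName)) +
                    ") VALUES (" +
                    string.Join(", ", fields.Select((f, i) => "@p" + i)) + ")";

                using (var cmd = new SqlCommand(sql, db))
                {
                    for (int i = 0; i < fields.Count; i++)
                    {
                        cmd.Parameters.AddWithValue("@p" + i, fields[i].Value);
                    }
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}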

One last thing to be aware of: it seems that LocalDB does not use the FILEGROWTH parameter.  I have not dug too deep on this issue (I just increased my starting database size to 8k), but you might run into an insufficient size problem if you insert too much data into your unit test database.  You can either bump up the size or determine if there is a mechanism to auto expand.
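If you do hit the size wall, one hedged option is to grow the data file from code, mirroring the SqlCommand pattern used elsewhere in these posts (the instance name, database name, logical file name and size here are all assumptions):

using (var db = new SqlConnection(@"server=(localdb)\testinstance;" +
    "Trusted_Connection=yes;database=master;Integrated Security=true"))
{
    db.Open();

    // grow the assumed 'sampledata' data file to 8MB; adjust to taste
    using (var cmd = new SqlCommand(
        "ALTER DATABASE [sampledata] MODIFY FILE (NAME = N'sampledata', SIZE = 8192KB)", db))
    {
        cmd.ExecuteNonQuery();
    }
}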

If you’re looking for the SQLLocalDB package to install:

x64/SQLLocalDB.MSI for 64 bit systems (33.0MB)
x86/SQLLocalDB.MSI for 32 bit systems (27.7 MB)

 

Fluent NHibernate “Not Supported By” Exception

Summary

So my company has successfully switched to using Fluent NHibernate for our production systems.  So far the results are astounding.  One thing our developers have learned is that there is no “perfect” ORM.  Not yet, anyway.  Each seems to have a flaw or two.  NHibernate has a couple of issues that can be worked around.  One issue, the 2100-parameter limit, I mentioned in an earlier post.  While unexpected, it was easy to see why it was an issue.  The issue I want to talk about today came up when one of our developers was working with a group by query involving other query operations like order by and take.  So I’m going to talk about one issue and show a solution.


The Setup

I’m going to use my standard sampledata database in MS SQL server with 3 tables: Product, ProductType and Store.  Here’s the ERD:



I put some seed data into these tables; it doesn’t matter what you put in them.  There is a SQL script in the project (named “CreateTables.txt”); you can copy the text inside this file and execute it inside a query window in MS SQL Server Management Studio.


The Ugly

Here’s the offending LINQ that I setup in my console app:

var query =
    (from p in session.Query<Product>()
     join pt in session.Query<ProductType>() 
       on p.ProductType equals pt.Id
     join s in session.Query<Store>() on p.store equals s.id
     group new { p, pt } 
     by new { pt.Id, pt.Description, p.Name } into temp
     select new
     {
      name = temp.Key.Name,
      id = temp.Key.Id,
      description = temp.Key.Description,
      count = temp.Count()
     }).OrderBy(u => u.name).ToList();

If you attempt to execute this query, you’ll get an error like this:


This error only occurred when I added the “OrderBy” onto the query.  So it appears that a combination of query operators can cause this error to occur, or possibly there is an issue with the anonymous types generated in the “select new” part of the query.


The Fix

I dug around for an answer to this problem and found this Stack Overflow issue:

Linq query nhibernate; not supported exception

That’s where I discovered a solution to this query.  I added the “AsEnumerable()” before the “OrderBy” to fix this problem:

var query =
    (from p in session.Query<Product>()
     join pt in session.Query<ProductType>() 
       on p.ProductType equals pt.Id
     join s in session.Query<Store>() on p.store equals s.id
     group new { p, pt } 
     by new { pt.Id, pt.Description, p.Name } into temp
     select new
     {
      name = temp.Key.Name,
      id = temp.Key.Id,
      description = temp.Key.Description,
      count = temp.Count()
     }).AsEnumerable().OrderBy(u => u.name).ToList();


Now the query executes and produces the expected results.  Keep in mind that AsEnumerable() switches the rest of the query to LINQ-to-Objects, so the OrderBy runs in memory on the client after the rows come back from the server; that’s fine for a small result set like this one.

You can download the sample project here:
FluentNHibernateGroupBlogPost.zip

 

Unit Testing Fluent NHibernate

Summary

I’ve covered a little unit testing of NHibernate in the past.  I showed how to use SQLite, which is the most common method of unit testing (technically, integration testing) an NHibernate application.  If you’re using MS SQL server for your application, SQLite has some serious limitations that could hinder your unit tests.  So I’m going to show you how to use Microsoft’s LocalDB.


The Problems with SQLite

One of the first issues I discovered with SQLite is that it doesn’t do multiple databases.  Simulating a multi-database join in a unit test is not workable with SQLite.  The second and most annoying “feature” of SQLite is that it doesn’t behave well with a command-line MSTest.  There are workarounds, including the fact that you have to create a test settings file and point your script to that file to make MSTest work with SQLite unit tests.  Other issues surround the 32-bit/64-bit DLLs and the difficulty in making it all work on TeamCity.


Microsoft SQL Express and LocalDB

Apparently, Microsoft came up with an alternative to SQLite, called LocalDB.  It’s a version of SQL Express that is lightweight and fully featured.  The 64-bit install is only 33MB in size and there is no special driver needed for NHibernate.  If you are going to develop for MS SQL Server, then the driver is the same for LocalDB.

Here’s the first document I stumbled across when I was looking for this product: Introducing LocalDB, an improved SQL Express.  In order to get things rolling quick, you can go to this article: Getting Started with SQL Server 2012 Express LocalDB.


The Sample Project

So I put together a project based on my previous example with a Student, Teacher and Class table (I renamed “Class” to “SchoolClass” to prevent a clash with the C# keyword “class”).  I added a test project onto the solution and created a special context for the purpose of my unit tests.  Here’s how the code looks:

public class TestSessionFactory
{
    private static ISessionFactory SessionFactory
    {
        get
        {
            return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005
            .ConnectionString(@"Server=(localdb)\testinstance;Integrated Security=True"))
            .Mappings(m => m.FluentMappings.Add<SchoolClassMap>())
            .Mappings(m => m.FluentMappings.Add<TeacherMap>())
            .Mappings(m => m.FluentMappings.Add<StudentMap>())
            .ExposeConfiguration(cfg => new SchemaExport(cfg).Create(true, true))
            .BuildSessionFactory();
        }
    }
    public static ISession OpenSession()
    {
        return SessionFactory.OpenSession();
    }

}


The first thing you’ll notice is that I was able to use “Add” for the fluent mappings.  SQLite doesn’t allow this.  This can be handy if you have duplicate table names in separate databases, but create your classes and mappings in one project.  You have fine-grained control over which tables will appear in your context.

The second thing to note is that you can tell the session factory to generate a schema when it starts.  If you were to use this session factory in your unit tests as-is, you’d get nothing.  That’s because Fluent NHibernate will not create the initial databases.  For that, I wrote another method:

public static void Create(string databaseName, string instanceConnection)
{
    string databaseDirectory = Directory.GetCurrentDirectory();

    SqlConnection db;
    db = new SqlConnection("server=" + instanceConnection + ";" +
                            "Trusted_Connection=yes;" +
                            "database=master; " +
                            "Integrated Security=true; " +
                            "connection timeout=30");

    db.Open();

    SqlCommand myCommand = new SqlCommand(@"CREATE DATABASE [" + databaseName + @"]
        CONTAINMENT = NONE
        ON  PRIMARY 
        ( NAME = N'" + databaseName + @"', FILENAME = N'" + databaseDirectory + @"\" + databaseName +
                                                @".mdf' , SIZE = 3072KB , FILEGROWTH = 1024KB )
        LOG ON 
        ( NAME = N'" + databaseName + @"_log', FILENAME = N'" + databaseDirectory + @"\" + databaseName +
                                                @"_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
        ", db);

    myCommand.ExecuteNonQuery();
    db.Close();
}

This method is contained in the “TestDatabase” class along with a “Remove” method to undo the database creation.  I generated the database create command from the SQL Server management console.  I made the application execute directory (bin directory) the default location for the database files to be created, using the Directory.GetCurrentDirectory() method.


The Unit Tests

The unit tests have an initialize and a cleanup method.  The initialize will set up the LocalDB instance and create a database.  The cleanup reverses the operation by dropping the database first, then shutting down the instance and deleting it.

The ms test initialize method looks like this:

[TestInitialize]
public void Initialize()
{
    // … removed code not discussed

    // create a new localdb sql server instance
    startInfo = new ProcessStartInfo
        {
            WindowStyle = ProcessWindowStyle.Hidden,
            FileName = "cmd.exe",
            Arguments = "/c sqllocaldb create \"testinstance\" -s"
        };

    process = new Process { StartInfo = startInfo };
    process.Start();
    process.WaitForExit();

    TestDatabase.Create("facultydata", @"(localdb)\testinstance");
    TestDatabase.Create("studentdata", @"(localdb)\testinstance");
}


I used the Process class to execute a command-line prompt to kick off the LocalDB instance.  I named this instance “testinstance”.  That is used inside the context when Fluent NHibernate connects to the database.  It can also be used by the SQL Server management console to connect and troubleshoot problems.  Use the “(localdb)\testinstance” syntax to connect to the database.


The unit tests look like this:

[TestMethod]
public void ThirdTest()
{
    using (ISession db = TestSessionFactory.OpenSession())
    {
        Teacher teacher = new Teacher
        {
            Name = "test teacher"
        };

        db.Save(teacher);

        teacher = new Teacher
        {
            Name = "test teacher2"
        };

        db.Save(teacher);

        var query = (from t in db.Query<Teacher>() select t).ToList();

        Assert.AreEqual(2, query.Count);
    }

}


You can download the sample code here:

SQLServerLocalDBTest.zip


 

Fluent NHibernate Subquery Stuff

Summary

I was converting some Entity Framework LINQ to use NHibernate when I came across a strange error.  At first I was baffled by the error message because it was telling me that there were too many input parameters.  Doing an internet search led me to articles about how to limit the number of parameters I was passing to a stored procedure, etc.  We’ve all run into these types of issues where the error message is not quite right.  Or at least it seemed that way, until I figured out what was really going on.


First, the Error Message

The error message I was getting was this:

“The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Too many parameters were provided in this RPC request. The maximum is 2100.”

The query that produced this message had only one parameter, but I can recreate the error message (now that I know what causes it), and here’s an example of some code that can cause the error:

List<int> personList = 
    (from p in db.Query<Person>() 
     select p.id).ToList();

var departmentList =
    (from d in db.Query<Department>()
     where !personList.Contains(d.id)
     select d.name).ToList();

First, I had to create a bunch of person records; in fact I inserted over 3000 records, just to test this.  I also created 4 department records, but they don’t really matter.  When I checked what the second query was generating, I realized that the first query was producing a list of ids (duh) that were put into a “NOT IN” statement in the second query, which looks something like this:

select department0_.name as col_0_0_ from sampledata..department department0_ where  not (department0_.id in (@p0 , @p1 , @p2 , @p3 , @p4 , @p5 …

So I did some more searching on the internet and there were some insane ideas, like looping the second query and pumping 1000 variables in at one time… Here’s an example: how-can-i-use-nhibernate-to-retrieve-data-when-the-where-in-have-thousands-o.  Yikes.

Let’s face it, MS SQL Server can handle a subquery with ease, so I decided to short-cut this special case (and let’s face it, subqueries like this are a special case) by calling the query directly.  Here’s the code I used:

var departmentList =
  db.CreateSQLQuery(@"SELECT name 
           FROM department 
           WHERE id NOT in 
           (SELECT department FROM person)")
    .SetResultTransformer(
        Transformers.AliasToBean<DepartmentNameRecord>())
    .List<DepartmentNameRecord>()
    .ToList();

The result will be the list of departments not containing any person records.  If you download the code and experiment with it, you can use these query commands in MS SQL to clear your tables:

delete from department
DBCC CHECKIDENT (department, RESEED, 0)
delete from person
DBCC CHECKIDENT (person, RESEED, 0)

If you don’t have the tables setup in your sampledata database, you can use these queries to create them before running your code:

USE [sampledata]
GO
/****** Object:  Table [dbo].[person]    Script Date: 05/18/2014 11:44:14 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[person](
[id] [int] IDENTITY(1,1) NOT NULL,
[first] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[last] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[department] [int] NOT NULL,
 CONSTRAINT [PK_person] PRIMARY KEY CLUSTERED 
(
[id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [dbo].[department](
[id] [int] IDENTITY(1,1) NOT NULL,
[name] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 CONSTRAINT [PK_department] PRIMARY KEY CLUSTERED 
(
[id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF

Did I also mention that the above code is blazing fast?  Oh yeah.  It’s always best to profile your code after you get it working.  Then dump the raw SQL that your ORM is producing so you can analyze it in MS SQL Server Management Studio.  If it’s running fast in the ORM, then you’re good.  If there are performance problems, then maybe you need a workaround.

Download the code here: FluentNhibernate2100ErrorTest.zip

 

Fluent NHibernate More Stored Procedures Without Mapping

I did a sample post on how Fluent NHibernate executes a stored procedure, but I didn’t dig into the subject and show some details.  The sample I showed returned a list of records from a table that was already mapped, which made the whole sample easy, but not realistic.  In most cases we want to execute a stored procedure that returns some result that is not already mapped.  So I’m going to show how this is done.


The Stored Procedure:

Use this code to generate an MS SQL stored procedure in your sampledata database (assuming you have one):

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go


CREATE PROCEDURE [dbo].[ReadPersonNames]

AS
BEGIN
SET NOCOUNT ON;

SELECT first, last FROM person

END


The Stored Procedure Return Class

You’ll need to create a class to contain the data type that will be returned.  Here’s what I created for this example:

public class PersonNameRecord
{
    public virtual string first { get; set; }
    public virtual string last { get; set; }

}


Call the Stored Procedure

Here’s my call to the stored procedure:

var personNameList = db.CreateSQLQuery("exec ReadPersonNames")
    .SetResultTransformer(Transformers.AliasToBean<PersonNameRecord>())
    .List<PersonNameRecord>()
    .ToList();


You’ll need to include a “using” statement at the top to make this work:

using NHibernate.Transform;


That’s it.  You can download the sample code here:

FluentNHibernateStoredProc2.zip

 

Fluent NHibernate Table Dump Utility

Summary

So I’ve been posting a lot about NHibernate over the past few months (Fluent NHibernate, to be exact), mostly motivated by frustrations with Entity Framework.  I have spent the past week working with NHibernate in a production environment as a proof-of-concept example.  The first problem with NHibernate is the fact that you have to manually generate the classes and mappings.  So I came up with a simple and dirty solution to this problem…


The Table Dump Utility

I threw together a console application that can be used to dump all the tables of designated databases into directories.  Each table generates a “.cs” file with the class and mapping code in it.  You can download the console application here:

GenerateNHibernateTables.zip

To use the utility, change the “ConnectionString” at the top to point to your database.  Then you can change the parameter of the “DumpDatabase()” method to contain the name of the database you wish to generate tables for.  You can add additional “DumpDatabase()” calls if you want to dump a bunch of databases.  Each database will be contained in a subdirectory named the same as the database name.  All the subdirectories will be contained inside a directory on your C: drive named “NHDatabase”.  Obviously, you can hack up this program to suit your needs.


What this program will generate

This program will generate a “.cs” file for each table that will contain the class and mapping for the table.  I have quickly thrown in some data types and what the matching class virtual getter will be (e.g. “int”, “string”, “DateTime”, etc.).
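A hedged sketch of the kind of SQL-to-C# type table such a generator can use (this particular mapping is my illustration, not the utility’s exact list):

using System.Collections.Generic;

// illustrative mapping from SQL Server column types to C# property types;
// anything unrecognized would fall out as the "unknown(...)" placeholder
// described further down
static readonly Dictionary<string, string> SqlToCSharpType =
    new Dictionary<string, string>
    {
        { "int", "int" },
        { "bigint", "long" },
        { "bit", "bool" },
        { "datetime", "DateTime" },
        { "money", "decimal" },
        { "char", "string" },
        { "varchar", "string" },
        { "text", "string" }
    };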

Composite keys are recognized and the “Equals” and “GetHashCode” methods are automatically inserted into the class with the proper field names.

The “Table” mapping is included for convenience and can be altered.  This is set up for MS SQL Server, containing the database name as well as the table name.

Nullable and not Nullable fields are accounted for.

String lengths are accounted for.


What this program will not generate

No foreign key constraints are generated.

No one-to-many or many-to-many mappings are setup.

Only the most generic field types are recognized.

No context code is generated.


What can you use this program for?

I wrote this program to reduce some of my tedious data-entry work.  You can get about 90% of the code you need for your Fluent mappings and table classes with this utility, then spend a few minutes cleaning up anything that doesn’t quite fit.  If a field type is not recognized, then the program will spit out an “unknown(sql field name)” field name.  You can decide what to name this variable and type over the unknown name.

Also, don’t forget to correct the namespace.  I used a generic “databasename.NameSpace” placeholder for the namespace.  You should change this to match your project namespace.  You can do it in the program before you generate tables to save yourself some typing.

Also, you can rerun this program to write over the tables that exist.  Make sure you don’t have a directory open before you do it (otherwise you might lock the program out from making changes).

To use the files, just drag the “.cs” files into your project, tweak the namespace, correct any syntax errors and add your tables to your own context class.

That’s it!

 

Fluent NHibernate Composite Keys

Summary

In a previous blog post, I demonstrated a composite key on a table that was used as a reference between two other tables (a many-to-many example).  This time I’m going to demonstrate the basic mapping of a composite key for a table that will be defined as a class.  This is a one-to-many example, but I’m going to ignore that part of the process and skip right to the meat of the situation.  I did some research on the internet and stumbled across this article: MyWiki: Fluent NHibernate Samples, which has exactly the example I was looking for.


The ERD

I set up a pair of tables with the minimum fields I needed to demonstrate how to do some basic CRUD operations.  The ProductId and ProductType fields inside the Product table are a composite primary key.  I did not make either key an identity key.  I’m going to just put numbers into these fields when I insert data.  I would not recommend this practice, since two concurrent operations might cause a duplicate key error on insert.




The ProductType Table Class and Mapping

I’m going to show the ProductType table definition first.  It’s just a straight mapping of the most basic kind.  There is one primary key that is an identity called “Id” and a description string field that is nullable.

public class ProductType
{
    public virtual int Id { get; set; }
    public virtual string Description { get; set; }
}

public class ProductTypeMap : ClassMap<ProductType>
{
    public ProductTypeMap()
    {
        Id(u => u.Id).GeneratedBy.Identity();
        Map(u => u.Description).Nullable();

        Table("sampledata..ProductType");
    }
}


The Product Table Class and Mapping

The tricky table is the product table since it has a composite primary key.

public class Product
{
    public virtual int ProductId { get; set; }
    public virtual int ProductType { get; set; }
    public virtual string Name { get; set; }
        
    public override bool Equals(object obj)
    {
        if (obj == null || GetType() != obj.GetType())
        {
            return false;
        }

        Product that = (Product)obj;

        return this.ProductId == that.ProductId && 
            this.ProductType == that.ProductType;
    }

    public override int GetHashCode()
    {
        return ProductId.GetHashCode() ^ 
            ProductType.GetHashCode();
    }
}

public class ProductMap : ClassMap<Product>
{
    public ProductMap()
    {
        CompositeId()
        .KeyProperty(u => u.ProductId)
        .KeyProperty(u => u.ProductType);

        Map(u => u.Name).Nullable();

        Table("sampledata..Product");
    }
}

The first thing you’ll notice is that there are two overridden methods: the “Equals” method and the “GetHashCode” method.  These must be handled manually by your code.  Next, the mapping must contain the “CompositeId()” method to define which fields belong to the composite key.  This replaces the “Id” mapping method.


The CRUD Code

I wrote a select, an insert and a delete example.  My starting data included the following:



The top 4 rows are the Product table and the bottom 4 rows are the product types.  When I do my insert, I’m going to intentionally insert a key that contains a ProductId of 1, but a different ProductType key.  This keeps the composite key unique, and confirms that nothing is treating the ProductId alone as the primary key.


using (ISession db = MSSQLSessionFactory.OpenSession())
{
    var query =
        (from p in db.Query<Product>()
         join pt in db.Query<ProductType>() on 
            p.ProductType equals pt.Id
         select p).ToList();

    foreach (var item in query)
    {
        Console.WriteLine(item.Name.Trim());
    }
}

// insert new records
using (ISession db = MSSQLSessionFactory.OpenSession())
{
    Product product = new Product()
    {
        ProductId = 1,
        ProductType = 2,
        Name = "Onion"
    };

    db.Save(product);
    db.Flush();
}

// delete one record
using (ISession db = MSSQLSessionFactory.OpenSession())
{
    var product =
        (from p in db.Query<Product>()
         where p.Name == "Onion"
         select p).ToList();

    using (db.BeginTransaction())
    {
        foreach (var item in product)
        {
            db.Delete(item);
        }
        db.Transaction.Commit();
    }
}

If you run this example, you can put a break-point before the delete command and see that the “Onion” product was inserted into the product table.  Then you can continue and the product will be deleted from the table.

You can download the sample code here:

FluentNhibernateCompositeKey.zip