Visual Studio Project File

Summary

This blog post is written for developers that are new to Visual Studio, or have not ventured outside the programming environment itself.  I’m going to show a few handy features that are available in the project file itself.

The Project File

Any time you create a new project, a project file gets created.  Your first project will be inside the solution folder and contained in the solution.  Each project you add to a solution will create a new project folder and project file.  The project file extension for a C# project is "csproj".  This file is an XML file, which you can edit with any text editor.

The solution file can also be edited, and this can be handy if you have to fix a problem.  One problem that occurs quite often is when you rename a project folder from Windows.  You can just edit the .sln file and change the project directory name.


BeforeBuild and AfterBuild

Back to the project file…

At the bottom of the project file is a commented section containing the BeforeBuild and AfterBuild targets.  The purpose of this section is to add commands that perform tasks before a build starts or after a build ends.  I have used this section to copy newly built DLLs into a destination directory.  First, you'll need to remove the comments:

<!-- To modify your build process, add your task inside one of the targets below and uncomment it. Other similar extension points exist, see Microsoft.Common.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
-->



Next, you can add a copy command inside the AfterBuild xml tags:

<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">

  <Copy SourceFiles="$(TargetDir)$(ProjectName).exe"
        DestinationFolder="." ContinueOnError="true" />
</Target>

The current directory specified will be the project directory.  By using the $(TargetDir) variable, you can grab the dll or exe file from the directory that it was just built to.  Otherwise you'll need to know whether it went into the bin\Debug or bin\Release folder.  Using the $(ProjectName) variable guarantees that the filename is correct, since this will be the name of the exe.  However, if you want to copy other dlls or files, you'll need to specify them directly.

What is a good use for the BeforeBuild?

Now you might be contemplating why you would want to put anything in the BeforeBuild section.  You might have a project where you generate some C# source code based on data in a database.  One example is an ORM generator such as the NHibernate code generator that I developed (see some of my previous blog posts).  In such a case, you might want the convenience of regenerating your ORM definition just before building a particular application.

You could also have a project that uses a language generator such as YACC or ANTLR.  The language generator would be called before the build, the resulting C# source would be copied into your project, and the build would then proceed to compile your program.

Troubleshooting BeforeBuild and AfterBuild

Create a new console project and edit the project file.  Change the AfterBuild target to this:

<Target Name="AfterBuild">
  <Copy SourceFiles="no.exe" DestinationFolder="." ContinueOnError="true" />
</Target>

Don’t forget to uncomment this section.  Now open your project and select Build -> Rebuild Solution.  You should see a warning:

As you can see, there is a warning that the file was not found, and it tells you the file that the problem occurred in, along with the line number and column.  If you remove the "ContinueOnError" attribute, you'll get an error message instead of a warning.  Your build will stop at that point, although the normal build has already completed because this occurs after your build.  If you get an error in the BeforeBuild section, your build will not occur.

List of all Macros Used by BeforeBuild and AfterBuild

You can go to Microsoft's site to get a list of all the macros that are available:

Macros for Build Commands and Properties 

 

Unit Testing Methods that Call Stored Procedures

Summary

One of the biggest problems I used to find in unit testing is that most of the products I work on consist mostly of methods that access a database.  Most objects and methods work with data, and it's very tedious to try to mock or fake the database component.  If you've followed this blog for some time you'll know that I have an easy and effective way around this problem.  I have a framework that I developed that enables me to load a SQLLocalDB instance and replicate the tables, views, stored procedures and even constraints before a unit test is executed.  In this blog post, I'm going to demonstrate how to unit test a method that contains a call to a stored procedure.  I'm going to show two different methods you can use to perform this operation.

What Are We Testing?  The Full Database Method.

First, we need to determine what we’re really testing.  If you go to my GitHub account here, and download the NHibernate mapping generator, you can use this to extract all the table, view and stored procedure definitions from a selected database (you can ignore or delete the NHibernate ORM stuff if you’re not going to use NHibernate).  This solution contains a project called “Helpers” that contains a framework for setting up a SQLLocalDB instance and can create a database using the source you generate when you run the NHibernate generator tool.  In order to use this framework, you can then create unit tests that stuff information into the SQLLocalDB database instance, execute your method, then read the results from the database (or test results returned from the method under test).

Let's create an example…

First, open up SQL Server and create a table and a stored procedure.

CREATE TABLE [dbo].[person](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [first] [varchar](50) NULL,
    [last] [varchar](50) NULL,
 CONSTRAINT [PK_person] PRIMARY KEY CLUSTERED
(
    [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

CREATE PROCEDURE [dbo].[ReadPersonNames]
AS
BEGIN

    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.

    SET NOCOUNT ON;

    -- Insert statements for procedure here
    SELECT first, last FROM person
END
GO

Insert some data into your person table:

INSERT INTO person (first,last) VALUES ('Mary','Jane')
INSERT INTO person (first,last) VALUES ('Joe','Cool')

Now, download the NHibernate Mapping Generator program from GitHub (click here).  There are several projects inside the solution.  Make sure that the project named "NHibernateMappingGenerator" is set as the startup project.  Then hit F5 (or click the run/start button).  In the drop-down list box, select your database instance (MS SQL Server).  A list of databases inside that server will display in the check list box below the drop-down.  Check the database where you created the table and stored procedure above, then click the generate button.  When the generate button becomes disabled, you're done.  Exit from the program.  You might receive a "File Modification Detected" error; this is normal.  Just click "Reload" and you'll see your database added to the NHibernateDataLayer project.

Create a new console project.  Add an existing project to your solution and navigate to the helpers project (or you can copy the helpers from the NHibernate solution into your solution).  Make sure you add a reference to the helpers project from your main project and add a “using Helpers;” at the top.

Next, add a unit test project to your solution.  In your unit test project you will need to reference the Helpers and your console project.  Also, include the using statements at the top.

Add this code to your main program:

class Program
{
    static void Main(string[] args)
    {
        var personSelector = new PersonSelector();
        DataSet data = personSelector.ReadNames();

        foreach (DataRow row in data.Tables[0].Rows)
        {
            Console.WriteLine(row["first"].ToString() + " " +
                row["last"].ToString());
        }

        Console.ReadKey();
    }
}

public class PersonSelector
{
    public DataSet ReadNames()
    {
        using (var db = new ADODatabaseContext(
            "Server=YourServer;Initial Catalog=sampledata;Integrated Security=True"))
        {
            return db.StoredProcedureSelect("sampledata.dbo.ReadPersonNames");
        }
    }
}

As you can see, I'm just using ADO to read the records from the person table using the stored procedure read method defined in the Helpers project.  There are no parameters to this stored procedure; it just reads the first and last names of the entire table.  You should be able to execute this tiny program and see the first and last names of each record listed (Mary Jane and Joe Cool if you used my insert statements earlier).  If your database is not named "sampledata", then you'll need to change that in the code above.

Now we’ll need to add a database context project to your console application.  This will contain the cs files created by the NHibernate mapping generator.  First, create a new class library project named “Data”.  Inside the NHibernateDataLayer project you’ll find a subdirectory with the name of your database, inside that will be a folder named “TableGeneratorCode”, and it will contain a cs file named “sampledataTableGeneratorCode”.  Copy that file into your “Data” project.  You’ll have to include the helpers project and add your using statement.

Next, you’ll need to copy over the stored procedure generator code.  This will be in the NHibernateDataLayer project, in the folder named after your database, in a subfolder named “StoredProcedures” and there will be a cs file for each stored procedure that was found in your database.  Grab the file for the one stored procedure named “ReadPersonNames.cs”.  Copy that file into your “Data” project.  Don’t forget to include the Helpers using clause.

Now for the unit tests.  You’ll need to create a class that will initialize the SQLLocalDB.  I usually call this source “AssemblyCommon.cs”.  You’ll need an assembly cleanup and initialize method.  Before you start, add a reference to the Helpers and Data projects, then add these using statements:

using Helpers;
using Data;


Now add in the code below:

[TestClass]
public class AssemblyCommon
{
    [AssemblyInitialize]
    public static void ClassStartInitialize(TestContext testContext)
    {
        UnitTestHelpers.Start("storedprocinstance", new string[] { Data.sampledataTables.DatabaseName });

        // create tables
        UnitTestHelpers.CreateAllTables(Data.sampledataTables.TableList, Data.sampledataTables.DatabaseName);
        ReadPersonNames.Instance.CreateStoredProcedure(UnitTestHelpers.InstanceName);
    }

    [AssemblyCleanup]
    public static void ClassEndCleanup()
    {
        UnitTestHelpers.End();
    }
}

The assembly initialize code will start up an instance named "storedprocinstance".  You can put a breakpoint at the end of the AssemblyInitialize method, start SQL Server Management Studio, connect to the instance named "(localdb)\storedprocinstance" and see the database that is running.  This can be handy for troubleshooting your unit test data.

Next, go to your unit test cs file and add a TestCleanup method:

[TestCleanup]
public void Cleanup()
{
    UnitTestHelpers.TruncateData();
}


This will reset your database between unit tests without tearing down the whole database and rebuilding it.

You’ll have to add a reference to System.Data and a using statement at the top (which will also need a reference to System.XML, but you won’t need a using statement for that one).  Then add the following unit test method:

[TestMethod]
public void TestMethod1()
{
    using (var db = new ADODatabaseContext("TEST", "sampledata"))
    {
        db.ExecuteNonQuery("INSERT INTO person (first,last) VALUES ('Mary','Jane')");
        db.ExecuteNonQuery("INSERT INTO person (first,last) VALUES ('Joe','Cool')");
    }

    var personSelector = new PersonSelector();
    DataSet data = personSelector.ReadNames();

    Assert.AreEqual(2, data.Tables[0].Rows.Count);
}


This unit test starts with the instance already in place.  The person table is already created and the stored procedure exists.  All you need to focus on is your unit tests.  In the sample above, I opened an ADO context into the database and inserted two records (you can use Entity Framework or any other ORM for your unit tests; treat it like a normal SQL Server).  Next, I instantiate the object and call the method that I wish to test.  Then I check the result with an assert.  In this case, I'm just checking that the two rows I inserted came back from the stored procedure.



What Are We Testing?  The Fake Database Method.

In this example, I'm going to show you how to set up the same unit test, except I will not be using the person table.  Why would I do this?  In our previous "toy" example, there is one stored procedure that reads one table and selects data.  Very simple.  In a real database or a legacy system, there will be stored procedures that touch dozens of tables containing related data, and possibly tables that are not directly referenced by the stored procedure.  You might not need to set up all that data just to test the method you want to test.  Realistically, your method only sees the results returned by your stored procedure for the parameters that it passes to that procedure.

What I'm going to do in this unit test setup is create a fake stored procedure that is named the same as the stored procedure that the method under test expects, except I'm going to fake the output with a simple select statement that uses no tables.  Then I'm going to execute the method and see if my results are as expected.  In another unit test, I will create a new fake stored procedure with the same name (by dropping the currently created stored procedure in the database and replacing it).

Use the solution created in the sample above.  First, add a directory in your unit test project and name it "TestData", then add two text files to this directory named "TestFakeSproc1.sql" and "TestFakeSproc2.sql".  You'll need to select these two files and, in the properties window, set their Build Action to "Embedded Resource":


This will ensure that the test data that you are going to create will be embedded in the unit test dll.  This will prevent any directory issues that you’ll encounter if you’re using an automated deployment product like TeamCity.

Now, add some SQL to the first sql file:

USE [sampledata]
GO

IF exists (SELECT * FROM sys.objects WHERE name = N'ReadPersonNames' AND type = N'P')
BEGIN
    DROP PROCEDURE ReadPersonNames
END
GO

CREATE PROCEDURE [dbo].[ReadPersonNames]
AS
BEGIN

    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.

    SET NOCOUNT ON;

    -- Insert statements for procedure here
    SELECT
        'Lisa' AS first,
        'Smith' AS last
    WHERE
        1 = 1
END
GO


Now add the first unit test method:

[TestMethod]
public void TestFakeSproc1()
{
    UnitTestHelpers.ExecuteSQLCode("UnitTestPersonSelector.TestData.TestFakeSproc1.sql");

    var personSelector = new PersonSelector();
    DataSet data = personSelector.ReadNames();

    Assert.AreEqual(1, data.Tables[0].Rows.Count);
    Assert.AreEqual("Lisa", data.Tables[0].Rows[0]["first"].ToString());
}


The full path name of an embedded resource is the namespace, followed by the subdirectory name, followed by the file name.  If you don't get this right, you'll get an error in the StreamReader portion of the ExecuteSQLCode() method.
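If you're not sure what the full resource name is, a quick way to check is to dump the names that are actually embedded in the test assembly.  This is just a sketch using the standard Reflection API; it isn't part of the Helpers project:

using System;
using System.IO;
using System.Reflection;

public static class EmbeddedResourceChecker
{
    // Print every embedded resource name in the unit test assembly so you can
    // confirm the exact "Namespace.Folder.File.sql" string to pass to ExecuteSQLCode().
    public static void DumpResourceNames()
    {
        foreach (string name in Assembly.GetExecutingAssembly().GetManifestResourceNames())
        {
            Console.WriteLine(name);
        }
    }

    // Read an embedded .sql file roughly the way a helper like ExecuteSQLCode() would:
    // open the manifest stream and read it into a string.
    public static string ReadEmbeddedSql(string fullResourceName)
    {
        using (Stream stream = Assembly.GetExecutingAssembly().GetManifestResourceStream(fullResourceName))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
}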

In this unit test, the stored procedure that is created will just return one row of data that is hard-coded.  No tables were read and the method will receive exactly one row of data no matter what the inputs are.  The point of this unit test is to test the method itself given the pre-planned data that will be produced by the stored procedure.  Now let’s do another fake stored procedure just to show how this works for multiple unit tests.

Inside TestFakeSproc2.sql, copy this text:

USE [sampledata]
GO

IF exists (SELECT * FROM sys.objects WHERE name = N'ReadPersonNames' AND type = N'P')
BEGIN
    DROP PROCEDURE ReadPersonNames
END
GO

CREATE PROCEDURE [dbo].[ReadPersonNames]
AS
BEGIN

    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.

    SET NOCOUNT ON;

    -- Insert statements for procedure here
    SELECT
        'Lisa' AS first,
        'Smith' AS last
    UNION
    SELECT
        'Joe' AS first,
        'Cool' AS last
    UNION
    SELECT
        'Adam' AS first,
        'West' AS last
    WHERE
        1 = 1
END
GO

Now create this unit test:

[TestMethod]
public void TestFakeSproc2()
{
    UnitTestHelpers.ExecuteSQLCode("UnitTestPersonSelector.TestData.TestFakeSproc2.sql");

    var personSelector = new PersonSelector();
    DataSet data = personSelector.ReadNames();

    Assert.AreEqual(3, data.Tables[0].Rows.Count);
}

This unit test will count three rows.  You can set up your select statement to return anything you want to simulate.


Download the Sample Code

You can go to my GitHub account and download the entire functional unit testing sample by clicking here.





 

Loading Data From an API Quickly

Summary

One of my areas of expertise is in mass data transfers using APIs, FTP, WebDAV or SOAP connections.  In this post I'm going to talk about techniques that can be used to increase the throughput of reading data from a Web API and putting it into a database.  I'm also going to discuss issues that occur and how to get around them.


Accessing Data From the Web

When data is read from a web-based API, it is typically limited to small, bite-sized chunks.  These chunks can be in the megabyte range, but typically a web connection has a time limit.  In many applications the data is divided by customer, account number or some other type of data division.  This requires your application to access the web API many times to get the full set of data or all the updates.

Let's say, for instance, you want to create a Windows service that connects to a Web API and downloads data for 1,000 customer accounts.  This process will be performed hourly to keep the customer data up to date (I'm making this example up, but it's similar to some of my experience).  If you were to create a program that loops through and accesses the API for each account, your timing would look something like this:


As you can see there is a significant amount of “wasted” time waiting for the connection to occur.  If you are downloading your data into a temp file and then inserting it into a database, you also have the time taken to save to a temp file before database insertion starts.

The best way to increase the speed of your download is to apply multiple connections in parallel.  This will cause your timing to overlap so there are some connections that are establishing and some that are downloading.  In the case of using a temp file as I just mentioned, you can also get an overlap of threads that are saving to temp files while one or more is inserting into a database. The timing resembles something like this:


If your process was performing nothing but inserts into a database, you would not gain much by parallelizing it.  This is due to the fact that your database can only insert data so fast, and that would be your bottleneck.  With an API connection, the bottleneck could be your network bandwidth, the speed of the remote web server, your local server running the service or the insertion speed of your database.  Parallelizing the process will fix your problem with the connection wait time, and that may be the only time you can reclaim (assuming your network bandwidth is maxed out while you are downloading data).  To find out what your optimum speed is, you'll have to run a few tests to see how fast you can download the same data using varying numbers of threads.


Here’s what you’ll typically see:

If you do a quick and dirty Excel chart, it should look like this:

As you can see, the speed will plateau at some point, and you'll need to decide how many parallel processes are enough.  Typically, the choice of how many parallel processes will be decided by how much trouble you're having with getting the data reliably.  Most of the time, if this involves a database, it'll be the frequency of deadlocks that you run into.
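To experiment with the thread count, a capped Parallel.ForEach is usually enough.  The sketch below is not from this post's sample code; the account list and the DownloadAccountData() method are placeholders for your own API and temp file logic:

using System.Collections.Generic;
using System.Threading.Tasks;

public class AccountDownloader
{
    // Download data for each account using a limited number of parallel connections.
    public void DownloadAll(IEnumerable<int> accountNumbers, int maxParallel)
    {
        var options = new ParallelOptions { MaxDegreeOfParallelism = maxParallel };

        Parallel.ForEach(accountNumbers, options, accountNumber =>
        {
            DownloadAccountData(accountNumber);
        });
    }

    private void DownloadAccountData(int accountNumber)
    {
        // Placeholder: connect to the web API, stream the response to a temp file,
        // then insert the data into the database (or hand it off to another service).
    }
}

Run the same download with maxParallel set to 1, 2, 5, 10 and so on, and time each run; the plateau in the chart above is where adding more threads stops helping.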

Decoupling the Tasks

Another solution you can use to possibly increase the throughput of your system is to decouple the process of downloading the data into temp files from importing it into your database.  You would write two Windows services to perform the separate tasks.  The first service would connect to the web servers using parallel processes and create a directory of temp files that would be used as a buffer.  You would need some sort of cut-off point: total megabytes used, or maybe the number of files downloaded.  This service would do nothing more than read the data and create temp files.  The second service would read any temp files, insert the data into the database and delete the temp files.  You can multi-thread this application, or maybe you need to keep it sequential to avoid database contention.  This service would check the temp file directory at a set interval and, when it detects data to import, process all the files until they are completed (see the sketch below).
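A rough sketch of the second service's import loop might look like the following.  The directory path, file pattern, polling interval and ImportFileIntoDatabase() method are all placeholders, not part of any framework from this post:

using System;
using System.IO;
using System.Threading;

public class ImportService
{
    private readonly string _tempDirectory;

    public ImportService(string tempDirectory)
    {
        _tempDirectory = tempDirectory;
    }

    // Poll the temp file directory at a set interval; import and delete whatever is found.
    public void Run(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            foreach (string file in Directory.GetFiles(_tempDirectory, "*.tmp"))
            {
                ImportFileIntoDatabase(file);   // your database insert logic goes here
                File.Delete(file);              // remove the buffer file once it is imported
            }

            cancellationToken.WaitHandle.WaitOne(TimeSpan.FromSeconds(30));
        }
    }

    private void ImportFileIntoDatabase(string path)
    {
        // Placeholder: read the temp file and insert its contents into the database.
    }
}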


Database Deadlocks

Once you begin to use parallel processes, you're bound to run into deadlocks.  I typically try to increase the number of processes until I run into lots of deadlocks, then I attempt to fix the deadlock problem, followed by backing down the number of parallel processes.  Deadlocks are difficult to troubleshoot because they occur asynchronously and are difficult to repeat reliably.

To reduce deadlocks you'll have to be aware of how your data is being changed in your database.  Are you performing only inserts?  If there are deletes, then you'll need to limit the number of deletes per transaction.  SQL Server typically escalates row locks to a table lock if a single statement acquires too many locks (the documented threshold is around 5,000): How to resolve blocking problems that are caused by lock escalation in SQL Server.
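One simple way to keep delete transactions small is to delete in a loop of limited batches.  This is a generic ADO.NET sketch; the table and column names are made up for illustration:

using System.Data.SqlClient;

public static class BatchDelete
{
    // Delete rows in small batches so a single statement never takes enough
    // locks to escalate to a table lock.
    public static void DeleteInBatches(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            int rowsDeleted;
            do
            {
                using (var command = new SqlCommand(
                    "DELETE TOP (500) FROM staging_data WHERE processed = 1", connection))
                {
                    rowsDeleted = command.ExecuteNonQuery();
                }
            } while (rowsDeleted > 0);
        }
    }
}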

Here is another good resource of deadlock resolution: INF: Analyzing and Avoiding Deadlocks in SQL Server

If your deadlocks are occurring when performing updates, you might want to index the table to use rowlocks.  Adding a rowlock hint to your queries might help, but it is only a “hint” and does not force your queries to follow your desired locking plan.  Don’t assume that a table locking hint will solve the issue.  Also, the nolock hint does not apply to inserts or deletes.

One last method to get around persistent deadlocks, though I normally only add this as a safety net, not to "solve" a deadlock problem: you can write code to retry a failed operation.  If you perform a retry, you should use a back-off escalation technique.  What I mean by this is that the first error should cause your thread to wait one second and retry; if it errors again, wait two seconds; the next error would wait four seconds, and so on.  You'll also need to account for the possibility of a complete failure and stop retrying.
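A bare-bones version of that retry wrapper could look like this; the maximum attempt count is arbitrary, and the operation itself is passed in as a delegate:

using System;
using System.Threading;

public static class RetryHelper
{
    // Retry an operation with a doubling wait time (1s, 2s, 4s, ...).
    // Gives up after maxAttempts so a persistent failure doesn't retry forever.
    public static void ExecuteWithBackOff(Action operation, int maxAttempts = 5)
    {
        int waitSeconds = 1;

        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception)
            {
                if (attempt == maxAttempts)
                {
                    throw;  // complete failure, stop retrying
                }

                Thread.Sleep(TimeSpan.FromSeconds(waitSeconds));
                waitSeconds *= 2;
            }
        }
    }
}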

Communication Failure

One other aspect of this program that you’ll have to anticipate is some sort of communication failure.  The remote web servers might be down, your internet connection might fail, or possibly your database is off-line.  In each of these situations, you’ll probably need a contingency plan.  Typically, I reset the schedule and exit.  You can get sophisticated and check to see if all your threads have failed, and then exit and set your scheduled update to an hour from now, or a day from now.  This will cause your system to stop retrying over and over, wasting resources when the remote web servers might be down for the weekend.  

Logging

This is one aspect of software that seems to be overlooked a lot.  You should probably log all kinds of data points when creating a new program, then remove your debug logging after your software has been deployed for a month or two.  You're going to want to keep an eye on your logs to make sure the program continues to run.  Some problems that can occur include memory leaks or crashes that cause your service to stop completely.  You'll want to log the exception that occurs so you can fix it.  Your temp files might also fill up the hard drive of your server (watch out for this).  Make sure any temp files that you create are properly removed when they are no longer needed.

When I use temp files on a system, I typically program the temp file to be created and removed by the same object.  This object is typically wrapped in a "using" clause, and I incorporate an IDisposable pattern to make sure that the temp file is removed when the program exits.  See one of my previous posts on how this is accomplished: Writing a Windows Service with a Cancellation Feature.
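Here's a minimal sketch of that pattern (the full version in the linked post does more); the names in the usage comment are placeholders:

using System;
using System.IO;

// A temp file that is created by the constructor and removed by Dispose(),
// so a "using" block guarantees cleanup even if an exception is thrown.
public class TempFile : IDisposable
{
    public string Path { get; private set; }

    public TempFile()
    {
        Path = System.IO.Path.GetTempFileName();
    }

    public void Dispose()
    {
        if (File.Exists(Path))
        {
            File.Delete(Path);
        }
    }
}

// Usage:
// using (var temp = new TempFile())
// {
//     File.WriteAllText(temp.Path, downloadedData);   // placeholder data
//     ImportIntoDatabase(temp.Path);                   // placeholder method
// }   // temp file deleted here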

Windows Service Cancellation

One of the first things I do when I write a data updating application is identify how I can recover if the application crashes in the middle.  Can I clean up any partial downloads?  Sometimes you can set a flag in a record to indicate that the data has completed downloading.  Sometimes there is a count of how many records are expected (verify this against the actual number present).  Sometimes you can compare a date/time stamp on the data to download against what is in the database.  The point is, your program will probably crash, and it will not crash at a convenient point where all the data being downloaded is completed and closed up nice and neat.  The best place to perform a cleanup is right at the beginning when your program starts up.

After you get your crash-proof data accuracy working, you will want to make sure that your service application can cancel at any point.  You'll need to make sure you check the cancellation token inside every long processing loop.  If you are inside a Parallel.ForEach or some such loop, you'll need to perform a loopState.Break(), not a return or break.  Make sure you test stopping your service application at various points in your program.  Some points might take some time (like a large database operation), but most service stop requests should be instantaneous.  Getting this right will help you stop your process cleanly when you are testing after you deploy your system to a production environment.
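As a reminder of what that looks like in code, here's a small sketch; ProcessAccount() stands in for whatever work your loop actually performs:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public class Worker
{
    // Check the cancellation token inside the parallel loop and use loopState.Break()
    // so Parallel.ForEach stops scheduling new iterations.
    public void ProcessAccounts(IEnumerable<int> accountNumbers, CancellationToken cancellationToken)
    {
        Parallel.ForEach(accountNumbers, (accountNumber, loopState) =>
        {
            if (cancellationToken.IsCancellationRequested)
            {
                loopState.Break();
                return;   // exit this iteration; Break() handles the rest of the loop
            }

            ProcessAccount(accountNumber);  // placeholder for the real work
        });
    }

    private void ProcessAccount(int accountNumber)
    {
        // Placeholder: download, insert, etc.
    }
}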


Testing a New System

Normally a company that provides an API will have test connections available.  However, this might not be enough, because the test data is typically very small.  If you already have accounts and you are allowed to test with real data (for example, the data you are working with is not classified or restricted), then you can set up a test database with a connection to actual data.  If you are not allowed to access real data from a test system, you might need to create a test system of your own.

In most of these instances I will dummy out the connection part and put text files in a directory with data in each text file representing what would be received from an API.  Then I can test the remainder of my program without an API connection.  A more thorough test setup would involve setting up an API on your local IIS server.  You can write a quick and dirty C# API to read data from a directory and spit it out when it is requested.  Then you can test your application with data that you can generate yourself.
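If you don't want to configure IIS, a self-hosted HttpListener is enough for a throwaway test endpoint.  This sketch serves canned text files from a directory; the port, route and file naming are just examples (and HttpListener may require a URL reservation or admin rights on your machine):

using System;
using System.IO;
using System.Net;
using System.Text;

public class FakeDataApi
{
    // Serve canned responses: a request to http://localhost:8080/data/{name}
    // returns the contents of {name}.txt from the data directory.
    public static void Run(string dataDirectory)
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/data/");
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            string name = Path.GetFileName(context.Request.Url.AbsolutePath);
            string filePath = Path.Combine(dataDirectory, name + ".txt");

            string body = File.Exists(filePath) ? File.ReadAllText(filePath) : string.Empty;
            byte[] buffer = Encoding.UTF8.GetBytes(body);

            context.Response.StatusCode = File.Exists(filePath) ? 200 : 404;
            context.Response.OutputStream.Write(buffer, 0, buffer.Length);
            context.Response.Close();
        }
    }
}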

Make sure you test large data sets.  This is something that has bitten me more than once.  Normally you can figure out a pattern for your data and write a program to generate megabyte-sized files.  Then test your program to see if it can handle large data sets, and record the timing difference between smaller data sets and larger ones.  This can be used to estimate how long it will take your program to download data when it is deployed (assuming you know the file sizes of the production data).
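A generator for that kind of test data doesn't need to be fancy.  This sketch writes a comma-delimited file of roughly the requested size; the record format is invented, so match it to whatever your real API returns:

using System;
using System.IO;
using System.Text;

public static class TestDataGenerator
{
    // Write rows until the file reaches roughly the requested size in bytes.
    public static void CreateFile(string path, long targetBytes)
    {
        var random = new Random();
        long written = 0;
        int id = 1;

        using (var writer = new StreamWriter(path, false, Encoding.UTF8))
        {
            while (written < targetBytes)
            {
                string line = id + ",First" + id + ",Last" + random.Next(100000);
                writer.WriteLine(line);
                written += line.Length + 2;   // rough count, including the CRLF
                id++;
            }
        }
    }
}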


Summary

I created this blog post to give the uninitiated a sense of what they are in for when writing a service to handle large data downloads.  This post only scratches the surface of the problems that a person writing such an application will run into, but these are some of the most common issues and solutions that I have learned over the years.



 

Bulk Data Inserts

Summary

In this blog post I’m going to demonstrate how to do a bulk data insert using ADO.  I’ll be adding code to my helper set of tools so that you can use the ADO database code at my GitHub account to write clean ADO database methods.

Bulk Inserts

The C# SqlClient library contains a class called SqlBulkCopy, which is described in this blog post: Bulk Insert into SQL Server using SqlBulkCopy in C#

Normally, defining each column of a DataTable takes several lines of repetitive code like this:

DataColumn dateColumn = new DataColumn();
dateColumn.DataType = Type.GetType("System.DateTime");
dateColumn.ColumnName = "SaleDate";
prodSalesData.Columns.Add(dateColumn);

I wanted to contain this in one helper method in order to avoid a lot of repetition.  So I created this object/method to handle it:

public static class DataSetHelpers
{
    public static void AddDataColumn(this DataTable dataTable,
        string fieldName, string dataType)
    {
        DataColumn dataColumn = new DataColumn();
        dataColumn.DataType = Type.GetType(dataType);
        dataColumn.ColumnName = fieldName;
        dataTable.Columns.Add(dataColumn);
    }
}

Inside my ADODatabaseContext object, I added this method:

public void BulkInsert(DataTable dataTable)
{
    using (SqlBulkCopy s = new SqlBulkCopy(_db))
    {
        s.DestinationTableName = dataTable.TableName;
        s.BulkCopyTimeout = CommandTimeOut;

        foreach (var column in dataTable.Columns)
        {
            s.ColumnMappings.Add(column.ToString(), 
               column.ToString());
        }

        s.WriteToServer(dataTable);
    }
}

Now the bulk insert consists of two parts: the DataTable definition and populating the DataTable.  You can create a custom method in your code to populate each row with one method call; this would be specific to the table you are populating, so I'm going to leave that part expanded here (there's a sketch of such a helper after the class below).  Here's the object I used to bulk insert into a table named "Store":

public class CreateStoreRecords
{
    private DataTable StoreData;


    private void SetupDataTable()
    {
        StoreData = new DataTable("Store");

        StoreData.AddDataColumn("Name", "System.String");
        StoreData.AddDataColumn("Address", "System.String");
        StoreData.AddDataColumn("City", "System.String");
        StoreData.AddDataColumn("State", "System.String");
        StoreData.AddDataColumn("Zip", "System.String");
    }

    public void InsertRecords()
    {
        SetupDataTable();

        using (var db = new ADODatabaseContext("Server=sqlserver;Integrated Security=True"))
        {
            DataRow StoreList = StoreData.NewRow();
            StoreList["Name"] = "Toys R Us";
            StoreList["Address"] = "123 Main St";
            StoreList["City"] = "Chicago";
            StoreList["State"] = "IL";
            StoreList["Zip"] = "12345";
            StoreData.Rows.Add(StoreList);

            StoreList = StoreData.NewRow();
            StoreList["Name"] = "Target";
            StoreList["Address"] = "5th Ave";
            StoreList["City"] = "New York";
            StoreList["State"] = "NY";
            StoreList["Zip"] = "33333";
            StoreData.Rows.Add(StoreList);

            db.BulkInsert(StoreData);
        }
    }
}
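As mentioned above, the row population can be wrapped in a table-specific helper so each record is one method call.  Here is a possible sketch (this extension method is not part of the Helpers project):

using System.Data;

public static class StoreDataTableExtensions
{
    // One call per row keeps InsertRecords() short; specific to the Store table.
    public static void AddStoreRow(this DataTable storeData, string name,
        string address, string city, string state, string zip)
    {
        DataRow row = storeData.NewRow();
        row["Name"] = name;
        row["Address"] = address;
        row["City"] = city;
        row["State"] = state;
        row["Zip"] = zip;
        storeData.Rows.Add(row);
    }
}

With that in place, the two stores above become:

StoreData.AddStoreRow("Toys R Us", "123 Main St", "Chicago", "IL", "12345");
StoreData.AddStoreRow("Target", "5th Ave", "New York", "NY", "33333");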

If you want to create the store table in MS SQL Server to match my sample code, then you can use this code in a query window to generate it:

CREATE TABLE [dbo].[Store](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [Name] [varchar](50) NULL,
    [Address] [varchar](50) NULL,
    [City] [varchar](50) NULL,
    [State] [varchar](50) NULL,
    [Zip] [varchar](50) NULL,
 CONSTRAINT [PK_Store] PRIMARY KEY CLUSTERED
(
    [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

You’ll have to replace the connection string to match your server and access rights (assuming you’re not using integrated security).

Where to Get The Code

You can download the sample code from above at my GitHub account by clicking here.