Entity Framework Unit Testing with SQLLocalDB

Summary

I’ve published a few blog posts on the usage of SQLLocalDB with NHibernate.  Now I’m going to demonstrate how easy it is to use with EF.  In fact, SQLLocalDB can be used with ADO queries and LINQ-To-SQL.  If you’re dealing with legacy code and your methods use a lot of database access operations, it would be beneficial to get your code in a test harness before converting to your target ORM.


Modifying the EF Context

The first thing I’m going to do is create a helper to deal with the Entity Framework context.  If you generate a new Entity Framework database, you’ll get a context that is configured using the App.config file.  This is OK, if your application is always going to use the same database.  If you need to switch databases, and in this case, we’ll need to use a different database when under unit testing, then you’ll need to control the context parameters.  Here’s the helper class that I wrote to handle multiple data sources:


using System.Data.Entity.Core.EntityClient;
using System.Data.SqlClient;

namespace Helpers
{
    public static class EFContextHelper
    {
        public static string ConnectionString(string connectionName, string databaseName, string modelName, string userName, string password)
        {
            bool integratedSecurity = (userName == "");

            if (UnitTestHelpers.IsInUnitTest)
            {
                connectionName = @"(localdb)\" +
                    UnitTestHelpers.InstanceName;
                integratedSecurity = true;
            }

            return new EntityConnectionStringBuilder
            {
                Metadata = "res://*/" + modelName + ".csdl|res://*/" + modelName + ".ssdl|res://*/" + modelName + ".msl",
                Provider = "System.Data.SqlClient",
                ProviderConnectionString = new SqlConnectionStringBuilder
                {
                    MultipleActiveResultSets = true,
                    InitialCatalog = databaseName,
                    DataSource = connectionName,
                    IntegratedSecurity = integratedSecurity,
                    UserID = userName,
                    Password = password
                }.ConnectionString
            }.ConnectionString;
        }
    }
}


The helper class builds the connection string and replaces the one that is defined in the App.config file.  There is a check to see if the calling assembly is a unit test assembly; I use this to override the server name and point to the SQLLocalDB instance defined in UnitTestHelpers.InstanceName.  This ensures that if an EF context is created inside a method of your program, it will connect to the unit test database whenever a unit test calls your program.
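The UnitTestHelpers class itself isn’t shown in this post, but the IsInUnitTest property can be implemented by scanning the loaded assemblies for a known test framework.  Here’s a minimal sketch of that idea (the framework name being checked and the InstanceName value are my own assumptions, not the actual helper code):

```csharp
using System;
using System.Linq;

public static class UnitTestHelpers
{
    // Hypothetical SQLLocalDB instance name; the real value lives in the helper project.
    public const string InstanceName = "UnitTestInstance";

    // True when a known unit test framework assembly is loaded into the
    // current AppDomain, which indicates the code is running under a test runner.
    public static bool IsInUnitTest
    {
        get
        {
            return AppDomain.CurrentDomain.GetAssemblies()
                .Any(a => a.FullName.StartsWith(
                    "Microsoft.VisualStudio.QualityTools.UnitTestFramework",
                    StringComparison.OrdinalIgnoreCase));
        }
    }
}
```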

Once you have created the helper class, you’ll need to use it inside your context code.  To do that, you’ll need to add the code into the Context.tt file (it should be named something like “Model1.Context.tt”).  This file (also known as a T4 template) is used to generate the Model1.Context.cs file.  Modify the code to look something like this:

#>
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using Helpers;
<#
if (container.FunctionImports.Any())
{
#>
using System.Data.Objects;
using System.Data.Objects.DataClasses;
using System.Linq;
<#
}
#>

<#=Accessibility.ForType(container)#> partial class <#=code.Escape(container)#> : DbContext
{
    public <#=code.Escape(container)#>()
        : base(EFContextHelper.ConnectionString("SQLSERVERNAME", "sampledata", "Model1", "", ""))
    {

First, you’ll need to add the “using Helpers;” line, then pass the EFContextHelper.ConnectionString method as the base-class initialization parameter.
Change SQLSERVERNAME to match your actual MS SQL Server name.  Change sampledata to match your actual database name.  Change “Model1” to match the name of the .edmx file.  Finally, you can use a user name and password if you’re not using integrated security on MS SQL Server.  In my example, I’m using integrated security.
Once you change the .tt file, save it and check the matching .cs file to make sure there are no syntax errors.


Using the Mapping Generator

Currently the mapping generator is used to generate NHibernate database mappings.  If you don’t want the extra code, you can download my project and strip out all the Fluent NHibernate code (or just leave it for now).  I have added a new section that generates code to create tables matching the tables defined in your database.  But first you must run the generator to create the table definitions.

Go to my GitHub account and download the NHibernateMappingGenerator solution.  You can find it here.  Make sure that the NHibernateMappingGenerator project (which is a Windows Forms project) is set as the startup project.  Then run the program and select your SQL Server and database:


If your server doesn’t use integrated security (i.e. you need a user name and password to access your SQL Server), you’ll have to do some open heart surgery (sorry, I haven’t added that feature to this program yet).  Once you click the “Generate” button, this program  will generate code that can create tables, stored procedures, views and constraints in your SQLLocalDB database instance for unit testing purposes.  You should note that your test database will match your production database from the time that you generated this code.  If you make database changes to your production system, then you’ll need to regenerate the table definitions.


The Unit Test

I’ll demonstrate one simple unit test.  This test isn’t particularly useful by itself, except that it proves you can insert data into SQLLocalDB and query it back using Entity Framework without corrupting your production data.  Here’s the code:

[TestClass]
public class EntityFrameworkTests
{
  [TestCleanup]
  public void Cleanup()
  {
    UnitTestHelpers.TruncateData();
  }

  [TestMethod]
  public void TestEntityFrameworkContext()
  {
    using (var db = new sampledataEntities())
    {
      Store store = new Store
      {
        Name = "Test Store",
      };
 
      db.Stores.Add(store);
      db.SaveChanges();

      var resultQuery = (from s in db.Stores 
                         select s).ToList();

      Assert.AreEqual(1, resultQuery.Count());
      Assert.AreEqual("Test Store", resultQuery[0].Name);
    }
  }
}

As you can see, using EF from a unit test is just like using it inside your application.  Just open a context, insert data, query data, done.  The magic is occurring inside the context itself.  If you put a break-point inside the EFContextHelper.ConnectionString method, you can see where the connection string is built to point to the SQLLocalDB instance instead of your production database instance (you should always test this to verify that it is functioning correctly).
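That verification can also be automated.  Here’s a small sanity test along those lines (the server name PRODSERVER is a placeholder; the assertion relies on the fact that the helper substitutes a “(localdb)” data source when running under a test framework):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Helpers;

[TestClass]
public class ConnectionStringTests
{
    [TestMethod]
    public void ConnectionStringPointsAtLocalDb()
    {
        // Running under MSTest, so the helper should ignore the production
        // server name and target the SQLLocalDB unit test instance instead.
        string connectionString = EFContextHelper.ConnectionString(
            "PRODSERVER", "sampledata", "Model1", "", "");

        StringAssert.Contains(connectionString.ToLower(), "(localdb)");
    }
}
```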


Generating the Tables

One detail that is buried in this sample code that I haven’t covered yet is how the tables are generated inside the SQLLocalDB instance.  In previous blog posts I always used the Fluent NHibernate database generate feature to generate all the tables of the database in the unit test assembly initialize method.  If you open the AssemblyCommon.cs file (inside the SampleUnitTests project) and look at the ClassStartInitialize method, you’ll see where I call another UnitTestHelpers method called CreateAllTables.  Dig into this method and you’ll see ADO code that creates MS SQL tables for a specified database.  This makes the entire unit test helper project agnostic to the type of database you might want to test.  
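The CreateAllTables code itself lives in the helper project, but the ADO portion boils down to executing the generated CREATE TABLE scripts against the SQLLocalDB instance.  A rough sketch of the idea (the class name, method signature, instance name and connection string here are assumptions, not the actual helper code):

```csharp
using System.Data.SqlClient;

public static class TableCreator
{
    // Executes each generated CREATE TABLE / CREATE VIEW script against
    // the SQLLocalDB unit test database.
    public static void CreateAllTables(string[] createScripts, string databaseName)
    {
        string connectionString =
            @"Data Source=(localdb)\UnitTestInstance;" +
            "Initial Catalog=" + databaseName + ";" +
            "Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            foreach (string script in createScripts)
            {
                using (var command = new SqlCommand(script, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}
```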


Using This Technique for Other Database Types

The unit test helpers can currently handle EF as well as Fluent NHibernate, but technically, any ORM or direct data access can use this set of helpers.  Make sure to create a database connection helper for the database type you will use in your project (or already use in your project).  Then you can apply the same UnitTestHelpers.IsInUnitTest check to set the instance to the SQLLocalDB instance.  Finally, you can add unit tests for each method in your application, no matter which database context it is using.  

As I mentioned in the summary, it would ease your transition to another ORM, assuming that is your goal.  Your first step would be to create unit tests around your current objects and methods, using your current database technique, then convert to your target ORM and make sure the unit tests pass.


Getting the Code

You can download the entire solution at my GitHub account: https://github.com/fdecaire/NHibernateMappingGenerator.




 

GDI+ Graphics: Adjusting Gamma

Summary

I’m going to do something a little different here.  I’m going to start a series of posts about the GDI+ graphics interface (in C#).  If you have followed my blog for a while, you already know that I have a demonstration game I wrote called Battle Field One.  The last version I blogged about uses SVG graphics to render the output in a browser.  In a future post I’m going to show how to use GDI+ graphics and replace the web interface with a standard Windows Forms interface.  So I’ll cover a few GDI+ techniques along the way and then I’ll incorporate these techniques in the game changes coming up.

GDI+

First, I need to explain GDI+.  GDI+ is the newer version of the two-dimensional graphics engine that can be accessed directly from the form’s Paint event (GDI stands for Graphics Device Interface).  GDI is not fast.  For fast, we would need to use DirectX (I’ll blog about DirectX and Direct3D later).  Since Battle Field One is a turn-based game, we don’t need fast.  What we need is simple.  That’s why I’m going to use GDI+.

To setup a quick and dirty example, you can create a new Visual Studio project of type “Windows Form Application”.  Go to the code part of your form.  You’ll need to add the following “usings” at the top of your initial form:

using System.Drawing;
using System.Drawing.Imaging;


Now switch back to your form design and switch to the events tab of your properties window:


Scroll down until you find the “Paint” event and double-click on it.  A new event called “Form1_Paint” will be created in your code.  All of your GDI+ code will be entered into this event for now.

Now let’s put some png files in a folder of the project.  Create a new folder named “img”, download and copy this image into that folder:


mountains_01.png

Now put this code inside your Form1_Paint event method:

Image Mountains = Image.FromFile("../../img/mountains_01.png");

Graphics g = pe.Graphics;
g.DrawImage(Mountains, 100, 100, Mountains.Width, Mountains.Height);


Now run your program.  You should see a hex shaped image with some mountainous terrain:


Of course, this program is just rendering the png image that is sitting in the img directory.  The reason for the double “../” in the path is because the program that executes is inside the bin/Debug folder.  So the relative path will be two directories back and then into the img directory.  If you don’t specify the path correctly, you’ll get a giant red “X” in your window:


Adjusting the Gamma

One of the capabilities that we’ll need when switching the game to use GDI+ is that we need to darken the hex terrain where the current units cannot see.  In the game the visible cells will be rendered with a gamma that is 1.0, and the non-visible cells will be rendered with a gamma of 3.0 (larger numbers are darker, less than one will be brighter).
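That rule reduces to a one-line helper in the game code.  The method name is mine; the 1.0 and 3.0 values are the ones planned above:

```csharp
// Gamma to apply to a hex cell: 1.0 renders normally, larger values render darker.
static float CellGamma(bool visibleToCurrentUnits)
{
    return visibleToCurrentUnits ? 1.0f : 3.0f;
}
```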

Now I want to demonstrate three mountain cells that represent a gamma of 1.0, 0.5 and 2.0.  In order to modify the gamma setting we’ll need to use the ImageAttributes object:

ImageAttributes imageAttributes = new ImageAttributes();
imageAttributes.SetGamma(1.0f, ColorAdjustType.Bitmap);


The DrawImage object can accept the ImageAttributes as a parameter, but we also need to add a few more parameters.  So I’m going to show the code here, and then I’ll discuss it:


Image Mountains = Image.FromFile("../../img/mountains_01.png");

Graphics g = pe.Graphics;

ImageAttributes imageAttributes = new ImageAttributes();
 
// normal gamma
imageAttributes.SetGamma(1.0f, ColorAdjustType.Bitmap);
g.DrawImage(Mountains,
        new Rectangle(100, 100, Mountains.Width, Mountains.Height),
        0,
        0,
        Mountains.Width,
        Mountains.Height,
        GraphicsUnit.Pixel,
        imageAttributes);

// lighter
imageAttributes.SetGamma(0.5f, ColorAdjustType.Bitmap);
g.DrawImage(Mountains,
        new Rectangle(200, 100, Mountains.Width, Mountains.Height),
        0,
        0,
        Mountains.Width,
        Mountains.Height,
        GraphicsUnit.Pixel,
        imageAttributes);

// darker
imageAttributes.SetGamma(2.0f, ColorAdjustType.Bitmap);
g.DrawImage(Mountains,
        new Rectangle(300, 100, Mountains.Width, Mountains.Height),
        0,
        0,
        Mountains.Width,
        Mountains.Height,
        GraphicsUnit.Pixel,
        imageAttributes);

This is all the code you’ll need for this demo to work.  When you specify a rectangle for the second parameter, its x,y coordinates determine where the image will be plotted.  Leave the source x,y parameters of DrawImage set to zero.  Use the image.Width and image.Height if you want to maintain the scale of the image itself.  Otherwise you can adjust these parameters to enlarge or shrink the image.
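For example, to render the same hex at double size, scale only the destination rectangle and leave the source rectangle at the full image dimensions (the coordinates below are arbitrary):

```csharp
// Draw the mountain hex at double size at a new screen position.
// The source rectangle stays (0, 0, width, height); only the destination scales.
g.DrawImage(Mountains,
        new Rectangle(100, 300, Mountains.Width * 2, Mountains.Height * 2),
        0,
        0,
        Mountains.Width,
        Mountains.Height,
        GraphicsUnit.Pixel,
        imageAttributes);
```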

If you run the sample, you’ll see that there is a normal mountain hex on the left, a lighter hex to the right of it and then a darker hex to the right of that:

Download the Sample Code

You can go here to download the zip file for this demo application: 

GDIPlusGammaDemo.zip




 

Returning XML or JSON from a Web API

Summary

In my last blog post I demonstrated how to setup a Web API to request data in JSON format.  Now I’m going to show how to setup your API so that a request can be made to ask for XML as well as JSON return data.  To keep this simple, I’m going to refactor the code from the previous blog post to create a new retriever method that will set the Accept parameter to xml instead of json.

Changes to the Retriever

I copied the retriever method from my previous code and created a retriever that asks for XML data:

public void XMLRetriever()
{
    var xmlSerializer = new XmlSerializer(typeof(ApiResponse));

    var apiRequest = new ApiRequest
    {
        StoreId = 1,
        ProductId = new List<int> { 2, 3, 4 }
    };

    var request = (HttpWebRequest)WebRequest.Create(apiURLLocation);
    request.ContentType = "application/json; charset=utf-8";
    request.Accept = "text/xml; charset=utf-8";
    request.Method = "POST";
    request.Headers.Add(HttpRequestHeader.Authorization, apiAuthorization);
    request.UserAgent = "ApiRequest";

    // Writes the ApiRequest JSON object to the request
    using (var streamWriter = new StreamWriter(request.GetRequestStream()))
    {
        streamWriter.Write(JsonConvert.SerializeObject(apiRequest));
        streamWriter.Flush();
    }

    var httpResponse = (HttpWebResponse)request.GetResponse();

    // receives xml data and deserializes it
    using (var streamreader = new StreamReader(httpResponse.GetResponseStream()))
    {
        var storeInventory = (ApiResponse)xmlSerializer.Deserialize(streamreader);
    }
}

There are two major changes in this code: First, I changed the “Accept” header to ask for XML.  Second, I recoded the return data handling to deserialize it as XML instead of JSON.  I left the API request itself in JSON.


Changes to the API Application

I altered the API controller to detect which encoding is being requested.  If Accept contains the string “json” then the data is serialized using json.  If Accept contains the string “xml” then the data is serialized using xml.  Otherwise, an error is returned.

Here is the new code for the API controller:

var encoding = ControllerContext.Request.Headers.Accept.ToString();
if (encoding.IndexOf("json", StringComparison.OrdinalIgnoreCase) > -1)
{
    // convert the data into json
    var jsonData = JsonConvert.SerializeObject(apiResponse);

    var resp = new HttpResponseMessage();
    resp.Content = new StringContent(jsonData, Encoding.UTF8,
        "application/json");
    return resp;
}
else if (encoding.IndexOf("xml", StringComparison.OrdinalIgnoreCase) > -1)
{
    // convert the data into xml
    var xmlSerializer = new XmlSerializer(typeof(ApiResponse));

    using (StringWriter writer = new StringWriter())
    {
        xmlSerializer.Serialize(writer, apiResponse);

        var resp = new HttpResponseMessage();
        resp.Content = new StringContent(writer.ToString(),
             Encoding.UTF8, "application/xml");
        return resp;
    }
}
else
{
    return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
           "Only JSON and XML formats accepted");
}


Compile the API application and then start up Fiddler.  Then run the retriever.  In Fiddler, you should see something like this (you need to change your bottom right sub-tab to XML):




Download the Source


You can go to my GitHub account and download the source here: https://github.com/fdecaire/WebApiDemoJsonXml

 

 

Web API and API Data Retriever

Summary

In this blog post I’m going to show how to create a Web API in Visual Studio.  Then I’m going to show how to setup IIS 7 to run the API on your PC (or a server).  I’m also going to create a retriever and show how to connect to the Web API and read data.  I’ll be using JSON instead of XML so I’ll also show what tricks you’ll need to know in order to implement your interface correctly.  Finally, I’ll demonstrate how to troubleshoot your API using fiddler.


This is a very long post.  I had toyed with the idea of breaking this into multiple parts, but this subject turned out to be too difficult to break up in a clean manner.  If you are having trouble getting this to work properly, or you think I missed something, leave a message in the comments and I’ll correct or add to this article to make it more robust.

Web API Retriever

I’m going to build the retriever first and show how this can be tested without an API.  Then I’ll cover the API and how to incorporate IIS 7 into the whole process.  I’ll be designing the API to use the POST method.  The reason I want to use a POST instead of GET is that I want to be able to pass a lot of variables to request information from the API.  My demo will be a simulation of a chain of stores that consolidate their inventory data into a central location.  Headquarters or an on-line store application (i.e. website) can send a request to this API to find out what inventory a particular store has on hand.  

The retriever will be a simple console application that will use an object to represent the request data.  This object will be serialized into a JSON packet of information posted to the API.  The request object will look like this:

public class ApiRequest
{
  public int StoreId { get; set; }
  public List<int> ProductId { get; set; }
}


This same object will be used in the API to de-serialize the JSON data.  We can put the store id in this packet as well as a list of product ids.  The data received back from the API will be a list of inventory records using the following two objects:

public class InventoryRecord
{
  public int ProductId { get; set; }
  public string Name { get; set; }
  public int Quantity { get; set; }
}


public class ApiResponse
{
  public List<InventoryRecord> Records = new List<InventoryRecord>();
}

As you can see, we will receive one record per product.  Each record will contain the product id, the name and the quantity at that store.  I’m going to dummy out the data in the API to keep this whole project as simple as possible.  Keep in mind that normally this information would be queried from a large database of inventory.  Here’s the entire retriever:

public class WebApiRetriever
{
  private readonly string apiURLLocation =
             ConfigurationManager.AppSettings["ApiURLLocation"];
  private readonly string apiAuthorization =
          ConfigurationManager.AppSettings["ApiCredential"];

  public void Retriever()
  {
    var serializer = new JsonSerializer();

    var apiRequest = new ApiRequest
    {
      StoreId = 1,
      ProductId = new List<int> { 2, 3, 4 }
    };

    var request =
        (HttpWebRequest)WebRequest.Create(apiURLLocation);
    request.ContentType = "application/json; charset=utf-8";
    request.Accept = "application/json";
    request.Method = "POST";
    request.Headers.Add(HttpRequestHeader.Authorization, apiAuthorization);
    request.UserAgent = "ApiRequest";

    // Writes the ApiRequest JSON object to the request
    using (var streamWriter = new StreamWriter(request.GetRequestStream()))
    {
      streamWriter.Write(JsonConvert.SerializeObject(apiRequest));
      streamWriter.Flush();
    }

    var httpResponse = (HttpWebResponse)request.GetResponse();

    using (var streamreader = new StreamReader(httpResponse.GetResponseStream()))
    using (var reader = new JsonTextReader(streamreader))
    {
      var storeInventory = serializer.Deserialize<ApiResponse>(reader);
    }
  }
}


Some of the code shown is optional.  I put the URL location and credentials into variables that are read from the app.config file.  You can add this to your app.config file:


<appSettings>
    <add key="ApiURLLocation" value="http://www.franksdomain.com/WebApiDemo/api/MyApi/"/>
    <add key="ApiCredential" value="ABCD"/>
</appSettings>

The URL will need to be changed to match the URL that you setup on your IIS server (later in this blog post).  For now you can setup a redirect in your “hosts” file to match the domain in the app setting shown above.

Navigate to C:\Windows\System32\drivers\etc and edit the “hosts” file with a text editor.  You’ll see some sample text showing the format of an entry.  Create a domain name on a new line like this:

127.0.0.1        www.franksdomain.com

You can make up your own URL, and you can even use a URL that is real (technically, franksdomain.com is a real URL and it’s not mine).  If you use a real URL, your computer will no longer be able to access that URL on the internet; it will redirect that URL to your IIS server (so be aware of this problem and try to avoid using real URLs).  The IP address 127.0.0.1 is a pointer to your local machine.  So we’re telling your browser to override www.franksdomain.com and redirect the request to the local machine.

Now you should be able to test up to the request.GetResponse() line of code.  That’s where the retriever will bomb.  Before we do this, we need to download and install Fiddler (assuming you don’t already have Fiddler installed).  Click here to download Fiddler and install it.  Now start up Fiddler and you’ll see something like this:


Now run the retriever application until it bombs.  Fiddler will have one line in the left pane that is in red.  Click on it.  In the right pane, click on the “Inspectors” tab and then click on “JSON” sub-tab.  You should see something like this:


In the right side top pane, you’ll see your JSON data.  If your data is shown as a tree-view control then it is formatted correctly as JSON and not just text.  If you serialized your object incorrectly, you will normally see an empty box.  Notice that the store is set to “1” and there are three product ids being requested.

The Web API

The web API will be an MVC 4 application with one ApiController.  The API controller will use a POST method. 

So let’s start with a class that defines what information can be posted to this API.  This is the exact same class used in the retriever:

public class ApiRequest
{
  public int StoreId { get; set; }
  public List<int> ProductId { get; set; }
}


Make sure this class is not decorated with the [Serializable] attribute.  We’re going to use a [FromBody] attribute on the API and the object variables will not bind if this object is setup as serializable (I discovered this fact the hard way).  As you can see by the list definition we can pass a long list of product ids for one store at a time.  We expect to receive a list of inventory back from the API.


The response back to the calling application will be a list of inventory records containing the product id (which will be the same number we passed in the request), the name of the product and the quantity.  These are also the same objects used in the retriever:


public class InventoryRecord
{
  public int ProductId { get; set; }
  public string Name { get; set; }
  public int Quantity { get; set; }
}


public class ApiResponse
{
  public List<InventoryRecord> Records = new List<InventoryRecord>();
}



The entire API controller in the MVC application looks like this:


public class MyApiController : ApiController
{
  [HttpPost]
  [ActionName("GetInventory")]
  public HttpResponseMessage GetInventory([FromBody] ApiRequest request)
  {
    if (request == null)
    {
      return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
             "Request was null");
    }

    // check authentication
    var auth = ControllerContext.Request.Headers.Authorization;

    // simple demonstration of user rights checking.
    if (auth == null || auth.Scheme != "ABCD")
    {
      return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
             "Invalid Credentials");
    }

    ApiResponse apiResponse = new ApiResponse();

    // read data from a database
    apiResponse.Records = DummyDataRetriever.ReadData(request.ProductId);

    // convert the data into json
    var jsonData = JsonConvert.SerializeObject(apiResponse);

    var resp = new HttpResponseMessage();
    resp.Content = new StringContent(jsonData, Encoding.UTF8,
                   "application/json");
    return resp;
  }
}


The controller is a POST-method controller that looks for an ApiRequest JSON object in the body of the posted information.  The first thing we want to check for is a null request.  That seems to occur most often when a bot crawls a website and hits an API.  If we’re lucky, bots will not find their way in, but I always code for the worst-case situation.  The next part checks the header for the authorization.  I didn’t cover this in the retriever, but I stuffed a string of letters in the authorization variable of the header.  This was set up to be “ABCD”, but in a real application you’ll need to perform a database call to a table containing GUIDs.  These GUIDs can be assigned to another party to gain access to your API.  In this example the shopping website application will have its own GUID, and each store will have a GUID that can be set up to restrict what each retriever can access.  For instance, the website GUID might have full access to every store to look up information using this API, but a store might only have access to its own information, etc.  I’m only showing the basics of this method in this article.  I’ll cover this subject more thoroughly in a future blog post.

Next in the code is the dummy lookup for the information requested.  If you navigate to my sample dummy data retriever you’ll see that I just check to see which product is requested and stuff a record in the list.  Obviously, this is the place where you’ll code a database select and insert records into the list from the database.
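Based on that description, the dummy retriever amounts to something like the following sketch (it reuses the InventoryRecord class defined earlier; the names match the Spoon/Fork/Knife sample data, while the quantities are made-up values):

```csharp
using System.Collections.Generic;

public static class DummyDataRetriever
{
    // Stands in for a real database query: returns a canned record
    // for each recognized product id.
    public static List<InventoryRecord> ReadData(List<int> productIds)
    {
        var records = new List<InventoryRecord>();

        foreach (int productId in productIds)
        {
            switch (productId)
            {
                case 2:
                    records.Add(new InventoryRecord { ProductId = 2, Name = "Spoon", Quantity = 5 });
                    break;
                case 3:
                    records.Add(new InventoryRecord { ProductId = 3, Name = "Fork", Quantity = 8 });
                    break;
                case 4:
                    records.Add(new InventoryRecord { ProductId = 4, Name = "Knife", Quantity = 3 });
                    break;
            }
        }

        return records;
    }
}
```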

Next, the list of inventory records is serialized into JSON format and attached to the content of the response message.  This is then returned.

Next, you’ll need to setup an IIS server to serve your API.

 

Setting up the IIS Server

I’m going to setup an application in IIS 7 to show how to get this API working from your PC.  Eventually, you’ll want to setup an IIS server on your destination server that you will be deploying this application to.

I’ll be using IIS 7 for this demo, if you’re using Windows 7 or earlier, you will probably need to download and install IIS 7 (the express version works too).  When you open IIS 7 you should see a treeview on the left side of the console:


Right-click on “Default Web Site” and “Add Application“.  Name it “WebApiDemo“.  You should now see this:


Now click on the “WebApiDemo” node and you’ll see a panel on the right side of the control panel named “actions“.  Click on the “Basic Settings” link.  Change the physical path to point to your project location for the API project (this is the MVC4 project that you created earlier or downloaded from my Github account).



You’ll notice that the application pool is set to “DefaultAppPool“.  We’ll need to change this to a .Net 4 pool.  So click on the “Application Pools” node and double-click on the “DefaultAppPool” line.  Change the .Net Framework Version to version 4:


At this point, if you try to access your API, you’ll discover that it doesn’t work.  That’s because we need to give IIS server access to your project directory.  So navigate to your project directory, right-click and go to properties, then click on the “Security” tab.  Click the “advanced” button and “Change Permissions” button.  Then click the “Add” button.  The first user you’ll need to add will be the IIS_IUSRS permission.  Click the “Check Names” button and it should add your machine name as the domain of this user (my machine name is “FRANK-PC”, yours will be different):


You’ll need to give IIS permissions.  I was able to make the API work with these permissions:

I would recommend doing more research before setting up your final API server.  I’m not going to cover this subject in this post.

Now click “OK”, “OK”, “OK”, etc.

Now run through those steps again to add a user named IUSR with the same permissions:




If you didn’t setup your URL in the hosts file, you’ll need to do that now. If you did this in the retriever section above, then you can skip this step.  

Navigate to C:\Windows\System32\drivers\etc and edit the “hosts” file with a text editor.  You’ll see some sample text showing the format of an entry.  Create a domain name on a new line like this:

127.0.0.1        www.franksdomain.com

Remember, you can make up your own domain name.

Now, let’s test the site.  First, we need to determine what the URL will be when we access our API.  The easy method is to go into the IIS control panel and click on the “Browse urlname.com on *80 (http)” link:

Now you can copy the URL in the browser that popped up:

http://www.franksdomain.com/WebApiDemo


This URL is actually the URL to the MVC website in your application.  In order to access your API, you’re going to have to add to this URL:

http://www.franksdomain.com/WebApiDemo/api/MyApi

How did I get the “api/MyApi“?  If you go to your MVC application and navigate to the “App_Start/WebApiConfig.cs” file, you’ll see this default setup:

config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
);



So the URL contains “api” and then the controller name.  Remember the previous code for the API controller:

public class MyApiController : ApiController

Ignore the “Controller” part of the class name and you’ll have the rest of your path.  Put this path string in the app.config file of your retriever application and compile both applications.  

Start up Fiddler.  Run your retriever.  Your retriever should run through without any errors (unless you or I missed a step).  In Fiddler, select JSON for the top right and bottom right panes.  You should see something like this:

Notice that the retriever sent a request for product ids 2,3,4 and the API returned detailed product information for those three products (Spoon, Fork and Knife).  You might need to expand the tree-views in your JSON panels to see all the information like that shown in the screen-shot above.


Setting a Breakpoint in the API Application


Your API application doesn’t really run until it is accessed through IIS.  So we need to run the retriever once, and then we can attach the debugger to the application process.  Once you run your retriever, go to your API application in Visual Studio, click on the “Debug” menu, then select “Attach to Process”.  Make sure “Show processes from all users” is checked at the bottom of the dialog box.  Then find the process named “w3wp.exe” and click on it.



Click the “Attach” button, then click “Attach” in the pop-up dialog.  You’ll notice that your application is now “running” (you can tell by the red stop button in your toolbar):


Put a breakpoint in your code.  Run the retriever program and you’ll see your API project pop up with your breakpoint waiting for you.  Now you can step through your program and inspect variables (such as the request variables that were received, hint, hint) just as if you had run the program with the F5 key.



Summary

There are a lot of little things that must occur correctly for this API to work properly.  Here are a few things to look out for:

– Your firewall could block requests, so be on the lookout for that problem.
– If you don’t have the correct URL set up in your retriever, you will not get a connection.
– The objects you use to serialize and de-serialize JSON data must match between your retriever and API applications.
– The IIS server must have the correct rights to the API application directory.
– You need to set up the correct .NET version in your IIS application pool.
– Verify that IIS is running on your PC.
– Verify your inputs on your API by attaching to the w3wp.exe process and breaking on the first line of code.
– Verify the output from your retriever using Fiddler.

Getting the Sample Projects

You can download the two sample projects from my GitHub account:  
https://github.com/fdecaire/WebApiDemo 

 

Using Mercurial with BitBucket

Summary

In this blog post I’m going to demonstrate the basics of setting up a repository in Bitbucket and then using Mercurial, or rather TortoiseHg, to push and pull software from the Bitbucket repository.


The Bitbucket Account

You can sign up for a Bitbucket account for free.  I would recommend that everyone who develops software get familiar with repositories and version control by practicing with something like Bitbucket and Mercurial.  Bitbucket allows an unlimited number of private repositories for storing your source code.  Once you sign up for an account and activate it, you can log in and see the Dashboard:


Under the “Repositories” menu is a “Create Repository” command that you can use to create a new repo:


Make sure you select Mercurial and you can optionally select the language of your overall project.  Give your repo a name and create it.  Then you’ll see it show up in your dashboard.  In this blog post I created a repo named “Demo Repository” (the Tutorial repo came with the account).

Now you’re ready to make a clone on your desktop so you can push your initial project up to the repository.  Click on the repository link in the dashboard and you’ll see this screen:


Click on the “…” and select “Clone”.  You will see a command line that you can copy (by right-clicking, or just use Ctrl-C).  Then you’ll need to go to your desktop and create a directory to put this repository in.  I created a directory named bitbucket on my E: drive.  Open a command window by typing “cmd” in the search box of your start menu:


In your command window, navigate to your directory, then paste (you’ll have to right-click and select paste in order to paste into the command window).  The clone command from Bitbucket should appear, and you can press the enter key to execute it.  Bitbucket will ask for your password.  Enter your password and press enter, and the repository will be pulled down.  Initially there are no changes, so a connection with your repo is set up but the directory is pretty much empty (there is a .hg directory for your Mercurial settings).  Now you can close your command window.


Using Mercurial or Tortoise Hg

First, let’s copy something into your local repo directory so we can push it up to Bitbucket.  For this demo I created a console application with Visual Studio 2012 and named it BitbucketDemoApp.  Create your app inside the repo directory that you created, or copy an existing application into this directory.  This application should reside here permanently.  My console application is just the startup application.  Nothing special.

Next, go to this website: http://mercurial.selenic.com/ and click the download button.  Download the application and install it.  Navigate to your bitbucket directory (the one that you created your repo inside of).  Right-click on your repo directory and select “Hg Workbench”.  My repo directory is named “demo-repository”; that’s the name that Bitbucket set up from the repo name inside the Bitbucket website.


Click on the “default” in the top pane.  You should see a bunch of pink filenames.  Those are files that are new and do not exist in the Bitbucket repository yet.  You can click the check box near the “filter text” section to select all files.  In the right pane (named “Parent:”) you can give a description.  I typed in “initial check-in”.  Click on the “commit” button.

A screen will appear that forces you to identify yourself; just type in the name you used in Bitbucket.  Then a “confirm add” dialog will show up and you can click the “add” button to add your files.  At this point, you have committed your files to your local repository.  They are not on Bitbucket yet.  In order to push your files to Bitbucket, you will use the push button:

Once you press this button, you’ll be asked to confirm.  Click “Yes” and you’ll be asked to enter your Bitbucket password.  After you enter your password your changes will be transmitted up to Bitbucket.  A green bar near the menu bar will show that your push completed successfully.
Now go to Bitbucket and you’ll see that there is one commit under the “recent activity” section:

If you click on the “1 commit” link, you’ll see a screen that shows all the files you have committed and what changes have taken place since your last commit (which is none, since this is the first commit).

Create a Branch

Your default branch should always be the stable running version until you are ready to deploy a new version of the software.  The way you control this is to create a branch for each bug fix and enhancement you make.  You can have hundreds of branches open at once.  Sometimes it gets a bit tricky to keep track of what is going on in your software, so TortoiseHg has a visual aid that shows each version committed to the repository and each branch that is currently open.  So, let’s make a tiny change and put it on a branch.

First, I went into my console application and added a line of code:

Then I saved my changes and closed Visual Studio (sometimes Visual Studio doesn’t save the solution file changes until you close it, so always close up before committing changes).

Now go back to TortoiseHg (right-click on your repo directory and select “Hg Workbench” unless you left the window open from the last use).  Sometimes you’ll have to click one of the refresh buttons to get your changes to show up in the file list window:

You should see one or two file changes.  The source code you changed and possibly the solution and/or project file.

Now type in a comment.  I typed in “Added Console Write” to my comment.  This is what my window looks like now:


Now you need to create a branch.  Click on the “Branch:default” button (it doesn’t look like a button because it’s flush with the window, just click it). Select “Open a new named branch” and give it a name:

Click the commit button.  You will get a “confirm new branch” window; click “Create Branch”.  Your changes are now committed to your local repository.  If you are working in a multi-programmer environment, someone else could have committed to the repository since you made your change.  In that case, you’ll need to pull changes.  So click on the pull button (it’s a good habit to get into):


Enter your password (I’ll show you how to get this annoying screen to go away later in this article).  Then you should get a message that the pull completed.  If there were any changes from other programmers, their branches and versions will appear in your window.  If someone changed something that you also changed, there could be a conflict that you’ll need to work out.  I will not be covering conflicts in this blog post.  This post is just to get you warmed up.


Now push your changes to Bitbucket using the push button like before:


Enter your password, and click yes to “Confirm New Branch”.  Enter password again…sigh.

Now if you go to your Bitbucket account and refresh your browser, you’ll see another “1 commit”.  This is your latest version.  Clicking on that link will show the tiny bit of code that you changed:


Now if you look at your TortoiseHg window, you’ll see three dots.  Going from bottom to top: the blue dot is your default branch at revision zero; that was your initial check-in.  The next dot up is a green dot (assuming TortoiseHg uses the same colors); that is revision 1 and it is under a different branch name (Console-write-enhancement in my example).  Then there is a dot on top that is always under the description “Working Directory”.  This will be your next revision when you commit your next change:



Switch Branches

Let’s switch back to your default branch.  In TortoiseHg, right-click on the default branch line and select “Update”.  A window will appear; just click the “Update” button.  Now you’ll see this:


Now the working directory is on the default branch.  At this point you can make changes to default and check those changes in.  But more importantly, if you go back to Visual Studio, you’ll notice that your Console.Write command is gone.  That’s because that change occurred on the other branch.


Merge Branches

Let’s merge our enhancement back into the default branch and then close the enhancement branch.  This is going to take a few steps:

1. Right-click on the “console-write-enhancement” branch and select “Merge with local”.  A window should pop up and you can click the “Next” button to continue.  Everything should run smoothly and you can click “Next” again from that window.  Then there is a “Commit Now” button that you must click, followed by a “Finish” button, in order to complete the operation.

2. Push your merged changes back up to Bitbucket.

You should end up with something like this:


Now you need to switch back to your branch.  You can actually continue to make changes and then merge your changes again at a later time, but we’re going to close this branch.  So right-click on the green dot and select “Update”.  Then click the “update” button on the window that appears.

Now you need to click on the working directory dot at the top and then click the “Branch: Console-write-enhancement” button that doesn’t look like a button.  Select “Close current branch”:

Click “OK”, then click the “commit” button.  Some red text will tell you that the head is closed.  Now click the push button.  Last, you must switch back to your default branch before you attempt any other changes in your source code.  Right-click on default and select “Update”.  If you commit a change on a closed branch, it will become un-closed.  Remember this handy little hint, in case someone accidentally closes the default branch and you need to open it back up.  Just commit another change and the branch is no longer closed.
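The whole branch / merge / close cycle from the last few sections can be sketched end-to-end in plain hg commands.  This is a self-contained sketch using the branch name from this article; the file contents and the “demo” user name are stand-ins:

```shell
# Fresh repo with an initial check-in on the default branch.
hg init repo && cd repo
echo "v1" > Program.cs
hg add -q Program.cs
hg commit -q -m "initial check-in" -u demo

# Open a named branch and commit the enhancement on it.
hg branch -q Console-write-enhancement
echo "Console.WriteLine" >> Program.cs
hg commit -q -m "Added Console Write" -u demo

# Switch back to default and merge the enhancement in.
hg update -q default
hg merge -q Console-write-enhancement
hg commit -q -m "merge enhancement into default" -u demo

# Close the enhancement branch, then return to default.
hg update -q Console-write-enhancement
hg commit -q --close-branch -m "close enhancement branch" -u demo
hg update -q default

hg branches          # the closed branch no longer appears in the list
```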


Embedding the Password


In TortoiseHg, go to the File menu and select “Settings”.  There is an “Edit” button in the upper right corner; click that to edit the mercurial.ini file.  Then add this to your file:

[extensions]
mercurial_keyring=

Click “Save” then “OK” and then you can push or pull.  The first time, you’ll be asked to enter your password, after that TortoiseHg will not ask for your password again.


Summary

This blog post is nothing more than an introduction to handling repositories with Mercurial using TortoiseHg and Bitbucket.  You can get pretty far with just this information.  I’ll continue with future blog posts about how to undo mistakes and some examples of daily operations using branching and merging.  But for now, practice with these tools and get comfortable with branching and merging.