Get ASP.Net Core Web API Up and Running Quickly

Summary

I’m going to show you how to set up your environment so you can get results from an ASP.Net Core API quickly.  I’ll also discuss ways to troubleshoot issues and get logging and troubleshooting tools working early.

ASP.Net Core Web API

Web API has been around for quite some time, but a lot of changes were made for .Net Core applications.  If you’re new to the world of developing APIs, you’ll want to get your troubleshooting tools up and running quickly.  As a seasoned API designer, I usually focus on getting my tools and logging working first.  I know that I’m going to need these tools to troubleshoot, and there is nothing worse than trying to install a logging system after writing a ton of code.

First, create a .Net Core API application using Visual Studio 2015 Community edition.  You can follow these steps:

Create a new .Net Core Web Application Project:

Next, you’ll see a screen where you can select the web application project type (select Web API):

A template project will be generated and you’ll have one controller called ValuesController.  This is a sample REST interface that you can model other controllers from.  You’ll want to set up Visual Studio so you can run the project and use break-points.  You’ll have to change your IIS Express setting in the drop-down in your menu bar:

Select the name of the project that is below IIS Express (as shown in yellow above).  This will be the same as the name of your project when you created it.

Your next task is to create a consumer that will connect to your API, send data, and receive results.  A standard .Net Console application will do; it doesn’t need to be fancy.  It’s just a throw-away application that you’ll use for testing purposes only.  You can use the same application to test your installed API just by changing the URL parameter.  Here’s how you do it:

Create a Console application:

Give it a name and hit the OK button.

Download this C# source file by clicking here.  You can create a cs file in your console application and paste this object into it (download my GitHub example by clicking here).  This web client is not strictly necessary (you can use the plain WebClient object), but this client can handle cookies, just in case you decide you need to pass a cookie for one reason or another.

Next, you can setup a url at the top of your Program.cs source:

private static string url = "http://localhost:5000";

The default address, including the port number, is always this one (the port does not rotate) unless you override it in the settings.  To change it, go into the project properties of your API project, select the Debug tab, and edit the URL.

Back to the Console application…

Create a static method for your first API consumer.  Name it GetValues to match the method you’ll call:

private static string GetValues()
{
	using (var webClient = new CookieAwareWebClient())
	{
		webClient.Headers["Accept"] = "application/json";
		webClient.Headers["Content-Type"] = "application/json";

		// DownloadData returns raw bytes; the API responds with UTF-8 encoded JSON
		var arr = webClient.DownloadData(url + "/api/values");
		return Encoding.UTF8.GetString(arr);
	}
}

Next, add a Console.WriteLine() command and a Console.ReadKey() to your Main:

static void Main(string[] args)
{
	Console.WriteLine(GetValues());

	Console.ReadKey();
}

Now switch to your API project and hit F5.  When the blank window appears, switch back to your consumer console application and hit F5.  You should see something like this:

If all of this is working, you’re off to a good start.  You can put break-points into your API code and troubleshoot inputs and outputs.  You can write your remaining consumer methods to test each API that you wrote.  In this instance, there are a total of 5 APIs that you can connect to.

Logging

Your next task is to install some logging.  Why do you need logging?  Somewhere down the line you’re going to install this API on a production system.  That system should not contain Visual Studio or any other tools that can be exploited by hackers or that drain your resources when you don’t need them.  Logging is going to be your eyes on what is happening with your API.  No matter how much testing you perform on your PC, you’re not going to simulate a fully loaded API, and there are going to be requests hitting your API that you don’t expect.

Nicholas Blumhardt has an excellent article on adding a file logger to .Net Core.  Click here to read it.  You can follow his steps to insert your log code.  I changed the directory, but used the same code in the Configure method:

loggerFactory.AddFile("c:/logs/myapp-{Date}.txt");
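For context, the wiring in Startup.cs ends up looking roughly like this (a sketch; it assumes the Serilog.Extensions.Logging.File package from Blumhardt’s article is installed):

```csharp
// Startup.cs (sketch): register the rolling file logger before MVC
public void Configure(IApplicationBuilder app, IHostingEnvironment env,
    ILoggerFactory loggerFactory)
{
    // One log file per day under c:\logs
    loggerFactory.AddFile("c:/logs/myapp-{Date}.txt");

    app.UseMvc();
}
```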

I just ran the API project and a log file appeared:

This is easier than NLog (and NLog is easy).

Before you go live, you’ll probably want to tweak the limits of the logging so you don’t fill up your hard drive on a production machine.  One bot could make for a bad day.
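If I remember the package’s options correctly, AddFile() exposes optional parameters for exactly this purpose; verify the parameter names against the version you install:

```csharp
// Sketch: cap each day's log at roughly 50 MB and keep at most 31 files
loggerFactory.AddFile("c:/logs/myapp-{Date}.txt",
    fileSizeLimitBytes: 50 * 1024 * 1024,
    retainedFileCountLimit: 31);
```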

Swashbuckle Swagger

The next thing you’re going to need is a help interface.  This interface is not just for help; it will give interface information to developers who wish to consume your APIs.  It can also be useful for troubleshooting when your system goes live.  Go to this website and follow the instructions on how to install and use Swagger.  Once you have it installed, you’ll need to perform a publish to use the help.  Right-click on the project and select “Publish”.  Click on “Custom”, give your publish profile a name, then click the “Publish” button.

Create an IIS website (open IIS, add a new website):

The Physical Path will link to your project directory in the bin/Release/PublishOutput folder.  You’ll need to make sure that your project has IUSR and IIS_IUSRS permissions (right-click on your project directory, select the Security tab, then add full rights for IUSR and do the same for IIS_IUSRS).

You’ll need to add the URL to your hosts file (in the c:\Windows\System32\drivers\etc folder):

127.0.0.1 MyDotNetWebApi.com

Next, you’ll need to adjust your application pool .Net Framework to “No Managed Code”.  Go back to IIS and select “Application Pools”:

Now if you point your browser to the URL that you created (MyDotNetWebApi.com in this example), then you might get this:

Epic fail!

OK, it’s not that bad.  Here’s how to troubleshoot this type of error.

Navigate to your PublishOutput folder and scroll all the way to the bottom.  Now edit the web.config file.  Change the stdoutLogFile attribute to “c:\logs\stdout” (and verify that stdoutLogEnabled is set to “true”).

Refresh your browser to make it trigger the error again.  Then go to your c:\logs directory and check out the error log.  If you followed the instructions on installing Swagger like I did, you might have missed the fact that this line of code:

var pathToDoc = Configuration["Swagger:Path"];

requires an entry in the appsettings.json file:

"Swagger": {
  "Path": "DotNetWebApi.xml"
}

Now go to your URL and add the following path:

www.yoururl.com/swagger/ui

Next, you might want to change the default path.  You can set the path to another path like “help”.  Just change this line of code:

app.UseSwaggerUi("help");

Now you can type in the following URL to see your API help page:

www.yoururl.com/help

To gain full use of Swagger, you’ll need to comment your APIs.  Just type three slashes and a summary comment block will appear.  This information is used by Swagger to form descriptions in the help interface.  Here’s an example of commented API code and the results:
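As a sketch, a commented action from the ValuesController template looks like this (the summary text is illustrative):

```csharp
/// <summary>
/// Returns the complete list of values.
/// </summary>
/// <returns>A JSON array of strings.</returns>
[HttpGet]
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}
```

Swagger picks up the summary and returns text from the generated XML documentation file and displays them next to each endpoint.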

Update NuGet Packages

.Net Core allows you to paste NuGet package information directly into the project.json file.  This is convenient because you don’t have to use the package manager to search for packages.  However, the versions of each package are updated at a rapid rate, so even the project template packages have updates.  You can open your Manage NuGet Packages window, click on the “Updates” tab, and update everything.

The downside of upgrading everything at once is that you’ll probably break something.  So be prepared to do some troubleshooting.  When I upgraded my sample code for this blog post I ran into a target framework runtime error.

Other Considerations

Before you deploy an API, be sure to understand what you need as a minimum requirement.  If your API is used by your own software and you expect to use some sort of security or authentication to keep out unwanted users, don’t deploy before you have added the security code to your API.  It’s always easier to test without using security, but this step is very important.

Also, you might want to provide an on/off setting to disable the API functions in your production environment for customers until you have fully tested your deployment.  Such a feature can be used in a canary release, where you allow some customers to use the new feature for a few days before releasing to all of your customers.  This will give you time to estimate load capabilities of your servers.

I also didn’t discuss IOC container usage, unit testing, database access, where to store your configuration files, etc.  Be sure to set a standard before you go live.

One last thing to consider is the deployment of an API.  You should create an empty API container and check it into your version control system.  Then create a deployment package to be able to deploy to each of your environments (Development, QA, stage, production, etc.).  The sooner you get your continuous integration working, the less work it will be to get your project completed and tested.  Manual deployment, even for a test system, takes a lot of time, and human error is the number one killer of deployment efficiency.

Where to Get the Code

As always, you can download the sample code at my GitHub account by clicking here (for the api code) and here (for the console consumer code).  Please hit the “Like” button at the end of this article if this subject was helpful!

 

Web APIs with CORS

Summary

I’ve done a lot of .Net Web APIs.  APIs are the future of web programming.  APIs allow you to break your system into smaller systems to give you flexibility and most importantly scalability.  It can also be used to break an application into front-end and back-end systems giving you the flexibility to write multiple front-ends for one back-end.  Most commonly this is used in a situation where your web application supports browsers and mobile device applications.

Web API

I’m going to create a very simple API to support one GET method type of controller.  My purpose is to show how to add Cross-Origin Resource Sharing (CORS) support and how to connect all the pieces together.  I’ll be using a straight HTML web page with a jQuery script to perform the AJAX command.  I’ll also use JSON for the protocol.  I will not be covering JSONP in this article.  My final purpose in writing this article is to demonstrate how to troubleshoot problems with APIs and what tools you can use.

I’m using Visual Studio 2015 Community edition.  The free version.  This should all work on version 2012 and beyond, though I’ve had difficulty with 2012 and CORS in the past (specifically with conflicts with Newtonsoft JSON).

You’ll need to create a new Web API application.  Create an empty application and select “Web API” in the check box.  

Then add a new controller and select “Web API 2 Controller – Empty”.

Now you’ll need two NuGet packages and you can copy these two lines and paste them into your “Package Manager Console” window and execute them directly:

Install-Package Newtonsoft.Json
Install-Package Microsoft.AspNet.WebApi.Cors

For my API Controller, I named it “HomeController” which means that the path will be:

myweburl/api/Home/methodname

How do I know that?  It’s in the WebApiConfig.cs file.  Which can be found inside the App_Start directory.  Here’s what is default:

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

The word “api” appears in all path names to your Web API applications, but you can change that to any word you want.  If you had two different sets of APIs, you could use two routes with different patterns.  I’m not going to go any deeper here; I just wanted to mention that the “routeTemplate” controls the URL pattern that you will need in order to connect to your API.
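As a sketch, two sets of APIs could be separated with two routes like this (the “external” prefix is made up for illustration):

```csharp
// First route: standard APIs answer at /api/{controller}/{id}
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

// Second route: a separate set of APIs answers at /external/{controller}/{id}
config.Routes.MapHttpRoute(
    name: "ExternalApi",
    routeTemplate: "external/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);
```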

If you create an HTML web page and drop it inside the same URL as your API, it’ll work.  However, what I’m going to do is run my HTML file from my desktop and I’m going to make up a URL for my API.  This will require CORS support, otherwise the API will not respond to any requests.

At this point, the CORS support is installed from the above NuGet package.  All we need is to add the following using to the WebApiConfig.cs file:

using System.Web.Http.Cors;

Then add the following code to the top of the “Register” method:

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);


I’m demonstrating support for all origins, headers, and methods.  However, you should narrow this down after you have completed your APIs and are ready to deploy your application to a production system.  This will help keep unwanted clients from accessing your APIs.
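When you’re ready to lock it down, the same attribute accepts explicit lists; for example (the origin shown is a placeholder for your real front-end URL):

```csharp
// Only allow the production front-end origin, JSON content, and GET/POST
var cors = new EnableCorsAttribute(
    origins: "http://www.myfrontend.com",
    headers: "content-type",
    methods: "GET,POST");
config.EnableCors(cors);
```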

Next, is the code for the controller that you created earlier:

using System.Net;
using System.Net.Http;
using System.Web.Http;
using WebApiCorsDemo.Models;
using Newtonsoft.Json;
using System.Text;

namespace WebApiCorsDemo.Controllers
{
    public class HomeController : ApiController
    {
        [HttpGet]
        public HttpResponseMessage MyMessage()
        {
            var result = new MessageResults
            {
                Message = "It worked!"
            };

            var jsonData = JsonConvert.SerializeObject(result);
            var resp = new HttpResponseMessage(HttpStatusCode.OK);
            resp.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
            return resp;
        }
    }
}
 
You can see that I serialized the MessageResults object into a JSON message and returned it in the response content with a type of application/json.  I always use a serializer to create my JSON if possible.  You can generate the same output using a string and just building the JSON manually.  That works, and it’s really easy on something this tiny.  However, I would discourage the practice, because it becomes a programming nightmare as a program grows in size and complexity.  Once you become familiar with APIs and start to build a full-scale application, you’ll be returning large, complex data types, and it is so easy to miss a “{” bracket and spend hours trying to fix something that you should not be wasting time on.
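To illustrate the point, compare the two approaches on the tiny MessageResults type; the serializer stays correct as the type grows, while the hand-built string must be maintained bracket by bracket:

```csharp
var result = new MessageResults { Message = "It worked!" };

// Serializer: stays correct no matter how MessageResults evolves
var jsonData = JsonConvert.SerializeObject(result);
// jsonData is {"Message":"It worked!"}

// Manual string: easy on something this tiny, but every new property
// means more hand-matched quotes, commas, and brackets
var manual = "{\"Message\":\"It worked!\"}";
```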

The code for the MessageResults class is in the Models folder called MessageResults.cs:

public class MessageResults
{
    public string Message { get; set; }
}

Now we’ll need a JQuery file that will call this API, and then we’ll need to setup IIS.

For the HTML file, I created a Home.html file and populated it with this:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <meta charset="utf-8" />
    <script src="jquery-2.1.4.min.js"></script>
    <script src="Home.js"></script>
</head>
<body>
    Loading...
</body>
</html>

You’ll need to download jQuery.  I used version 2.1.4 in this example, but I recommend going to the jQuery website, downloading the latest version, and changing the script URL above to reflect the version you’re using.  You can also see that I named my js file “Home.js” to match my “Home.html” file.  Inside my js file is this:

$(document).ready(function () {
    GetMessage();
});

function GetMessage() {
    var url = "http://www.franksmessageapi.com/api/Home/MyMessage";

    $.ajax({
        crossDomain: true,
        type: "GET",
        url: url,
        dataType: 'json',
        contentType: 'application/json',
        success: function (data, textStatus, jqXHR) {
            alert(data.Message);
        },
        error: function (jqXHR, textStatus, errorThrown) {
            alert(formatErrorMessage(jqXHR, textStatus));
        }
    });
}

There is an additional formatErrorMessage() function that is not shown above; you can copy it from the full code I posted on GitHub, or just remove it from your error handler.  I use this function for troubleshooting AJAX calls.  At this point, if you typed in all the code from above, you won’t get any results, primarily because the URL “www.franksmessageapi.com” doesn’t exist on the internet (unless someone goes out and claims it).  You have to set up your IIS with a dummy URL for testing purposes.

So open the IIS control panel, right-click on “Sites” and “Add Website”:


For test sites, I always name my website the exact same URL that I’m going to bind to it.  That makes it easy to find the correct website, especially if I have 50 test sites set up.  You’ll need to point the physical path to the root path of your project, not the solution.  This will be the subdirectory that contains the web.config file.

Next, you’ll need to make sure that your web project directory has permissions for IIS to access.  Once you create the website you can click on the website node and on the right side are a bunch of links to do “stuff”.  You’ll see one link named “Edit Permissions”, click on it.  Then click on the “Security” tab of the small window that popped up.  Make sure the following users have full permissions:

IUSR
IIS_IUSRS (yourpcname\IIS_IUSRS)

If both do not exist, then add them and give them full rights.  Close your IIS window.

One more step before your application will work.  You’ll need to redirect the URL name to your localhost so that IIS will listen for HTTP requests.

Open your hosts file located at C:\Windows\System32\drivers\etc\hosts.  This is a text file and you can add as many entries into this file as you would like.  At the bottom of the hosts file, I added this line:

127.0.0.1        www.franksmessageapi.com

You can use the same name, or make up your own URL.  Try not to use a URL that exists on the web or you will find that you cannot get to the real address anymore.  The hosts file will override DNS and reroute your request to 127.0.0.1 which is your own PC.

Now, let’s do some incremental testing to make sure each piece of the puzzle is working.  First, let’s make sure the hosts table is working correctly.  Open up a command window.  You might have to run as administrator if you are using Windows 10.  You can type “CMD” in the run box and start the window up.  Then execute the following command:

ping www.franksmessageapi.com

You should get the following:


If you don’t get a response back, then you might need to reboot your PC, or clear your DNS cache.  Start with the DNS cache by typing in this command:

ipconfig /flushdns

Try to ping again.  If it doesn’t work, reboot and then try again.  After that, you’ll need to select a different URL name to get it to work.  Beyond that, it’s time to google.  Don’t go any further until you get this problem fixed.

This is a GET method, so let’s open a browser and go directly to the path where we think our API is located.  Before we do that, rebuild the API application and make sure it builds without errors.  Then open the js file, copy the URL that we’ll call, and paste it into the browser URL.  You should see this:


If you get an error of any type, you can use a tool called Fiddler to analyze what is happening.  Download and install Fiddler.  You might need to change Firefox’s configuration for handling proxies (Firefox will block Fiddler, as if we needed another problem to troubleshoot).  For the version of Firefox as of this writing (42.0), go to the Options, Advanced, Network, then click the “Settings” button to the right of the Connection section.  Select “Use system proxy settings”.

OK, now you should be able to refresh the browser with your test URL in it and see something pop up in your Fiddler screen.  Obviously, if you have a 404 error, you’ll see it long before you notice it on Fiddler (it should report 404 on the web page). This just means your URL is wrong.

If you get a “No HTTP resource was found that matches the request URI” message in your browser, you might have your controller named wrong in the URL.  This is a 404 sent back from the program that it couldn’t route correctly.  This error will also return something like “No type was found that matches the controller named [Home2]” where “Home2” was in the URL, but your controller is named “HomeController” (which means your URL should use “Home”).

Time to test CORS.  In your test browser setup, CORS will not refuse the connection.  That’s because you are requesting your API from the website that the API is hosted on.  However, we want to run this from an HTML page that might be hosted someplace else.  In our test we will run it from the desktop.  So navigate to where you created “Home.html” and double-click on that page.  If CORS is not working you’ll get an error.  You’ll need Fiddler to figure this out.  In Fiddler you’ll see a 405 error.  If you go to the bottom right window (this represents the response), you can switch to “raw” and see a message like this:

HTTP/1.1 405 Method Not Allowed
Cache-Control: no-cache
Pragma: no-cache
Allow: GET
Content-Type: application/xml; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 00:53:34 GMT
Content-Length: 96

<Error><Message>The requested resource does not support http method 'OPTIONS'.</Message></Error>

The first request of a cross-origin call is the OPTIONS request, which occurs before the GET.  The purpose of the OPTIONS request is to determine whether the endpoint will accept a request from your browser.  For the example code, if the CORS section inside the WebApiConfig.cs file is working, you’ll see two requests in Fiddler: an OPTIONS request followed by a GET request.  Here’s the OPTIONS response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: content-type
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 00:58:23 GMT
Content-Length: 0


And the raw GET response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 24
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Allow-Origin: *
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 01:10:59 GMT

{"Message":"It worked!"}

If you switch your response to JSON for the GET response, you should see something like this:


One more thing to notice: if you open a browser, paste the URL into it, and then change the name of the MyMessage action, you’ll notice that it still performs a GET operation from the controller, returning the “It worked!” message.  If you create two or more GET methods in the same controller, one action becomes the default for all GET operations, no matter which action you specify.  To fix this, modify the route inside your WebApiConfig.cs file by adding an “{action}” segment:

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);


Now you should see an error in your browser if the action name in your URL does not exist in your controller:


Finally, you can create two or more GET actions and they will be distinguished by the name of the action in the URL.  Add the following action to your controller inside “HomeController.cs”:

[HttpGet]
public HttpResponseMessage MyMessageTest()
{
    string result = "This is the second controller";

    var jsonData = JsonConvert.SerializeObject(result);
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
    return resp;
}

Rebuild, and test from your browser directly.  First use the URL containing “MyMessage”:

Then try MyMessageTest:

Notice how the MyMessageTest action returns a JSON string and the MyMessage returns a JSON message object.



Where to Find the Source Code

You can download the full Visual Studio source code at my GitHub account by clicking here



 

Loading Data From an API Quickly

Summary

One of my areas of expertise is in mass data transfers using APIs, FTP, WebDAV, or SOAP connections.  In this post I’m going to talk about techniques that can be used to increase the throughput of accessing data from a Web API and putting it into a database.  I’m also going to discuss issues that occur and how to get around them.


Accessing Data From the Web

When data is read from a web-based API, it is typically limited to small bite-sized chunks of data.  These chunks can be in the megabyte range, but a web connection typically has a time limit.  In many applications the data is divided by customer, account number, or some other type of data division.  This requires your application to access the web API many times to get the full set of data or all the updates.

Let’s say, for instance, you want to create a Windows service that connects to a Web API and downloads data for 1,000 customer accounts.  This process will be performed hourly to keep the customer data up to date (I’m making this example up, but it’s similar to some of my experience).  If you were to create a program that loops through and accesses the API for each account, your timing would look something like this:


As you can see there is a significant amount of “wasted” time waiting for the connection to occur.  If you are downloading your data into a temp file and then inserting it into a database, you also have the time taken to save to a temp file before database insertion starts.

The best way to increase the speed of your download is to apply multiple connections in parallel.  This will cause your timing to overlap so there are some connections that are establishing and some that are downloading.  In the case of using a temp file as I just mentioned, you can also get an overlap of threads that are saving to temp files while one or more is inserting into a database. The timing resembles something like this:


If your process performed nothing but inserts into a database, you would not gain much by parallelizing it, because your database can only insert data so fast, and that would be your bottleneck.  With an API connection, the bottleneck could be your network bandwidth, the speed of the remote web server, your local server running the service, or the insertion speed of your database.  Parallelizing the process will fix the connection wait time, and that may be the only time you can take advantage of (assuming your network bandwidth is maxed when you are downloading data).  To find out what your optimum speed is, you’ll have to run a few tests to see how fast you can download the same data using varying numbers of threads.
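A minimal way to run the downloads with a configurable number of parallel connections looks roughly like this (the account list, URL pattern, and ImportAccount method are placeholders):

```csharp
using System;
using System.Linq;
using System.Net;
using System.Threading.Tasks;

class Downloader
{
    // Placeholder for the real database insert
    static void ImportAccount(int accountId, string data) { /* ... */ }

    static void Main()
    {
        var accountIds = Enumerable.Range(1, 1000);

        // Cap the number of simultaneous connections at 8; tune this
        // number by testing download speed with varying thread counts
        Parallel.ForEach(
            accountIds,
            new ParallelOptions { MaxDegreeOfParallelism = 8 },
            accountId =>
            {
                using (var webClient = new WebClient())
                {
                    // Hypothetical endpoint pattern
                    var data = webClient.DownloadString(
                        "http://example.com/api/accounts/" + accountId);
                    ImportAccount(accountId, data);
                }
            });
    }
}
```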


Here’s what you’ll typically see:

If you do a quick and dirty Excel chart, it should look like this:

As you can see, the speed will plateau at some point, and you’ll need to make a decision on how many parallel processes are enough.  Typically, that choice will be decided by how much trouble you’re having with getting the data reliably.  Most of the time, if a database is involved, it’ll be the frequency of deadlocks that you run into.

Decoupling the Tasks

Another possible way to increase the throughput of your system is to decouple the process of downloading the data into temp files from the process of importing it into your database.  You would write two Windows services to perform the separate tasks.  The first service would connect to the web servers using parallel processes and create a directory of temp files to be used as a buffer.  You would need some sort of cut-off point: total megabytes used, or maybe the number of files downloaded.  This service would do nothing more than read the data and create temp files.  The second service would read any temp files, insert the data into the database, and delete the temp files.  You can multi-thread this application, or you may need to keep it sequential to avoid database contention.  This service would check the temp file directory at a set interval and, when it detects data to import, process all files until completed.


Database Deadlocks

Once you begin to use parallel processes, you’re bound to run into deadlocks.  I typically try to increase the number of processes until I run into lots of deadlocks, then I attempt to fix the deadlock problem, followed by backing down the number of parallel processes.  Deadlocks are difficult to troubleshoot because they are asynchronous and difficult to repeat reliably.

To reduce deadlocks, you’ll have to be aware of how your data is being changed in your database.  Are you performing only inserts?  If there are deletes, then you’ll need to limit the number of deletes per transaction.  SQL Server typically escalates to a table lock when a single statement acquires too many row locks (on the order of 5,000): How to resolve blocking problems that are caused by lock escalation in SQL Server.
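One way to stay under the escalation threshold is to delete in fixed-size batches.  A sketch using System.Data.SqlClient (the table, column, and variable names are made up for illustration):

```csharp
using System.Data.SqlClient;

// Delete in batches of 1,000 rows so a single statement never
// acquires enough locks to trigger table-level escalation
static void DeleteStagedRows(string connectionString, int batchId)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        int rowsDeleted;
        do
        {
            using (var cmd = new SqlCommand(
                "DELETE TOP (1000) FROM CustomerStaging WHERE BatchId = @batchId",
                conn))
            {
                cmd.Parameters.AddWithValue("@batchId", batchId);
                rowsDeleted = cmd.ExecuteNonQuery();
            }
        } while (rowsDeleted > 0);   // loop until the batch is gone
    }
}
```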

Here is another good resource of deadlock resolution: INF: Analyzing and Avoiding Deadlocks in SQL Server

If your deadlocks are occurring when performing updates, you might want to index the table to use rowlocks.  Adding a rowlock hint to your queries might help, but it is only a “hint” and does not force your queries to follow your desired locking plan.  Don’t assume that a table locking hint will solve the issue.  Also, the nolock hint does not apply to inserts or deletes.

One last method to get around persistent deadlocks (though I normally only add this as a safety net, not to “solve” a deadlock problem): you can write code to retry a failed operation.  If you retry, use a back-off escalation technique: on the first error the thread waits one second and retries; on the next error it waits two seconds, then four seconds, and so on.  You’ll also need to account for the possibility of a complete failure and stop retrying.
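A hedged sketch of that back-off pattern (the operation, attempt limit, and starting delay are placeholders to tune for your system):

```csharp
using System;
using System.Threading;

static class RetryHelper
{
    // Retry a failed operation with back-off escalation:
    // wait 1s, then 2s, then 4s... and eventually give up
    public static void RetryWithBackOff(Action operation, int maxAttempts = 5)
    {
        int delaySeconds = 1;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                {
                    throw;   // complete failure: stop retrying
                }
                Thread.Sleep(TimeSpan.FromSeconds(delaySeconds));
                delaySeconds *= 2;   // double the wait on each failure
            }
        }
    }
}
```

It would be called as something like RetryHelper.RetryWithBackOff(() => InsertBatch(rows)); around the database operation that deadlocks.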

Communication Failure

One other aspect of this program that you’ll have to anticipate is some sort of communication failure.  The remote web servers might be down, your internet connection might fail, or your database might be off-line.  In each of these situations, you’ll probably need a contingency plan.  Typically, I reset the schedule and exit.  You can get sophisticated: check whether all your threads have failed, then exit and set your next scheduled update for an hour from now, or a day from now.  This keeps your system from retrying over and over, wasting resources when the remote web servers might be down for the weekend.

Logging

This is one aspect of software that is overlooked a lot.  You should log all kinds of data points when creating a new program, then remove your debug logging a month or two after the software has been deployed.  You’re going to want to keep an eye on your logs to make sure the program continues to run.  Problems that can occur include memory leaks or crashes that stop your service completely; log the exception that occurs so you can fix it.  Your temp files might also fill up the hard drive of your server (watch out for this).  Make sure any temp files you create are properly removed when they are no longer needed.

When I use temp files on a system, I typically program the temp file to be created and removed by the same object.  This object is typically called with a “using” clause, and I incorporate an IDisposable pattern to make sure that the temp file is removed if the program exits.  See one of my previous posts on how this is accomplished: Writing a Windows Service with a Cancellation Feature.
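The shape of that pattern is roughly this (a sketch; the full version is in the linked post):

```csharp
using System;
using System.IO;

// Temp file that removes itself when disposed, even if the caller throws
public class TempFile : IDisposable
{
    public string FilePath { get; private set; }

    public TempFile()
    {
        FilePath = Path.GetTempFileName();
    }

    public void Dispose()
    {
        if (File.Exists(FilePath))
        {
            File.Delete(FilePath);
        }
    }
}
```

The caller wraps it in a using block, writes the downloaded data to FilePath, imports it into the database, and lets Dispose() clean up the file on exit.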

Windows Service Cancellation

One of the first things I do when I write a data updating application is identify how I can recover if the application crashes in the middle.  Can I clean up any partial downloads?  Sometimes you can set a flag in a record to indicate that the data has completed downloading.  Sometimes there is a count of how many records are expected (verify this against the actual number present).  Sometimes you can compare a date/time stamp on the data to download with what is in the database.  The point is, your program will probably crash, and it will not crash at a convenient point where all the data being downloaded is complete and closed up nice and neat.  The best place to perform a cleanup is right at the beginning, when your program starts up.

After you get your crash-proof data accuracy working, you will want to make sure that your service application can cancel at any point.  You’ll need to check the cancellation token inside every long processing loop.  If you are inside a Parallel.ForEach or some such loop, you’ll need to perform a loopState.Break(), not a plain return or break.  Make sure you test stopping your service application at various points in your program.  Some points might take some time (like a large database operation), but most service stop requests should be near-instantaneous.  Getting this right will help you stop your process cleanly when you are testing after you deploy your system to a production environment.
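
A rough sketch of that cancellation check inside a Parallel.ForEach (the work items and counter here are placeholders for real processing):

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class Worker
{
    // Processes items until cancellation is requested; returns how many completed.
    public static int ProcessAll(int[] items, CancellationToken token)
    {
        int processed = 0;
        Parallel.ForEach(items, (item, loopState) =>
        {
            // Inside Parallel.ForEach you must use loopState.Break(),
            // not a plain "return" or "break", to end the whole loop early.
            if (token.IsCancellationRequested)
            {
                loopState.Break();
                return;
            }
            Interlocked.Increment(ref processed);
        });
        return processed;
    }
}
```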


Testing a New System

Normally a company that provides an API will have test connections available.  However, this might not be enough because the test data is typically very small.  If you already have accounts and you are allowed to test with real data (for example, the data you are working with is not classified or restricted), then you can set up a test database with a connection to actual data.  If you are not allowed to access real data, you might need to build a test system of your own.

In most of these instances I will dummy out the connection part and put text files in a directory with data in each text file representing what would be received from an API.  Then I can test the remainder of my program without an API connection.  A more thorough test setup would involve setting up an API on your local IIS server.  You can write a quick and dirty C# API to read data from a directory and spit it out when it is requested.  Then you can test your application with data that you can generate yourself.
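
A file-backed stub for the connection layer might look something like this (the class name and file-naming scheme are made up for illustration):

```csharp
using System.IO;

// Stands in for the API connection layer during testing: instead of
// calling the remote service, it returns the canned response stored
// in a text file named after the request.
public class FileBackedApiStub
{
    private readonly string dataDirectory;

    public FileBackedApiStub(string dataDirectory)
    {
        this.dataDirectory = dataDirectory;
    }

    public string GetResponse(string requestName)
    {
        var path = Path.Combine(dataDirectory, requestName + ".txt");
        return File.ReadAllText(path);
    }
}
```

The rest of the program only sees a string of response data, so it cannot tell whether the data came from a file or a live API.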

Make sure you test large data sets.  This is something that has bitten me more than once.  Normally you can find a pattern for your data and write a program to generate megabyte-sized files.  Then test your program to see if it can handle large data sets and record the timing difference between smaller data sets and larger ones.  This can be used to estimate how long it will take your program to download data when it is deployed (assuming you know the file sizes of the production data).
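
A quick generator for that kind of test file might look like this (the record pattern and target size are placeholders):

```csharp
using System.IO;
using System.Text;

public static class TestDataGenerator
{
    // Writes a file of at least the requested size by repeating a
    // record pattern; useful for timing how processing scales.
    public static void WriteFile(string path, string recordPattern, long targetBytes)
    {
        var builder = new StringBuilder();
        while (builder.Length < targetBytes)
        {
            builder.AppendLine(recordPattern);
        }
        File.WriteAllText(path, builder.ToString());
    }
}
```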


Summary

I created this blog post to give the uninitiated a sense of what they are in for when writing a service to handle large data downloads.  This post hardly touches on the actual problems that a person writing such an application will run into.  At least these are some of the most common issues and solutions that I have learned over the years.



 

Legacy Code Writers

Summary

The purpose of this blog post is to describe how legacy code gets perpetuated years beyond its useful life and how to put a stop to it.  I will also make my case for why this process needs to be stopped.  I am targeting this article at managers as well as the developers who are continuing to create legacy code.  I would like to make a disclaimer up front that my intent is not to insult anybody.  My intent is to educate people: to get them out of their shells and thinking about newer technologies and why those technologies were invented.

My History

First, I’m going to give a bit of my own history as a developer so there is some context to this blog post.  

I have been a developer since 1977 or 78 (too long ago to remember the exact year I wrote my first Basic program).  I learned Basic.  Line-numbered Basic.  I joined the Navy in 1982 and I was formally educated on how to repair minicomputers, specifically the UYK-20 and the SNAP-II.  In those days you troubleshot down to the circuit level (and sometimes replaced a chip).  While I was in the Navy, the Apple Macintosh was introduced and I bought one because it fit in the electronics storage cabinet in the transmitter room on the ship (which I had a key to).  I programmed with Microsoft Basic and I wanted to write a game or two.  My first game was a battleship game with graphical capabilities (and use of the mouse, etc.).  It didn’t take long before the line numbers became a serious problem and I finally gave in and decided to look at other languages.  I was very familiar with Basic syntax, so switching was like learning a foreign language.  It was going to slow me down.

I stopped at the computer store (that really was “the day”), and I saw Mac Pascal in a box on the shelf and the back of the box had some sample code.  It looked similar to Basic and I bought it.  I got really good at Pascal.  Line numbers were a thing of the past.  In fact I used Pascal until I was almost out of college.  At that time the University of Michigan was teaching students to program using Pascal (specifically Borland Pascal).  Object oriented programming was just starting to enter the scene and several instructors actually taught OOP concepts such as encapsulation and polymorphism.  This was between 1988 and 1994.

The reason I used Pascal for so long was that the Macintosh built-in functions used Pascal headers.  The reason I abandoned Pascal was that the World Wide Web was invented around that time and everything Unix-ish was in C.  I liked C, and my first C programs were written in Borland C.


Fast Forward…

OK, I’m now going to fast-forward to the late 90’s and early 2000’s, when OOP programming really became mainstream and frameworks, unit testing, etc. all became available.  When the web became something that businesses used, there were only a handful of tools available: C, HTML (with JavaScript), Java, PHP and Microsoft’s product called ASP (plus a handful of oddballs that no longer exist).  If you wanted to develop a dynamic, interactive website application, and you were running Microsoft Windows Server products, you had to perform the deed in ASP.  I avoided this path by using PHP on a Linux machine, but I got lucky: I was in charge of the department and I made the final decision on what technology would be used and how the product would be developed.  Don’t get me wrong, there is a lot of ASP code that is in use and it is stable and operational.  Unfortunately, ASP is one of the most difficult kinds of legacy code to convert into something modern.

What’s my Beef with Legacy Programmers?

If your development knowledge ended with ASP and/or VB, without learning and using a unit testing framework, the MVC framework (or equivalent), ORMs, Test Driven Development or SOLID principles, then you are probably oblivious to how much easier it is to program within a modern environment.  This situation happens because programmers focus on solving a problem with the tools they have in their tool box.  If a programmer doesn’t spend the time to learn new tools, then they will always apply the same set of tools to the problem.  These are the programmers that I am calling Legacy Programmers.

Legacy Programmers, who am I talking about?

First, let’s describe the difference between self-taught and college-educated developers.  I get a lot of angry responses about developers who have a degree and can’t program.  There are a lot of them.  This does not mean that the degree is the problem, and it also should not lead one to believe that a developer without a degree is guaranteed to be better than a degree-carrying developer.  Here’s a Venn diagram of the pool of developers available:


The developers we seek to create successful software are those in the intersection of the degree and non-degree circles, labeled solid developers.  This diagram is not intended to indicate that there are more or fewer of either type of developer in that intersection.  In my experience, there are more degree-carrying developers in this range, because most solid developers are wise enough to realize that they need the piece of paper that states that they have a minimum level of competence.  It’s unfortunate that colleges are churning out so many really bad developers, but not obtaining the degree usually indicates that the individual is not motivated to expand their knowledge (there are exceptions).

OK, now for a better Venn diagram of the world of developers (non-Unix developers):


In the world of Microsoft language developers there are primarily VB and C# developers.  Some of these developers only know VB (and VB Script), as indicated by the large blue area.  I believe these individuals outnumber the total number of C# programmers, judging by the amount of legacy code I’ve encountered over the years, but I could be wrong in this assumption.  The C# programmers are shown in red, and the number of individuals who know C# but not VB is small.  That’s because C# programmers don’t typically come from an environment where C# is their first language.  In the VB circle, people who learned VB but not C# are normally self-taught (colleges don’t typically teach VB).  Most of the developers who know both VB and C# come from the C# side and learn VB, or, like me, they were self-taught before they obtained a degree and ended up with knowledge of both languages.

The legacy programmers I’m talking about in this blog post fall into the blue area and do not know C#.


Where am I Going With This?

OK, let’s cut to the chase.  In my review of legacy code involving VB.Net and VB Script (AKA Classic ASP), I have discovered that the developers who built the code did not understand OOP patterns, SOLID principles, Test Driven Development, MVC, etc.  Most of the code in the legacy category fits the type of code I used to write in the early 90’s, before I discovered how to modularize software using OOP patterns.  I forced myself to learn the proper way to break a program into objects.  I forced myself to develop software using TDD methods.  I forced myself to learn MVC (and I regret not learning it when it first came out).  I did this because these techniques solved a lot of development issues.  These techniques help to contain bugs, enhance debugging capabilities, reduce transient errors and make it easier to enhance software without breaking existing features (using unit tests to perform regression testing).  If you have no idea what I’m talking about, or maybe you’ve heard the terms but have never actually used these techniques in your daily programming tasks, you’re in trouble.  Your career is coming to an end unless you learn now.

Let’s talk about some of these techniques and why they are so important.  First, you need to understand Object Oriented Programming.  The basic idea of this pattern is that an object is built around the data that you are working on (I’m not talking about database data, I’m talking about a small atomic data item, like an address, personnel information or maybe a checking account).  The data is contained inside the object, and methods are built to act on this data.  The object itself knows all about the data being acted on, and external objects that use this object do not need to understand the nuances of the data (like how to dispose of allocated resources or how to keep a list properly ordered).  This allows the developer who creates the object to hide details, debug the methods that act on the data, and not have to worry about another object corrupting the data or using it incorrectly.  It also makes the software modular.
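
For example, a checking account object might hide its balance behind methods that enforce the rules (a made-up illustration, not code from any project discussed here):

```csharp
using System;
using System.Collections.Generic;

// The balance is private: outside code can read it but can only
// change it through methods that enforce the account's rules.
public class CheckingAccount
{
    private decimal balance;
    private readonly List<string> history = new List<string>();

    public decimal Balance { get { return balance; } }

    public void Deposit(decimal amount)
    {
        if (amount <= 0) throw new ArgumentException("Deposit must be positive");
        balance += amount;
        history.Add("Deposit " + amount);
    }

    public bool Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > balance) return false; // no overdraft
        balance -= amount;
        history.Add("Withdraw " + amount);
        return true;
    }
}
```

Callers never touch the balance or the history directly, so the rules cannot be bypassed from outside the object.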

On a grander scale is a framework called MVC (Model View Controller).  This is not the only framework available, but it is the most common web development framework in Microsoft Visual Studio.  What this framework does is give a clean separation between the C# (or VB) code and the web view code (which is typically written in HTML, jQuery and possibly Razor).  ASP mixes all the business logic in with the view code and there are no controllers.  In MVC, the controllers wire up the business logic with the view code.  Typically the controller communicates through AJAX calls, which gives the web-based interface a smooth look.  The primary reason for breaking code up in this fashion is to be able to put the business logic in a test harness and wrap unit tests around each feature that your program performs.

Unit testing is very important.  It takes a lot of practice to perform Test Driven Development (TDD) and it’s easier to develop your code first and then create unit tests, until you learn the nuances of unit testing, object mocking and dependency injection.  Once you have learned about mocking and dependency injection, you’ll realize that it is more efficient to create the unit tests first, then write your code to pass the test.  After your code is complete, each feature should be matched up with a set of unit tests so that any future changes can be made with the confidence that you (or any other developer) will not break previously defined features.  Major refactoring can be done in code designed this way because any major change that breaks the code will show up in the failure of one or more unit tests.
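
Here’s a minimal sketch of dependency injection with a hand-written mock (in a real project you’d use a mocking library such as Moq and a test framework such as NUnit or MSTest; all the names here are invented for illustration):

```csharp
// The business logic depends on an interface, not a concrete database,
// so a test can inject a fake implementation with canned data.
public interface IInventoryStore
{
    int QuantityOnHand(int productId);
}

public class OrderService
{
    private readonly IInventoryStore store;

    public OrderService(IInventoryStore store)
    {
        this.store = store;
    }

    public bool CanFulfill(int productId, int quantity)
    {
        return store.QuantityOnHand(productId) >= quantity;
    }
}

// A hand-rolled mock that returns canned data for the test.
public class FakeStore : IInventoryStore
{
    public int QuantityOnHand(int productId) { return 5; }
}
```

A unit test then constructs `new OrderService(new FakeStore())` and asserts that `CanFulfill(1, 3)` is true while `CanFulfill(1, 10)` is false, with no database in sight.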

ORMs (Object Relational Mapping) are becoming the standard technique for querying data from a database.  An ORM with LINQ is a cleaner way to access a database than ADO or a DataSet.  One aspect of an ORM that makes it powerful is that a query written in LINQ can use Visual Studio’s context-sensitive editor features to avoid syntax errors.  The result set is contained in an object with properties, which produces code that is easier to read.
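
As a small illustration, here is a LINQ query over an in-memory list; with an ORM such as Entity Framework the query itself reads the same, but the collection would come from a database context (the Product class and data here are invented):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public int Quantity { get; set; }
}

public static class InventoryQuery
{
    // The compiler checks property names at build time, so a
    // misspelled "column" is a compile error, not a runtime SQL error.
    public static List<string> LowStockNames(IEnumerable<Product> products, int threshold)
    {
        return products
            .Where(p => p.Quantity < threshold)
            .OrderBy(p => p.Name)
            .Select(p => p.Name)
            .ToList();
    }
}
```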

APIs (Application Programming Interface) and SOA (Service Oriented Architecture) are the newer techniques.  These are not just buzzwords that sound cool.  They were invented to solve an issue that legacy code has: you are stuck with the language you developed your entire application around.  By using Web APIs to separate your view from your business logic, you can reuse your business logic for multiple interfaces: mobile applications, custom mini-applications, mash-ups with 3rd party software, etc.  The MVC framework is already set up to organize your software in this fashion.  To make the separation complete, you can create two MVC projects, one containing the view components and one containing the model and controller logic.  Then your HTML and jQuery code can access your controllers the same way they would if they were in the same project (using Web API), but different developers can work on different parts of the project.  A company can assign developers to define and develop the APIs that provide specific data; then developers and graphic artists can develop the view logic independently.  Once the software is written, other views can be designed to connect to the APIs that have been developed, such as reports or mobile.  Other APIs can be designed using other languages, such as Python or Ruby running on a Unix (or Linux) machine.  The view can still communicate with the API because the common language will be either JSON or XML.

Another aspect of legacy code that is making enhancements difficult is the use of tightly coupled code.  There is a principle called SOLID.  This is not the only principle around, but it is a very good one.  By learning and applying SOLID to any software development project, you can avoid the problems of tightly coupled code, procedures or methods that perform more than one task, untestable code, etc.

The last issue is the use of VB itself.  I have seen debates of VB vs. C#, claiming VB has all the features of C#, etc.  Unfortunately, VB is not Microsoft’s flagship language; C# is.  This is made obvious by the fact that many C# Visual Studio features are only now coming to the VB world in Visual Studio 2015.  The other issue with VB is that it is really a legacy language, with baggage left over from the 1980’s.  VB was adapted to be object oriented, not designed as an object oriented language.  C#, on the other hand, is only an OOP language.  If you’re searching for code on the internet, there is a lot more MVC and Web API code in C# than in VB.  This trend is going to continue, and VB will become the “Fortran” of the developer world.  Don’t say I didn’t warn ya!


Conclusion

If you are developing software and are not familiar with the techniques I’ve described so far, you need to get educated fast.  I have kept up with the technology because I’m a full-blooded nerd and I love to solve development issues.  I evolved my knowledge because I was frustrated with producing code that contained a lot of bugs and was difficult to enhance later on.  I learned each of these techniques over time and have applied them with a lot of success.  If I learn a new technique and it doesn’t solve my issue, I will abandon it quickly.  However, I have had a lot of success with the techniques that I’ve described in this blog post.  You don’t need to take on all of these concepts at once, but start with C# and OOP.  Then work your way up to unit testing, TDD and then SOLID.

 

Returning XML or JSON from a Web API

Summary

In my last blog post I demonstrated how to setup a Web API to request data in JSON format.  Now I’m going to show how to setup your API so that a request can be made to ask for XML as well as JSON return data.  To keep this simple, I’m going to refactor the code from the previous blog post to create a new retriever method that will set the Accept parameter to xml instead of json.

Changes to the Retriever

I copied the retriever method from my previous code and created a retriever that asks for XML data:

public void XMLRetriever()
{
    var xmlSerializer = new XmlSerializer(typeof(ApiResponse));

    var apiRequest = new ApiRequest
    {
        StoreId = 1,
        ProductId = new List<int> { 2, 3, 4 }
    };

    var request = (HttpWebRequest)WebRequest.Create(apiURLLocation);
    request.ContentType = "application/json; charset=utf-8";
    request.Accept = "text/xml; charset=utf-8";
    request.Method = "POST";
    request.Headers.Add(HttpRequestHeader.Authorization, apiAuthorization);
    request.UserAgent = "ApiRequest";

    // Writes the ApiRequest JSON object to the request body
    using (var streamWriter = new StreamWriter(request.GetRequestStream()))
    {
        streamWriter.Write(JsonConvert.SerializeObject(apiRequest));
        streamWriter.Flush();
    }

    var httpResponse = (HttpWebResponse)request.GetResponse();

    // Receives XML data and deserializes it
    using (var streamreader = new StreamReader(httpResponse.GetResponseStream()))
    {
        var storeInventory = (ApiResponse)xmlSerializer.Deserialize(streamreader);
    }
}

There are two major changes in this code: first, I changed the “Accept” header to ask for XML; second, I recoded the return-data handling to deserialize it as XML instead of JSON.  I left the API request itself in JSON.


Changes to the API Application

I altered the API controller to detect which encoding is being requested.  If Accept contains the string “json” then the data is serialized using json.  If Accept contains the string “xml” then the data is serialized using xml.  Otherwise, an error is returned.

Here is the new code for the API controller:

var encoding = ControllerContext.Request.Headers.Accept.ToString();
if (encoding.IndexOf("json", StringComparison.OrdinalIgnoreCase) > -1)
{
    // convert the data into json
    var jsonData = JsonConvert.SerializeObject(apiResponse);

    var resp = new HttpResponseMessage();
    resp.Content = new StringContent(jsonData, Encoding.UTF8,
        "application/json");
    return resp;
}
else if (encoding.IndexOf("xml", StringComparison.OrdinalIgnoreCase) > -1)
{
    // convert the data into xml
    var xmlSerializer = new XmlSerializer(typeof(ApiResponse));

    using (StringWriter writer = new StringWriter())
    {
        xmlSerializer.Serialize(writer, apiResponse);

        var resp = new HttpResponseMessage();
        resp.Content = new StringContent(writer.ToString(),
            Encoding.UTF8, "application/xml");
        return resp;
    }
}
else
{
    return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
        "Only JSON and XML formats accepted");
}


Compile the API application and then start up Fiddler.  Then run the retriever.  In Fiddler, you should see something like this (you need to change your bottom right sub-tab to XML):




Download the Source


You can go to my GitHub account and download the source here: https://github.com/fdecaire/WebApiDemoJsonXml

 

 

Web API and API Data Retriever

Summary

In this blog post I’m going to show how to create a Web API in Visual Studio.  Then I’m going to show how to set up IIS 7 to run the API on your PC (or a server).  I’m also going to create a retriever and show how to connect to the Web API and read data.  I’ll be using JSON instead of XML, so I’ll also show what tricks you’ll need to know in order to implement your interface correctly.  Finally, I’ll demonstrate how to troubleshoot your API using Fiddler.


This is a very long post.  I had toyed with the idea of breaking this into multiple parts, but this subject turned out to be too difficult to break-up in a clean manner.  If you are having trouble getting this to work properly or you think I missed something leave a message in the comments and I’ll correct or add to this article to make it more robust.

Web API Retriever

I’m going to build the retriever first and show how this can be tested without an API.  Then I’ll cover the API and how to incorporate IIS 7 into the whole process.  I’ll be designing the API to use the POST method.  The reason I want to use a POST instead of GET is that I want to be able to pass a lot of variables to request information from the API.  My demo will be a simulation of a chain of stores that consolidate their inventory data into a central location.  Headquarters or an on-line store application (i.e. website) can send a request to this API to find out what inventory a particular store has on hand.  

The retriever will be a simple console application that will use an object to represent the request data.  This object will be serialized into a JSON packet of information posted to the API.  The request object will look like this:

public class ApiRequest
{
  public int StoreId { get; set; }
  public List<int> ProductId { get; set; }
}


This same object will be used in the API to de-serialize the JSON data.  We can put the store id in this packet as well as a list of product ids.  The data received back from the API will be a list of inventory records using the following two objects:

public class InventoryRecord
{
  public int ProductId { get; set; }
  public string Name { get; set; }
  public int Quantity { get; set; }
}


public class ApiResponse
{
  public List<InventoryRecord> Records = new List<InventoryRecord>();
}

As you can see, we will receive one record per product.  Each record will contain the product id, the name and the quantity at that store.  I’m going to dummy out the data in the API to keep this whole project as simple as possible.  Keep in mind, that normally this information will be queried from a large database of inventory.  Here’s the entire retriever:

public class WebApiRetriever
{
  private readonly string apiURLLocation =
      ConfigurationManager.AppSettings["ApiURLLocation"];
  private readonly string apiAuthorization =
      ConfigurationManager.AppSettings["ApiCredential"];

  public void Retriever()
  {
    var serializer = new JsonSerializer();

    var apiRequest = new ApiRequest
    {
      StoreId = 1,
      ProductId = new List<int> { 2, 3, 4 }
    };

    var request = (HttpWebRequest)WebRequest.Create(apiURLLocation);
    request.ContentType = "application/json; charset=utf-8";
    request.Accept = "application/json";
    request.Method = "POST";
    request.Headers.Add(HttpRequestHeader.Authorization, apiAuthorization);
    request.UserAgent = "ApiRequest";

    // Writes the ApiRequest JSON object to the request body
    using (var streamWriter = new StreamWriter(request.GetRequestStream()))
    {
      streamWriter.Write(JsonConvert.SerializeObject(apiRequest));
      streamWriter.Flush();
    }

    var httpResponse = (HttpWebResponse)request.GetResponse();

    using (var streamreader = new StreamReader(httpResponse.GetResponseStream()))
    using (var reader = new JsonTextReader(streamreader))
    {
      var storeInventory = serializer.Deserialize<ApiResponse>(reader);
    }
  }
}


Some of the code shown is optional. I put the URL location and credentials into variables that are stored in the app.config file.  You can add this to your app.config file:


<appSettings>
    <add key="ApiURLLocation" value="http://www.franksdomain.com/WebApiDemo/api/MyApi/"/>
    <add key="ApiCredential" value="ABCD"/>
</appSettings>

The URL will need to be changed to match the URL that you setup on your IIS server (later in this blog post).  For now you can setup a redirect in your “hosts” file to match the domain in the app setting shown above.

Navigate to C:\Windows\System32\drivers\etc and edit the “hosts” file with a text editor.  You’ll see some sample text showing the format of a URL.  Create a domain name on a new line like this:

127.0.0.1        www.franksdomain.com

You can make up your own URL and you can use a URL that is real (technically, franksdomain.com is a real URL and it’s not mine).  If you use a real URL your computer will no longer be able to access that URL on the internet, it will redirect that URL to your IIS server (so be aware of this problem and try to avoid using real URLs).  The IP address 127.0.0.1 is a pointer to your local machine.  So we’re telling your browser to override www.franksdomain.com and redirect the request to the local machine.

Now you should be able to test up to the request.GetResponse() line of code.  That’s where the retriever will bomb.  Before we do this, we need to download and install Fiddler (assuming you don’t already have Fiddler installed).  Click here to download Fiddler and install it.  Now start up Fiddler and you’ll see something like this:


Now run the retriever application until it bombs.  Fiddler will have one line in the left pane that is in red.  Click on it.  In the right pane, click on the “Inspectors” tab and then click on “JSON” sub-tab.  You should see something like this:


In the right side top pane, you’ll see your JSON data.  If your data is shown as a tree-view control then it is formatted correctly as JSON and not just text.  If you serialized your object incorrectly, you will normally see an empty box.  Notice that the store is set to “1” and there are three product ids being requested.

The Web API

The web API will be an MVC 4 application with one ApiController.  The API controller will use a POST method. 

So let’s start with a class that defines what information can be posted to this API.  This is the exact same class used in the retriever:

public class ApiRequest
{
  public int StoreId { get; set; }
  public List<int> ProductId { get; set; }
}


Make sure this class is not decorated with the [Serializable] attribute.  We’re going to use a [FromBody] attribute on the API and the object variables will not bind if this object is setup as serializable (I discovered this fact the hard way).  As you can see by the list definition we can pass a long list of product ids for one store at a time.  We expect to receive a list of inventory back from the API.


The response back to the calling application will be a list of inventory records containing the product id (which will be the same number we passed in the request), the name of the product and the quantity.  These are also the same objects used in the retriever:


public class InventoryRecord
{
  public int ProductId { get; set; }
  public string Name { get; set; }
  public int Quantity { get; set; }
}


public class ApiResponse
{
  public List<InventoryRecord> Records = new List<InventoryRecord>();
}



The entire API controller in the MVC application looks like this:


public class MyApiController : ApiController
{
  [HttpPost]
  [ActionName("GetInventory")]
  public HttpResponseMessage GetInventory([FromBody] ApiRequest request)
  {
    if (request == null)
    {
      return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
          "Request was null");
    }

    // check authentication
    var auth = ControllerContext.Request.Headers.Authorization;

    // simple demonstration of user rights checking
    // (the null check guards against requests with no Authorization header)
    if (auth == null || auth.Scheme != "ABCD")
    {
      return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
          "Invalid Credentials");
    }

    ApiResponse apiResponse = new ApiResponse();

    // read data from a database
    apiResponse.Records = DummyDataRetriever.ReadData(request.ProductId);

    // convert the data into json
    var jsonData = JsonConvert.SerializeObject(apiResponse);

    var resp = new HttpResponseMessage();
    resp.Content = new StringContent(jsonData, Encoding.UTF8,
        "application/json");
    return resp;
  }
}


The controller is a POST-method controller that looks for an ApiRequest JSON object in the body of the posted information.  The first thing we want to check for is a null request.  That seems to occur most often when a bot crawls a website and hits an API.  If we’re lucky, bots will not find their way in, but I always code for the worst-case situation.  The next part checks the header for the authorization.  I didn’t cover this in the retriever, but I stuffed a string of letters into the authorization variable of the header.  This was set up to be “ABCD”, but in a real application you’ll need to perform a database call to a table containing GUIDs.  These GUIDs can be assigned to another party to gain access to your API.  In this example the shopping website application will have its own GUID, and each store will have a GUID that can be set up to restrict what each retriever can access.  For instance, the website GUID might have full access to look up information for every store using this API, but a store might only have access to its own information, etc.  I’m only showing the basics of this method in this article.  I’ll cover this subject more thoroughly in a future blog post.

Next in the code is the dummy lookup for the information requested.  If you navigate to my sample dummy data retriever you’ll see that I just check to see which product is requested and stuff a record in the list.  Obviously, this is the place where you’ll code a database select and insert records into the list from the database.
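
A rough sketch of what such a dummy retriever might look like (the product names and quantities here are invented; see my GitHub example for the actual code):

```csharp
using System.Collections.Generic;

// Same shape as the InventoryRecord class defined earlier.
public class InventoryRecord
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public int Quantity { get; set; }
}

public static class DummyDataRetriever
{
    // Stands in for a database query: returns a hard-coded record
    // for each recognized product id.
    public static List<InventoryRecord> ReadData(List<int> productIds)
    {
        var records = new List<InventoryRecord>();
        foreach (var id in productIds)
        {
            switch (id)
            {
                case 2:
                    records.Add(new InventoryRecord { ProductId = 2, Name = "Spark Plug", Quantity = 15 });
                    break;
                case 3:
                    records.Add(new InventoryRecord { ProductId = 3, Name = "Oil Filter", Quantity = 9 });
                    break;
                case 4:
                    records.Add(new InventoryRecord { ProductId = 4, Name = "Air Filter", Quantity = 21 });
                    break;
            }
        }
        return records;
    }
}
```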

Next, the list of inventory records are serialized into a JSON format and then attached to the content of the response message.  This is then returned.

Next, you’ll need to setup an IIS server to serve your API.

 

Setting up the IIS Server

I’m going to setup an application in IIS 7 to show how to get this API working from your PC.  Eventually, you’ll want to setup an IIS server on your destination server that you will be deploying this application to.

I’ll be using IIS 7 for this demo, if you’re using Windows 7 or earlier, you will probably need to download and install IIS 7 (the express version works too).  When you open IIS 7 you should see a treeview on the left side of the console:


Right-click on “Default Web Site” and select “Add Application“.  Name it “WebApiDemo“.  You should now see this:


Now click on the “WebApiDemo” node and you’ll see a panel on the right side of the console named “Actions“.  Click the “Basic Settings” link and change the physical path to point to your API project’s location (this is the MVC4 project that you created earlier or downloaded from my GitHub account).



You’ll notice that the application pool is set to “DefaultAppPool“.  We’ll need to change this to a .NET 4 pool, so click on the “Application Pools” node, double-click the “DefaultAppPool” line, and change the .NET Framework version to version 4:


At this point, if you try to access your API, you’ll discover that it doesn’t work.  That’s because we need to give the IIS server access to your project directory.  So navigate to your project directory, right-click and go to properties, then click on the “Security” tab.  Click the “Advanced” button and then the “Change Permissions” button.  Then click the “Add” button.  The first user you’ll need to add is IIS_IUSRS.  Click the “Check Names” button and it should add your machine name as the domain of this user (my machine name is “FRANK-PC”, yours will be different):


You’ll need to give IIS permissions.  I was able to make the API work with these permissions:

I would recommend doing more research before setting up your final API server.  I’m not going to cover this subject in this post.

Now click “OK”, “OK”, “OK”, etc.

Now run through those steps again to add a user named IUSR with the same permissions:




If you didn’t set up your URL in the hosts file, you’ll need to do that now.  If you already did this in the retriever section above, you can skip this step.

Navigate to C:\Windows\System32\drivers\etc and edit the “hosts” file with a text editor (you’ll need administrator rights to save it).  You’ll see some sample text showing the format of an entry.  Add a domain name on a new line like this:

127.0.0.1        www.franksdomain.com

Remember, you can make up your own domain name.

Now, let’s test the site.  First, we need to determine what the URL will be when we access our API.  The easy method is to go into the IIS control panel and click on the “Browse urlname.com on *80 (http)” link:

Now you can copy the URL in the browser that popped up:

http://www.franksdomain.com/WebApiDemo


This URL is actually the URL of the MVC website in your application.  In order to access your API, you’re going to have to append to this URL:

http://www.franksdomain.com/WebApiDemo/api/MyApi

How did I get the “api/MyApi“?  If you go to your MVC application and navigate to the “App_Start/WebApiConfig.cs” file, you’ll see this default setup:

config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
);



So the URL contains “api” and then the controller name.  Remember the previous code for the API controller:

public class MyApiController : ApiController

Ignore the “Controller” suffix of the class name and you’ll have the rest of your path.  Put this path string in the app.config file of your retriever application and compile both applications.
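An app.config entry for the retriever could look like the following.  The key name “ApiUrl” is my own choice for this sketch, not the name my sample project necessarily uses; the retriever would read it with `ConfigurationManager.AppSettings["ApiUrl"]` (which requires a reference to System.Configuration):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- "ApiUrl" is a hypothetical key name for this example -->
    <add key="ApiUrl" value="http://www.franksdomain.com/WebApiDemo/api/MyApi" />
  </appSettings>
</configuration>
```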

Start up Fiddler and run your retriever.  The retriever should run through without any errors (unless you or I missed a step).  In Fiddler, select JSON for the top-right and bottom-right panes. You should see something like this:

Notice that the retriever sent a request for product ids 2, 3 and 4, and the API returned detailed product information for those three products (Spoon, Fork and Knife).  You might need to expand the tree views in your JSON panels to see all the information shown in the screenshot above.
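For readers following along without the screenshot, the request and response bodies would look roughly like this.  The exact field names are my assumptions — only the ids and product names come from the results described above:

```json
{ "ProductIds": [2, 3, 4] }
```

```json
[
  { "Id": 2, "Name": "Spoon" },
  { "Id": 3, "Name": "Fork" },
  { "Id": 4, "Name": "Knife" }
]
```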


Setting a Breakpoint in the API Application


Your API application doesn’t actually run until IIS receives a request for it.  So run the retriever once, then attach the debugger to the IIS worker process.  In your API application in Visual Studio, click the “Debug” menu and select “Attach to Process“.  Make sure “Show processes from all users” is checked at the bottom of the dialog box, then find the process named “w3wp.exe” and click on it.



Click the “Attach” button, then click “Attach” in the pop-up dialog.  You’ll notice that your application is now running (you can tell by the red stop button in your toolbar):


Put a break-point in your code and run the retriever program; your API project will pop up with execution stopped at your break-point.  Now you can step through your program and inspect variables (such as the request values that were received, hint, hint) just as if you had started the program with F5.



Summary

There are a lot of little things that must occur correctly for this API to work properly.  Here are a few things to look out for:

– Your firewall could block requests, so be on the lookout for that problem.  
– If you don’t have the correct URL configured in your retriever, you will not get a connection.  
– The objects you use to serialize and de-serialize JSON data must match between your retriever and API applications.  
– The IIS server must have the correct rights to the API application directory.  
– You need to set the correct .NET Framework version in your IIS application pool.
– Verify that IIS is running on your PC.
– Verify the inputs to your API by attaching to the w3wp.exe process and breaking on the first line of code.
– Verify the output from your retriever using Fiddler.

Getting the Sample Projects

You can download the two sample projects from my GitHub account:  
https://github.com/fdecaire/WebApiDemo