Legacy Code Writers

Summary

The purpose of this blog post is to describe how legacy code gets perpetuated years beyond its useful life and how to put a stop to it.  I will also make my case for why this process needs to be stopped.  This article is aimed at managers as well as the developers who continue to create legacy code.  A disclaimer up front: my intent is not to insult anybody.  My intent is to educate people, to get them out of their shell and thinking about newer technologies and why those technologies were invented.

My History

First, I’m going to give a bit of my own history as a developer so there is some context to this blog post.  

I have been a developer since 1977 or '78 (too long ago to remember the exact year I wrote my first Basic program).  I learned Basic.  Line-numbered Basic.  I joined the Navy in 1982 and was formally educated in how to repair minicomputers, specifically the UYK-20 and the SNAP-II.  In those days you troubleshot down to the circuit level (and sometimes replaced a chip).  While I was in the Navy, the Apple Macintosh was introduced and I bought one because it fit in the electronics storage cabinet in the transmitter room on the ship (which I had a key to).  I programmed with Microsoft Basic and I wanted to write a game or two.  My first game was a battleship game with graphical capabilities (and use of the mouse, etc.).  It didn't take long before the line numbers became a serious problem, and I finally gave in and decided to look at other languages.  I was very familiar with Basic syntax, so switching felt like learning a foreign language.  It was going to slow me down.

I stopped at the computer store (they really existed back in the day) and saw Mac Pascal in a box on the shelf; the back of the box had some sample code.  It looked similar to Basic, so I bought it.  I got really good at Pascal, and line numbers were a thing of the past.  In fact I used Pascal until I was almost out of college.  At that time the University of Michigan was teaching students to program in Pascal (specifically Borland Pascal).  Object-oriented programming was just starting to enter the scene, and several instructors actually taught OOP concepts such as encapsulation and polymorphism.  This was between 1988 and 1994.

The reason I used Pascal for so long was that the Macintosh built-in functions used Pascal headers.  The reason I abandoned Pascal was that the World Wide Web was invented around that time and everything Unix-ish was in C.  I liked C, and my first C programs were written in Borland C.


Fast Forward…

OK, I'm now going to fast-forward to the late '90s and early 2000s, when OOP really became mainstream and frameworks, unit testing, etc. all became available.  When the web became something that businesses used, there were only a handful of tools available: C, HTML (with JavaScript), Java, PHP and Microsoft's product called ASP (plus a handful of oddballs that no longer exist).  If you wanted to develop a dynamic, interactive website application and you were running Microsoft Windows Server products, you had to perform the deed in ASP.  I avoided this path by using PHP on a Linux machine, but I got lucky: I was in charge of the department and I made the final decision on what technology would be used and how the product would be developed.  Don't get me wrong, there is a lot of ASP code still in use that is stable and operational.  Unfortunately, ASP is one of the most difficult kinds of legacy code to convert into something modern.

What’s my Beef with Legacy Programmers?

If your development knowledge ended with ASP and/or VB, without learning and using a unit testing framework, the MVC framework (or equivalent), ORMs, Test Driven Development, or SOLID principles, then you are probably oblivious to how much easier it is to program in a modern environment.  This situation happens because programmers focus on solving a problem with the tools they have in their toolbox.  If a programmer doesn't spend the time to learn new tools, then they will always apply the same set of tools to the problem.  These are the programmers that I am calling Legacy Programmers.

Legacy Programmers, who am I talking about?

First, let's describe the difference between self-taught and college-educated developers.  I get a lot of angry responses about developers who have a degree and can't program.  There are a lot of them.  This does not mean that the degree is the problem, and it also should not lead one to believe that a developer without a degree is guaranteed to be better than a degree-carrying developer.  Here's a Venn diagram to demonstrate the pool of developers available:


The developers that we seek to create successful software are in the intersection of the degree/non-degree programmers.  This diagram is not intended to indicate that there are more or fewer of either type of developer in the intersection labeled solid developers.  In my experience, there are more degree-carrying developers in this range, because most solid developers are wise enough to realize that they need the piece of paper that states they have a minimum level of competence.  It's unfortunate that colleges are churning out so many really bad developers, but not obtaining the degree usually indicates that the individual is not motivated to expand their knowledge (there are exceptions).

OK, now for a better Venn diagram of the world of developers (non-Unix developers):


In the world of Microsoft-language developers there are primarily VB and C# developers.  Some of these developers only know VB (and VBScript), as indicated by the large blue area.  I believe these individuals outnumber the total C# programmers, judging by the amount of legacy code I've encountered over the years, but I could be wrong about this assumption.  The C# programmers are in red, and the number of individuals who know C# but not VB is small.  That's because C# programmers don't typically come from an environment where C# is their first language.  In the VB circle, people who learned VB and not C# are normally self-taught (colleges don't typically teach VB).  Most of the developers who know both VB and C# came from the C# side and learned VB, or, like me, were self-taught before they obtained a degree and ended up with knowledge of both languages.

The legacy programmers I’m talking about in this blog post fall into the blue area and do not know C#.


Where am I Going With This?

OK, let's cut to the chase.  In my review of legacy code involving VB.Net and VBScript (AKA Classic ASP), I have discovered that the developers who built the code did not understand OOP patterns, SOLID principles, Test Driven Development, MVC, etc.  Most of the code in the legacy category fits the type of code I used to write in the early '90s, before I discovered how to modularize software using OOP patterns.  I forced myself to learn the proper way to break a program into objects.  I forced myself to develop software using TDD methods.  I forced myself to learn MVC (and I regret not learning it when it first came out).  I did this because these techniques solved a lot of development issues.  These techniques help to contain bugs, enhance debugging capabilities, reduce transient errors and make it easier to add enhancements without breaking existing features (using unit tests to perform regression testing).  If you have no idea what I'm talking about, or maybe you've heard the terms but have never actually used these techniques in your daily programming tasks, you're in trouble.  Your career is coming to an end unless you learn now.

Let's talk about some of these techniques and why they are so important.  First, you need to understand Object Oriented Programming.  The basics of this pattern are that an object is built around the data you are working on (I'm not talking about database data; I'm talking about a small atomic data item, like an address, personnel information or maybe a checking account).  The data is contained inside the object, and methods are built to act on this data.  The object itself knows all about the data being acted on, and external objects that use this object do not need to understand the nuances of the data (like how to dispose of allocated resources or how to keep a list properly ordered).  This allows the developer who creates the object to hide details, debug the methods that act on the data, and not have to worry about another object corrupting the data or using it incorrectly.  It also makes the software modular.
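To make this concrete, here is a minimal sketch of an object built around one small atomic data item, a checking account.  The class and member names are my own illustrative choices, not from any particular project:

```csharp
using System;

public class CheckingAccount
{
    // The data is hidden inside the object; no outside code can corrupt it.
    private decimal _balance;

    public CheckingAccount(decimal openingBalance)
    {
        if (openingBalance < 0)
        {
            throw new ArgumentException("Opening balance cannot be negative.");
        }
        _balance = openingBalance;
    }

    public decimal Balance
    {
        get { return _balance; }
    }

    // All the rules about the data live in one place, with the data.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
        {
            throw new ArgumentException("Deposit must be positive.");
        }
        _balance += amount;
    }

    public bool Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > _balance)
        {
            return false; // the overdraft rule is enforced here, not by callers
        }
        _balance -= amount;
        return true;
    }
}
```

Callers simply call Deposit() and Withdraw(); they never need to know, and never get the chance to break, the rules that protect the balance.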

On a grander scale there is a framework called MVC (Model View Controller).  This is not the only framework available, but it is the most common web development framework in Microsoft Visual Studio.  What this framework does is give a clean separation between the C# (or VB) code and the web view code (which is typically written in HTML, jQuery and possibly Razor).  ASP mixes all the business logic in with the view code, and there are no controllers.  In MVC, the controllers wire up the business logic with the view code.  Typically the controller communicates via an AJAX call, which gives the web-based interface a smooth look.  The primary reason for breaking code up in this fashion is to be able to put the business logic in a test harness and wrap unit tests around each feature that your program performs.
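As a rough sketch of that wiring, a controller in an ASP.NET MVC project might look like the following.  PersonService, Person and the action name are illustrative placeholders, not from any real project:

```csharp
using System.Web.Mvc;

// Illustrative model and business-logic classes; in a real project these
// would live in the model layer and talk to a database.
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class PersonService
{
    public Person FindById(int id)
    {
        return new Person { Id = id, Name = "Sample Person" };
    }
}

// The controller only wires the view's AJAX call to the business logic
// and returns JSON; no business rules live here.
public class PersonController : Controller
{
    private readonly PersonService _service = new PersonService();

    public JsonResult GetPerson(int id)
    {
        Person person = _service.FindById(id);
        return Json(person, JsonRequestBehavior.AllowGet);
    }
}
```

The view's jQuery code would then call something like $.getJSON('/Person/GetPerson', { id: 5 }, ...) and render the result, keeping the HTML free of business logic.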

Unit testing is very important.  It takes a lot of practice to perform Test Driven Development (TDD) and it’s easier to develop your code first and then create unit tests, until you learn the nuances of unit testing, object mocking and dependency injection.  Once you have learned about mocking and dependency injection, you’ll realize that it is more efficient to create the unit tests first, then write your code to pass the test.  After your code is complete, each feature should be matched up with a set of unit tests so that any future changes can be made with the confidence that you (or any other developer) will not break previously defined features.  Major refactoring can be done in code designed this way because any major change that breaks the code will show up in the failure of one or more unit tests.
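Here is a small sketch of what such a unit test looks like with MSTest.  The BankAccount class and the test name are illustrative; in a TDD workflow the test would be written first and BankAccount made to pass it:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A minimal class under test (written to make the test below pass):
public class BankAccount
{
    public decimal Balance { get; private set; }

    public BankAccount(decimal openingBalance)
    {
        Balance = openingBalance;
    }

    public bool Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > Balance)
        {
            return false;
        }
        Balance -= amount;
        return true;
    }
}

[TestClass]
public class BankAccountTests
{
    [TestMethod]
    public void Withdraw_MoreThanBalance_IsRejected()
    {
        var account = new BankAccount(100m);

        bool result = account.Withdraw(150m);

        Assert.IsFalse(result);                  // the withdrawal was refused
        Assert.AreEqual(100m, account.Balance);  // and the balance is untouched
    }
}
```

A future developer who breaks the overdraft rule during a refactor will see this test fail immediately, which is exactly the regression safety net described above.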

ORMs (Object Relational Mappers) are becoming the standard technique for querying data from a database.  An ORM with LINQ is a cleaner way to access a database than ADO or a DataSet.  One aspect of an ORM that makes it powerful is that a query written in LINQ can use the context-sensitive editor functions of Visual Studio to avoid syntax errors.  The result set is contained in an object with properties, which produces code that is easier to read.
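As a sketch of what that looks like, the LINQ query below runs against an in-memory list, but the same query shape works against an ORM context such as an Entity Framework DbSet.  The Customer class and its property names are made up for the example:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public string LastName { get; set; }
    public decimal BalanceDue { get; set; }
}

public static class CustomerQueries
{
    public static List<Customer> Overdue(IEnumerable<Customer> customers)
    {
        // Property names are checked by the compiler and IntelliSense,
        // unlike column names buried inside an ADO SQL string.
        return (from c in customers
                where c.BalanceDue > 0m
                orderby c.LastName
                select c).ToList();
    }
}
```

The result is a strongly typed list of Customer objects, so the consuming code reads naturally instead of pulling values out of a DataSet by column name.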

APIs (Application Programming Interfaces) and SOA (Service Oriented Architecture) are the new techniques.  These are not just buzzwords that sound cool.  They were invented to solve an issue that legacy code has: you are stuck with the language you developed your entire application around.  By using Web APIs to separate your view from your business logic, you can reuse your business logic for multiple interfaces: mobile applications, custom mini-applications, mash-ups with third-party software, etc.  The MVC framework is already set up to organize your software in this fashion.  To make the complete separation, you can create two MVC projects, one containing the view components and one containing the model and controller logic.  Then your HTML and jQuery code can access your controllers in the same way they would if they were in the same project (using Web API), but different developers can work on different parts of the project.  A company can assign developers to define and develop the APIs that provide specific data, while developers and graphic artists develop the view logic independently.  Once the software is written, other views can be designed to connect to the APIs that have been developed, such as reports or mobile.  Other APIs can even be written in other languages, such as Python or Ruby running on a Unix (or Linux) machine.  The view can still communicate with the API because the common language will be either JSON or XML.
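A rough sketch of such an API endpoint in ASP.NET Web API might look like this.  The route, controller and CustomerDto names are illustrative, not from a real project:

```csharp
using System.Web.Http;

// The data contract returned to any client (HTML/jQuery, mobile, reports):
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// GET /api/customer/5 returns JSON (or XML, via content negotiation),
// so the client can be written in any language.
public class CustomerController : ApiController
{
    public CustomerDto Get(int id)
    {
        // A real implementation would call into the model/business layer.
        return new CustomerDto { Id = id, Name = "Sample Customer" };
    }
}
```

A jQuery view, a mobile app, or a Python script could each consume this same endpoint, which is the reuse being described above.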

Another aspect of legacy code that is making enhancements difficult is the use of tightly coupled code.  There is a principle called SOLID.  This is not the only principle around, but it is a very good one.  By learning and applying SOLID to any software development project, you can avoid the problems of tightly coupled code, procedures or methods that perform more than one task, untestable code, etc.
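To illustrate the decoupling, here is a small sketch of the dependency inversion part of SOLID (all names here are illustrative): the report class depends on an interface instead of a concrete database class, so it is loosely coupled and testable.

```csharp
// The abstraction the business logic depends on:
public interface IOrderStore
{
    int CountOrders(string customerName);
}

// Loosely coupled: ReportBuilder never news up a database class itself.
public class ReportBuilder
{
    private readonly IOrderStore _store;

    public ReportBuilder(IOrderStore store)
    {
        _store = store; // the dependency is injected from outside
    }

    public string Summary(string customerName)
    {
        return customerName + ": " + _store.CountOrders(customerName) + " orders";
    }
}

// In a unit test, a fake implementation stands in for the database:
public class FakeOrderStore : IOrderStore
{
    public int CountOrders(string customerName)
    {
        return 3;
    }
}
```

Swapping the real database store for FakeOrderStore lets the summary logic be tested with no database at all, which is exactly what tightly coupled code prevents.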

The last issue is the use of VB itself.  I have seen debates of VB vs. C#, claiming VB has all the features of C#, etc.  Unfortunately, VB is not Microsoft's flagship language; C# is.  This is made obvious by the fact that many Visual Studio features long available to C# are only now coming to the VB world in Visual Studio 2015.  The other issue with VB is that it is really a legacy language, with baggage left over from the 1980's.  VB was adapted to be object oriented, not designed as an object-oriented language.  C#, on the other hand, was an OOP language from the start.  If you're searching for code on the internet, there is a lot more MVC and Web API code in C# than in VB.  This trend is going to continue, and VB will become the "Fortran" of the developer world.  Don't say I didn't warn ya!


Conclusion

If you are developing software and are not familiar with the techniques I’ve described so far, you need to get educated fast.  I have kept up with the technology because I’m a full-blooded nerd and I love to solve development issues.  I evolved my knowledge because I was frustrated with producing code that contained a lot of bugs and was difficult to enhance later on.  I learned each of these techniques over time and have applied them with a lot of success.  If I learn a new technique and it doesn’t solve my issue, I will abandon it quickly.  However, I have had a lot of success with the techniques that I’ve described in this blog post.  You don’t need to take on all of these concepts at once, but start with C# and OOP.  Then work your way up to unit testing, TDD and then SOLID.

 

Writing a Windows Service with a Cancellation Feature

Summary

I've blogged about how to write a Windows service before (click here).  In my previous blog post I did not deep-dive into how to handle a cancel request issued by someone clicking the stop button.  In this article, I'm going to expand on the previous article and show how to set up a cancellation token and how to pass this object to another object, which can use the token to determine when it should quit and perform a clean-up.


Using Temp Files

If your Windows service uses temp files, it's best to design your code to handle the creation and deletion of temp files in one object using the IDisposable pattern.  For a detailed discussion of how to implement the IDisposable pattern, you can click here (Implementing IDisposable and the Dispose Pattern Properly).

If you download the sample code, you can find an object named “CreateTempFile”.  This object is an example of an object that creates a temp file.  In this example, that’s all the object does, and you can use it in that fashion, or you can combine the temp file operation with your write stream operation and make sure you dispose of both objects in the Dispose method.  Here’s the CreateTempFile object:

using System;
using System.IO;

public class CreateTempFile : IDisposable
{
    public string TempFile = "";

    public CreateTempFile()
    {
        TempFile = Path.GetTempFileName();
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // the finalizer no longer needs to run
    }

    ~CreateTempFile()
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            if (TempFile != "")
            {
                File.Delete(TempFile);
            }
        }
    }
}

Now you can use this object inside a “using” statement that will guarantee that the temp file will be cleaned up if the program is exited.  Here’s an example of the usage.

using (var dataObject = new CreateTempFile())
{
    // code that uses the temp file here…
}
 


Why am I going on and on about IDisposable?  One of the problems with writing code is that you have to have a reasonable expectation that your code will be modified some time in the future.  More than likely, it will be modified by some other software developer.  As the code inside the using statement in the toy example above begins to grow, a future developer might not realize that a temp file was created and needs to be disposed of.  The developer working on your code might be adding logic that requires a "return" statement, and they might not delete the temp file.  By using the IDisposable pattern you guarantee that future developers don't need to worry about deleting the temp file.  No matter how they exit from that using statement, a dispose will be issued and the temp file will be deleted.


Cancelling a Thread

OK, now it’s time to talk about the use of a cancellation token.  When you execute your code inside a thread, there is the outer thread that is still running and this is the code that will issue a cancel request.  Once a cancel request is issued, it is your thread that is responsible for detecting a cancellation request and performing an orderly exit.  Imagine it like a building full of people working 9 to 5.  Normally, they will enter the building at 9am and they work all day (I’ll assume they all eat in the cafeteria), and then they go home at 5pm.  Now the fire alarm goes off.  When the fire alarm goes off, everybody drops what they are doing and walks to the nearest exit and leaves the building.  The fire alarm is the cancellation token being passed to the thread (workers).

Let’s pretend we don’t know how to handle a cancellation token.  Let’s assume we’re using the code from my previous Windows Service example and we implemented the token using that method.  The method I used in the earlier example works for the example and it will work for any program as long as the thread does not call another object that performs a long operation.  Here’s some sample code:

public class StartingClass
{
    private volatile bool _stopAgent;
    private static StartingClass _instance;

    public static StartingClass Instance
    {
        get 
        { 
            return _instance ?? (_instance = new StartingClass()); 
        }
    }

    private StartingClass()
    {
        _stopAgent = false;
    }

    public bool Stopped { get; private set; }

    public void Start()
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            while (!_stopAgent)
            {
                // do something here…

                if (!_stopAgent)
                {
                    Thread.Sleep(5 * 1000); // wait 5 seconds
                }
            }

            Stopped = true;
        });
    }

    public void Stop()
    {
        _stopAgent = true;
    }
}

In the sample code above, there is a _stopAgent variable that is initialized to false when the object is created.  The main thread creates this object, then it will initiate the thread by calling the “Start()” method.  If the main thread wants to stop the inner thread, then it will use the “Stop()” method, which sets the _stopAgent variable to true and is detected by the long running inner thread (in the while statement).  The while statement will drop out and set the “Stopped” property to true.  The “Stopped” property’s sole purpose is to be read by the master thread to determine if the inner thread has stopped what it is doing.  Then the master thread can exit the program.  Here’s the master thread of the service object:

public partial class Service1 : ServiceBase
{
    public Service1()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        StartingClass.Instance.Start();
    }

    protected override void OnStop()
    {
        StartingClass.Instance.Stop();

        while (!StartingClass.Instance.Stopped)
        {
            // wait for service to stop
        }
    }
}

I added the wait-for-stop code to the "OnStop()" method.  This allows the inner thread time to complete its operation and exit cleanly.

There is a weakness with this logic and that weakness reveals itself if you create an object inside your thread and call a long running method inside that object.  The object cannot access the _stopAgent flag and you can’t just pass the flag to the object (because it will only contain the initial false value when it was passed).  You could pass a reference to this value and then it will change when a cancel request is made, but only within the method it is passed to.  There is an easier way…


Using a Cancellation Token

If you download the sample code that goes along with this blog post (see link at bottom), you’ll notice that I have refactored the windows service to use an object called CancellationTokenSource.  The new StartingClass object looks like this:

public class StartingClass
{
    private static StartingClass _instance;
    public CancellationTokenSource cts = new CancellationTokenSource();

    public static StartingClass Instance
    {
        get
        {
            return _instance ?? (_instance = new StartingClass());
        }
    }

    public bool Stopped { get; private set; }

    public void Start()
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(action =>
        {
            CancellationToken token = (CancellationToken)action;
            while (!token.IsCancellationRequested)
            {
                var dataProcessingObject = new DataProcessingObject(action);
                dataProcessingObject.PerformProcess();

                if (!token.IsCancellationRequested)
                {
                    Thread.Sleep(5 * 1000); // wait 5 seconds
                }
            }

            Stopped = true;

        }), cts.Token);
    }

    public void Stop()
    {
        cts.Cancel();
    }
}

As you can see in the above code, I have replaced the boolean _stopAgent variable with the new "cts" variable.  The "action" object is passed to the DataProcessingObject constructor.  The DataProcessingObject class looks like this:

public class DataProcessingObject
{
    private CancellationToken token;

    public DataProcessingObject(object action)
    {
        token = (CancellationToken)action;
    }

    public void PerformProcess()
    {
        // do something here
        using (var dataObject = new CreateTempFile())
        {
            using (StreamWriter writer = new StreamWriter(dataObject.TempFile, true))
            {
                for (int i = 0; i < 5000; i++)
                {
                    writer.WriteLine("Some test data");

                    // check for cancel signal
                    if (token.IsCancellationRequested)
                    {
                        return;
                    }

                    // this is not necessary (only for this demonstration
                    // to force this process to take a long time).

                    Thread.Sleep(1000);
                }
            }
        }
    }
}

The cancellation token is passed into the DataProcessingObject constructor through the "action" object and cast back to a CancellationToken.  Inside DataProcessingObject, the "IsCancellationRequested" boolean flag must be watched to determine if a long-running process must end and exit the object.  This provides a clean exit, because the CreateTempFile object is disposed before the program shuts down, deleting any temp files created.

Download the Code

You can go to my github account and download the entire working example by clicking here.



 
 

 

Generating a Custom Session Token

Summary

In this post I’m going to demonstrate how to generate a safe token for a custom session module.

The Session Token

When a user logs into a website, there needs to be a way to keep track of who the person is after the authentication occurs.  One method is to give the user a token after they log in and carry it around in a hidden field.  Another method, used by Microsoft, is to embed the token in the URL itself.  The token can also be placed in the Authorization field of the HTTP header.  Last, and most commonly, the token can be stored in a local cookie that is non-persistent (in other words, it goes away when the browser closes, unless you're using Chrome).  For this blog post it doesn't really matter where the token is stored and used; I'm going to discuss the security of the token itself.

The first requirement is that you need to issue a unique token every time someone logs in and maintain a relationship between the user and the token (most likely in your database).  There was a security breach a few years ago involving a secure website that used the account number as its token (I can't remember the company off the top of my head).  They also used it in the URL, which means that anybody looking over your shoulder could get your account number.  In addition to this problem, I'm betting that their account numbers were sequentially issued, so a user could potentially log in, modify the number in the URL, and navigate around someone else's account.  Probably one of the worst implementations I've ever encountered.

So we need a unique key, and the first thing that comes to mind is the GUID.  Sounds good.  In C# we can get a new GUID quickly.  We can strip out the dashes if we like and just treat it as a string.  One limitation of the GUID is that it is not unique if you take a subset of it; in other words, you'll need the whole thing.  There is, however, another problem: the GUID sequence can be predictable.  The GUID is also hex, not really a string of arbitrary characters, so the number of permutations is much smaller than it appears.
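For reference, here is the quick GUID approach being discussed (the "N" format strips the dashes; the wrapper class is just for illustration):

```csharp
using System;

public static class GuidToken
{
    public static string Create()
    {
        // "N" formats the GUID as 32 hex characters with no dashes.
        return Guid.NewGuid().ToString("N");
    }
}
```

Each character is one of only 16 hex digits, which is why the permutation count is smaller than the 32-character length suggests.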

While I was reading about GUIDs, I came across this article at stack overflow:

Generating random string using RNGCryptoServiceProvider 

This particular algorithm uses RNGCryptoServiceProvider() to get cryptographically strong random numbers.  The algorithm cited is supposed to generate 40-character random strings that are very distant from each other.  So the test I want to perform uses the Levenshtein distance algorithm between each pair of generated strings.

When I run 50,000 random strings, comparing each string with its previous string, I get a minimum difference of 34 letters and an average of 39.  That means at least 34 of the 40 letters were different in the worst case, and on average 39 letters were different.
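Here is a minimal sketch of the kind of generator under test, along the lines of the Stack Overflow approach.  The alphabet and the modulo mapping are my simplifications; the modulo introduces a slight bias that a production version should remove:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class TokenGenerator
{
    private const string Alphabet =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

    public static string Generate(int length)
    {
        var bytes = new byte[length];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(bytes); // cryptographically strong random bytes
        }

        var sb = new StringBuilder(length);
        foreach (byte b in bytes)
        {
            sb.Append(Alphabet[b % Alphabet.Length]); // map each byte to a character
        }
        return sb.ToString();
    }
}
```

Calling TokenGenerator.Generate(40) yields a 40-character string drawn from a 62-character alphabet, which is what the Levenshtein comparison above measures.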

Where to Get the Code

You can download the sample code from my GitHub account here and modify the parameters.
 

 

Web Sessions Stored in SQL

Summary

If you have ever worked with sessions in a web application that uses multiple languages, you'll know that you need to do some special configuration to get the session data shared across all of them.  Another issue involves cycling or resetting an IIS server while using the default session settings: basically, the sessions for everyone using that IIS server get dumped.  One way around this problem is to use SQL Server to store the session data.  Unfortunately, Classic ASP cannot save its sessions to a SQL server.  In this post I'm going to describe how the raw session data is stored and how you can serialize and deserialize this data easily.


The Session Data

In an earlier post of mine, I talked about storing session data in MS SQL Server.  The data is stored in a tempdb table called ASPStateTempSessions.  The actual session data is stored in either the SessionItemShort or the SessionItemLong field, depending on whether it is less or greater than 7k in size.  If you set up a C# program to use SQL Server sessions, you can look at this data and you'll see something like this:

0x14000000010001000000FFFFFFFF047661723112000000011054686973206973207468652064617461FF


This data represents a session with one item in the list:


Session["var1"] = "This is the data";

There is an object available in .Net that behaves a lot like the Dictionary object.  That object is called SessionStateItemCollection, and it's part of System.Web.SessionState.  This object contains a Serialize and a Deserialize method to convert the data to and from binary.  Here's the code that can be used to serialize the same session data as above (assuming you're using a console application):


SessionStateItemCollection Session2 = new SessionStateItemCollection();
Session2["var1"] = "This is the data";
Byte[] state = null;
using (MemoryStream ms = new MemoryStream())
{
    using (var bw = new BinaryWriter(ms))
    {
        Session2.Serialize(bw);

        ms.Flush();
        state = ms.ToArray();
    }
}

Console.WriteLine(BitConverter.ToString(state).Replace("-", ""));


Now look at the output that this produces:

01000000FFFFFFFF047661723112000000011054686973206973207468652064617461

Look similar?  That’s because it’s the same data, minus the first part and the last part.  You can ignore the “0x”.  That’s SQL’s prefix to signify hex data.  The prefix part:

140000000100


Represents several variables such as the session timeout (hex 14 is equal to 20 minutes).  There is also an extra hex “FF” at the end of the whole string.  To make your output look identical you can change your code to look something like this:

SessionStateItemCollection Session2 = new SessionStateItemCollection();
Session2["var1"] = "This is the data";
Byte[] state = null;
using (MemoryStream ms = new MemoryStream())
{
    using (var bw = new BinaryWriter(ms))
    {
        bw.Write((int)20); // session timeout (minutes)
        bw.Write((bool)(Session2.Count > 0)); // state
        bw.Write((bool)false); // (ignored)

        Session2.Serialize(bw);

        bw.Write((byte)0xff);

        ms.Flush();
        state = ms.ToArray();
    }
}

Console.WriteLine(BitConverter.ToString(state).Replace("-", ""));



Now run your program and look at the output.  It's identical.

When you deserialize, you'll have to remember to read past the first int and the two booleans.  The last byte can be ignored; the Deserialize method ignores it:

using (MemoryStream ms = new MemoryStream(state))
{
    using (BinaryReader br = new BinaryReader(ms))
    {
        br.ReadInt32();
        bool sessionDataExists = br.ReadBoolean();
        br.ReadBoolean();

        if (sessionDataExists)
        {
            Session2 = SessionStateItemCollection.Deserialize(br);
        }
        else
        {
            Session2 = new SessionStateItemCollection();
        }
    }
}

Take note that the Deserialize() method is a static method, unlike the Serialize() method.

From this information and the information in the previous post on creating a COM module, you should have enough pieces to put together a session object that can be used in Classic ASP.  Then you can store your session variables in the same exact storage place that C# and VB.Net use.  

 

Creating a COM object for Classic ASP (Part 2)

Summary

In this blog post I’m going to expand on my last post about COM objects and design a COM “wrapper” for a dictionary.  This will demonstrate the use of properties, passing parameters and the indexer.

The MyDictionary Class

The code that I’m going to use in my COM module will be something like this:

public interface IMyDictionary
{
    object this[string key] { get; set; }
}

public class MyDictionary : IMyDictionary
{
    private Dictionary<string, object> Vars = new Dictionary<string, object>();

    public object this[string key]
    {
        get
        {
            return Vars[key.ToLower()];
        }
        set
        {
            Vars[key.ToLower()] = value;
        }
    }
}

As I mentioned in the summary, this is nothing more than a wrapper for the basic functionality of the Dictionary object using a string indexer and saving and returning an object data type.  Technically, there is already a dictionary COM object available for Classic ASP, but I wanted to demonstrate how this functions.

Next, follow the steps of the last blog post (or download the final version at the bottom of this blog post) and use it in ASP.

Here’s a sample ASP program:

<%
Dim MyComObject
Dim MyText

Set MyComObject = Server.CreateObject("ComDictionary.MyDictionary")

MyComObject("testvar") = "test data"

MyText = MyComObject("testvar")

%>
<html>
<head></head>
<body>
    <%=MyText %>
</body>
</html>

When you execute your program, you’ll see “test data” appear in the browser.


Adding Methods to Your Object

By now you’re probably getting the idea that setting up a COM object is the hardest part.  After that, it’s just C# code to add to your object and ASP code to call the object.  So let’s add a clear function to clear all the data in our dictionary:

public void Clear()
{
    Vars.Clear();
}

And add the interface definition to your interface section:

void Clear();

Restart your IIS and rebuild your COM solution.  Now change your ASP program to look like this:

<%
Dim MyComObject
Dim MyText

Set MyComObject = Server.CreateObject("ComDictionary.MyDictionary")

MyComObject("testvar") = "test data"

MyComObject.Clear

MyText = MyComObject("testvar")

%>
<html>
<head></head>
<body>
    <%=MyText %>
</body>
</html>

If you run it, you'll notice that it blows up.  Some of you probably already know what the issue is: the item is no longer in the dictionary, and it's crashing in the "get" property.  So let's fix that.  Change your "get" property to look like this:

get
{
    if (Vars.ContainsKey(key.ToLower()))
    {
        return Vars[key.ToLower()];
    }
    else
    {
        return null;
    }
}


Now reset your IIS and rebuild your COM solution.  You'll notice that there is no output.  That's because you cleared the dictionary, so the subsequent read of the variable produced nothing.


Where to get the Code

You can visit my GitHub account and download this sample by clicking here.  I have included the home.asp file in the root directory.