The Trouble with Legacy Code

It’s been a long time since I wrote about legacy code, so I’m going to do a brain-dump of my experience and thoughts on the subject.

Defining Legacy Code

First, I’m going to define what I mean by legacy code.  Many programmers who have entered the industry in the past five years or so view legacy code as anything written more than a year ago, or code written in the previous version of Visual Studio or the previous minor version of .Net.  When I talk about legacy code, I’m talking about code that is so old that many systems cannot support it anymore.  An example is Classic ASP.  Sometimes I’m talking about VB.Net.  Technically, VB.Net is not a legacy language, but in practice it sometimes is; in the context of VB.Net I’m really talking about the technique used to write the code.  My experience is that Basic is a language picked up by new programmers who have no formal education in the subject or are just learning to program for the first time.  I know how difficult it is to wean yourself off your first language.  I was that person once.  Code written by such programmers usually amounts to tightly coupled spaghetti code, with all the accessories: no unit tests, ill-defined methods treated like function calls, difficult-to-break dependencies, global variables, no documentation, poorly named variables and methods, etc.  That’s what I call legacy code.
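
To make that concrete, here is a small, purely illustrative C# sketch of the shape I mean (all names and the connection string are invented for the example):

```csharp
// Illustrative only: invented names showing the shape described above --
// global state, one do-everything method, and no seams for a test to grab.
public static class GlobalState
{
    public static string ConnStr = "Server=prod;Database=Main;";  // global variable
    public static int LastId;                                     // mutated from anywhere
}

public class OrderPage
{
    // A poorly named method that parses input, touches global state, and
    // formats HTML all in one place; nothing here can be tested in isolation.
    public string DoIt(string id)
    {
        GlobalState.LastId = int.Parse(id);
        // ... imagine inline SQL against GlobalState.ConnStr and business
        // rules mixed directly with the markup below ...
        return "<td>Order " + GlobalState.LastId + "</td>";
    }
}
```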

The Business Dilemma

In the business world, the language used, and even the technique used, can make no difference.  A very successful business can be built around very old, obsolete, and difficult-to-work-with code.  This can work in situations where the code is rarely changed, the code is hidden behind a website, or the code is small enough to be manageable.  Finally, if the business can sustain the high cost of a lot of developers, QA, and other support staff, bad code can work.  It’s difficult to make a business case for the conversion of legacy code.

In most companies, software is grown.  This is where the legacy problem gets exponentially more costly over time, and most of the cost is hidden.  It shows up as an increased number of bugs that occur as more enhancements are released (I’m talking about bugs in existing code that was disturbed by the new enhancement).  It shows up as an increase in the amount of time it takes to develop an enhancement.  It also shows up as an increase in the amount of time it takes to fix a bug.

Regression testing becomes a huge problem.  The lack of unit testing means the code must be manually tested.  Automated testing with a product like Selenium can automate some of the manual testing, but this technique is very brittle: the smallest interface change can break the tests, and the tests are usually too slow to be executed by each developer or to be used with continuous integration.
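
To illustrate that brittleness, here is a minimal Selenium sketch in C# (the URL and element locations are hypothetical); the absolute XPath encodes the exact table layout, so any markup change breaks the test:

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class LoginRegressionTest
{
    static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("http://legacy.example.com/login.asp");

            // Brittle: the absolute XPath depends on the exact nesting of
            // tables; wrapping the login box in one more element breaks it.
            driver.FindElement(By.XPath("/html/body/table[2]/tbody/tr[1]/td[2]/input"))
                  .SendKeys("testuser");
            driver.FindElement(By.XPath("/html/body/table[2]/tbody/tr[2]/td[2]/input"))
                  .SendKeys("secret");
            driver.FindElement(By.XPath("//input[@value='Login']")).Click();

            // Each end-to-end test drives a real browser, which is why suites
            // like this are too slow for per-developer runs or CI gating.
        }
        finally
        {
            driver.Quit();
        }
    }
}
```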

What to do…

Add Unit Tests?

At first, this seems like a feasible task.  However, the man-hours involved are quite high.  First, there’s the problem of languages like Classic ASP, where unit tests are just not possible.  For code written in VB.Net, dependencies must be broken first, and the difficulty is that the refactoring can be complicated and cause a lot of bugs.  It’s nearly impossible to make a business case to invest thousands of developer hours into the company product to produce no noticeable outcome for the customer.  Even worse is if the outcome is an increase in bugs and down-time: the opposite of what is intended.
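
As for what the dependency-breaking step looks like, here is a hedged sketch (shown in C# for brevity, though the same shape applies in VB.Net; all names are invented): the data access is extracted behind an interface so the calculation can finally be unit tested with a fake.

```csharp
using System.Collections.Generic;
using System.Linq;

// Seam extracted from the legacy data-access code.
public interface IOrderReader
{
    IEnumerable<decimal> LinePrices(int orderId);
}

public class InvoiceCalculator
{
    private readonly IOrderReader _orders;

    // The dependency is injected instead of new-ed up inside the method,
    // so a test can substitute anything that implements IOrderReader.
    public InvoiceCalculator(IOrderReader orders) => _orders = orders;

    public decimal Total(int orderId) => _orders.LinePrices(orderId).Sum();
}

// In-memory fake used by a unit test; no database required.
public class FakeOrderReader : IOrderReader
{
    public IEnumerable<decimal> LinePrices(int orderId) => new[] { 10.0m, 2.5m };
}

// e.g., new InvoiceCalculator(new FakeOrderReader()).Total(1) returns 12.5m
```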


Convert Code?

Converting code is also very hazardous.  You could theoretically hold all enhancements for a year and throw hundreds of programmers at the problem of rewriting your system in the latest technology, with the intent to deliver the exact user experience currently in place.  In other words, the underlying technology would change, but the product would look and feel the same.  Business case?  None.

In the case of Classic ASP, there are a couple of business cases that can be made for conversion.  However, the conversion must be performed with minimal labor to keep costs down, and the outcome must be for the purpose of normalizing your system to be all .Net.  This makes sense if your system consists of a mix of languages.  The downside of such a conversion is the amount of regression testing that would be required.  Depending on the volume of code your system contains, you could break the work into small sections and attack it over time.

One other problem with conversion is the issue of certification.  If you are maintaining medical or government software that requires certification when major changes take place, then your software will need to be re-certified after conversion.  This can be an expensive process.


Replace when Possible?

This is one of the preferred methods of attacking legacy code: when a new feature is introduced, replace the legacy code that is touched by the new feature with new code.  This has several benefits: the customer expects bugs in new features, and the business expects to invest money in a new feature.  The downside of using only this technique is that, eventually, your legacy code volume will plateau, because some web pages are little used or are not of interest for upgrading (usually it’s the configuration sections that suffer from this).
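
Mechanically, the replace-as-you-touch approach often looks something like this hedged ASP.NET Core sketch (the paths and legacy server address are hypothetical): pages that have been rewritten are served by new code, and everything else falls through to the legacy site.

```csharp
using System;
using System.Net.Http;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Pages that have already been rewritten are handled by new code.
app.MapGet("/reports/summary", () => Results.Text("New implementation", "text/html"));

// Everything not yet replaced falls through to the legacy Classic ASP site.
// (Only GET pass-through is sketched; real code must also forward POSTs,
// headers, and cookies.)
var legacy = new HttpClient { BaseAddress = new Uri("http://legacy.example.com/") };
app.MapFallback(async context =>
{
    var target = context.Request.Path.Value + context.Request.QueryString.Value;
    var upstream = await legacy.GetAsync(target);
    context.Response.StatusCode = (int)upstream.StatusCode;
    context.Response.ContentType =
        upstream.Content.Headers.ContentType?.ToString() ?? "text/html";
    await upstream.Content.CopyToAsync(context.Response.Body);
});

app.Run();
```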

A downside to this technique is the fact that each enhancement may bring new technologies to the product, so the number of technologies in use grows over time.  This can be a serious liability if the number of people maintaining the system is small and one or more of them decide to move on to another company.  Now you have to fill the position with a person who knows a dozen or more odd technologies, or the person hired will need a lot of time to get up to speed.


The Front-End Dilemma

Another issue with legacy code that is often overlooked is the interface itself.  Over time, interfaces change in style and in usability.  Many systems that are grown end up with an inconsistent interface: some pages are old-school HTML with JavaScript, others use Bootstrap and AngularJS, and many versions of jQuery are sprinkled around the website.  Bundling is bolted on, if it exists at all.  If your company hires a designer to make things look consistent, there is still the problem of re-coding the front-end code.  In Classic ASP, the HTML is always embedded in the same source file as the JavaScript and VBScript.  That makes front-end conversion a level-10 nightmare!  .Net web pages are no picnic either.  In my experience, VB.Net web pages are normally written with a lot of VB code mixed into the markup instead of the code-behind.  There are also many situations where the code-behind emits HTML and JavaScript, to allow logic to decide which code to send to the customer’s browser.
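
That last pattern looks something like the following illustrative Web Forms code-behind (in C#; the role name and script are invented), and it shows why a visual redesign cannot be done in the markup alone:

```csharp
using System;
using System.Web.UI;

public partial class OrderEditPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!User.IsInRole("Admin"))
        {
            // HTML emitted from the code-behind, invisible to a designer
            // working only in the .aspx markup.
            Response.Write("<div class=\"warning\">Read-only view</div>");

            // JavaScript emitted from the code-behind as well.
            ClientScript.RegisterStartupScript(GetType(), "lockInputs",
                "for (var i of document.querySelectorAll('input')) i.disabled = true;",
                true);
        }
    }
}
```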


The Database Dilemma

The next issue I want to mention is the database itself.  When Classic ASP was king in the world of developing Microsoft-based websites, the database was used to perform most of the heavy lifting.  Web servers did not have a lot of power, and MS SQL had CPU cycles that could be used for processing (most straight database operations tax the hard drive but leave the CPU idle), so many legacy systems have their business logic performed in stored procedures.  In this day and age, that becomes a license-cost issue.  As the number of customers increases, instances of databases must increase to handle the load, and web servers are much cheaper to license than SQL servers.  It makes more sense to put the business logic in the front end.  In today’s API-driven environment, this can be scaled to provide CPU, memory, and drive space to the processes that need them the most.  In legacy systems, the database is where it all happens, and all customers must share the misery of one heavy-duty, slow-running process.  There is only one path for solving this issue: new code must move the processing to a front-end source, such as an API, and this code must be developed incrementally as the system is enhanced.  There is no effective business case for “fixing” this issue by itself.
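
As a hedged illustration of that move (the pricing rule and names are invented): business logic that once lived in a stored procedure becomes plain application code that runs on cheaper web/API servers and can be unit tested, while the database is reduced to storing and returning rows.

```csharp
using System.Collections.Generic;
using System.Linq;

public record OrderLine(decimal UnitPrice, int Quantity);

public class PricingService
{
    // In the legacy design this calculation would be a stored procedure,
    // burning SQL Server CPU; here it runs on a web/API server instead.
    public decimal Total(IEnumerable<OrderLine> lines, decimal taxRate) =>
        lines.Sum(l => l.UnitPrice * l.Quantity) * (1 + taxRate);
}

// e.g., new PricingService().Total(new[] { new OrderLine(10m, 2) }, 0.08m)
// returns 21.6m, and the database only had to return the order lines.
```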

As I mentioned, a lot of companies will use stored procedures to perform their back-end processing.  Once a critical mass of stored procedures has been created, you are locked into the database technology that was chosen on day one.  There will be no cost-effective way to convert an MS SQL database into Mongo or Oracle or MySQL.  Wouldn’t it have been nice if the data store had been broken into small chunks hidden behind APIs?  We can all dream, right?
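
That dream is essentially a storage seam.  Here is a hedged sketch (names invented) of what it looks like: callers depend only on an interface, so a single class knows which database engine sits behind it.

```csharp
using System.Collections.Generic;

public record Customer(int Id, string Name);

// Callers see only this contract, never T-SQL or a vendor driver.
public interface ICustomerStore
{
    Customer? Find(int id);
    void Save(Customer customer);
}

// One implementation per engine; swapping MS SQL for MySQL or Mongo means
// writing a new implementation, not rewriting every caller.
public class InMemoryCustomerStore : ICustomerStore
{
    private readonly Dictionary<int, Customer> _rows = new();

    public Customer? Find(int id) => _rows.TryGetValue(id, out var c) ? c : null;

    public void Save(Customer customer) => _rows[customer.Id] = customer;
}
```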


The Data Center Dilemma

Distributed processing and scalability are the next issues that come to mind.  Scaling a system can consist of adding a load-balancer with multiple web servers.  Eventually, the database will max out and you’ll need to run parallel instances to try to split the load.  The most pain will come when it is necessary to run a second data center.  The decision to use more than one data center could be for redundancy, or it could be to reduce latency for customers located in a distant region.  Scaling an application to work in multiple data centers is no trivial task.  First, if fail-over redundancy is the goal, then the databases must be upgraded to enterprise licenses.  That increases the cost of each license, but it also doubles the total cost, because the purpose is to have identical databases at two (or more) locations.

Compounding the database problems that will need to be solved is the problem of the website itself.  More than likely, your application that was “grown” is a monolithic website application that is all or nothing.  This beast must run from two locations and be able to handle users who might have data at one data center or the other.

If the application was designed using sessions, which was the prevailing technique until APIs became common, then there is the session fail-over problem.  Session issues will rear their ugly head as soon as a web farm is introduced, but there are cheap and dirty hacks to get around those problems (like pinning the incoming IP to one web server to prevent a user from landing on another web server after they log in).  A centralized session store is one solution for a web farm; another is a session-less website design.

Adapting a session-based system to session-less is a monstrous job.  For a setup like JWT, the number of variables in a session must be reduced to something that can be passed to a browser.  Another method is to cache the session variables behind the scenes and pass the browser a token that identifies who the user is; the algorithm can then check whether the cache contains the variables that match the user.  This caching system would need to be shared between data centers, because a variable saved from a web page would be lost if the user’s next request was directed to the other data center.  To get a rough idea of how big the multi-datacenter problem is, I would recommend browsing this article:

Distributed Algorithms in NoSQL Databases
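
Returning to the token-plus-shared-cache approach described above, here is a hedged C# sketch (the class name and key scheme are invented) built on ASP.NET Core’s IDistributedCache, which can be backed by a store such as Redis that both data centers can reach:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Hedged sketch: the browser holds only an opaque token; the bulky session
// variables live in a cache both data centers can reach, so either site can
// serve the user's next request.
public class SharedSessionStore
{
    private readonly IDistributedCache _cache;  // Redis-backed in production

    public SharedSessionStore(IDistributedCache cache) => _cache = cache;

    public Task SaveAsync(string token, Dictionary<string, string> vars) =>
        _cache.SetStringAsync(
            "session:" + token,
            JsonSerializer.Serialize(vars),
            new DistributedCacheEntryOptions
            {
                SlidingExpiration = TimeSpan.FromMinutes(20)
            });

    public async Task<Dictionary<string, string>?> LoadAsync(string token)
    {
        var json = await _cache.GetStringAsync("session:" + token);
        return json is null
            ? null
            : JsonSerializer.Deserialize<Dictionary<string, string>>(json);
    }
}
```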


The Developer Knowledge Dilemma

This is a really ugly problem.  Younger developers do not know the older languages, and they are being taught techniques that require technologies that didn’t exist ten years ago.  Unit testing is becoming an integral part of the development process, and object-oriented programming is used in almost all current languages.  This exposes the company to a shortage of programmers able to fix bugs and solve problems, and bugs become more expensive to fix because only experienced programmers can fix them.  Hire a dozen interns to fix minor issues with your software?  Not going to happen.  Assign advanced programmers to fix bugs?  An epic waste of money and resources.  Contract the work to an outside company?  Same issues: expensive, and the expertise is difficult to find.


Conclusion

My take on all of this is that a company must have a plan for mitigating legacy code; otherwise, the problem will grow until the product is too expensive to maintain or enhance.  Most companies don’t recognize the problem until it becomes serious, and by then it is somewhat late to correct and corrective measures become prohibitively expensive.  It’s important to take a step back and look at the whole picture.  Count the number of technologies in use.  Count the number of legacy web pages in production.  Get an idea of the scope of the problem.  I would recommend keeping track of these numbers, comparing the number of legacy pages to non-legacy pages, and tracking your progress in solving the problem.

I suspect that most web-based software being built today will follow an MVC-like pattern or use APIs.  This is the latest craze.  If developers don’t understand the reasons they are building systems with these techniques, they will learn them when the software grows too large for one data center, or even too large for one web server.  Scaling and enhancing a system that is broken into smaller pieces is much easier and cheaper to do.

I wish everyone the best of luck in their battle with legacy code.  I suspect this battle will continue for years to come.