Programming the GAL22V10

In previous blog posts I showed how the GAL16V8 operated and how to program it (see here and here).  In this blog post I’m going to discuss the differences between the 16V8 and the 22V10. Here’s the specification sheet for the GAL22V10: Lattice GAL22V10 Specifications.

On first inspection, both devices look nearly identical, apart from the increased number of inputs and outputs.  The specifications are straightforward once you know what to expect, but if you’re familiar with the 16V8 and have never used the 22V10, there are a few differences that can trip you up.

Here’s the full fuse map for the GAL22V10:

0 - 5807 matrix

5808 S0 for OLMC 0 (active low/high)
5809 S1 for OLMC 0 (registered/combinatorial)

5810 S0 for OLMC 1
5811 S1 for OLMC 1

5812 S0 for OLMC 2
5813 S1 for OLMC 2

5814 S0 for OLMC 3
5815 S1 for OLMC 3

5816 S0 for OLMC 4
5817 S1 for OLMC 4

5818 S0 for OLMC 5
5819 S1 for OLMC 5

5820 S0 for OLMC 6
5821 S1 for OLMC 6

5822 S0 for OLMC 7
5823 S1 for OLMC 7

5824 S0 for OLMC 8
5825 S1 for OLMC 8

5826 S0 for OLMC 9
5827 S1 for OLMC 9

5828 - 5891 Signature

No PTD Fuses

You’ll notice that there are no PTD (Product Term Disable) fuses; the GAL22V10 simply doesn’t have them.

No Common Mode Bits

Next, the mode bits are missing (the GAL16V8 had 3 modes controlled by fuses 2192 and 2193).  The OLMC modes for the GAL22V10 are built into the S0 and S1 bits.  There are only two modes: Registered and Combinatorial, controlled by the S1 bit.  Each OLMC can be programmed separately, which means that you can designate which OLMCs are Registered and which are Combinatorial.  The S0 bit controls active high and active low outputs for each OLMC.
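
If it helps to see that in code, here’s a minimal sketch (my own helper, not part of any programming tool) that pulls the two architecture bits for an OLMC straight out of a fuse array, using the numbering listed above:

// Sketch: read the S0/S1 architecture bits for one OLMC (0-9) from a GAL22V10
// fuse array, using the fuse numbering in the map above.
static (bool s0, bool s1) OlmcConfig(bool[] fuses, int olmc)
{
    bool s0 = fuses[5808 + 2 * olmc];   // S0: active low/high select for this OLMC
    bool s1 = fuses[5809 + 2 * olmc];   // S1: registered/combinatorial select for this OLMC
    return (s0, s1);
}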

Variable Rows per OLMC

The next complication is that the number of rows in the AND-gate fuse matrix varies per OLMC.  The GAL16V8 used a simple arrangement of 8 rows for each OLMC.  In the 22V10, the row count starts at 8 for the first OLMC, increments by 2 per OLMC up to 16, then decrements by 2 until the last OLMC is back down to 8 rows.  You can see the number of rows for each OLMC in the functional block diagram:

If you have an equation with 13 terms like this arbitrary example:

OUT = A*B + C + /B + A*/C + D*E + C*D + A*D*E + A*/B*D + B*/D + /B*C*E + C*E*F + E*/F + /E*C*D

You’ll need to make sure it lines up with an OLMC that has more than 12 rows.  This equation will not fit on OLMC 0, 1 or 2, nor on OLMC 7, 8 or 9.  What this means is that you may have to rearrange your output pins for equations that need more than 8 product terms, because each OLMC is tied to a specific output pin.
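
Here’s a small sketch (again, my own helper) that captures the row counts described above and reports which OLMCs can hold a given number of product terms.  For the 13-term equation above it returns OLMC 3, 4, 5 and 6:

// Rows (product terms) per OLMC, as described above: 8, 10, 12, 14, 16, 16, 14, 12, 10, 8.
static readonly int[] RowsPerOlmc = { 8, 10, 12, 14, 16, 16, 14, 12, 10, 8 };

// Returns the OLMC numbers that have enough rows for the given number of product terms.
// Requires: using System.Collections.Generic; using System.Linq;
static IEnumerable<int> OlmcsThatFit(int productTerms) =>
    Enumerable.Range(0, RowsPerOlmc.Length).Where(i => RowsPerOlmc[i] >= productTerms);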

The number of Fuses Per Row is 44

Since there are more inputs, there are also more columns in the matrix.  The number of columns is 44, which is not a round binary number; it is determined by the total number of inputs and feedback wires.  Each input or feedback signal takes 2 columns (the signal and its inverse), and 22 signals times 2 columns gives 44.

Preset and Reset Lines

Two extra product-term rows control reset and preset for all registers (assuming you use registered mode).  The reset row occupies fuses 0 through 43, which pushes the starting fuse of the first OLMC to 44 instead of zero.  Its purpose is to reset every register in the device; you can drive it from a pin or from any combination of logic.  The same goes for the preset row, which sits after the last matrix row, starting at fuse 5764 (row 131).
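
Since the matrix is just rows of 44 fuses, converting a (row, column) position into a fuse number is simple arithmetic.  Here’s a quick sketch that is consistent with the numbers above:

const int FusesPerRow = 44;   // 22 array inputs x 2 columns (true and complement)

// Fuse number for a given row (0-131) and column (0-43) of the AND array.
static int FuseIndex(int row, int column) => row * FusesPerRow + column;

// Sanity checks against the fuse map above:
//   FuseIndex(0, 0)    == 0      (the reset row starts the matrix)
//   FuseIndex(131, 0)  == 5764   (the preset row)
//   FuseIndex(131, 43) == 5807   (the last matrix fuse)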

Other Differences

The clock pin is still pin 1 as in the GAL16V8.  The difference is that in the GAL16V8, the clock pin is not used at all in modes that don’t use the registers.  So pin 1 feeds fuse columns 2 and 3.  In the GAL22V10, the clock can control zero or more registered outputs because registered outputs can be selected per OLMC.  Therefore the clock pin is wired to fuse columns 0 and 1 and to the register clock inputs.

In the GAL22V10, the active high/low selection is applied to the output of the D flip-flop, whereas in the GAL16V8 it is applied to the input (via an XOR gate).

Using PALASM for DOS

PALASM4 for DOS is still around.  You can go to this website and scroll to the bottom to download the RAR file: S100 Computers.  The article is rather old, but it still works.  Here are the steps I took to get this working:

  1. Make sure you have DOSBOX installed.
  2. Download the RAR file from the article above.
  3. You can use 7-zip to unzip the RAR file.  Right-click on the RAR file and use extract files.  That will maintain the directory structure.  I used d:\palasm.
  4. Start DOSBOX.
  5. Type: MOUNT c d:\palasm
  6. Type: C:
  7. Type: SET PALASM=c:\
  8. Type: PALASM

Now you should see this screen (after the intro screen):

Select “Retrieve existing design” and type in your filename (I’m assuming you created a file, or you can copy one of the .pds files in the EXAMPLES directory).  Hit the F10 button, then use the right-arrow to select the RUN menu and hit Enter (Compilation is the first choice).  Then just hit F10 to compile your PDS file.  You’ll see some processing and then it should end with no errors or warnings:

You can hit ESC to return control to the menus.  Switch back to your directory in Windows and you’ll see that an XPT and a JED file were created.  Open your JED file to see the fuse map results:

Finally…

PALs and GALs are obsolete and the digital world has moved on to FPGAs.  However, these devices are still available and still useful for hobby purposes.  They can be purchased through Jameco or DigiKey.  PALASM is also obsolete, which is why it only exists in FORTRAN and DOS versions.  If you’re still using these devices, leave me a comment or hit the like button.

 

Building an ALU From EPROM – The Circuit

When I wrote my blog post about building an ALU from an EPROM, I intended to use an EPROM from my old bag of parts.  Unfortunately, none of my old UV erasable PROMs worked with the programmer; I’m assuming they’ve degraded with age.  So I purchased a pair of 28C64s from Jameco Electronics.  These are Electrically Erasable PROMs, so I don’t need to drag out my UV light and wait 30 minutes (or so) for them to erase before reprogramming them.

After I programmed the EEPROM, I had to make a diagram of the pins.  The pinout for the memory chip is organized by address lines and data lines, and I’m treating the address lines as inputs and the data lines as outputs (technically they are bi-directional).  Here’s a diagram of what I ended up with:

Technically, I can re-arrange the pins by reprogramming the data and pretending that pins 21, 23-25 are carry in and the function selectors.  Then all of the A0-A3 and B0-B3 inputs could be arranged on the left side of the chip.  It really only matters if you are concerned about circuit board layout.  At the moment, I just want to show that it can be done.

Next, I decided to use my 7-segment hex display driver with a 7-segment display to show the output from F0-F3.  This took a bit of extra wiring, but it’s easier to read than 4 LED lights.  Here’s the chip pinout using the GAL16V8 from this blog post (click here):

All that is needed is a breadboard, resistors, a 7-segment display and a bunch of wires.  This is my quick-and-dirty circuit:

For this circuit, I set S0-S2 to the ADD mode (S0=H, S1=H, S2=L).  The small chip in the lower right is the GAL that I programmed in an earlier article.  I also set the carry in as H (represents zero) and set A0 and B0 to H, with all other An and Bn to L.  As you can see 1 + 1 = 2.
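
The point of the whole exercise is that the EEPROM is acting as a lookup table.  Here’s a rough sketch of the idea in C#; the bit ordering of the address is an arbitrary assumption for illustration, since the real ordering depends on how you wire the address pins:

// Sketch: an EPROM-based ALU is just a lookup table indexed by its inputs.
// Assumed (illustrative) address layout: S2 S1 S0 | Cin | A3..A0 | B3..B0 = 12 bits.
static byte AluLookup(byte[] rom, int select, int carryIn, int a, int b)
{
    int address = ((select & 0x7) << 9)
                | ((carryIn & 0x1) << 8)
                | ((a & 0xF) << 4)
                |  (b & 0xF);
    return rom[address];   // the 8 data outputs, including F0-F3
}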

One of the reasons I’m demonstrating this idea of using a ROM to represent a circuit is that a simple logical circuit can be temporarily represented as a memory chip.  Then a real circuit can be designed to substitute for it.  I have not tested all possible inputs and outputs of this circuit and, more importantly, I have not tested the limits of the speed that this circuit would operate at.  I already know that the answer to my speed question is going to be “disappointingly slow”.

Purpose

Knowing that this will be a slow circuit, what’s the purpose?  The first purpose would be to create a prototype of a circuit that you plan to build.  If you’re building a machine that will require some programming, then it might be best to create a slow version of the machine so another programmer can create and test the software before the machine is complete.

This type of circuit design can also be used for educational purposes.  You can prepare EEPROMs with the circuits you’ll need for class instruction to support your lecture.  The EEPROM represents a black box of the circuit that your class can use without actually building a complete circuit; in this instance, an ALU.  The circuit could then be reprogrammed for a different purpose in a future lecture.

 

Vintage Hardware – Arcade Games

In my last Vintage post I talked about the first commercially available fully assembled PCs and ended with the Space Invaders video game console.  In this blog post I’m going to talk about a few other video games.

Galaga

I spent so many quarters on Galaga that I probably bought the company owner his/her yacht.  I went searching around the Web looking for schematics (which didn’t take too long) and stumbled onto this site (click here).  The schematic for Galaga is a PDF that you can download by clicking here.  When I first opened the schematic I immediately noticed that the CPU board contained three Z-80 CPUs.  These guys were serious!  At that time, this game probably cost a lot to manufacture.

You can learn a lot about the original video game from people who collect and repair these games.  Here’s a website that has some interesting material on Galaga: Galaga Information.  The best site on the design of the game is at the Computer Archeology site.  This site has the assembly code from the EPROMs used by all 3 CPUs and indicates that CPU1 and 2 control the game while CPU3 is the sound processor.  All three processors share 8k of static RAM (using 2147 memories).

There are custom chips on the boards that are used for various functions.  One of these is a bus driver chip that interfaces between each CPU and the common data bus.

There are three of these chips used, and I’m assuming that the logic connecting to them is used to interlock the three CPUs so that only one can access the main bus at a time.  What I’m uncertain of is how the bus is shared.  There is a 6 MHz clock signal that is fed into all three chips, so maybe each CPU gets 1 or 2 clock cycles each turn?  The Z80 chips are normally clocked at 1 MHz, so it would make sense to run the bus at 3x, or in this case 6x, the CPU frequency and then do a round-robin.  Each CPU can be programmed as though it had complete control of the address and data bus.

For information on the custom chips, I found this site: The Defender Project.

Finally, here’s a blog describing an arcade emulator that Paolo Severini built using javascript:

Galaga: an Arcade machine emulator for Windows and HTML5

Defender

This video game was a horizontal shooter game with a video resolution of 360 x 240 pixels at 16 colors from a palette of 256 total colors (according to the manual, which can be found here). Here’s a snippet of the color RAM circuit:

The TTL 7489 is a 16-word by 4-bit memory used to store the palette information.  You can see where the 8 outputs from the RAM (4 bits from each chip) feed the analog Red, Green and Blue circuits used by the monitor.  It appears that blue gets the least amount of control with only 2 bits, or 4 levels, while Red and Green receive 3 bits, or 8 levels, of information.  The 74LS257 at the left of the circuit diagram is a multiplexer.  Its job is to allow either the microprocessor to access the 7489s (and use the MPU data bus to load data) or the video circuitry to access the memory to display on the video monitor.  The microprocessor can only write to the memory during a monitor blanking operation.
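
In software terms, each palette entry is just one byte split into three fields.  Here’s a rough sketch of that split; the exact bit positions are my assumption, the point is the 3-3-2 arrangement described above:

// Sketch: split one 8-bit Defender palette entry into red, green and blue levels.
// Assumed layout for illustration: 3 bits red, 3 bits green, 2 bits blue.
static (int red, int green, int blue) SplitPaletteEntry(byte entry)
{
    int red   = (entry >> 5) & 0x7;   // 3 bits -> 8 levels
    int green = (entry >> 2) & 0x7;   // 3 bits -> 8 levels
    int blue  =  entry       & 0x3;   // 2 bits -> 4 levels
    return (red, green, blue);
}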

At the top of the Red, Green and Blue amplifier circuits is another transistor.  This transistor shuts off the power to all three amplifiers and is controlled by the horizontal blank signal.  If you’re not familiar with the old video tubes: basically, there is a beam of electrons that is fired at the front of the screen.  The inside of the tube is coated with phosphor, and it glows when electrons hit it.  The beam is “steered” by magnets on the neck of the tube.  The beam is moved in a horizontal line pattern starting from the top left of the screen and ending at the bottom right.  When the beam returns to light up the next line of phosphor, it has to be shut off, otherwise an angled line would be drawn on the screen.  The beam is also shut off when it returns to the top left from the bottom right.

Unlike Galaga, Defender uses only one CPU, a 6809 processor.  The game program is stored in ROM like other video game boards (no need for a hard drive).  There are two versions of Defender, and both versions contained 26k of ROM (in different configurations of 4k and 2k ROM chips).  There are three banks of RAM used for video memory and scratch pad memory.  All of the RAM is 4116 dynamic RAM organized as 16k by 1 bit.  If you look at the schematic it appears that banks 1 and 2 are just 8 chips (4L-4S), but the diagram represents two chips per location (5L-5S).  Bank 3 is wired a bit differently from banks 1 and 2: its data bus is connected through a multiplexer.

The video RAM is fed into four 8-bit parallel-to-serial shift registers.  All three banks of memory go into these shift registers at one time.

The TTL 74165 is a parallel-load, serial-out shift register.  According to the circuit above, 6 bits are loaded at a time into each shift register, two bits from each bank.

If you read through the manual, you’ll stumble onto the E and Q generator.  These are 1 MHz clock signals that are out of phase with each other.  The purpose is to interleave the CPU and video access to the video memory.  During one clock cycle the CPU (called the MPU in the manual) can read and write to the memory banks.  During the other clock cycle, the video timing circuit reads the memory banks into the shift registers.

Here are some other sites with information about Defender:

Asteroids and Battlezone

The unique aspect of these two games is that they use vector graphics.  If you don’t know what vector graphics are, you can think of them as a bunch of lines drawn on a screen instead of a grid of pixels.  The maximum number of lines that can be drawn on the screen depends on how many lines can be drawn in 1/60th of a second, which is the refresh rate of the video screen.  The idea behind generating the images seems pretty simple: run down a list of 2D vectors and output the x,y coordinates to the screen.  Vector monitors use X and Y (horizontal and vertical) deflection magnets that “steer” the electron beam to the phosphor coating at the front of the tube.  The circuitry to run all of this is a bit more complicated.  Basically, the vector graphics are driven by a digital to analog circuit that is independent of the CPU.  This circuit calculates all the intermediate values of x and y for each line that is rendered on the screen.  There is a state machine that performs all of this logic, which is detailed in this document: The hitch-hacker’s guide to the Atari Digital Vector Generator.  The last page shows the states:

The DVG was a special-purpose processor built out of discrete TTL components to run a vector graphics system.  The DVG had 7 operation codes, sketched as an enum after the list:

  1. VCTR – draw long vector
  2. LABS – load absolute x,y coords
  3. HALT
  4. JSRL – jump to subroutine
  5. RTSL – return from subroutine
  6. JMPL – jump
  7. SVEC – draw short vector
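
Here’s that list summarized as a simple C# enum; the numeric values are placeholders, not the real encodings used by the hardware:

// Sketch: the seven DVG operation codes listed above.
enum DvgOpcode
{
    Vctr,   // draw long vector
    Labs,   // load absolute x,y coordinates
    Halt,   // stop the vector generator
    Jsrl,   // jump to subroutine
    Rtsl,   // return from subroutine
    Jmpl,   // jump
    Svec    // draw short vector
}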

There is also a “Z” axis circuit.  This determines if the beam is on or off and there are also different intensities to represent different shades.  Both games are black and white, so there is no color.  Here’s the intensity scaling circuitry:

The intensity has a range from 1 volt to 4 volts and can be any of 16 shades in between.  Scale0 through Scale3 are the bits of a 4-bit word.  The bottom part of the circuit above is the blanking circuit.  This is controlled by the BLANK signal or the BVLD (beam valid) signal.  If either signal indicates that the beam should be off, then Q8 will turn on and force Q9 to turn off, blocking any signal coming from the intensity circuit.

You can get a more detailed (and clean) schematic and description of the DVG by clicking here.

There’s an interesting article on the making of Asteroids by Ed Logg, Howard Delman and Lyle Rains, the people who created the game (click here).  You can click here to download the schematics for Asteroids and click here to download the schematics for Battlezone.  If you want to know more about the software side of Asteroids you can go to the Computer Archeology site.  This site shows the data that the ROMs contained describing the vector shapes of things like the asteroids, the flying saucer, the ships, etc:

The CPU used for both games is the faster 1.5 MHz 6502A.  Only a single CPU is used for each game.  There is a small amount of RAM using 2114 static RAM chips.  There are two sets of ROMs used.  One set contains the vector shapes as described above (4k) and the other ROMs contain the program to run the game (8k?).

When these video games were designed, microcomputers were not very powerful, and it took a PDP-11 to create the program that would be used by Asteroids and Battlezone.  This program was punched onto paper tape and then loaded into an emulator.  Paper tape was like a cheap floppy disk.  Networks were not very common in the late 70’s and early 80’s, so a cheap method of copying programs was needed.  While PCs began to use 5 1/4″ drives, there was also an 8″ drive, which came out in the mid-70’s, just before the 5 1/4″ drive.  Here is a very interesting article about the day to day work environment at Atari while working on coin-op arcade games: Pay No Attention to those Folks Behind the Curtain.  This article also includes emails from Jed Margolin, who worked for Atari and Atari Games for 13 years.

For those who have never seen paper tape or punch tape readers, they were used by the military in the 80’s.  The machine could punch 8 holes wide (there are other tape formats) to represent a byte of data:

The image above is from the crypto museum.  When the machine punched the tape, the little holes were punched out at high speed like a paper punch tool.  This leaves a bunch of round paper circles that fall into a collection bucket for emptying.  That bucket is called the “bit bucket”:

By Retro-Computing Society of Rhode Island – Own work, CC BY-SA 3.0

 

Get ASP.Net Core Web API Up and Running Quickly

Summary

I’m going to show you how to set up your environment so you can get results from an API using ASP.Net Core quickly.  I’ll also discuss ways to troubleshoot issues and get logging and troubleshooting tools working quickly.

ASP.Net Core Web API

Web API has been around for quite some time but there are a lot of changes that were made for .Net Core applications.  If you’re new to the world of developing APIs, you’ll want to get your troubleshooting tools up quickly.  As a seasoned API designer I usually focus on getting my tools and logging up and working first.  I know that I’m going to need these tools to troubleshoot and there is nothing worse than trying to install a logging system after writing a ton of code.

First, create a .Net API application using Visual Studio 2015 Community edition.  You can follow these steps:

Create a new .Net Core Web Application Project:

Next, you’ll see a screen where you can select the web application project type (select Web API):

A template project will be generated and you’ll have one controller called ValuesController.  This is a sample REST interface that you can model other controllers from.  You’ll want to set up Visual Studio so you can run the project and use break-points.  You’ll have to change your IIS Express setting in the drop-down in your menu bar:

Select the name of the project that is below IIS Express (as shown in yellow above).  This will be the same as the name of your project when you created it.

Your next task is to create a consumer that will connect to your API, send data and receive results.  A standard .Net Console application will do.  This does not need to be fancy; it’s just a throw-away application that you’ll use for testing purposes only.  You can use the same application to test your installed API just by changing the URL parameter.  Here’s how you do it:

Create a Console application:

Give it a name and hit the OK button.

Download this C# source file by clicking here.  You can create a cs file in your console application and paste this object into it (download my GitHub example by clicking here).  This web client is not strictly necessary; you can use the plain WebClient object, but this one can handle cookies, just in case you decide you need to pass a cookie for one reason or another.

Next, you can setup a url at the top of your Program.cs source:

private static string url = "http://localhost:5000";

The default URL is always this address, including the port number (the port does not change between runs), unless you override it in the settings.  To change it, go into the project properties of your API project, select the Debug tab and change it there.

Back to the Console application…

Create a static method for your first API consumer.  Name it GetValues to match the method you’ll call:

private static object GetValues()
{
	using (var webClient = new CookieAwareWebClient())
	{
		webClient.Headers["Accept-Encoding"] = "UTF-8";
		webClient.Headers["Content-Type"] = "application/json";

		var arr = webClient.DownloadData(url + "/api/values");
		return Encoding.ASCII.GetString(arr);
	}
}

Next, add a Console.WriteLine() command and a Console.ReadKey() to your Main method:

static void Main(string[] args)
{
	Console.WriteLine(GetValues());

	Console.ReadKey();
}

Now switch to your API project and hit F5.  When the blank window appears, switch back to your consumer console application and hit F5.  You should see something like this:

If all of this is working, you’re off to a good start.  You can put break-points into your API code and troubleshoot inputs and outputs.  You can write your remaining consumer methods to test each API that you wrote.  In this instance, there are a total of 5 APIs that you can connect to.
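
For example, here’s a rough sketch of a consumer for the template’s POST method.  It assumes the default ValuesController, which binds a string from the request body; adjust the route and payload for your own controllers:

private static string PostValue(string value)
{
    using (var webClient = new CookieAwareWebClient())
    {
        webClient.Headers["Content-Type"] = "application/json";

        // The template's POST action takes a string from the body, so send a JSON string literal.
        return webClient.UploadString(url + "/api/values", "POST", "\"" + value + "\"");
    }
}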

Logging

Your next task is to install some logging.  Why do you need logging?  Somewhere down the line you’re going to want to install this API on a production system.  Your production system should not contain Visual Studio or any other tools that can be used by hackers or drain your resources when you don’t need them.  Logging is going to be your eyes on what is happening with your API.  No matter how much testing you perform on your PC, you’re not going to reproduce a fully loaded API, and requests are going to hit your API that you don’t expect.

Nicholas Blumhardt has an excellent article on adding a file logger to .Net Core.  Click here to read it.  You can follow his steps to insert your log code.  I changed the directory, but used the same code in the Configure method:

loggerFactory.AddFile("c:/logs/myapp-{Date}.txt");

I just ran the API project and a log file appeared:

This is easier than NLog (and NLog is easy).

Before you go live, you’ll probably want to tweak the limits of the logging so you don’t fill up your hard drive on a production machine.  One bot could make for a bad day.

Swashbuckle Swagger

The next thing you’re going to need is a help interface.  This interface is not just for help, it will give interface information to developers who wish to consume your APIs.  It can also be useful for troubleshooting when your system goes live.  Go to this website and follow the instructions on how to install and use Swagger.  Once you have it installed you’ll need to perform a publish to use the help.  Right-click on the project and select “Publish”.  Click on “Custom” and then give your publish profile a name.  Then click the “Publish” button.

Create an IIS website (open IIS, add a new website):

The Physical Path will link to your project directory in the bin/Release/PublishOutput folder.  You’ll need to make sure that your project has IUSR and IIS_IUSRS permissions (right-click on your project directory, select the security tab.  Then add full rights for IUSR and do the same for IIS_IUSRS).

You’ll need to add the URL to your hosts file (in the c:\Windows\System32\drivers\etc folder):

127.0.0.1 MyDotNetWebApi.com

Next, you’ll need to adjust your application pool .Net Framework to “No Managed Code”.  Go back to IIS and select “Application Pools”:

Now if you point your browser to the URL that you created (MyDotNetWebApi.com in this example), then you might get this:

Epic fail!

OK, it’s not that bad.  Here’s how to troubleshoot this type of error.

Navigate to your PublishOutput folder and scroll all the way to the bottom.  Now edit the web.config file.  Change your stdoutLogFile to “c:\logs\stdout” (and make sure stdoutLogEnabled is set to “true”).

Refresh your browser to make it trigger the error again.  Then go to your c:\logs directory and check out the error log.  If you followed the instructions on installing Swagger like I did, you might have missed the fact that this line of code:

var pathToDoc = Configuration["Swagger:Path"];

Requires an entry in the appsettings.json file:

"Swagger": {
  "Path": "DotNetWebApi.xml"
}

Now go to your URL and add the following path:

www.yoururl.com/swagger/ui

Next, you might want to change the default path.  You can set the path to another path like “help”.  Just change this line of code:

app.UseSwaggerUi("help");

Now you can type in the following URL to see your API help page:

www.yoururl.com/help

To gain full use of Swagger, you’ll need to comment your APIs.  Just type three slashes and a summary comment block will appear.  This information is used by Swagger to form descriptions in the help interface.  Here’s an example of commented API code and the results:
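
Here’s a rough sketch of what a commented method might look like, based on the template’s ValuesController (your summaries will obviously describe your own APIs):

/// <summary>
/// Returns the value stored for the given id.
/// </summary>
/// <param name="id">The id of the value to retrieve.</param>
/// <returns>The stored value.</returns>
[HttpGet("{id}")]
public string Get(int id)
{
    return "value";
}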

Update NuGet Packages

.Net Core allows you to paste NuGet package information directly into the project.json file.  This is convenient because you don’t have to use the package manager to search for packages.  However, the versions of each package are updated at a rapid rate, so even for the project template packages there are updates.  You can open your Manage NuGet Packages window, click on the “Updates” tab and update everything.

The downside of upgrading everything at once is that you’ll probably break something.  So be prepared to do some troubleshooting.  When I upgraded my sample code for this blog post I ran into a target framework runtime error.

Other Considerations

Before you deploy an API, be sure to understand what you need as a minimum requirement.  If your API is used by your own software and you expect to use some sort of security or authentication to keep out unwanted users, don’t deploy before you have added the security code to your API.  It’s always easier to test without using security, but this step is very important.

Also, you might want to provide an on/off setting to disable the API functions in your production environment for customers until you have fully tested your deployment.  Such a feature can be used in a canary release, where you allow some customers to use the new feature for a few days before releasing to all of your customers.  This will give you time to estimate load capabilities of your servers.

I also didn’t discuss IOC container usage, unit testing, database access, where to store your configuration files, etc.  Be sure to set a standard before you go live.

One last thing to consider is the deployment of an API.  You should create an empty API container and check it into your version control system.  Then create a deployment package to be able to deploy to each of your environments (Development, QA, stage, production, etc.).  The sooner you get your continuous integration working, the less work it will be to get your project completed and tested.  Manual deployment, even for a test system, takes a lot of time, and human error is the number one killer of deployment efficiency.

Where to Get the Code

As always, you can download the sample code at my GitHub account by clicking here (for the api code) and here (for the console consumer code).  Please hit the “Like” button at the end of this article if this subject was helpful!

 

DotNet Core Target Framework Runtime Error

One of the common events in the new .Net Core is the somewhat obscure errors that occur.  I was recently working with a Web API in Core, and when I created a publish profile for the API, I got this error:

So it looks like everything is falling apart.  Next I copied the following into Google to see if I could stumble onto a quick fix:

Can not find runtime target for framework '.NETCoreApp,Version=v1.0' compatible with one of the target run times:

After reading several posts on Stack Overflow, I discovered that the key to fixing this error is that the publish step is looking for a specific run-time identifier, in my case “win7-x64”.  My config.json file had no run-time section at all, and the first Stack Overflow suggestion I tried added one that didn’t match what the error message asked for.

The right run-time is the one listed in step 2 of the error message: win7-x64, spelled exactly as it appears there.
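
For reference, this is a minimal sketch of the kind of runtimes entry the error message is asking for (assuming project.json-era tooling; use whatever identifier your own error message lists):

"runtimes": {
  "win7-x64": {}
}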

I think steps 1, 2 and 3 just add confusion to the error message, but it’s probably a generic message for many possible problems and the compiler isn’t sophisticated enough to narrow it down.  Anyway, here’s the Stack Overflow article describing how to fix this error:

Can not find runtime target for framework .NETCoreApp=v1 compatible with one of the target runtimes

 

Vintage Computer Hardware Design

Summary

In this blog post I’m going to talk about some computers that came out in the late 70’s and early 80’s.  These are some of the earliest home computers, and they mostly used 8-bit CPUs.  Schematics are available and I’ve done some deep-diving into the circuitry to see how these machines were created.  The first microcomputer I put my hands on was the Commodore PET.  I have used Commodore 64s, TRS-80s, an Apple IIc, and I own a Macintosh.  I also used many variations of the PC XT.

The Apple I

I have never actually seen an Apple I computer.  According to my research about 200 units were sold.  This computer came mostly in kit form, but Steve Jobs sold 50 units to the Byte Shop fully assembled.  You can download the manual, which includes a schematic in the back, by clicking here.  When you look at the schematic you can see the 6502 processor and the ROM and RAM sections.  There are some jumpers on the board that allow you to use a 6800 CPU instead of the 6502.  They’re using the MK4096 for RAM.  That’s a dynamic RAM chip organized as 4096 x 1 bit, so it takes 8 chips to represent 4k bytes of memory.  I count 16 chips in that row of memory (click for larger image):

So that’s 8k of RAM.  The PROMs are used for booting up the machine; the schematic shows two chips.  Note 11 indicates that they are 256 x 4 bits wide, so there are 256 bytes of bootstrap program.  This computer didn’t ship with any device to store your programs on, so you would need to interface with something like a cassette drive or paper tape reader.  The competition for this computer when it was built was the IMSAI 8080 and the Altair 8800, both of which used front panel switches to input a program.  The Apple I has a video output circuit, and a keyboard can be attached to the motherboard, which gave it an advantage over the competition.

The Commodore Pet

Somewhere around 1978 or so my family visited an electronics engineer who used to be our neighbor.  He had just bought a Commodore PET (Personal Electronic Transactor).  I had never touched a computer before this and I was amazed at all the things it could do.  We played a lot of computer games that night.  The machine used an audio cassette tape to store programs on.  Loading a program could take up to 30 minutes (at about 75 bits per second).  You can see the cassette tape machine next to the square keyboard:

Photograph by Rama, Wikimedia Commons, Cc-by-sa-2.0-fr, CC BY-SA 2.0 fr

The schematics for this machine can be found here.  This computer also uses the 6502 CPU.  The original PET used the MOS 6550 RAM, which is 1k x 4.  The first computers came with either 4k or 8k of memory, and articles on-line indicate that the chips can be replaced with the 2114.  All of these RAM chips are static RAM; no refresh logic was necessary, but the chips were more expensive.  You can find more history on the PET at the wiki site by clicking here.

The TRS-80

The TRS-80 was built by Tandy/Radio Shack and it used the Z80 CPU.  You can find schematics inside the technical manual by clicking here.  The schematics show that this computer used the 2102 static RAM.  I’m surprised that they didn’t take advantage of the dynamic RAM refresh logic built into the Z80 CPU.  The initial memory of a TRS-80 was also 4k in size:

The computer can be upgraded by replacing the chips with bigger chips and re-configuring the selector wires.  You can see the 4k or 16k configuration jumpers in places on the schematic (see X71 blocks below):

The TRS-80 also stored programs using a tape drive.  In 1977 this was the cheapest medium to store data on.  The hard drive was far too expensive for personal computer use and the floppy was still a bit out of the price range of most people.  The 5 1/4″ floppy drive came onto the scene in 1978 (click here).

Space Invaders

Space Invaders was one of the first video games when arcades became all the rage.  I was in high school when arcades opened near me and I spent many quarters on these machines.  The Space Invaders game used the 8080 CPU and the schematic can be downloaded by clicking here.  This computer used the Intel 2107 memory, which is a 4k x 1 dynamic RAM.  The board has 16 chips, so there is 8k bytes of memory.  When looking at the schematics for Space Invaders, you’ll notice that the hardware is designed specifically for the game itself (since that’s all it really does).  So there’s a synthesizer circuit for the explosions and missile sounds, etc:

There’s a special video circuit too.  Apparently the monitor is mounted sideways in the cabinet so the aspect ratio is vertical, which means the program actually plots the pixels sideways (so the aliens appear right-side up).  You can download the assembly language for this game by clicking here.  The EPROMs for this machine contain the running program, and there are straps that allow between 4k and 64k of EPROM storage:

Apparently, the game can be upgraded in the field if Midway decided to update the program.  A local technician could pull the old chips off the board and insert new chips.  This, of course, never happened because arcades went out of style in what seemed like a nano-second.  I remember my favorite arcade disappearing only 3 or 4 years after they opened.

Here’s the coin input circuit:

I traced the route and it looks like it’s multiplexed (74153 is a multiplexer) and fed into another circuit on the motherboard.  I’m betting that this drives the interrupt line and there is probably a small program that increments the coin count.  Was that displayed on the screen?  Why yes it was (see “Credit”):

The arcade game uses a black and white display and the program renders one dot per bit using 7k of RAM (224  x 256 pixels).  The color that you see in the picture is a plastic cellophane overlay that was adhered to the screen.  The Wiki on Space Invaders has an interesting story on the challenges of building this game (click here).  Here’s an interesting excerpt from the Wiki:

Because microcomputers in Japan were not powerful enough at the time to perform the complex tasks involved in designing and programming Space Invaders, Nishikado had to design his own custom hardware and development tools for the game.[10][14] He created the arcade board using new microprocessors from the United States.[12] The game uses an Intel 8080 central processing unit, features raster graphics on a CRT monitor and monaural sound hosted by a combination of analog circuitry and a Texas Instruments SN76477 sound chip.[4][15][16] Despite the specially developed hardware, Nishikado was unable to program the game as he wanted—the Control Program board was not powerful enough to display the graphics in color or move the enemies faster—and he considered the development of the hardware the most difficult part of the whole process.[10][14] While programming the game, Nishikado discovered that the processor was able to render the alien graphics faster the fewer were on screen. Rather than design the game to compensate for the speed increase, he decided to keep it as a challenging game play mechanism.

That’s all for now.  I have a collection of other computers that I’ll talk about in a future post.  If you enjoyed this blog post, please hit the like button!

 

Building an ALU from an EPROM

This blog post is a “what-if” scenario, rather than a practical application.  Still, if EPROMs ever become super fast, this is a viable option.

I’ve discussed Arithmetic Logic Units in previous posts.  I have the circuit for the TTL 74381 and 74382 chips.  The circuit is easy to find on the internet and it’s included in the TTL logic book.  Unfortunately, the chips cannot be purchased and are probably discontinued.  So I analyzed the circuit and was attempting to design a GAL equivalent.  Unfortunately, the GAL22V10 doesn’t have enough resources to emulate the 74381, so I’ll need something more sophisticated.  I’m looking at CPLD devices and FPLAs.  These are probably overkill for an ALU, but I could code something that is 32 bits wide all on one chip.

So I scratched my head and thought, what if I used an EPROM?  First, I counted the inputs and outputs for the circuit.  Here’s the circuit:

The total number of inputs is 12 and, fortunately, the number of outputs is 8.  So this will fit really nicely on an AMD 2732.  The 2732 is 4k x 8.  Doing the math on the inputs shows that 2^12 = 4096, or 4k.  So we’ll get an EPROM that has 12 address lines and 8 data lines.

Now for the bummer about EPROM memory: it’s slow, typically to the tune of 250ns, which is painful compared to the TTL 74381 running with a max delay time of 33ns.  Jameco has a One Time Programmable PROM that runs at 45ns.  That’s pretty close for an experimental circuit, making this a feasible stand-in for the TTL 74381 and 74382.

So now for the dirty part: convert all inputs into outputs.  I was going to use my logic simulator to generate all possible inputs, but it runs too slowly.  That simulator was designed to study delays, not to pump out every combination of inputs.  I had to create a new program to simulate my logic.  This turned out to be easy.

If you click on the diagram above, you’ll see that each gate is uniquely numbered.  That’s how I translated this circuit into my simulator.  For the quick logic I used booleans for inputs and outputs, then I created a run-circuit method that ran the circuit for the given inputs and set the outputs accordingly.  Translating a diagram like this is easy because I purposely arranged my numbering to start from the inputs and work through the circuit sequentially until I reached an output.  So I started entering logical statements like this:

bool gate0 = !S0;

This is for gate 0’s output.  Basically gate0 will invert the input S0 (which is a boolean variable I created as a getter/setter).  Continuing on:

bool gate1 = !gate0;
bool gate2 = !S1;
bool gate3 = !gate2;
bool gate4 = !S2;
bool gate5 = !gate4;
bool gate6 = gate4 && gate2 && gate1;

And so on, until I ended up with all the outputs, like F0 equal to the exclusive NOR of gate63 and gate71.

I then copied my unit tests from my simulator and translated all 0 volt and 5 volt values into false and true.  It took a little debugging, but I quickly had the unit tests working and was satisfied with my results.  Next I wrote a console application to feed in all the possible inputs.  At first, I thought I could keep it stupid easy and just do nested loops, but that was a pain, so I decided to do one for loop over the range 0 to 4095 and treat each bit of the integer as one of the inputs.  That was pretty easy.
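
Here’s a sketch of that kind of loop; the runCircuit delegate stands in for the simulator class, and the bit assignments are illustrative:

// Sketch: walk every 12-bit input combination and build the 4k x 8 EPROM image.
// runCircuit takes the 12 input bits and returns the 8 output bits.
static byte[] BuildRomImage(Func<bool[], bool[]> runCircuit)
{
    var romData = new byte[4096];
    for (int address = 0; address < 4096; address++)
    {
        // Treat each bit of the loop counter as one circuit input.
        var inputs = new bool[12];
        for (int bit = 0; bit < 12; bit++)
            inputs[bit] = (address & (1 << bit)) != 0;

        // Pack the 8 outputs into one data byte for the EPROM image.
        var outputs = runCircuit(inputs);
        byte data = 0;
        for (int bit = 0; bit < 8; bit++)
            if (outputs[bit]) data |= (byte)(1 << bit);

        romData[address] = data;
    }
    return romData;
}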

My next task was to figure out what format my EPROM programmer saved its files in.  This turned out to be Intel Hex.  I had never heard of it.  It was only luck that the save box indicated that it was Intel Hex, otherwise I would have had a tough time figuring out what all the addressing hex represented:

There is a wiki for Intel Hex.  It’s very descriptive and was handy in figuring out how to compute the checksum and addressing information.  So I coded that and spit out the data for my EPROM programmer.  Unfortunately, the EPROMs that I have are old and seem to be defective (they don’t take the program), so I ordered a new EEPROM from Jameco Electronics.  With the EEPROM I can erase the chip from my programmer instead of using my UV light.
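
If you want to generate the records yourself, the format is simple.  Here’s a sketch of formatting one data record (record type 00); the checksum is the two’s complement of the low byte of the sum of all the record’s bytes, as described in the wiki:

// Sketch: format one Intel Hex data record (":LLAAAATTDD...CC").
// Requires: using System.Text;
static string IntelHexDataRecord(ushort address, byte[] data)
{
    // Sum the byte count, the two address bytes and the record type (00), then the data bytes.
    int sum = data.Length + (address >> 8) + (address & 0xFF);
    var record = new StringBuilder();
    record.AppendFormat(":{0:X2}{1:X2}{2:X2}00", data.Length, address >> 8, address & 0xFF);
    foreach (byte b in data)
    {
        record.AppendFormat("{0:X2}", b);
        sum += b;
    }
    // Two's-complement checksum of the low byte of the running sum.
    record.AppendFormat("{0:X2}", (256 - (sum & 0xFF)) & 0xFF);
    return record.ToString();
}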

Converting a circuit into an EPROM is feasible if there are no latching mechanisms, timing circuits or feedback paths.  In this case the 74381 is just straight combinational logic, so it can be represented with memory very easily.  The 7-segment hex display decoder I did with the GAL16V8 could also be represented as an EPROM.  That circuit needed 8 outputs as well and it uses 6 inputs, so a tiny 64 x 8 memory would work.

Where to Download the Code

You can download my code and generate your own Intel Hex file by going to my GitHub account here.  Click here to download the hex file I generated.

 

Legacy Code – Dealing with Classic ASP

Summary

I’ve written quite a few blog posts about legacy code.  In this blog post I’m going to discuss how you can deal with Classic ASP, why you should deal with it sooner rather than later, and some details of what to do.

Classic ASP

If you’re running a .Net shop, Classic ASP is about the most difficult monster to deal with.  Off the top of my head, I can list a few disadvantages to having one or more ASP pages mixed in with your .Net code:

  • ASP executes under the IIS app pool pipeline type of “Classic” and will not work well with the “Integrated” managed pipeline mode.
  • ASP has its own session management.  If you mix ASP with .Net, you’ll end up with two different session storage mechanisms.
  • ASP does not have a compiler.  Bugs are only discovered during run-time.
  • ASP does not have a robust unit testing capability.
  • ASP is rarely written in a modular fashion.  Object oriented design can be used but rarely is.
  • ASP is not supported by Microsoft, increasing the likelihood that a future version of IIS might not support it.
  • ASP contains config settings in the global.asa and does not recognize the web.config file.

The behavior of an ASP page is similar to JavaScript: you can change it on the fly without having to worry about re-compiling it.  This is only a minor advantage considering the fact that you must test all features at run-time.

Eliminating Classic ASP pages

Your first goal should be to eliminate these pages.  Here is a list of advantages to eliminating all ASP pages in your product:

  • Finding qualified software developers will be easier.  ASP knowledge is rare and becoming rarer.
  • Non structured ASP code is very difficult to work with.  Development time can be reduced by converting to .Net.
  • .Net code can have unit tests added.
  • Compile time bugs can be identified before code is released.
  • Memory and performance profiling tools work with .Net.
  • Refactoring tools work with .Net, reducing errors when variables are renamed.
  • You can combine your config settings to the web.config file.
  • Visual Studio will auto-indent VB.Net code reducing coding errors.

Some disadvantages of eliminating ASP pages in your product could be:

  • It costs money/man-hours to convert the code.  The amount can vary depending on the level of conversion.
  • Converting the code will cause bugs, translating to a degradation of customer satisfaction.

Levels of Conversion

There are different levels of converting code.  The level depends on how far you wish to go with the conversion.  The easiest level to eliminate ASP is to convert it directly to Visual Basic .Net.  No special translation, no real cleanup of code, just convert as directly as possible.  You’ll end up with the same spaghetti as before, except you can now compile it together with your existing code.  No new features are created when performing this type of conversion.

The next level is to convert and clean up.  This level involves converting to VB.Net and then maybe consolidating your common functions (from ASP) with your existing common objects and methods.  This usually occurs if you have a database-driven web application.  If you have a mixture of .Net and ASP code, you’ll end up with two sets of common database wrappers.  Merging these together will reduce your code, and if your .Net objects are unit tested, you now have a more robust set of database functions.  This level of conversion is difficult and can take a lot of time because your common objects/functions are probably going to be alien to each other.  There will be a lot of refactoring of pages to use the common .Net objects.  No new features are created in this type of conversion.

The next level is to convert into another language such as C#.  I’m assuming your system has a mixture of ASP and C# and possibly VB.Net.  If you plan to convert to C#, this will be a difficult task; C# syntax is not as close to ASP syntax as VB.Net is.  The hazard of doing your conversion this way is that you’ll cause a lot of bugs that your customers will notice without gaining any new features.

The next level of conversion is the replacement.  This is where you redesign and replace your ASP code with completely new code.  This is very long term and hazardous.  You’ll probably need to live with your ASP for years before you can replace the code with new systems.  The advantage of this method is that the conversion is buried in the work of creating new features.  Customers will attribute bugs to the new code (since they will be bugs in the new code), but there will also be difficulty integrating with the existing ASP legacy code.

Converting from ASP directly to VB.Net

Let’s pretend that you’ve settled on converting your ASP into .Net as quickly and as cheaply as possible.  Now you need to take inventory of your ASP pages and you’ll need to plan which pages you’re going to convert first.  I would recommend converting a few smaller pages just to get the feel of it.  You might want to determine what the maximum number of converted pages per release will be.  This will probably depend on developer effort and QA effort.  You could break it into systems and deploy one system of converted pages at a time.  If you have on-going enhancements, then ignore any pages that are about to be replaced by any new subsystem.  Convert those last if necessary.

Once you’ve identified your pages, you’ll notice that there are some common functions that are probably contained in included ASP pages.  Those will need to be converted to .Net first.  You should define a unique namespace for all of this legacy code so that your functions, which will be converted to objects and methods, don’t collide with any existing .Net code.

Converting an ASP Page

Let’s get down to some details.  An ASP page typically consists of a collection of programming languages crammed into one source file with switches that indicate where one language starts and another stops.  Here’s an example:

<% Option Explicit %>
<!--#include virtual="/common/myutil.asp"-->
<%
Dim sSQL, rsTemp, dbConnect
Dim action, note, id

If Not checkPageRights("MyPage") Then Response.Redirect("/denied_access.asp")

id = Trim(Request("id"))

If Request("add")<> "" and trim(Request("id")) <> "" then

  '-- Adding a new user
  '-- some asp code here
End If

' Read list
sSQL = "query_for_list " & dbid(id)
executeSQLStatement sSQL, id, rsTemp, dbConnect

%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
  <title>List</title>

  <style type="text/css">
  body {
    margin: 5px 5px;
    background: #FFFFFF;
  }
  </style>
  <!--#include virtual="/js/jquery_stuff.js" -->
</head>
<body>


<table>
  <tr>
    <td>name</td><td>age</td>
  </tr>
  <% While NOT rsTemp.EOF %>
    <tr><td><%=rsTemp("name") %></td><td><%=rsTemp("age") %></td></tr>
  <% Wend %>
</table>

<script language='javascript' type='text/javascript'>
function ConfirmDel() {
  if (confirm("Are you sure you want to delete this record.  Press OK to delete."))
    return (true);
  else
    return (false);
  };
</script>

</body>
</html>

You’ll need to cut any Javascript and HTML and put it into the front-side page of a VB.Net web form.  Some code that works in ASP will not work with the VB.Net front-side page.  For that code, you’ll need to put the code in a method and call the method from the front-side code.  You might even need to go as far as writing the code from the code-behind page.

All function calls will need to have parentheses added to them.  For example, the “executeSQLStatement” call will need to be converted to “executeSQLStatement(sSQL, id, rsTemp, dbConnect)” in order to compile.

As I mentioned earlier, you’ll need to convert any included files first.  That means that the “myutil.asp” page at the top will need to be converted to VB.Net first.  You can convert the entire file, or just convert the common functions that are used in this page.  The advantage of doing it piecemeal is that you’ll end up removing any dead code while you’re converting.  You can also combine the job of testing common code with testing the pages that you are converting.  As you continue to convert more difficult pages, you should end up with most of your common functions converted into objects/methods.

Here are a few conversions you’ll need to be aware of:

  • Convert any line wrapping “&_” into a “& _”.  VB.Net expects a space between the ampersand and the underscore.
  • Add parenthesis to your function calls if needed.
  • “Wend” converts to “End While” in VB.Net.
  • IsNull() converts to “Is Nothing”.
  • “Date” converts to “DateTime.Now()”
  • Remove any “set” statements.
  • Add “.Value” to recordset variables.  rs(“lastname”) becomes rs(“lastname”).Value

Microsoft has a good site for conversion tricks: Converting ASP to ASP.NET.

Any ASP code will go into the code-behind page.  That would be this section of code:

Dim sSQL, rsTemp, dbConnect
Dim action, note, id

If Not checkPageRights("MyPage") Then Response.Redirect("/denied_access.asp")

id = Trim(Request("id"))

If Request("add")<> "" and trim(Request("id")) <> "" then

  '-- Adding a new user
  '-- some asp code here
End If

' Read list
sSQL = "query_for_list " & id
executeSQLStatement sSQL, id, rsTemp, dbConnect

You’ll need to make sure that any variables that are used in the front-side code are declared globally in the code-behind.  You’ll also need to determine the data types and add those.

After You Convert a Page

When you have completed your raw conversion, you’ll probably execute the page and see how it compiles and runs.  Fix any obvious bugs.  Does it run slower?  If so, then something is wrong.

I would recommend running a memory profiler at least once to see if there are any memory leaks.  Classic ASP handles some aspects of memory management better than VB.Net and some worse.  Database connections are handled a bit better in ASP.  You’ll need to make sure your database connections are closed when your web page is finished; if not, you’ll need to track down where to put in a close statement.  I usually open a SQL Server Management Studio window and execute the following query to determine how many connections are currently open:

SELECT
  DB_NAME(dbid) as DBName,
  COUNT(dbid) as NumberOfConnections,
  loginame as LoginName
FROM
  sys.sysprocesses
WHERE
  dbid > 0
GROUP BY
  dbid, loginame

Then run your web page a few times and see if the numbers increase.  You’ll need to have a local database that only you are using in order for this step to be effective.

I would also recommend purchasing a tool such as ReSharper.  You’ll discover variables that are not used.  When your page works, remove all the unused variables.  Reduce the clutter.

Upon completion of your page conversions you can change your IIS server to use the Integrated managed pipeline.  Make sure you perform a regression test after this has been switched.

Finally

If your system has hundreds of Classic ASP pages you can script some of the conversion.  I would make a list of common functions and then create a script or program that can search a designated source file and check for missing parentheses.  Encode as many of the conversion rules as possible in your script to handle the bulk of the conversion before you move the code to your .Net source file.
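
Here’s a sketch of what the start of such a script might look like in C#, applying a couple of the textual rules from the list above.  A real script would need many more rules and some care around string literals and comments:

// Sketch: apply a few mechanical ASP-to-VB.Net conversion rules to one source file.
// Requires: using System.IO; using System.Text.RegularExpressions;
static string ApplyConversionRules(string source)
{
    // VB.Net expects a space between the ampersand and the line-continuation underscore.
    source = source.Replace("&_", "& _");

    // "Wend" becomes "End While" in VB.Net.
    source = Regex.Replace(source, @"\bWend\b", "End While", RegexOptions.IgnoreCase);

    return source;
}

// Usage: write the converted text next to the original for review.
//   File.WriteAllText(path + ".converted", ApplyConversionRules(File.ReadAllText(path)));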

Name your .Net source files the same as your ASP files (you’ll end up with extensions like .aspx, .aspx.designer.vb and .aspx.vb, which do not collide with .asp extensions).  Once you have replaced an ASP page, be sure to update any links to that page in your application to point to the new page.  I would also recommend changing the old ASP extension to something like .asp.done so you can catch any missed mappings in your application while you are testing.

 

Continuous Integration – Baby Steps

Introduction

In this blog post I’m going to skim the top of the waves on a very large subject: Continuous Integration, or CI.

Where to Start

CI is a process, but it’s not an all-or-nothing proposition.  There are levels of CI that can be achieved.  As the title of this blog post suggests, you can start small and build on your process.  The most difficult aspect of getting to a CI environment is the natural resistance of the people who have been running your company for years.  This occurs because companies always start out small, and software is easy when it’s small.  It’s more forgiving.  Manual deployment is not painful.  Unfortunately, by the time an organization discovers that it needs to do something, the manual deployment process is at a disastrous level.

To get the process rolling, identify what can be done quickly.  Each manual process that your organization is performing that can be easily and cheaply automated will save time in the future.  As you implement more and more automation, you will begin to see results as operations become smoother.

I’m going to identify some low-hanging fruit that can be done in any company creating software and deploying it on a regular basis.  First, developers should always use version control.  There are many products that are available, including free versions that are very good (Bitbucket is one example that allows free private repositories).  By using either GitHub or Bitbucket you will get historical records on changes in your software and you’ll get off-site backup protection.  Disaster recovery just got a bit easier.

Once your software is regularly checked in by developers, then it’s time to try out some build systems.

The Build Server

Your first level of CI is to have a system that builds your software whenever a change is checked in or merged, then notifies all developers if a build is broken.  At this point, process and rules must be put in place to ensure that the build gets fixed right away.  The longer a build goes without being fixed, the more difficult it becomes to find the problem.  Some companies force developers to stay late to fix their build (since it can affect other developers); other companies have rules that allow the build to be broken for a maximum tolerable time.  This all depends on the company and the number of developers involved.

Once a build system is in place, your software should use unit tests to ensure a change in the software does not break a previously established feature.  The build server must run these unit tests every time the build is completed and the build should be rejected if the unit tests don’t pass.  This is also something that must be fixed by developers right away.

Many version control packages allow a pre-build check to be used.  As you assemble your CI environment, somewhere down the road you’ll want to incorporate some sort of pre-build system that doesn’t allow the software to be checked in unless it builds (and possibly passes the unit tests).  This will prevent change sets that are broken from being checked into your version control system.

Your next, and probably more difficult, phase is to automate your deployment.  I’m assuming that you can acquire a development server or environment that mimics your production environment.  Once you scale up, you’ll need to add a quality assurance (QA) environment for testing and some sort of staging environment.  Before you set up too many environments, though, you should get an automated deployment in place.  Jenkins is a good starting point, though you can get by with just PowerShell or batch files.  Initially you can automate the process of preparing your deployment package and then manually switch out the existing directories with the automatically prepared ones.  Once you are comfortable with that process, you can automate backing up the current environment and deploying the new one.  Keep in mind that you should have a roll-back mechanism that works quickly.  For a web server, the process looks something like this (a PowerShell sketch follows the list):

  1. Create the directory with all the new files from your build server.
  2. Copy config files from the existing production environment.
  3. Stop the web server (in a web farm, perform this operation for one server at a time).
  4. Rename the existing website directory (give it a date so you can keep multiple backups if needed).
  5. Rename the new directory to the name that your web server expects.
  6. Start your web server.
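
Here’s a minimal PowerShell sketch of those six steps, assuming IIS with the WebAdministration module.  The site name, directory paths and build output location are placeholders for illustration, not a definitive implementation:

    Import-Module WebAdministration

    $siteName    = "MyWebSite"                    # hypothetical IIS site name
    $siteRoot    = "C:\inetpub\MyWebSite"         # the directory the site points at
    $buildOutput = "\\buildserver\drops\latest"   # new files from the build server
    $newDir      = "C:\inetpub\MyWebSite_new"
    $backupDir   = "C:\inetpub\MyWebSite_" + (Get-Date -Format "yyyyMMdd_HHmmss")

    # 1. Create the directory with the new files from the build server.
    Copy-Item $buildOutput $newDir -Recurse

    # 2. Copy config files from the existing production environment.
    Copy-Item (Join-Path $siteRoot "web.config") $newDir -Force

    # 3. Stop the web server.
    Stop-Website -Name $siteName

    # 4. Rename the existing website directory (dated, so multiple backups can be kept).
    Rename-Item $siteRoot $backupDir

    # 5. Rename the new directory to the name the web server expects.
    Rename-Item $newDir $siteRoot

    # 6. Start the web server.
    Start-Website -Name $siteName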

For web farms, you can deploy your software in the middle of the day by adding steps to take one server out of the farm (mark it unhealthy or disable it in your load balancer).  Wait for its traffic to bleed off, then stop the web server and perform the directory switch.  Start the server back up, put it back in the farm, and repeat the same operation on the next web server, continuing through all servers in the farm.

To enhance this operation, you can pause after the first web server and allow testing to be performed before authorizing the deployment to continue to the other web servers in the farm.

Next, you’ll need to make sure you have a roll-back plan.  A roll-back can be accomplished by stopping your web server, putting the old directory back, and starting your web server again.  This process should be tested on a test or development system first.  It needs to be perfected, because you’ll need it when something bad happens.
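
Under the same assumptions (and placeholder names) as the deployment sketch above, a roll-back might look like this:

    # Move the broken deployment aside and restore a dated backup.
    Stop-Website -Name $siteName
    Rename-Item $siteRoot ($siteRoot + "_failed_" + (Get-Date -Format "yyyyMMdd_HHmmss"))
    Rename-Item $backupDir $siteRoot   # $backupDir is whichever backup you want to restore
    Start-Website -Name $siteName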

Logging

If your software doesn’t log exceptions, then you’ll need to add some sort of catch-all logging (like ELMAH).  You should at least log errors and send them to a text file or to an email address.  Be aware that if you are adding logging to an old legacy system that has had no logging in the past, your email system must either be robust enough to handle the load or able to discard older messages when the inbox fills up.  Otherwise, you’ll find yourself with buggy software and an email system that is down.

For text files, make sure the log is set up to roll over (create a new file) when it gets too big.  Find out what the practical maximum file size is for the editor of your choice: for a product such as Notepad++ you’ll want to keep files under 50 MB, while Sublime can handle larger files but loads them slowly as they approach 500 MB.  You’ll also want to limit the number of roll-over files so they don’t fill up your hard drive and crash your system.
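
If your logging library doesn’t cap the number of roll-over files itself, a small scheduled task can enforce the limit.  A minimal sketch, where the directory, file pattern and count are placeholders:

    $logDir = "D:\Logs\MyWebSite"
    $keep   = 20                      # number of roll-over files to keep

    # Delete everything except the newest $keep log files.
    Get-ChildItem $logDir -Filter "app*.log" |
        Sort-Object LastWriteTime -Descending |
        Select-Object -Skip $keep |
        Remove-Item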

Once you have established a logging system, analyze what errors are being logged.  Focus on the most frequent type of error first and fix the underlying problem in your software.  Eventually, you’ll get down to obscure errors that only occur in situations such as a web bot hitting your website with incorrect parameters (or something of that nature).

Next, you’ll want to log events that occur on your APIs.  Logging such events can reveal aspects of your software that you never knew existed.  For example, an object can end up null after an API call if the parameters are unexpected.  Fixing these bugs prevents 500 errors from tying up your resources and can also help prevent memory leaks and stuck web server processes.

More Automation

The last aspect of CI that you should focus on involves tests such as load testing.  Load tests can be performed during off-hours or on a system that is isolated from your production system.  Code coverage analysis can be used to determine whether your unit tests are adequate.  Be aware that code coverage numbers can be misleading: there’s a big difference between a large quantity of poorly designed unit tests and a small quantity of well-designed ones.

Integration testing can also be automated.  This is a complex subject in its own right, but there are ways to script a process that performs GETs and POSTs against APIs and web sites to probe for 500 errors.  Any manual test that is performed more than once is a candidate for automation.  Manual regression tests are time-consuming (i.e. expensive) and prone to errors.  Your test suite should consist of a collection of small, individual test packages that can be run separately or in parallel.  Eventually, you’ll get to a point where you can perform a full regression test at night and reduce manual testing to new systems or tricky sections of code.  This type of testing can be brittle if not done properly, so be aware that you will need to identify what can be automated and what will need to be done manually.
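
As a starting point for that kind of scripted probing, here’s a minimal PowerShell smoke test that issues GETs against a list of endpoints and flags anything that doesn’t come back successfully.  The URLs are placeholders:

    $endpoints = "https://example.com/api/customers",
                 "https://example.com/api/orders?status=open"

    foreach ($url in $endpoints) {
        try {
            # Invoke-WebRequest throws on 4xx/5xx responses, so the catch block flags them.
            $response = Invoke-WebRequest -Uri $url -UseBasicParsing
            Write-Host "$($response.StatusCode) $url"
        }
        catch {
            Write-Host "FAILED $url : $($_.Exception.Message)"
        }
    }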

Repeat Often

An important aspect of CI is the practice of deploying small chunks of code often instead of large chunks rarely.  When you deploy software often (as in several times a week, or even several times a day), you’ll gain confidence in your ability to deploy quality code.  The deployment process becomes automatic, and your automated processes become hardened and reliable.  Feedback from your logging should indicate whether something went wrong (in which case you can roll back) or whether your improvements have made an impact.  Feedback from your customers can also be addressed quickly, since your turn-around time consists of analysis and programming followed by your automated testing and deployment process.  If your testing and deployment process takes a month, developers will have forgotten much of what they programmed by the time it’s deployed.

Other Candidates for Automation

Resetting the passwords of the databases used by your website should be easy to do.  Sometimes development continues at a fevered pace, only for you to discover that two dozen (or more) databases are accessed from connection strings in various web.config files scattered across different servers.  You should at least keep a list of where the passwords are stored so you can change them all quickly.
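
Building that list can itself be automated.  Here’s a minimal sketch that scans a set of web roots for web.config files containing connection strings; the server names and paths are placeholders:

    $webRoots = "\\webserver1\c$\inetpub", "\\webserver2\c$\inetpub"

    foreach ($root in $webRoots) {
        # Report every web.config line that mentions a connection string.
        Get-ChildItem $root -Recurse -Filter "web.config" |
            Select-String -Pattern "connectionString" |
            Select-Object Path, LineNumber |
            Format-Table -AutoSize
    }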

Third-party connections also fall into this category.  If you connect to an outside service, you should keep track of where that connection information is stored and how to change it.  It only takes one rogue programmer to make life miserable for a programming shop with hundreds of config files scattered everywhere.  If you need to keep programmers out of your production system, then you’ll need a method of changing passwords often (such as once a month).

Finally

Be sure to click the “like” button if this information was helpful!