Homebrew Computers

Introduction

I’m going to switch it up a bit and talk about one of my other hobbies: electronics.  I haven’t worked with digital circuits in a while.  In fact, it’s been so long that I had to do a lot of research to find out what is new in the world of electronics.  Micro-controllers have come a long way.  Components have become dirt cheap over the years, and their capabilities are now way beyond those of my test equipment.  As I was looking around the world of technology, I stumbled across an article describing a guy who built a computer out of thousands of discrete transistors.  So that’s what I’m going to talk (or ramble) about in this article.

Megaprocessor

That’s the name of this homebrew computer system built by a guy named James Newman.  You can get to the website by clicking here: http://www.megaprocessor.com/.  

First, I was intrigued by the fact that he built an entire system out of transistors.  Not just any transistors, but NMOS transistors that are sensitive to static discharge.  I usually avoid these things; I have a difficult enough time building circuits out of TTL logic and NPN transistors.  However, if you want to build something out of a large number of transistors (like 27,000), you have to be conscious of power consumption and speed.  He has an entire story about his adventures with controlling static and burning out transistors.

As I dug through the website, I discovered that he built little circuits representing logic gates with these transistors, and then treated those circuits as components in a larger structure.  There is an LED on each input and output of every circuit, so it’s easy to visually verify and troubleshoot any hardware problems.  Here’s a sample picture of a 2-input AND gate:

2inputandgate

His website explains that he built this machine as a learning tool.  Anyone who wants to see what goes on inside a computer can watch the LEDs light up as the program operates.  That is a really good idea.  I think every college should have one of these for their computer engineering classes.  Unfortunately, due to maintenance costs and physical space, I don’t think too many colleges would be interested in setting one of these up.
 
In addition to the single logic gate boards, some boards contained repetitive circuitry consisting of many gates.  Those boards are diagrammed accordingly (also with LEDs on inputs and outputs).  Here’s an example of an 8-bit logic board:

logicboard

The next step up is the assembly of circuits into modules.  The connecting wires are diagrammed on the front of the board (see the red lines below) and the circuits are wired from behind.  Here’s a state machine module:

module


Here’s what one of these modules looks like from the backside:

modulebackside

The modules are mounted in frames, which he has arranged in his living room (though he’s looking for a permanent, publicly accessible location for the device).

As I mentioned before, you can follow the link and dig around his website to learn all the fun details of how he built the machine, how long it took him and how much it cost.  For those of us who have worked in the electronics industry, his section called “Progress” has a lot of interesting stories about problems that he ran into, not to mention the “Good, Bad & Ugly”.  This story made me cringe: Multiplexor Problem.  Unexpected current flow problems are difficult to understand and troubleshoot.
 

So what’s the point?  

It’s a hobby.  The purpose is to build something or accomplish some task and stretch your abilities.  The goal is to experience what it would be like to construct such a machine.  Think of this as an advanced circuit-building exercise.

I’ve built microprocessor-based circuits in the past (mentioned on my website:  http://www.decaire.net/Home/ComputerProgramming), but the Megaprocessor is much more complex and more challenging than my project.  If you really want to learn how a computer operates, nothing compares to a project like this.  I have to warn readers that this is not something you jump into out of the blue.  If you have no electronics experience, start small.  I mean, really small.

I would start with a book like this:

make_electronics

You can find this book at Amazon or at this link: http://www.makershed.com/products/make-electronics-2ed.  The bookstore that I visited yesterday (Barnes & Noble) has this as well.  I browsed through a lot of the “Make:” series of books and they are very well organized.

You’ll need some basic supplies like a breadboard, wire, hand tools and a voltmeter (nothing fancy).  If you move up into faster digital circuits, or you dive into microcontrollers and microprocessors, you’ll need to invest in an oscilloscope.  This will probably be the most expensive piece of test equipment you’ll ever buy.  I still own an original Heathkit oscilloscope that is rated at up to 10 MHz.  If you understand CPU speeds, you’ll notice that this oscilloscope is not able to troubleshoot an i7 processor running at 4 GHz.  In fact, oscilloscopes that can display waveforms at that frequency are beyond the personal budget of most hobbyists (I think that crosses over into the domain of the obsessive).


Other Homebrew Systems

I spent some time searching the Internet for other homebrew computers and stumbled onto the “Homebuilt CPUs WebRing.”  I haven’t seen a webring in a long time, so this made me smile.  There are so many cool machines on this list (click here).  There are a couple of relay machines, and one in particular has a video so you can see and hear the relays clicking as the processor churns through instructions (Video Here, scroll down a bit).  The story behind Zusie the relay computer is fascinating, especially his adventures in obtaining 1,500 relays to build the machine (and on a budget).  I laughed at his account of acquiring and de-soldering the relays from circuit boards that were built for telephone equipment.

There are a lot of other machines on this webring that are just as interesting: great stories, schematics, instructions on how each machine was built, etc.  The one machine that really got my attention was the Magic-1 (click here).  This is a mini-computer built by a guy named Bill Buzbee.  He has a running timeline documenting his progress in designing and building the computer.  Reading his notes on designing an emulator and then his issues with wire-wrapping really gives a good picture of what it takes to build a computer out of discrete logic.  Here’s a photo of the backside of the controller card:

controller_card


The final machine schematics are posted here.  He used a microprogrammed architecture, which is like building a computer to run a computer.  This is one of my favorite CPU designs, and I learned about it when I bought a book titled “Bit-slice Microprocessor Design”.  Coincidentally, this book is listed on his links page under “Useful books”.  You can still get this book new or used; I would recommend picking up a cheap used copy from Amazon.  The computer discussed in this book is based on the AMD 2901, a 4-bit bit-slice processor.  Basically, you buy several of these chips and stack them in parallel to form a wider CPU.  For a 32-bit CPU, you would buy 8 chips and wire them in parallel.  Unfortunately, AMD doesn’t manufacture these chips any more.  The book, however, is a good read.  He also has PDF postings of another book called “Build a Microcomputer”, which is virtually the same book (go here, scroll down).
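To illustrate the bit-slice idea in software terms, here’s a rough C# sketch (my own illustration, not from either book): eight 4-bit slices chained together with a ripple carry perform a 32-bit addition, the same way eight 4-bit bit-slice chips would be wired in parallel with a carry chain.

public static class BitSliceDemo
{
    // Adds two 4-bit values plus a carry-in; returns the 4-bit sum and the carry-out,
    // mimicking what a single slice contributes to a wider ALU.
    private static (uint Sum, uint CarryOut) AddSlice(uint a, uint b, uint carryIn)
    {
        uint total = (a & 0xF) + (b & 0xF) + (carryIn & 0x1);
        return (total & 0xF, total >> 4);
    }

    // Chains eight 4-bit slices to add two 32-bit values.
    public static uint Add32(uint a, uint b)
    {
        uint result = 0;
        uint carry = 0;
        for (int slice = 0; slice < 8; slice++)
        {
            int shift = slice * 4;
            (uint sum, uint carryOut) = AddSlice((a >> shift) & 0xF, (b >> shift) & 0xF, carry);
            result |= sum << shift;
            carry = carryOut;
        }
        return result;
    }
}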


Building Your Own

If you’re looking to build your own computer just to learn how computers work, you can use one of the retro processors from the 70s and 80s.  These are dirt cheap, so if you blow one up by hooking up the wrong power leads, you can just grab another one from your box of 50 spare CPUs.  On the simple side, you can use an 8085 (this is almost identical to the 8080, but doesn’t need an additional +12V power supply).  The 8080 CPU was used in the original Space Invaders arcade game (see Space Invaders schematics here).

The 6502 has a lot of information available since it was used by Apple and Commodore in their earliest designs.  The Z80 is like a souped-up 8080 processor.  This CPU has index registers, which make it more flexible.  A lot of hobbyists have built machines around the Z80, and quite a few arcade games were built with this CPU.  The Galaga arcade game used 3 Z80 CPUs to run the game.

I suspect that over time these CPUs will become difficult to find.  Jameco currently lists them as refurbished.  If you build a project around one of these CPUs, be sure to buy extra chips.  That way you’ll have spares if the supply chain runs dry.

If you’re more advanced, you can still buy 8088 CPUs for $3.95 each at Jameco Electronics.  This is the CPU that the first IBM PC was based on.  At that price, you can get a dozen for under $50 and build a parallel machine.  This CPU can also address 1 megabyte of memory (which is a lot for assembly language programming), it comes in a 40-pin package, and there is a huge amount of software and hardware available for it.

If you’re not so into soldering, wire-wrapping or circuit troubleshooting, but would like to build a customized system, you can experiment with tiny computers like the Raspberry Pi, the Arduino or the BeagleBone.  These devices are cheap, and they have ports for network connections, USB devices, HDMI outputs, etc.  There are a lot of books and projects on the Internet to explore.

These are not your only choices either.  There are plenty of cheap microcontroller chips.  Jameco lists dozens of them with built-in peripherals, like this one: ATTINY85-20PU.  It’s only $4.49 and you can plug it into a breadboard.


So Many Resources Available

My website doesn’t tell the whole story of my early days of building the 8085 computer board.  I’ve actually built two of these.  My first board was built somewhere around 1978.  At that time, I was a teenager, and computers were so expensive that I didn’t own one, so I was determined to build one.  I had an old teletype (donated by an electronics engineer who lived across the street from my family when I was younger).  I built my own EPROM programmer that required DIP-switch inputs (this was not a very successful way to get a program into EPROM memory).  After I graduated from high school, I joined the Navy and purchased a Macintosh computer in 1984.  I’m talking about THE Macintosh, with 128K of memory.  Before I was honorably discharged from the Navy, I upgraded my Mac several times and ended up with a Mac Plus with 4 MB of memory.  My old 8085 computer board was lost in one of the many moves my parents and I made between 1982 and 1988, so I decided to reconstruct my computer board, and that is the board pictured on my website.  I also constructed a better EPROM programmer with a serial connection to the Mac so I could assemble the code and send it to the programmer (I wrote the assembler and the EPROM burner program in Turbo Pascal).  All of this occurred before the World Wide Web and Google changed the way we acquire information.  Needless to say, I have a lot of books!

Those were the “good ole’ days”.  Now we have the “better new days”.  I own so many computers that I can’t keep count.  My primary computer is a killer PC with 500 GB of M.2 SSD space (and a 4TB bulk-storage SATA drive), 32 GB of memory and a large screen.  I can create a simulation of what I want to build and test everything before I purchase a single component.  I can also get devices like EPROM burners for next to nothing.  There are online circuit emulators that can be used to test designs; I’m currently evaluating this one: Easy EDA.  There are companies that will manufacture printed circuit boards, like this one: Dorkbot PDX.  They typically charge by the square inch of board space needed.  This is nice, because I can prototype a computer with a wire-wrap design and then have a board manufactured and build another copy that will last forever.


Conclusion

If you’re bored and looking for a hobby, this is the bottomless pit of all hobbies.  There is no depth at which you’ll run out of things to learn; you can always dig deeper and discover new things.  This is not a hobby for everyone.  It takes a significant amount of patience and learning.  Fortunately, you can start off cheap and easy and test your interest level.  Otherwise, you can read the timelines and blogs of those of us who build circuits and struggle with the tiny details of getting a CPU to perform a basic NOP instruction.  I like the challenge of making something work, but I also like reading about other people who have met the challenge and accomplished a complex task.

Never stop learning!


 

Dot Net Core Project Renaming Issue

Summary

I’m going to quickly demonstrate a bug that can occur in .Net Core and show how to fix it.  The error produced is:

The dependency LibraryName >= 1.0.0-* could not be resolved.

Where “LibraryName” is a project in your solution that you have another project linked to.

Setup

Create a new .Net Core solution and add a library to it named “SampleLibrary”.  I named my solution DotNetCoreIssue01.  Now add a .Net Core console project to the solution and name it “SampleConsole”.  Next, right-click on the “References” node of the console application and choose “Add Reference”.  Click the check box next to “SampleLibrary” and click the OK button.  Now your project should build.

Next, rename your library to “SampleLibraryRenamed”, then go into the project.json file for your console application and change the dependency to “SampleLibraryRenamed”.  Now rebuild.  The project is now broken.

Your project.json will look like this:

projectjson
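Roughly, the dependencies section of the console application’s project.json now contains something like this (a sketch; the exact version numbers and other entries will vary):

{
	"dependencies": {
		"SampleLibraryRenamed": "1.0.0-*",
		"Microsoft.NETCore.App": {
			"type": "platform",
			"version": "1.0.1"
		}
	}
}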


And your Error List box will look like this:

errorlist



How To Fix This

First, you’ll need to close Visual Studio.  Then navigate to the src directory of your solution and rename the SampleLibrary directory to SampleLibraryRenamed.  

Next, you’ll need to edit the .sln file.  This file is located in the root solution directory (the same directory where the src directory is located).  It should be named “DotNetCoreIssue01.sln” if you named your solution the same as I mentioned above.  Look for a line containing the directory that you just renamed.  It should look something like this (sorry for the word wrap):

Project("{8BB2217D-0F2D-49D1-97BC-3654ED321F3B}") = "SampleLibraryRenamed", "src\SampleLibrary\SampleLibraryRenamed.xproj", "{EEB3F210-4933-425F-8775-F702192E8988}"

As you can see, the path to the SampleLibraryRenamed project still points to src\SampleLibrary, the directory that was just renamed.  Change the path so it matches the new directory name: src\SampleLibraryRenamed.
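After the edit, the line should look something like this (same GUIDs, new directory name):

Project("{8BB2217D-0F2D-49D1-97BC-3654ED321F3B}") = "SampleLibraryRenamed", "src\SampleLibraryRenamed\SampleLibraryRenamed.xproj", "{EEB3F210-4933-425F-8775-F702192E8988}"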

Now open your solution in Visual Studio and all will be well.

 

The Trouble with Legacy Code

It’s been a long time since I wrote about legacy code.  So I’m going to do a brain-dump of my experience and thoughts on the subject.

Defining Legacy Code

First, I’m going to define what I mean by legacy code.  Many programmers who have entered the industry in the past five years or so view legacy code as anything that was written more than a year ago, or code that was written in the previous version of Visual Studio, or the previous minor version of .Net.  When I talk about legacy code, I’m talking about code that is so old that many systems cannot support it anymore.  An example is Classic ASP.  Sometimes I’m talking about VB.Net.  Technically, VB.Net is not a legacy language, but sometimes it is in practice.  In the context of VB.Net, I’m really talking about the technique used to write the code.  My experience is that Basic is a language picked up by new programmers who have no formal education in the subject or are just learning to program for the first time.  I know how difficult it is to wean yourself off your first language; I was that person once.  Code written by such programmers usually amounts to tightly coupled spaghetti code, with all the accessories: no unit tests, ill-defined methods treated like function calls, difficult-to-break dependencies, global variables, no documentation, poorly named variables and methods, etc.  That’s what I call legacy code.

The Business Dilemma

In the business world, the language used and even the techniques used can make no difference.  A very successful business can be built around very old, obsolete and difficult-to-work-with code.  This can work in situations where the code is rarely changed, the code is hidden behind a website, or the code is small enough to be manageable.  Finally, if the business can sustain the high cost of a large staff of developers, QA and other support personnel, bad code can work.  It’s difficult to make a business case for the conversion of legacy code.

In most companies, software is grown.  This is where the legacy problem gets exponentially more costly over time.  Most of the cost is hidden.  It shows up as an increased number of bugs that occur as more enhancements are released (I’m talking about bugs in existing code that was disturbed by the new enhancement).  It shows up as an increase in the amount of time it takes to develop an enhancement.  It also shows up as an increase in the amount of time it takes to fix a bug.

Regression testing becomes a huge problem.  The lack of unit testing means the code must be manually tested.  A product like Selenium can automate some of that manual testing, but the technique is very brittle.  The smallest interface change can cause the tests to break, and the tests are usually too slow to be executed by each developer or to be used with continuous integration.

What to do…

Add Unit Tests?

At first, this seems like a feasible task.  However, the man-hours involved are quite high.  First, there’s the problem of languages like Classic ASP, where unit tests are just not possible.  For code written in VB.Net, dependencies must be broken.  The difficulty with breaking dependencies is that the refactoring can be complicated and cause a lot of bugs.  It’s nearly impossible to make a business case to invest thousands of developer hours into the company product to produce no noticeable outcome for the customer.  Even worse is if the outcome is an increase in bugs and downtime, the opposite of what was intended.
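As a rough illustration of what breaking a dependency looks like in .Net code (my own sketch with made-up class names, not from any particular product): hide the concrete resource behind an interface so a unit test can substitute a fake.

// Hypothetical example: InvoiceCalculator originally called the database directly,
// which made it untestable.  Hiding the data access behind an interface lets a
// unit test substitute a fake repository.
public interface IInvoiceRepository
{
    decimal GetCustomerDiscount(int customerId);
}

public class InvoiceCalculator
{
    private readonly IInvoiceRepository _repository;

    public InvoiceCalculator(IInvoiceRepository repository)
    {
        _repository = repository;
    }

    public decimal ApplyDiscount(int customerId, decimal amount)
    {
        return amount * (1m - _repository.GetCustomerDiscount(customerId));
    }
}

With the interface in place, a test can hand InvoiceCalculator a fake repository instead of hitting a real database.  The hard part in legacy code is getting to this point without breaking anything.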


Convert Code?

Converting code is also very hazardous.  You could theoretically hold all enhancements for a year, and throw hundreds of programmers at the problem of rewriting your system in the latest technology with the intent to deliver the exact user experience currently in place.  In other words, the underlying technology would change, but the product would look and feel the same.  Business case?  None.

In the case of Classic ASP, there are a couple of business cases that can be made for conversion.  However, the conversion must be performed with minimal labor to keep costs down, and the goal should be to normalize your system so it is all .Net.  This makes sense if your system consists of a mix of languages.  The downside of such a conversion is the amount of regression testing that would be required.  Depending on the volume of code your system contains, you could break this into small sections and attack it over time.

One other problem with conversion is the issue of certification.  If you are maintaining medical or government software that requires certification when major changes take place, then your software will need to be re-certified after conversion.  This can be an expensive process.


Replace when Possible?

This is one of the preferred methods of attacking legacy code.  When a new feature is introduced, replace the legacy code that is touched by the new feature with new code.  This has several benefits: the customer expects bugs in new features, and the business expects to invest money in a new feature.  The downside of using only this technique is that eventually your legacy code volume will plateau, because some web pages are little used or are not of interest for upgrading (usually it’s the configuration sections that suffer from this).

Another downside to this technique is that each enhancement may bring new technologies into the product, so the number of technologies in use grows over time.  This can be a serious liability if the number of people maintaining the system is small and one or more of them decide to move on to another company.  Now you have to fill the position with someone who knows a dozen or more odd technologies, or the new hire will need a lot of time to get up to speed.


The Front-End Dilemma

Another issue with legacy code that is often overlooked is the user interface itself.  Over time, interfaces change in style and in usability.  Many systems that are grown end up with an inconsistent interface.  Some pages are old-school HTML with JavaScript; others use Bootstrap and AngularJS.  Multiple versions of jQuery are sprinkled around the website.  Bundling is an afterthought, if it is used at all.  If your company hires a designer to make things look consistent, there is still the problem of re-coding the front-end code.  In Classic ASP, the HTML is embedded in the same source file as the JavaScript and VBScript.  That makes front-end conversion a level-10 nightmare!  .Net web pages are no picnic either.  In my experience, VB.Net web pages are normally written with a lot of VB code mixed into the front-side markup instead of the code-behind.  There are also many situations where the code-behind emits HTML and JavaScript so that logic can decide which code to send to the customer’s browser.


The Database Dilemma

The next issue I want to mention is the database itself.  When Classic ASP was king in the world of Microsoft-based websites, the database was used to perform most of the heavy lifting.  Web servers did not have a lot of power, and MS SQL Server had CPU cycles to spare (most straight database operations tax the hard drive but leave the CPU idle).  So many legacy systems have their business logic implemented in stored procedures.  In this day and age, that becomes a license-cost issue.  As the number of customers increases, database instances must be added to handle the load.  Web servers are much cheaper to license than SQL servers, so it makes more sense to put the business logic in the front end.  In today’s API-driven environment, this can be scaled to provide CPU, memory and drive space to the processes that need them the most.  In legacy systems, the database is where it all happens, and all customers must share the misery of one heavy-duty, slow-running process.  There is only one path for solving this issue: new code must move the processing to a front-end source, such as an API, and this code must be developed incrementally as the system is enhanced.  There is no effective business case for “fixing” this issue by itself.

As I mentioned, a lot of companies use stored procedures to perform their back-end processing.  Once a critical mass of stored procedures has been created, you are locked into the database technology that was chosen on day one.  There will be no cost-effective way to convert an MS SQL database into Mongo or Oracle or MySQL.  Wouldn’t it have been nice if the data store had been broken into small chunks hidden behind APIs?  We can all dream, right?


The Data Center Dilemma

Distributed processing and scalability are the next issues that come to mind.  Scaling a system can consist of adding a load balancer with multiple web servers.  Eventually, the database will max out and you’ll need to run parallel instances to try to split the load.  The most pain will come when it is necessary to run a second data center.  The decision to use more than one data center could be for redundancy, or it could be to reduce latency for customers located in a distant region.  Scaling an application to work in multiple data centers is no trivial task.  First, if fail-over redundancy is the goal, then the databases must be upgraded to enterprise licenses, which increases the cost per license and also doubles the count, because the purpose is to have identical databases at two (or more) locations.

Compounding the database problems that will need to be solved is the problem of the website itself.  More than likely, your application that was “grown” is a monolithic website application that is all or nothing.  This beast must run from two locations and be able to handle users that might have data at one data center or the other.  

If the application was designed around sessions, which was the prevailing technology until APIs became common, then there is the session fail-over problem.  Session issues rear their ugly head as soon as a web farm is introduced, but there are quick and dirty hacks to get around those problems (like pinning the incoming IP address to a single web server so a user stays on that server after logging in).  Using a centralized session store is one solution for a web farm.  Another solution is a session-less website design.  Adapting a session-based system to be session-less is a monstrous job.  For a setup like JWT, the number of variables kept in a session must be reduced to something small enough to be passed to the browser.  Another method is to cache the session variables behind the scenes and pass the browser a token that identifies the user.  The algorithm can then check whether the cache contains the variables that match the user.  This caching system would need to be shared between data centers, because a variable saved from one web page would be lost if the user’s next request was directed to the other data center.  To get a rough idea of how big the multi-datacenter problem is, I would recommend browsing this article:

Distributed Algorithms in NoSQL Databases
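To make the token-plus-cache idea a little more concrete, here’s a minimal sketch (my own illustration; the class name and storage are hypothetical, and a real system would replace the in-memory dictionary with a replicated cache shared by both data centers):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical shared session store keyed by a token handed to the browser.
public class TokenSessionStore
{
    private readonly ConcurrentDictionary<Guid, Dictionary<string, string>> _cache
        = new ConcurrentDictionary<Guid, Dictionary<string, string>>();

    // Called at login: stash the user's variables and hand the browser only a token.
    public Guid CreateSession(Dictionary<string, string> variables)
    {
        var token = Guid.NewGuid();
        _cache[token] = variables;
        return token;
    }

    // Called on every request: the token from the browser identifies the cached variables.
    public bool TryGetSession(Guid token, out Dictionary<string, string> variables)
    {
        return _cache.TryGetValue(token, out variables);
    }
}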


The Developer Knowledge Dilemma

This is a really ugly problem.  Younger developers do not know the older languages and they are being taught techniques that require technologies that didn’t exist 10 years ago.  Unit testing is becoming an integral part of the development process.  Object oriented programming is used in almost all current languages.  This problem exposes the company to a shortage of programmers able to fix bugs and solve problems.  Bugs become more expensive to fix because only experienced programmers can fix them.  Hire a dozen interns to fix minor issues with your software?  Not going to happen.  Assign advanced programmers to fix bugs?  Epic waste of money and resources.  Contract the work to an outside company?  Same issues, expensive and difficult to find the expertise.


Conclusion

My take on all of this is that a company must have a plan for mitigating legacy code.  Otherwise the problem will grow until the product is too expensive to maintain or enhance.  Most companies don’t recognize the problem until it becomes a serious problem.  Then it’s somewhat late to correct and corrective measures become prohibitively expensive.  It’s important to take a step back and look at the whole picture.  Count the number of technologies in use.  Count the number of legacy web pages in production.  Get an idea of the scope of the problem.  I would recommend keeping track of these numbers and maybe compare the number of legacy pages to non-legacy pages.  Track your progress in solving this problem.

I suspect that most web-based software being built today will fall into the MVC-like pattern or use APIs.  This is the latest craze.  If developers don’t understand the reason they are building systems using these techniques, they will learn when the software grows too large for one data center or even too large for one web server.  Scaling and enhancing a system that is broken into smaller pieces is much easier and cheaper to do.

I wish everyone the best of luck in their battle with legacy code.  I suspect this battle will continue for years to come.

 

Dot Net Core

I’ve been spending a lot of time trying to get up to speed on the new .Net Core product.  The product is at version 1.0.1 but everything is constantly changing.  Many NuGet packages are not compatible with .Net Core and the packages that are compatible are still marked as pre-release.  This phase of a software product is called the bleeding edge.  Normally, I like to avoid the bleeding edge, and wait for a product to at least make it to version 1.  However, the advantages of the new .Net Core make the pain and suffering worth it.

The Good

Let’s start with some of the good features.  First, the DLLs are redesigned to allow better dependency injection.  This is a major feature that is long overdue.  Even the MVC controllers can be unit tested with ease.

Next up is the fact that DLLs are no longer added to projects by themselves; the NuGet package manager determines what your project needs.  I have long viewed NuGet as an extra hassle, but Microsoft has finally made it a pleasure to work with.  In the past, NuGet made version control harder because you had to remember to exclude the NuGet packages from your check-in.  That has not changed (not in TFS anyway), but the way NuGet works with projects in .Net Core has.  Each time your project loads, the NuGet packages are loaded.  Which packages are used is determined by the project.json file in each project (instead of the old packages.config file).  Typing in a package name and saving the project.json will cause the package to load.  This cuts your development time if you need a package loaded into multiple projects: just copy the dependency line from one project.json file and paste it into the others.

It appears that Microsoft is leaning more toward XUnit for unit testing.  I haven’t used XUnit much in the past, but I’m starting to really warm up to it.  I like the simplicity.  No attribute is needed on the test class.  There is a “Theory” attribute that can feed inline data into a unit test multiple times, turning one unit test into one test per input set.

The new IOC container is very simple.  In an MVC controller class, you can specify a constructor with parameters typed as your interfaces.  The built-in IOC container will automatically match each interface with the instance set up in the Startup class.
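Here’s a minimal sketch of what that looks like (the IGreetingService and GreetingService types are placeholders I made up for illustration, and the Startup class is trimmed down to just the registration): register the interface in Startup.ConfigureServices, then ask for it in a controller constructor.

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}";
}

public class Startup
{
    // Map the interface to a concrete type in the built-in IOC container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddTransient<IGreetingService, GreetingService>();
    }
}

// The container supplies the registered instance to the constructor automatically.
public class GreetingController : Controller
{
    private readonly IGreetingService _greetingService;

    public GreetingController(IGreetingService greetingService)
    {
        _greetingService = greetingService;
    }

    public IActionResult Get(string name)
    {
        return Ok(_greetingService.Greet(name));
    }
}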

The documentation produced by Microsoft is very nice: https://docs.asp.net/en/latest/intro.html.  It’s clean, simple and explains all the main topics.

The new command line commands are simple to use.  The “dotnet” command can be used to restore NuGet packages with “dotnet restore”.  The build is “dotnet build”, and “dotnet test” is used to execute the unit tests.  The dotnet command uses the config files to determine what to restore, build or test.  This feature is most important for people who have to set up and deal with continuous integration systems such as Jenkins or Team City.

The Bad

OK, nothing is perfect, and this is a very new product.  Microsoft and many third-party vendors are scrambling to get everything up to speed, but .Net Core is still in the early stages of development.  So here is a list of hopefully temporary problems with .Net Core.

The NuGet package manager is very fussy.  Many times I just use the user interface to add NuGet packages, because I’m unsure of the version that is available.  Using a wild-card can cause a package version to be brought in that I don’t really want.  I seem to spend a lot more time trying to make the project.json files work without error.  Hopefully, this problem will be diminished after the NuGet packages catch up to .Net Core.

If you change the name of a project that another project depends on, you’ll get a build error.  To fix this issue, you need to exit Visual Studio, rename the project directory to match, and then fix the .sln file to reflect the same directory change.

Many 3rd-party products do not support .Net Core yet.  I’m using Resharper Ultimate, and its unit test runner does not work with .Net Core, which means its code coverage tool does not work either.  I’m confident that JetBrains will fix this issue within the next month or two, but it’s frustrating to have a tool I rely on that doesn’t work.

Many of the 3rd-party NuGet packages don’t work with .Net Core.  FakeItEasy is one such package.  There is no .Net Core compatible package as of this blog post.  Eventually, these packages will be updated to work with Core, but it’s going to take time.

What to do

I’m old enough to remember when .Net was introduced.  It took me a long time to get used to the new paradigm.  Now there’s a new paradigm, and I intend to get on the bandwagon as quickly as I can.  So I’ve done a lot of tests to see how .Net Core works and what has changed.  I’m also reading a couple of books.  The first book I bought was the .Net Core book:

dotnetcorebook

This is a good book if you want to browse through and learn everything that is new in .Net Core.  The information in this book is an inch deep and a mile wide, so you can use it to learn what technologies are available, zero in on a subject that you want to explore, and then go to the Internet and search for more material.

The other book I bought was this one:

proaspcorebook

This book is thicker than the former, and the subject is narrowed somewhat.  I originally ordered this as an MVC 6 book, but the release was delayed and the book was renamed for Core.  I’m very impressed by this book because each chapter shows a different technology to be used with MVC, and there are unit tests with explanations for each.  The author builds an application throughout the book.  Each chapter builds on the previous program and adds some sort of functionality, like site navigation or filtering, and then explains how to write the unit tests for those features in the same chapter.  Most books go through different features chapter by chapter and then tack on a single chapter about the unit test features of a product.  This is a refreshing change from that technique.

I am currently working through this book, and I would recommend that any .Net programmer get up to speed on Core as soon as possible.

 

Unit Testing with Moq

Introduction

There are a lot of articles on how to use Moq, but I’m going to bring out my die roller game example to show how to use Moq to roll a sequence of predetermined results.  I’m also going to do this using .Net Core.

The Setup

My sample program is a game.  The game is actually empty, because I want to show the minimal code to demonstrate Moq itself.  So let’s pretend there is a game object and it uses a die roll object to get a random outcome.  For those who have never programmed a game before, a die roll can be used to determine offense or defense of one battle unit attacking another in a turn-based board game.  However, unit tests must be repeatable and we must make sure we test as much code as possible (maximize our code coverage).

The sample project uses a Game object that is dependent on the DieRoller object.  To break dependencies, I required an instance of the DieRoller object to be fed into the Game object’s constructor:

public class Game
{
    private IDieRoller _dieRoller;

    public Game(IDieRoller dieRoller)
    {
        _dieRoller = dieRoller;
    }

    public int Play()
    {
        return _dieRoller.DieRoll();
    }
}
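The IDieRoller interface and the concrete DieRoller class aren’t shown in this post; a minimal version consistent with the code above might look like this (my own sketch, assuming a standard six-sided die):

using System;

public interface IDieRoller
{
    int DieRoll();
}

public class DieRoller : IDieRoller
{
    private readonly Random _random = new Random();

    // Returns a random value from 1 to 6 (Next's upper bound is exclusive).
    public int DieRoll()
    {
        return _random.Next(1, 7);
    }
}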

Now I can feed a Moq object into the Game object and control what the die roll will be.  For the game itself, I can use the actual DieRoller object by default:

public static void Main(string[] args)
{
    var game = new Game(new DieRoller());
}

An IOC container could be used as well, and I would highly recommend it for a real project.  I’ll skip the IOC container for this blog post.

The unit test can look something like this:

[Fact]
public void test_one_die_roll()
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.Setup(x => x.DieRoll())
	.Returns(2);

	var game = new Game(dieRoller.Object);
	var result = game.Play();
	Assert.Equal(2, result);
}

I’m using XUnit and Moq in the above example.  Here is the project.json file for my .Net Core test project:

{
	"version": "1.0.0-*",
	"testRunner": "xunit",
	"dependencies": {
		"DieRollerLibrary": "1.0.0-*",
		"GameLibrary": "1.0.0-*",
		"Microsoft.NETCore.App": {
			"type": "platform",
			"version": "1.0.1"
		},
		"Moq": "4.6.38-alpha",
		"xunit": "2.2.0-beta2-build3300",
		"xunit.core": "2.2.0-beta2-build3300",
		"dotnet-test-xunit": "2.2.0-preview2-build1029",
		"xunit.runner.visualstudio": "2.2.0-beta2-build1149"
	},

	"frameworks": {
		"netcoreapp1.0": {
			"imports": "dnxcore50"
		}
	}
}

 

Make sure you check the versions of these packages, since they are changing constantly as of this blog post.  It’s probably best to use the NuGet package manager window or the console to get the latest versions.

Breaking Dependencies

What does Moq do?  Moq is a quick and dirty way to create a fake object instance without writing a fake class yourself.  Moq can take an interface or object definition and create a local instance with outputs that you can control.  In the XUnit sample above, Moq is told to return the number 2 when the DieRoll() method is called.

Why mock an object?  As you create code, you’ll end up with objects that call other objects.  These calls create dependencies.  In this example, the Game object is dependent on the DieRoller object:

 

Each object should have its own unit tests.  If we are testing two or more objects that are connected together, then technically we’re performing an integration test.  To break dependencies, we need all objects not under test to be faked or mocked out.  If the Game object has multiple paths (using if/then or case statements, for example) that depend on the roll of the die, then we’ll need to create unit tests where we can fix the die roll to a known set of values and execute the Game object to see the expected results.

First, I’m going to add a method to the Game class that will determine the outcome of an attack.  If the die roll is greater than 4, then the attack is successful (unit is hit).  If the die roll is 4 or less, then it’s a miss.  I’ll use true for a hit and false for a miss.  Here is my new Game class:

public class Game
{
    private IDieRoller _dieRoller;

	public Game(IDieRoller dieRoller)
	{
		_dieRoller = dieRoller;
	}

	public int Play()
	{
		return _dieRoller.DieRoll();
	}
	 
	public bool Attack()
	{
		if (_dieRoller.DieRoll() > 4)
		{
			return true;
		}
		
		return false;
	}
}


Now if we define a unit test like this:

[Theory]
[InlineData(1)]
[InlineData(2)]
[InlineData(3)]
[InlineData(4)]
public void test_attack_unsuccessful(int dieResult)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.Setup(x => x.DieRoll())
	.Returns(dieResult);

	var game = new Game(dieRoller.Object);
	var result = game.Attack();
	Assert.False(result);
}


We can test all instances where the die roll should produce a false result.  To make sure we have full coverage, we’ll need to test the other two die results (where the die is a 5 or a 6):

[Theory]
[InlineData(5)]
[InlineData(6)]
public void test_attack_successful(int dieResult)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.Setup(x => x.DieRoll())
	.Returns(dieResult);

	var game = new Game(dieRoller.Object);
	var result = game.Attack();
	Assert.True(result);
}

Another Example

Now I’m going to make it complicated.  Sometimes in board games, we use two die rolls to determine an outcome.  First, I’m going to define an enum to allow three distinct results of an attack:

public enum AttackResult
{
	Miss,
	Destroyed,
	Damaged
}


Next, I’m going to create a new method named Attack2():

public AttackResult Attack2()
{
	if (_dieRoller.DieRoll() > 4)
	{
		if (_dieRoller.DieRoll() > 3)
		{
			return AttackResult.Damaged;
		}
		return AttackResult.Destroyed;
	}
	return AttackResult.Miss;
}


As you can see, the die could be rolled up to two times.  So, in order to test your results, you’ll need to fake two rolls before calling the game object.  I’m going to use the “Theory” XUnit attribute to feed in values that represent a damaged unit.  The values need to be the following:

5,4
5,5
5,6
6,4
6,5
6,6

Moq has a SetupSequence() method that allows us to stack predetermined results to return.  Every time the mock object is called, the next value in the sequence is returned.  Here’s the XUnit test to handle all die rolls that would result in an AttackResult of Damaged:

[Theory]
[InlineData(5, 4)]
[InlineData(5, 5)]
[InlineData(5, 6)]
[InlineData(6, 4)]
[InlineData(6, 5)]
[InlineData(6, 6)]
public void test_attack_damaged(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
	.Returns(dieResult1)
	.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Damaged, result);
}

Next, the unit tests for instances where the Attack2() method returns an AttackResult of Destroyed:

[Theory]
[InlineData(5, 1)]
[InlineData(5, 2)]
[InlineData(5, 3)]
[InlineData(6, 1)]
[InlineData(6, 2)]
[InlineData(6, 3)]
public void test_attack_destroyed(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
	.Returns(dieResult1)
	.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Destroyed, result);
}

And finally, the instances where the AttackResult is a miss:

[Theory]
[InlineData(1, 1)]
[InlineData(2, 2)]
[InlineData(3, 3)]
[InlineData(4, 1)]
public void test_attack_miss(int dieResult1, int dieResult2)
{
	var dieRoller = new Mock<IDieRoller>();
	dieRoller.SetupSequence(x => x.DieRoll())
	.Returns(dieResult1)
	.Returns(dieResult2);

	var game = new Game(dieRoller.Object);
	var result = game.Attack2();
	Assert.Equal(AttackResult.Miss, result);
}

In the instance of the miss, the second die roll doesn’t really matter, and technically the unit test could be cut back to one input.  To test every possible case, we could feed all six values into the second die.  Why would we do that?

Unit tests are performed for more than one reason.  Initially, they are created to prove our code as we write it.  Test-driven development is centered around this concept.  However, we also have to recognize that after the code is completed and deployed, the unit tests become regression tests.  These tests should live with the code for the life of the code.  The tests should also be incorporated into your continuous integration environment and executed every time code is checked into your version control system (technically, you should execute the tests every time you build, but your build times might be too long to do this).  This will prevent future code changes from accidentally breaking code that was already developed and tested.  In the Attack2() method, a developer could enhance the code to use the second die roll when the first die roll is a 1, 2, 3 or 4.  The unit test above will not necessarily catch this change.  The only thing worse than a broken unit test is one that passes when it shouldn’t.

With that said, you should not have to perform an exhaustive test on every piece of code in your program.  I would only recommend such a tactic if the input data set is small enough to be reasonable.  For the example above, the die only has 6 sides, and the “Theory” attribute cuts down the code you’ll need to perform multiple unit tests.  If you are using Microsoft Tests (MSTest), you can set up a loop that does the same thing as the “Theory” attribute and test all iterations for one expected output in each unit test.
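For example, a rough MSTest equivalent of the test_attack_unsuccessful theory above might loop over the inputs inside a single test method (a sketch, reusing the Game and IDieRoller types from earlier):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class GameTests
{
    [TestMethod]
    public void Attack_is_unsuccessful_for_rolls_one_through_four()
    {
        // Loop over every die roll that should produce a miss.
        for (int dieResult = 1; dieResult <= 4; dieResult++)
        {
            var dieRoller = new Mock<IDieRoller>();
            dieRoller.Setup(x => x.DieRoll()).Returns(dieResult);

            var game = new Game(dieRoller.Object);

            Assert.IsFalse(game.Attack(), $"Expected a miss for a die roll of {dieResult}");
        }
    }
}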


Where to get the Sample Code

You can download the sample code from my GitHub account by clicking here.