Learning Unity (Part 3)

The Cube

I’m still digging around Unity and the Descent I source code. You can find the previous posts here: Part 1, Part 2. As I was attempting to assign textures to each side of the cubes in Descent, I decided I needed to get down to the bare minimum code to figure out how it can be done.

A long time ago (around the mid-1990s) I spent some time writing 3D code to render a solid object on the PC. This was before graphics cards were king, and 3D graphics were done with a mixture of assembly, some C tricks, and fixed-point arithmetic. Part of the problem in those days was the lack of floating-point hardware. Math co-processors were available, but almost nobody had them. Games like Descent I were programmed using fixed-point arithmetic because the numbers could be stored as 32-bit integers, added, subtracted, etc., and converted to a float only when needed. Sine and cosine were computed using a lookup table. A lot of accuracy was sacrificed for speed. If you obtain a copy of Descent I and play it now, you will notice that the polygons on the walls sometimes pop out or in a little as you pass by. That's due to inaccuracies in the computations.
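To make the idea concrete, here is a minimal sketch of 16.16 fixed-point arithmetic in C#. This is an illustration of the general technique, not Descent's actual code; the names are mine.

```csharp
static class Fixed
{
    // 16.16 fixed point: high 16 bits hold the integer part,
    // low 16 bits hold the fraction.
    const int FracBits = 16;
    const int One = 1 << FracBits;              // 1.0 in fixed point

    public static int FromFloat(float f) => (int)(f * One);
    public static float ToFloat(int fx)  => fx / (float)One;

    // Addition and subtraction work directly on the raw integers.
    // Multiplication needs a 64-bit intermediate and a shift back down:
    public static int Mul(int a, int b) => (int)(((long)a * b) >> FracBits);
}
```

The appeal on a 1990s CPU was that every operation above is integer math, which was far faster than emulated floating point.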

3D processing can also be sped up by clever storage techniques. In the early days, vertices were shared among all the polygons that used them. The same technique is still used by graphics cards today in the form of triangle fans and strips. The reason to share vertices is that the entire scene can be rotated and translated by transforming the minimum number of points; then the polygons connected to those points can be rendered.
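In Unity terms, this sharing shows up as an index buffer: the triangles array holds indices into a shared vertex array, so each shared point is transformed only once. A small sketch:

```csharp
using UnityEngine;

// Two triangles forming a quad, sharing two of the four vertices.
Vector3[] vertices =
{
    new Vector3(0, 0, 0),   // 0
    new Vector3(1, 0, 0),   // 1
    new Vector3(1, 1, 0),   // 2
    new Vector3(0, 1, 0),   // 3
};

// Each group of three indices is one triangle. Vertices 0 and 2
// are shared, so only four points are transformed instead of six.
int[] triangles = { 0, 2, 1,   0, 3, 2 };
```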

What I discovered in Unity is that a Mesh contains one texture map that will “wrap” the entire 3D object. This makes sense and it’s easy to accomplish in programs like Blender. I discovered this when I downloaded and recreated this textured cube sample:

http://ilkinulas.github.io/development/unity/2016/04/30/cube-mesh-in-unity3d.html

One detail to note: if you decide to make the cube in that article work, make sure you create a game object with a MeshFilter and a MeshRenderer. If you create the code first and then attach it to a game object, the mesh filter and renderer will be created automatically. The link above will take you to part 1 of his article; part 2 covers the texture rendering. You can right-click on his image and use it as the texture map. He explains how it wraps, and it works as advertised. It's one of the better-written articles on creating a texture-mapped cube in Unity.
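For reference, the script-driven setup looks roughly like this. The object name and the BuildCubeMesh() helper are placeholders for illustration:

```csharp
using UnityEngine;

// Create the game object and its mesh components from code.
var go = new GameObject("TexturedCube");

// AddComponent returns the component, so the mesh can be assigned directly.
var filter   = go.AddComponent<MeshFilter>();
var renderer = go.AddComponent<MeshRenderer>();

filter.mesh = BuildCubeMesh();   // assumed helper that builds the Mesh
```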

In my Descent level program I had one mesh composed of all the cubes read from the HOG file. That was a huge mesh, and the only way to texture-map it would be to create some massive texture that wraps all the tunnels. Not what I was looking for. I want the ability to render one texture on each side.

It appears that the Unity mesh structure requires independent vertices for each texture (you can't share a vertex). I discovered this when I tried to assign UV's to the vertices of each face. In other words, I had vertices in my test cube that were shared between three faces. That meant I had 8 vertices for my cube but needed 6×4 = 24 UV's, and Unity requires the UV array to be the same length as the vertex array. So I needed to break my cube mesh into side meshes: one mesh per side. Then I can assign a texture to each side. That means there are vertices that overlap, which is OK. Descent I is an older game and its levels don't contain enough vertices to overwhelm any of today's graphics cards. I'm not looking for efficiency anyway, just trying to see if I can make it work.
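The one-mesh-per-side idea can be sketched as a helper that builds a single face with its own four vertices and four UV's. This is my illustration of the approach, not the exact code from the repository:

```csharp
using UnityEngine;

// One cube face as its own mesh: four vertices, four UV's, two triangles.
// This duplicates vertices shared with neighboring faces -- the trade-off
// that lets each face carry its own texture.
Mesh BuildFace(Vector3 a, Vector3 b, Vector3 c, Vector3 d)
{
    var mesh = new Mesh();
    mesh.vertices = new[] { a, b, c, d };
    mesh.uv = new[]
    {
        new Vector2(0, 0), new Vector2(1, 0),
        new Vector2(1, 1), new Vector2(0, 1),
    };
    mesh.triangles = new[] { 0, 1, 2,   0, 2, 3 };
    mesh.RecalculateNormals();
    return mesh;
}
```

Each face mesh then gets its own MeshRenderer, and therefore its own material and texture.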

You can go to my GitHub account (here) and download the source for my multi-textured cube. You can also look through my check-ins to see how I started with a cube that had 8 shared vertices and refactored it until it worked with an independent texture on each side, creating the meshes, mesh filters, and mesh renderers along the way. This was the information I needed to programmatically generate a Descent level in my Unity viewer program.

Here’s a sample of the cube that I generated:

Multi-textured Cube

You might notice that the word “Exit” is mirrored. That is just an adjustment in the UV coordinates. A simple fix from this:

    Vector2[] uvs0 =
    {
        new Vector2(0, 0),
        new Vector2(1, 0),
        new Vector2(1, 1),
        new Vector2(0, 1),
    };

To this:

    Vector2[] uvs0 =
    {
        new Vector2(0, 0),
        new Vector2(0, 1),
        new Vector2(1, 1),
        new Vector2(1, 0),
    };

Resulting in this:

The UV's will eventually be replaced with the UV numbers provided in the HOG file. Hopefully, Descent's UV coordinates work the same way they do in Unity. Otherwise, I'll have to write a translator to convert Descent UV's into Unity UV's. I'll discover this when I compare the Descent level editor with my Unity level viewer; it will only be noticeable on textures like doors and signs.

To form the cube, I drew a diagram of the unwrapped cube:

Unwrapped Cube

I ordered the faces in the same order used by Descent. You'll notice that the vertex numbers are grouped by face. I also numbered the vertices in a clockwise orientation, which causes the computed normals to point outward. Normals? In case you're unfamiliar with the term, a normal is a vector that points 90 degrees out from the face of a polygon. It's used for back-face culling and for computing directed light levels. Unity computes the normal vectors automatically once the triangles have been defined. The ordering of the vertices in a triangle makes a difference: if you wind your triangles in the opposite direction, you'll see through the outer surface of your cube and into the cube itself. Like this:

Cube with polygons pointing inward

Eventually, this is what we really need, because our ship will be inside the tunnels and the walls need to face inward. If you want walls on both sides, you will need to put another texture-mapped polygon facing the other way at the same coordinates. If you look at the unwrapped cube diagram, you'll also notice the dashed lines. They show the two triangles that make up each side; I used them to determine which points would form each triangle. Computing a normal is easy (assuming you wanted to do it manually): just take the cross-product of the two edge vectors forming two sides of the polygon. For a flat polygon, it doesn't really matter which corner you choose.
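If you did want to compute a normal manually, the calculation is a cross product of two edge vectors sharing a corner. A minimal sketch using Unity's math types:

```csharp
using UnityEngine;

// Normal of a flat polygon from three of its corners: cross the two
// edge vectors that share a corner, then normalize to unit length.
Vector3 FaceNormal(Vector3 p0, Vector3 p1, Vector3 p2)
{
    Vector3 edge1 = p1 - p0;
    Vector3 edge2 = p2 - p0;
    return Vector3.Cross(edge1, edge2).normalized;
}
```

Swapping edge1 and edge2 flips the result, which is the same effect as reversing the winding order of the triangle.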

Polygon with a Normal Vector

In the early days of computing normals, the normal vector would be pre-computed and stored with the polygon. The vector would then be rotated and translated along with the polygon's points, eliminating the cross-product calculation for each polygon in every frame rendered.

Triangles

Another fact about computer graphics, which you might be scratching your head over, is that graphics cards handle only triangles. Why only triangles? Every polygon can be divided into triangles, and the triangle is the smallest polygon possible. The real reason, though, is that a triangle is always a valid 2D polygon in 3D space. If you form a polygon from four or more points, invalid polygons are possible, like this one:

The top of the polygon is a reddish-brown color and the bottom is gray. The side view reveals that the polygon is twisted in 3D space. This cannot happen to a triangle, because three points always define a plane in 3D space.

Textures and Materials

Unity is much more capable than the graphics engine of Descent. That's because computer graphics have come a long way and it's all done in hardware now. In Descent, there are just textures transferred onto a polygon surface. Modern graphics cards can perform other operations on a polygon surface, such as bump mapping, transparency, etc. In Unity, the material object comes first; then a texture is assigned to the material. You can change the shader assigned to the material, but the only shader loaded initially is the "Diffuse" shader. That is the shader I used inside my AddWall() method. You can load and create different shaders to obtain a shiny surface or a semi-transparent one. This can be used to create a dirty glass window or a chrome-plated object in your game.
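The material-first flow described above looks roughly like this in code. This is a sketch, not the body of AddWall(); "wallTexture" is a placeholder asset name, and I'm assuming the texture lives in a Resources folder:

```csharp
using UnityEngine;

// Material first, then the texture assigned to it.
// Shader.Find("Diffuse") locates the legacy Diffuse shader by name.
var material = new Material(Shader.Find("Diffuse"));
material.mainTexture = Resources.Load<Texture2D>("wallTexture");

renderer.material = material;   // assumes 'renderer' is the face's MeshRenderer
```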

Where to get the Code

To download the sample code for this blog post, go to my GitHub account here. I encourage you to download the sample code, change variables, experiment, and discover how Unity uses textures. The sample code only contains the assets, so you'll need to create a new Unity project and copy my GitHub code into the assets directory. Then you'll need to create a GameObject and assign Main.cs as its script, like this:
