Tuesday, August 1, 2017

Inti Creates: A Look at the Evolution of the Mega Man Zero Engine

Introduction

In 1996, Japanese video game company Inti Creates was formed by ex-Capcom employees. Funded by Sony Music, the studio started out making games exclusively for Sony's PlayStation. In 1998, they released Speed Power Gunbike, a third-person action game, and in 1999, Love & Destroy. Neither game turned a profit, and soon after, Sony cut ties with the company. In 2000, company vice president Yoshihisa Tsuda spoke to Keiji Inafune, the producer and "father" of the Mega Man series, voicing an interest in making a Mega Man game of their own. Soon after, development of Mega Man Zero began, targeting Nintendo's new portable, the Game Boy Advance.

The game was a success. Review sites such as IGN gave it an 8.8 out of 10. In its first week of release, the game sold over 66,000 units in Japan; by the end of 2002, it had sold over 250,000 units in Japan alone. Those sales warranted a sequel, and in less than a year, Mega Man Zero 2 was released. This pattern would continue, with a new Mega Man Zero game releasing every year. In 2006, Mega Man ZX was released, the start of a new series that takes place 200 years after the events of Mega Man Zero 4. This game was released for the then-new Nintendo DS.

If it is not already clear, I adore the Mega Man Zero and Mega Man ZX games. This blog post will cover the technical aspects of the engine which makes these games possible, and what changes were made with each title.

Mega Man Zero 1


Hardware


Mega Man Zero began development not long after the release of Mega Man X5. X5 was developed on the PlayStation, as were many Mega Man titles of that period. With CD-quality audio, a higher-resolution 320x240 display, and a considerable amount of video memory for storing graphics, the PlayStation was considerably more powerful than the GBA. Because of this, certain compromises had to be made while developing Zero 1.

The screen resolution of the GBA was only 240x160 pixels. To avoid screen crunch, an issue where on-screen objects take up too much screen space, the resolution of game objects had to be reduced. The GBA relied on cartridges, which had considerably less storage than the CD-ROMs of the PlayStation: where Mega Man X5 could take advantage of over 300 megabytes of data, Zero 1 was limited to only 8 megabytes. As such, full-motion video, Red Book audio, and voice acting all had to be excluded. For music, MMZ uses sequenced "MIDI" tracks along with short, low-quality ADPCM audio clips for instrument samples and sound effects. The PlayStation had 1 megabyte of video memory, where the GBA only had 96 kilobytes; the GBA also had only a little over 256 kilobytes of main memory, where the PlayStation had 2 megabytes. With less than a tenth of the memory of the PlayStation, Mega Man Zero 1 opted for less complex graphics. Levels consist of small, repeating backgrounds, which reduces the amount of memory needed but lowers the overall detail of the scenery.

Graphics


Most sprite data is stored as 4-bit-per-pixel images. All of this data is packed together at the end of the ROM, like a sprite sheet. Depending on the intended use of the graphics data, it may be organized in tiles of varying sizes. For example, stage tiles are typically stored as 8x8 tiles. A table stored in the stage file takes four of these tiles and creates a 16x16 "meta-tile", which is used to represent a portion of the stage.
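To make the meta-tile idea concrete, here is a minimal sketch in C of how such a table might be interpreted when writing one meta-tile into a GBA text-mode background map. The structure names and field layout are my own invention for illustration; they are not the game's actual file format.

#include <stdint.h>

/* Hypothetical layout: each 16x16 meta-tile is described by four entries,
 * one per 8x8 sub-tile, giving the tile index and palette to use. */
typedef struct {
    uint16_t tile_index;   /* index into the 8x8 tile graphics in VRAM */
    uint8_t  palette;      /* which 16-color palette this sub-tile uses */
    uint8_t  flags;        /* e.g. horizontal/vertical flip bits */
} SubTile;

typedef struct {
    SubTile quadrant[4];   /* top-left, top-right, bottom-left, bottom-right */
} MetaTile;

/* Write one meta-tile into a GBA-style 32x32-entry tile map at (x, y),
 * where x and y are measured in 16x16 meta-tile units. */
void place_meta_tile(volatile uint16_t *tile_map, const MetaTile *mt, int x, int y)
{
    for (int i = 0; i < 4; i++) {
        int tx = x * 2 + (i & 1);    /* column within the 8x8 tile map */
        int ty = y * 2 + (i >> 1);   /* row within the 8x8 tile map */
        const SubTile *st = &mt->quadrant[i];
        /* GBA text-BG map entry: tile index (bits 0-9), flip (10-11), palette (12-15) */
        tile_map[ty * 32 + tx] = (st->tile_index & 0x3FF)
                               | ((st->flags & 0x3) << 10)
                               | (st->palette << 12);
    }
}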

Some data is compressed using the LZ77 algorithm. The GBA's BIOS provides relatively fast routines for decompressing LZ77 data. However, a majority of the graphics data is left uncompressed. This is an odd choice: with the very limited storage on cartridges, taking advantage of compression seems only logical. One could argue that the extra CPU resources required to decompress the data would be too much for a real-time environment. While this may be true, there is plenty of data which does not need to be decompressed every frame. Enemy and stage graphics are loaded all at once, and decompressing them at load time should not cause performance issues. Furthermore, Mega Man Zero 1 uses almost all of its 8-megabyte cartridge. With only a handful of kilobytes left unused, it baffles me that they would not make use of compression when they certainly needed it.
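As a rough illustration of how a one-time, load-time decompression might look, here is a sketch assuming libgba-style wrappers for the BIOS LZ77 routines (SWI 0x11/0x12). The header name, wrapper names, and asset symbols are assumptions made for the sake of the example.

#include <gba_systemcalls.h>   /* libgba-style declarations for the BIOS calls (assumption) */

/* Hypothetical LZ77-compressed assets linked into the ROM. */
extern const unsigned char stage_tiles_lz77[];
extern const unsigned char enemy_tiles_lz77[];

#define CHAR_BASE_BLOCK(n) ((void *)(0x06000000 + (n) * 0x4000))

/* Decompress stage and enemy graphics once, when the stage is loaded.
 * Because this happens outside the per-frame loop, the BIOS routine's
 * CPU cost is paid only once per stage, not every frame. */
void load_stage_graphics(void)
{
    LZ77UnCompVram(stage_tiles_lz77, CHAR_BASE_BLOCK(0));
    LZ77UnCompVram(enemy_tiles_lz77, CHAR_BASE_BLOCK(1));
}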

Audio


As was the case with many games on the GBA, Mega Man Zero makes use of Nintendo's M4A (Music For Advanced Game Boy) audio library, which was included with the official GBA software development kit. Not only is music handled with M4A, but sound effects as well. The GBA has two channels specifically for PCM audio, Direct Sound 1 and Direct Sound 2. M4A mixes all currently playing sounds together before sending them to these channels. Oddly enough, MMZ only makes use of Direct Sound 1. This could be due to the amount of CPU time M4A requires: unlike other audio solutions of the time, M4A runs purely in software, with no special hardware to assist in synthesizing audio. This has the unfortunate side effect of limiting the number of audio samples that can be mixed in a given frame. If too many samples are playing, M4A cannot mix them all in time, and the audio output by the GBA becomes distorted. Due to these limitations, in-game music uses only a few channels for instruments, and ambient sounds, such as wind in an outdoor area, are not used.
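To see why a purely software mixer is so expensive, here is a rough, illustrative sketch in C of what a per-frame mixing loop looks like. This is not M4A's actual code, and the buffer size and channel count are arbitrary; the point is that every active channel adds another multiply-accumulate per output sample, so the practical channel count is bounded by whatever CPU time is left after game logic.

#include <stdint.h>

#define MIX_BUFFER_SAMPLES 280   /* roughly one frame of output samples (illustrative) */
#define MAX_CHANNELS 8

typedef struct {
    const int8_t *sample;   /* 8-bit PCM instrument data */
    uint32_t position;      /* 20.12 fixed-point position within the sample */
    uint32_t step;          /* 20.12 fixed-point pitch step per output sample */
    uint32_t length;        /* sample length, in source samples */
    int32_t  volume;        /* 0..64 */
    int      active;
} Channel;

/* Mix every active channel into one signed accumulator, then clamp to 8 bits.
 * The cost grows linearly with the number of active channels, which is why a
 * software mixer on a 16.78 MHz ARM7 can only afford a handful of them. */
void mix_frame(Channel *ch, int8_t *out)
{
    int32_t accum[MIX_BUFFER_SAMPLES] = { 0 };

    for (int c = 0; c < MAX_CHANNELS; c++) {
        if (!ch[c].active)
            continue;
        for (int i = 0; i < MIX_BUFFER_SAMPLES; i++) {
            int32_t s = ch[c].sample[ch[c].position >> 12];
            accum[i] += (s * ch[c].volume) >> 6;   /* one multiply-accumulate per channel */
            ch[c].position += ch[c].step;
            if ((ch[c].position >> 12) >= ch[c].length) {
                ch[c].active = 0;                  /* one-shot sample finished */
                break;
            }
        }
    }

    for (int i = 0; i < MIX_BUFFER_SAMPLES; i++) {
        int32_t s = accum[i];
        out[i] = (int8_t)(s > 127 ? 127 : (s < -128 ? -128 : s));
    }
}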

Text

Text data is embedded in the ROM, uncompressed, in one giant table. All text strings are packed side by side, making it very easy to dump the text for analysis. However, unlike the PlayStation Mega Man games, text is not encoded in ASCII. Instead, text uses a custom encoding tailored to how the game's font is loaded into the GBA's video memory. This encoding is a stripped-down version of ASCII, containing only uppercase letters, lowercase letters, and a few punctuation characters.
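A sketch of what decoding such an encoding might look like is below. The specific code-to-character mapping and the terminator byte here are invented for illustration; the game's real table is arranged to match where the font tiles sit in VRAM.

#include <stdint.h>
#include <stddef.h>

/* Invented example mapping: codes 0-25 = 'A'-'Z', 26-51 = 'a'-'z', then punctuation. */
static char decode_char(uint8_t code)
{
    if (code < 26) return 'A' + code;
    if (code < 52) return 'a' + (code - 26);
    switch (code) {
        case 52: return ' ';
        case 53: return '.';
        case 54: return ',';
        case 55: return '!';
        case 56: return '?';
        default: return '?';   /* unknown byte */
    }
}

/* Convert a run of game-encoded text (terminated here by 0xFF, as an assumption)
 * into an ASCII string for dumping and analysis. */
size_t decode_text(const uint8_t *src, char *dst, size_t max)
{
    size_t n = 0;
    while (n + 1 < max && src[n] != 0xFF) {
        dst[n] = decode_char(src[n]);
        n++;
    }
    dst[n] = '\0';
    return n;
}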

Stages/Levels


Much like other Mega Man titles, stages are presented from a side-on perspective. The Game Boy Advance provides four background layers. MMZ reserves one for the HUD and another for the foreground tiles that make up the layout of the stage, leaving two for scenery. Most stages make use of parallax scrolling, where different background layers scroll at different speeds to give the illusion of depth; with only two layers left over, there can only be two parallax backgrounds.
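Driving that parallax on the GBA boils down to writing each layer's scroll registers with a different fraction of the camera position once per frame. The sketch below uses the GBA's documented BG offset registers, but the layer assignments and scroll ratios are illustrative guesses, not values taken from the game.

#include <stdint.h>

/* GBA background scroll registers (write-only). */
#define REG_BG1HOFS (*(volatile uint16_t *)0x04000014)
#define REG_BG1VOFS (*(volatile uint16_t *)0x04000016)
#define REG_BG2HOFS (*(volatile uint16_t *)0x04000018)
#define REG_BG2VOFS (*(volatile uint16_t *)0x0400001A)
#define REG_BG3HOFS (*(volatile uint16_t *)0x0400001C)
#define REG_BG3VOFS (*(volatile uint16_t *)0x0400001E)

/* Illustrative layer assignment: BG1 = foreground stage tiles (scrolls 1:1 with
 * the camera), BG2 and BG3 = the two parallax backgrounds, scrolling at 1/2 and
 * 1/4 of the camera speed to fake depth. */
void update_parallax(int camera_x, int camera_y)
{
    REG_BG1HOFS = (uint16_t)camera_x;
    REG_BG1VOFS = (uint16_t)camera_y;

    REG_BG2HOFS = (uint16_t)(camera_x / 2);
    REG_BG2VOFS = (uint16_t)(camera_y / 2);

    REG_BG3HOFS = (uint16_t)(camera_x / 4);
    REG_BG3VOFS = (uint16_t)(camera_y / 4);
}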

Mega Man Zero 2, 3, and 4

After the success of Zero 1, Inti Creates made three more Zero games. Each was built in no more than a year, and all of them run on the same engine, with only minor changes made from title to title. For this reason, I will mainly cover gameplay changes.

Mega Man Zero 2 was released in 2003. Picking up a year after Zero 1 left off, it features new levels, abilities, and bosses. Zero 2 is built on the Zero 1 engine, so the underlying systems are practically identical to those of the prior game.

Zero 2 adds a new multiplayer mode, in which you compete against another player to get the highest score within a time limit. This mode makes use of the Link Cable, an accessory that allows two GBA systems to be connected to each other.

Zero 3 removes the multiplayer mode but adds a set of mini-games, which are unlocked when the player completes the main game. Not much more can be said about this title.

Zero 4 is by far the most distinctive of the sequels. As it was the last Zero game, Inti Creates opted for a 16-megabyte cartridge, doubling the available storage, and the game takes full advantage of it. Songs incorporate more instrument samples, stage scenery is more detailed, and overall the game feels more polished. It seems that Inti Creates wished to end the series with a bang, and in that they did not fail.



Mega Man ZX


By the time Mega Man Zero 4 came out, the GBA was reaching the end of its life cycle, and Nintendo had moved on to its new handheld, the Nintendo DS. For these reasons, Inti Creates migrated to the new hardware, and they brought the Mega Man Zero engine with them. Mega Man ZX takes the MMZ engine and updates it to make full use of the DS hardware. As a result, the game has a familiar yet fresh feel to it.

One of the first changes one will notice when playing ZX is the screen size. The DS supports a native resolution of 256x192, compared to the GBA's 240x160. This allows for more objects to be visible on the screen at once.

The audio system in the game also received an upgrade. While sounds are still synthesized by the CPU, the DS comes with two processors, both of which are more powerful than the single one on the GBA. Furthermore, the DS has more bandwidth than the GBA, allowing for more data to be transferred from one system component to another. These factors allow for considerably superior audio quality compared to the GBA. Not only are higher quality samples used, but more audio channels are available, allowing for more complex music and sound effects.

Mega Man ZX Advent


Much like the MMZ sequels, not much changed with ZX Advent, with one major exception. Where the first game rendered game objects as sprites, Advent throws this system away in favor of a 3D renderer. While the game is for the most part still 2D, sprites are rendered as 3D "quads" that always face the game camera, giving the illusion that they are flat. This was done for multiple reasons. For one, the DS's "2D engine" can only display up to 128 sprites, while the "3D engine" can render up to 1,536 of these quads. Furthermore, ZX Advent makes use of 3D effects in numerous parts of the game; for example, when a boss is defeated, a three-dimensional, spherical explosion is rendered on screen.
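The trick behind a camera-facing quad is to build its corners from the camera's right and up vectors, so the quad stays parallel to the screen no matter where the camera points. Here is a small, library-agnostic sketch of that math; the type and function names are mine, not libnds or the game's API.

typedef struct { float x, y, z; } Vec3;

static Vec3 vec3_scale(Vec3 v, float s) { return (Vec3){ v.x * s, v.y * s, v.z * s }; }
static Vec3 vec3_add(Vec3 a, Vec3 b)    { return (Vec3){ a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 vec3_sub(Vec3 a, Vec3 b)    { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }

/* Build the four corners of a billboard quad centered on `pos`.
 * `cam_right` and `cam_up` are the camera's world-space right/up axes, so the
 * quad is always parallel to the screen and looks like a flat sprite. */
void billboard_quad(Vec3 pos, Vec3 cam_right, Vec3 cam_up,
                    float half_w, float half_h, Vec3 out[4])
{
    Vec3 r = vec3_scale(cam_right, half_w);
    Vec3 u = vec3_scale(cam_up, half_h);

    out[0] = vec3_add(vec3_sub(pos, r), u);   /* top-left */
    out[1] = vec3_add(vec3_add(pos, r), u);   /* top-right */
    out[2] = vec3_sub(vec3_add(pos, r), u);   /* bottom-right */
    out[3] = vec3_sub(vec3_sub(pos, r), u);   /* bottom-left */
}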

Tuesday, July 25, 2017

Game Maker Studio

Introduction


Making video games is hard. With programming, art, sound design, level design, and more, development of games can take months or years, depending on the scope of the game. With these requirements comes a need for tools to make development easier. One such tool is Game Maker. Developed in 1995, Game Maker provides an entire suite of tools for development of 2D games. Game Maker has been used in hundreds of games, both commercial and hobbyist. Since its release, it has gone through many iterations. This post will focus on the most recent version, Game Maker Studio 2. However, the information presented can apply to Game Maker: Studio and Game Maker 8.

Code Editor

Game Maker Studio uses a programming language called Game Maker Language (GML). GML has a syntax very similar to C, but with some differences that simplify its use. For example, GML cannot store multiple functions in one source file; each source file is a function in itself, with the name of the function equal to the name of the source file. Second, low-level features of C, such as pointers and memory allocation, are mostly abstracted away in GML. To access data such as assets and game objects, one must use a set of pre-built functions. One downside of GML is the lack of data structure customization. Unlike C, one cannot define a custom data structure with a struct keyword; the best that can be done is to create game objects and fill them with variables, with each game object acting as a data structure. This can hurt performance if there are too many objects, however. While GML has its limitations, it is also easy to pick up, and I would recommend it for those learning to program.

Room Editor

The room editor is used for creating levels. Each room consists of multiple "layers", and each layer contains a certain type of data: tiles, backgrounds, or game objects. Tiles are repeatable sets of graphics which can be used to create the layout of a level; the ground and ceiling of a stage, for example, would typically be built from tiles. Backgrounds, as the name suggests, are images displayed in the background, typically behind tiles, and can optionally be repeated. Game objects are individual entities that can be placed in the room; the player and all enemies are game objects.

Sprite Editor

The sprite editor is used for creating images and animations. Much like tools such as Microsoft Paint, it provides a set of brushes. A pencil brush is used for plotting 1 pixel at a time. A paint bucket brush is used to fill an entire space with a set color. An eraser tool is used for removing pixels at a given point. A palette is provided for quick color selection. While a default palette is provided, it can be customized with any color within the 24-bit RGB space. I would argue that this tool is more useful than MS-Paint. Unlike Paint, Game Maker's sprite editor allows for creating animations, and can apply effects such as hue shifts.
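The paint bucket is a classic flood fill. As a rough sketch of the technique (an illustration, not Game Maker's implementation), here is a stack-based, 4-connected fill over a simple pixel buffer in C.

#include <stdint.h>
#include <stdlib.h>

/* Fill the 4-connected region containing (x, y) with new_color.
 * `pixels` is a width*height buffer of 32-bit colors, row-major. */
void bucket_fill(uint32_t *pixels, int width, int height,
                 int x, int y, uint32_t new_color)
{
    uint32_t old_color = pixels[y * width + x];
    if (old_color == new_color)
        return;

    /* Explicit stack of coordinates; each region pixel is pushed at most once. */
    int *stack = malloc(sizeof(int) * 2 * (size_t)width * height);
    int top = 0;

    pixels[y * width + x] = new_color;
    stack[top++] = x;
    stack[top++] = y;

    while (top > 0) {
        int py = stack[--top];
        int px = stack[--top];
        const int dx[4] = { 1, -1, 0, 0 };
        const int dy[4] = { 0, 0, 1, -1 };
        for (int i = 0; i < 4; i++) {
            int nx = px + dx[i], ny = py + dy[i];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                continue;
            if (pixels[ny * width + nx] != old_color)
                continue;
            pixels[ny * width + nx] = new_color;   /* mark before pushing */
            stack[top++] = nx;
            stack[top++] = ny;
        }
    }
    free(stack);
}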

Conclusion

Whether or not Game Maker Studio 2 is right for you depends on your needs. GMS2 is great for beginners: its simple tools make it easy to pick up and prototype gameplay ideas. Where it falls short, however, is in its feature set. It is designed with 2D games in mind, and its 3D support is practically non-existent. Furthermore, more advanced programming features, such as custom data structures, are not supported. For myself, I use GMS2 with a team that is most comfortable with the software. If I were to work on a personal project, however, I would rather make use of Unity.

Monday, July 24, 2017

Unity 3D

Introduction


Making video games is hard. With programming, art, sound design, level design, and more, development of games can take months or years, depending on the scope of the game. With these requirements comes a need for tools to make development easier. One such tool is Unity 3D. Developed in 2005, Unity aims to make development of both 3D and 2D titles easier. It is safe to say that it has had a major influence on the game industry, especially for people getting into game development.

The Good

Unity supports a wide variety of platforms. Games are written once and can be exported to Windows, Mac, Linux, Android, iOS, Xbox One, PS4, Nintendo Switch, and more. This makes porting, which usually takes months, a breeze for developers. All low-level aspects of each platform are handled by Unity internally, including rendering, input, and memory management. I personally use Unity for a variety of projects, including a mobile game I released some time ago.

Making a game with Unity is relatively simple compared to other engines. Every piece of the game, whether it be characters, UI, sound, or otherwise, is made up of "Game Objects". These objects exist in 3D space in the "Scene". Game Objects can have "Components" attached to them, which give them certain functionality. For example, a character in the game will have a Mesh Renderer component attached to its game object. Many other engines do not do this, and instead attach all code to the game object itself, making things less modular. In addition, users can create their own custom components; this is where programming comes in. Users can write components in either C# or JavaScript, and custom components can do just about anything, whether it be AI or the player's controls.

The Bad

Unity's ease of use comes at a price. Over the past several years, many half-baked shovelware titles have been created using Unity. These titles hack together assets and code, and generally have very little effort put into them. Because of the number of these titles, Unity has to an extent become associated with shovelware, and many people who are uneducated on the topic believe that Unity itself is a poor engine. This simply is not the case; there are plenty of good titles made in Unity, such as Ori and the Blind Forest, Aragami, and Hearthstone. The ironic thing about the Unity engine is that the free version requires a "Made with Unity" splash screen to be displayed on startup, while the paid version used by professionals does not. This means that good games made in Unity often do not make it clear that they are using Unity, while shovelware almost always does.

Tuesday, July 18, 2017

Why Assembly Still Matters in 2017

In the 1970s and 1980s, computers had very limited resources. Processors were slow compared to today's offerings, often running at only a few megahertz. RAM was limited to a handful of kilobytes, compared to the multiple gigabytes found in even entry-level computers today. Because of these limitations, high-level programming languages such as C often could not be made efficient enough for the hardware they targeted. Instead, many programmers resorted to writing their code in assembly. This allowed code to be fine-tuned for a specific CPU architecture, but was tedious to work with. As computers got faster, the need for assembly dwindled: computers could handle higher-level languages, and the compilers for those languages were becoming smarter, allowing for better optimization of code. While assembly is not used today as frequently as it was 30 years ago, I believe it still has its uses.

While high-level programming languages such as C++ are common for building programs today, that code still needs to be translated into machine instructions by the compiler, as those instructions are all the processor can actually execute, and assembly is simply a human-readable form of them. A C++ compiler will try its best to produce efficient output from source code, but a compiler is only so good; sometimes the code it generates is not as efficient as it could be. Going back to a previous post of mine, SIMD and Multi-Threading, I took a look at a video series made by a YouTube user named Bisqwit. For those who have not read the post: in the series, Bisqwit creates a program which produces fractals and goes over many different ways of optimizing it for better performance. In one of the videos, Bisqwit finds that the compiler does not make optimal use of SIMD operations. He comes to this conclusion by disassembling his program and inspecting the assembly code the compiler produced. Only after finding this issue does Bisqwit make changes to the code to produce the proper SIMD instructions.
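As a concrete illustration of the kind of change involved (not Bisqwit's actual code), here is a scalar loop next to a hand-written SSE version using compiler intrinsics. Disassembling the scalar version is how you would confirm whether the compiler vectorized it on its own.

#include <xmmintrin.h>   /* SSE intrinsics */

/* Scalar version: the compiler may or may not auto-vectorize this. */
void scale_add_scalar(float *dst, const float *a, const float *b,
                      float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = a[i] * k + b[i];
}

/* Hand-written SSE version: processes four floats per iteration.
 * Assumes n is a multiple of 4 and the pointers are 16-byte aligned. */
void scale_add_sse(float *dst, const float *a, const float *b,
                   float k, int n)
{
    __m128 vk = _mm_set1_ps(k);
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(dst + i, _mm_add_ps(_mm_mul_ps(va, vk), vb));
    }
}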

In a case like this, assembly is useful for analyzing performance issues in a CPU-intensive application. While handwritten assembly code was not produced, certain hints were given to the compiler on how to produce better assembly code. In a future post, I will cover the other uses of assembly programming.

Monday, July 17, 2017

Programming the Nintendo 64 Reality Co-Processor

Introduction

The Nintendo 64 was released in 1996 as Nintendo's latest console. Taking advantage of then-cutting-edge hardware, it consists of a MIPS R4300i processor and a multi-purpose co-processor. The Reality Co-Processor, named after the console's code name, Project Reality, handles not only the rendering of polygons but can also be programmed to process tasks such as audio synthesis.

Getting Started

This post assumes the reader is familiar with MIPS assembly programming. Due to the very strict memory limitations of the RCP, coding in pure assembly will be covered rather than C. You will need an assembler targeting the R4300i architecture; I would suggest the GNU Assembler for this purpose.

Entry Point

Unlike the main processor, the RCP does not fetch instructions from main memory. Instead, it contains dedicated instruction and data memory, referred to as IMEM and DMEM respectively, at 4 kilobytes each. This is not very much memory to work with, even by the standards of the time, so applications which utilize the RCP must execute code as small micro-tasks. This is exactly what we will do.

The RCP contains a MIPS processor which is not very different from the main CPU. MIPS code is uploaded to the RCP, a flag is set to begin execution, and when execution is complete, another flag is set to denote that it is done. It should be noted that the RCP cannot access main memory directly; all operations executed by the RCP are restricted to IMEM and DMEM. However, data can be transferred in and out of these memory regions through direct memory access (DMA).

To begin, here is the code that will be executed by the RCP.


loop:
    j loop      # jump back to loop forever
    nop         # branch delay slot

This code does nothing but loop infinitely (note the nop sits in the jump's branch delay slot). To get this code into IMEM, we must perform a DMA transfer. This is done by writing parameters to specific memory-mapped registers. The following addresses are used for this purpose.
  • SP_MEM_ADDR_REG (0xA4040000) - the destination address within IMEM/DMEM (bit 12 selects IMEM)
  • SP_DRAM_ADDR_REG (0xA4040004) - the source address in main memory (RDRAM)
  • SP_RD_LEN_REG (0xA4040008) - the size of the data to copy
The following code copies our code to the RCP through DMA.


la t0,0xA4040000   # SP_MEM_ADDR_REG
li t1,0x1000       # destination: start of IMEM (bit 12 set, offset 0)
sw t1,0(t0)
la t0,0xA4040004   # SP_DRAM_ADDR_REG
la t1,loop         # source: address of our code in main memory
sw t1,0(t0)
la t0,0xA4040008   # SP_RD_LEN_REG
li t1,8            # size of the code to copy, in bytes
sw t1,0(t0)

Once the code is uploaded, we need only set a flag in the RCP's status register to begin execution. The following code does just that.


la t0,0xA4040010   # SP_STATUS_REG
li t1,0x12D        # 0b100101101: clears the halt flag (among other status bits)
sw t1,0(t0)

And with that, our code should execute on the RCP. Once the RCP is done, a flag is set. At that point, the results of the RCP's processing can be transferred back to main memory and new code can be fed into the RCP.

While the Nintendo 64 may not have a practical use today, given the much more powerful hardware that is available, I still find it interesting to look at the technical limitations developers had to work with in the 90s.

Tuesday, July 11, 2017

The Old New Thing

The Microsoft Windows operating system is simply huge. Consisting of millions of lines of code, it took decades and thousands of man-hours to engineer it into what it is today. One developer who worked on Windows, Raymond Chen, wrote a blog about his experience working on the OS. Known as The Old New Thing, this blog answers many questions about quirks in the Windows operating system and why they exist.

Earlier this week, I read a blog post titled "How do I create a topmost window that is never covered by other topmost windows?". In the post, Chen discusses his experience with a customer who was developing an application for kiosk devices. This application was designed to run on Windows machines shown in stores such as Best Buy, alongside many similar applications created by different vendors. The application would occasionally display an advertisement, which needed to be shown on top of all other applications. Windows has a feature where a window can be flagged as "always on top", giving it priority when drawn. However, all of the applications on these kiosk machines took advantage of this feature, causing multiple windows to clash for priority.

The customer Chen was speaking to asked if there was a way to become the "super top-most window". Chen states that this is simply not possible, pointing out the issues that would arise if multiple applications attempted to use such a feature. He instead suggests that the customer coordinate with the other vendors on who gets top-most privileges.

I found this blog post to be both humorous and informative. It provides a real-world example of programming issues developers face every day, explaining the issue in detail and proposing a solution. I would highly recommend this blog for anyone who is interested in programming, whether such interest relates to the Windows OS or not.

Monday, July 10, 2017

JIT Compilation: A Retrospective

Several weeks ago, I made a post about Just In Time Compilation. In that post, I made a modification to a Wikipedia article, adding extra information to better describe the applications of JIT technology. To recap, here is what the original text stated:
"JIT compilation can be applied to some programs, or can be used for certain capacities, particularly dynamic capacities such as regular expressions. For example, a text editor may compile a regular expression provided at runtime to machine code to allow faster matching – this cannot be done ahead of time, as the pattern is only provided at runtime. Several modern runtime environments rely on JIT compilation for high-speed code execution, including most implementations of Java, together with Microsoft's .NET Framework. Similarly, many regular expression libraries feature JIT compilation of regular expressions, either to bytecode or to machine code."

And here are the changes I made.
"JIT compilation can be applied to  some programs, or can be used for certain capacities, particularly dynamic capacities such as regular expressions. For example, a text editor may compile a regular expression provided at runtime to machine code to allow faster matching – this cannot be done ahead of time, as the pattern is only provided at runtime. Several modern runtime environments rely on JIT compilation for high-speed code execution, including most implementations of Java, together with Microsoft's .NET Framework. Similarly, many regular expression libraries feature JIT compilation of regular expressions, either to bytecode or to machine code. JIT compilation is also used in some emulators, in order to translate machine code from one CPU architecture to another." 
Looking back at this article, I found that no further changes have been made to it. The "History" tab on the site shows my change as the most recent one. Perhaps this is due to a lack of interest in the topic; or perhaps, since the article in question is already well written, there was simply no need for further changes. Regardless, I stand by the modifications I made.