The Structure of a Game AI System
- Sensing the World
- Memory
- Analysis/Reasoning Core
- Action/Output System
- In Closing
- About This Article
Let's start to understand the structure of game AI systems by taking a virtual microscope and looking inside a single AI entity. It can be a Quake enemy, an Age of Empires army, or the creature from Black & White. Understanding the major building blocks will later help you structure and code your systems efficiently.
Fundamentally, AI systems come in two flavors. The first and most common is the agent, which is a virtual character in the game world. These are usually enemies, but can also be nonplayer characters, sidekicks, or even an animated cow in a field. For these kinds of entities, it makes sense to mimic a biological structure so we can model their behavior realistically. Thus, these AI systems are structured in a way similar to our brain. It is easy to identify four elements or subsystems:
- A sensor or input system
- A working memory
- A reasoning/analysis core
- An action/output system
Some AIs are simpler than that and omit some of these components, but this global framework covers most of the entities that exist. By changing the nature of each component, different approaches can be implemented.
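To make this structure concrete, here is a minimal C++ sketch of an agent built from those four subsystems. Every type and member name in it (Percepts, Action, WorldState, Memory, and so on) is hypothetical, invented only to illustrate the division of responsibilities, not taken from any particular engine.

```cpp
#include <optional>

// All of these types are placeholders for whatever your game defines.
struct WorldState { /* level geometry, entities, game data ... */ };
struct Percepts   { bool playerVisible = false; /* ... */ };
struct Action     { enum { Idle, Chase, Attack } type = Idle; };

class Agent {
public:
    // One AI tick: sense -> remember -> reason -> act.
    void Update(const WorldState& world) {
        Percepts now = Sense(world);           // 1. sensor/input system
        memory_.lastPercepts = now;            // 2. working memory
        Action action = Think(now);            // 3. reasoning/analysis core
        Act(action);                           // 4. action/output system
    }

private:
    struct Memory {                            // whatever the agent remembers
        std::optional<Percepts> lastPercepts;  // e.g., the last thing it sensed
    } memory_;

    Percepts Sense(const WorldState& /*world*/) { return {}; }
    Action   Think(const Percepts& p) {
        Action a;
        a.type = p.playerVisible ? Action::Chase : Action::Idle;
        return a;
    }
    void Act(const Action& /*a*/) { /* move, animate, fire ... */ }
};
```

Swapping the implementation of Think (a finite-state machine, a rule system, and so on) is how those different approaches can be implemented without changing the overall skeleton.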
The second type of AI entity is the abstract controller. Take a strategy game, for example. Who provides the tactical reasoning? Each unit might very well be modeled using the preceding structure, but clearly, a strategy game needs an additional entity that acts as the master controller of the CPU side of the battle. This is not an embodied character but a collection of routines that provide the necessary group dynamics to the overall system. Abstract controllers have a structure quite similar to the one explained earlier, but each subsystem works at a higher level than an individual unit.
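An abstract controller can reuse the same sense-think-act skeleton, except that its "sensors" read aggregate game data and its "actions" are orders issued to whole groups of units rather than the movements of a single body. Again, a rough sketch with invented names (StrategicView, Order, UnitGroup):

```cpp
#include <vector>

struct WorldState;                    // the same game world as before
struct StrategicView { /* balance of power per region, resources ... */ };
struct Order         { /* "attack region 3", "build a barracks", ... */ };
struct UnitGroup     { /* an army of individual agents */ };

class MasterController {
public:
    void Update(const WorldState& world, std::vector<UnitGroup>& armies) {
        StrategicView view = Analyze(world);          // high-level sensing
        plan_ = Decide(view);                         // high-level reasoning
        for (UnitGroup& group : armies)
            Issue(plan_, group);                      // output: orders, not animations
    }

private:
    std::vector<Order> plan_;                         // the controller's memory

    StrategicView      Analyze(const WorldState& /*world*/)  { return {}; }
    std::vector<Order> Decide(const StrategicView& /*view*/) { return {}; }
    void               Issue(const std::vector<Order>& /*plan*/,
                             UnitGroup& /*group*/)            {}
};
```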
Let's briefly discuss each element of the structure.
Sensing the World
All AIs need to be aware of their surroundings so they can use that information in the reasoning/analysis phase. What is sensed, and how, largely depends on the type of game you are creating. To understand this, let's compare the individual-level AI for a game like Quake to the abstract controller from Age of Empires.
In Quake, an individual enemy needs to know:
- Where is the player, and where is he looking?
- What is the geometry of the surroundings?
- Sometimes, which weapon am I carrying, and which one is he carrying?
So the model of the world is relatively straightforward. In such a game, the visual system is a gross simplification of the human one. We assume we are seeing the player if he's within a certain range, and we use simple algorithms to test for collisions with the game world. The sensory phase is essential to gathering information that will drive all subsequent analysis.
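For instance, a hedged sketch of such a vision test might look like the following. The enemy "sees" the player if he is within range, roughly inside the enemy's cone of vision, and not hidden behind level geometry; the small vector helpers and RayBlockedByLevel() stand in for whatever math and collision routines your engine actually provides.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Length(const Vec3& v)             { return std::sqrt(Dot(v, v)); }

// Assumed engine hook: returns true if world geometry blocks the segment.
bool RayBlockedByLevel(const Vec3& from, const Vec3& to);

// enemyForward is assumed to be a normalized facing direction;
// fovCosine is the cosine of half the enemy's field of view.
bool CanSeePlayer(const Vec3& enemyPos, const Vec3& enemyForward,
                  const Vec3& playerPos, float viewDistance, float fovCosine) {
    Vec3 toPlayer = Sub(playerPos, enemyPos);
    float dist = Length(toPlayer);
    if (dist > viewDistance) return false;                  // too far away
    if (dist < 0.001f)       return true;                   // standing on top of us

    Vec3 dir = { toPlayer.x / dist, toPlayer.y / dist, toPlayer.z / dist };
    if (Dot(dir, enemyForward) < fovCosine) return false;   // outside the view cone

    return !RayBlockedByLevel(enemyPos, playerPos);          // wall in the way?
}
```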
Now let's take a look at the sensory data used by the master controller in a strategy game, such as Age of Empires:
- What is the balance of power in each subarea of the map?
- How much of each type of resource do I have?
- What is the breakdown of unit types: infantry, cavalry, and so on?
- What is my status in terms of the technology tree?
- What is the geometry of the game world?
Notice that these are not simple tests. For example, we need to know the geometry of the whole game world to ensure that pathfinding works as expected for all units. In fact, the vast majority of the AI time in such a game is spent resolving pathfinding computations. The rest of the tests are not much easier. Computing the balance of power, so we know where the enemy is and how his forces are distributed, is a complex problem. It is so complex that we only recompute the solution once every N frames to maintain decent performance.
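One common way to keep that test affordable is to accumulate unit strength into a coarse grid covering the map and only rebuild it every N frames, exactly as described above. The sketch below is illustrative only; the Unit layout and the class itself are invented for this example, not taken from any particular game.

```cpp
#include <algorithm>
#include <vector>

struct Unit { float x, y; float strength; bool friendly; };

class BalanceOfPowerMap {
public:
    BalanceOfPowerMap(int cols, int rows, float worldW, float worldH)
        : cols_(cols), rows_(rows),
          cellW_(worldW / cols), cellH_(worldH / rows),
          balance_(cols * rows, 0.0f) {}

    // Call every frame; the expensive pass only runs once every 'interval' frames.
    void Update(const std::vector<Unit>& units, int frame, int interval = 30) {
        if (frame % interval != 0) return;            // reuse the cached result
        std::fill(balance_.begin(), balance_.end(), 0.0f);
        for (const Unit& u : units) {
            int cx = static_cast<int>(u.x / cellW_);
            int cy = static_cast<int>(u.y / cellH_);
            if (cx < 0 || cx >= cols_ || cy < 0 || cy >= rows_) continue;
            // Positive values mean we dominate the cell; negative, the enemy does.
            balance_[cy * cols_ + cx] += u.friendly ? u.strength : -u.strength;
        }
    }

    float At(int cx, int cy) const { return balance_[cy * cols_ + cx]; }

private:
    int   cols_, rows_;
    float cellW_, cellH_;
    std::vector<float> balance_;
};
```

Querying At() every frame is then cheap; only the periodic rebuild has to touch every unit on the map.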
In many scenarios, sensing the game world is the slowest part of the AI. Analyzing maps and extracting valuable information from raw data is a time-consuming process.