Object Tracking
One of the first problems we must deal with in any action AI system is maintaining eye contact with a target, whether it's moving or static. This is used everywhere in action-oriented games, from rotating a static shooting device (like the turrets in the Star Wars trench run) to aiming at an enemy we are chasing or evading, to basic navigation (tracking a waypoint we must reach).
Eye contact can be formulated as follows: given a source orientation (both a position and orientation angles) and a point in space, compute the best rotation to align that orientation with the point. The problem can be solved in a variety of ways, which I'll cover one by one in the following sections.
Eye Contact: 2D Hemiplane Test
If we are in a 2D world, we can solve the eye contact problem easily with very little math. Let's first take a look at the variables involved in our system: mypos and myyaw are our own position and heading, and hispos is the position of the target. I will assume we are in a top-down view, overseeing the X,Z plane.
The first step is to compute whether hispos effectively lies to the left or right of the line formed by mypos and myyaw. Using parametric equations, we know the line can be represented as:

x = mypos.x + t cos(myyaw)
z = mypos.z + t sin(myyaw)

where t is the parameter we must vary to find points that lie on the line. Solving for t in both equations and equating the results, we get the implicit version of the preceding equation, which is as follows:

(x - mypos.x)/cos(myyaw) - (z - mypos.z)/sin(myyaw) = 0

By making the left-hand side into a function,

F(X,Z) = (X - mypos.x)/cos(myyaw) - (Z - mypos.z)/sin(myyaw)

we know the points on the line are exactly those for which F(X,Z) = 0.
If we have a point in space and test it against our newly created function, we will have
F(X,Z) > 0 (it lies to one side of the line)
F(X,Z) = 0 (it lies exactly on the line)
F(X,Z) < 0 (it lies to the opposite side)
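As a quick numerical check (with values I've made up): take mypos = (0, 0), myyaw = 45 degrees, so cos(myyaw) = sin(myyaw) ≈ 0.707, and a target at (5, 3). Then

F(5, 3) = (5 - 0)/0.707 - (3 - 0)/0.707 ≈ 7.07 - 4.24 ≈ 2.83 > 0

so that target falls on one side of our heading, while a target at (3, 5) yields the symmetric negative value and thus falls on the other side.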
This test, which is illustrated in Figure 7.2, requires (per target, in addition to the two trigonometric evaluations):
3 subtractions
2 divisions
1 comparison (to extract the result)
Figure 7.2 Hemispace test.
We can speed the routine up if we need to do batches of tests (against several targets) by calculating the expensive trigonometric functions only once and storing their inverses, so we can save the expensive divides. The formula will be

F(X,Z) = (X - mypos.x)*invcos - (Z - mypos.z)*invsin

where invcos = 1/cos(myyaw) and invsin = 1/sin(myyaw) are computed once per batch.
If we do these optimizations, the overall performance of the test will be (for N computations):
3*N subtractions
2*N multiplies
2 trigonometric evaluations
2 divisions (for the invert)
N comparisons (for the result)
which is fairly efficient. For sufficiently large batches, the cost of the divisions and trigonometric evaluations (which can be tabulated anyway) will be negligible, yielding a cost of three subtractions and two multiplies per target. A sketch of this batched version appears after the single-target code below.
For completeness, here is the C code for the preceding test:
int whichside(point pos, float yaw, point hispos)
// returns -1 for left, 0 for aligned, and 1 for right
{
float c=cos(yaw);
float s=sin(yaw);
// avoid divisions by zero when the heading is axis-aligned
if (c==0) c=0.001;
if (s==0) s=0.001;
float func=(pos.x-hispos.x)/c - (pos.z-hispos.z)/s;
if (func>0) return 1;
if (func<0) return -1;
return 0;
}
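To illustrate the batched optimization discussed above, here is a minimal sketch; the whichside_batch name and the targets/results arrays are my own additions, and the point type is the same one assumed by whichside:

void whichside_batch(point mypos, float myyaw, const point *targets, int num, int *results)
{
// pay for the trigonometry and the two divisions only once per batch
float c=cos(myyaw);
float s=sin(myyaw);
if (c==0) c=0.001;
if (s==0) s=0.001;
float invc=1/c;
float invs=1/s;
for (int i=0; i<num; i++)
   {
   // 3 subtractions and 2 multiplies per target
   float func=(mypos.x-targets[i].x)*invc - (mypos.z-targets[i].z)*invs;
   if (func>0) results[i]=1;
   else if (func<0) results[i]=-1;
   else results[i]=0;
   }
}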
3D Version: Semispaces
To compute a 3D version of the tracking code, we need to work with the pitch and yaw angles to ensure that both our elevation and targeting are okay. Think of a turret in the Star Wars Death Star sequence. The turret can rotate around the vertical axis (yaw) and also aim up and down (changing its pitch). For this first version, we will work with the equations of a unit sphere, as denoted by:
x = cos(pitch) cos(yaw)
y = sin(pitch)
z = cos(pitch) sin(yaw)
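As a quick sanity check on these equations (angle conventions assumed): pitch = 0 and yaw = 0 give (x, y, z) = (1, 0, 0), a level aim along the X axis, while pitch = 90 degrees gives (0, 1, 0), straight up, regardless of yaw.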
Clearly, the best option in this case is to use two planes to perform both the left-right and the above-below tests. One plane will divide the world into two halves vertically. This plane is built as follows:
point pos=playerpos;
point fwd(cos(yaw),0,sin(yaw));
fwd=fwd+playerpos;
point up(0,1,0);
up=up+playerpos;
plane vertplane(pos,fwd,up);
Notice how we are defining the plane by three points it passes through. Thus, the plane is vertical, and its normal points to the left side of the world, as seen from the local player's position.
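The plane class itself is not listed in this section; a minimal version along these lines would do, assuming point has x, y, and z members and supports subtraction (the member names here are my own, and the real class may differ):

struct plane
{
float a,b,c,d;                    // plane equation: a*x + b*y + c*z + d = 0
plane(point p0, point p1, point p2)
   {
   // two vectors lying on the plane
   point u=p1-p0;
   point v=p2-p0;
   // the normal is their cross product
   a=u.y*v.z - u.z*v.y;
   b=u.z*v.x - u.x*v.z;
   c=u.x*v.y - u.y*v.x;
   // d places the plane so that it passes through p0
   d=-(a*p0.x + b*p0.y + c*p0.z);
   }
float eval(point p)
   {
   // positive on the side the normal points to, negative on the other
   return a*p.x + b*p.y + c*p.z + d;
   }
};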
The second plane is built with the pitch and divides the world into those points above and below the current aiming position. Here is the code:
point pos=playerpos;
point fwd(cos(pitch)*cos(yaw), sin(pitch), cos(pitch)*sin(yaw));
fwd=fwd+playerpos;
point left(cos(yaw+PI/2),0,sin(yaw+PI/2));
left=left+playerpos;
plane horzplane(pos,fwd,left);
In this case, the normal points up. Then, all we need to do to keep eye contact with a point in 3D space is compute the quadrant it is located in and react accordingly:
if (vertplane.eval(target)>0) yaw-=0.01;
else yaw+=0.01;
if (horzplane.eval(target)>0) pitch-=0.01;
else pitch+=0.01;
Notice that the signs largely depend on how your pitch and yaw angles are aligned with the coordinate axes. Take a look at Figure 7.3 for a visual explanation of the preceding algorithm.
Figure 7.3 Semispace test, 3D.
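Putting the two tests together, a per-frame tracking routine could look like the following sketch; the track_target name and the fixed 0.01 step are my own choices, the point type is assumed to offer the three-component constructor and addition used above, and the plane type is the minimal one sketched earlier (flip the signs to match your own pitch and yaw conventions):

void track_target(point playerpos, float &yaw, float &pitch, point target)
{
// vertical plane: separates the left and right sides of our heading
point fwd2(cos(yaw),0,sin(yaw));
point up(0,1,0);
plane vertplane(playerpos, playerpos+fwd2, playerpos+up);

// horizontal plane: separates points above and below our aim
point fwd3(cos(pitch)*cos(yaw), sin(pitch), cos(pitch)*sin(yaw));
point left(cos(yaw+PI/2),0,sin(yaw+PI/2));
plane horzplane(playerpos, playerpos+fwd3, playerpos+left);

// steer toward the target by a small fixed step each frame
if (vertplane.eval(target)>0) yaw-=0.01; else yaw+=0.01;
if (horzplane.eval(target)>0) pitch-=0.01; else pitch+=0.01;
}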