One of the challenges of designing a stealth combat system is that the AI needs to be smart, or, to put it a better way, the AI needs to look smart! Most players will forgive AI doing the odd dumb thing or two in games, but if the AI manages to do something cool or funny before they die then it will be remembered with fondness. The AI don't need a GPS strapped to their backs; they just need to look like they know where they are going!
Probably well over half of my time on this MOD has been spent trying to get the AI to look smart and make the right choices. That could be something as simple as avoiding obstacles in rooms, like pillars or columns, or finding a good direction to run towards. My choice of game engine probably did not help, because the Quake engine has very basic AI that literally runs in straight lines at the player. If Quake AI hit an obstacle, they push their way around the sides until they can see the player again and then run like crazy. The major problem with this kind of navigation is that the AI need simple room layouts, otherwise they get stuck behind obstacles and trapped on ledges, running on the spot.
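To make the problem concrete, here is a minimal sketch (in C, on a tiny grid world, not actual engine code) of that straight-line chase behaviour: step directly at the player, and when the direct step is blocked, slide along one axis instead. The grid, names, and step logic are all illustrative assumptions, but they show why this approach runs on the spot once the room gets complicated.

```c
#include <assert.h>
#include <stdbool.h>

#define W 8
#define H 8

/* Illustrative room: 1 = solid obstacle (a pillar), 0 = open floor. */
static const int solid[H][W] = {
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,1,1,0,0,0},
    {0,0,0,1,1,0,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0},
};

typedef struct { int x, y; } cell;

static bool walkable(cell c) {
    return c.x >= 0 && c.x < W && c.y >= 0 && c.y < H && !solid[c.y][c.x];
}

static int sign(int v) { return (v > 0) - (v < 0); }

/* One chase step: run straight at the player; if blocked, push around
 * the sides by sliding along one axis; if both slides fail, the AI is
 * trapped and runs on the spot. */
static cell chase_step(cell m, cell player) {
    cell direct = { m.x + sign(player.x - m.x), m.y + sign(player.y - m.y) };
    if (walkable(direct)) return direct;
    cell side_x = { direct.x, m.y };
    if (walkable(side_x)) return side_x;
    cell side_y = { m.x, direct.y };
    if (walkable(side_y)) return side_y;
    return m; /* stuck */
}
```

With one convex pillar the slide gets the monster through, but a concave obstacle (a U-shaped wall) leaves it bouncing in place, which is exactly the failure described above.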
The first step to the AI looking smart is a basic navigation system which allows them to get around obstacles without looking too stupid. There are several approaches to this problem, but I wanted something simple, because complex systems have a habit of breaking and usually become a nightmare to fix. I designed a simple node system with limited choices: forward, back, and/or switching to another route. This worked really well with flat level designs, but as soon as I added vertical elements to my test maps the system started producing erratic results.
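A rough sketch of that node system, assuming a greedy distance check to decide between the three choices (the struct layout and the greedy rule are my illustration, not the MOD's actual code):

```c
#include <assert.h>
#include <stddef.h>

/* Each node offers at most three choices: forward, back, or switching
 * to a linked route.  Any of the three links may be missing (NULL). */
typedef struct Node Node;
struct Node {
    float x, y;                          /* node position (flat map) */
    Node *forward, *back, *route_switch; /* the three limited choices */
};

/* Squared distance from a node to the goal position. */
static float dist2(const Node *n, float gx, float gy) {
    float dx = n->x - gx, dy = n->y - gy;
    return dx * dx + dy * dy;
}

/* Greedily pick whichever available neighbour is closest to the goal;
 * stay put if no neighbour improves on the current node. */
static Node *pick_next(Node *cur, float gx, float gy) {
    Node *options[3] = { cur->forward, cur->back, cur->route_switch };
    Node *best = cur;
    float best_d = dist2(cur, gx, gy);
    for (int i = 0; i < 3; i++) {
        if (options[i] && dist2(options[i], gx, gy) < best_d) {
            best = options[i];
            best_d = dist2(options[i], gx, gy);
        }
    }
    return best;
}
```

On a flat map this is enough: at every node the AI steps to whichever of the three links reduces its distance to the goal. The erratic behaviour starts once a 2-D distance check has to arbitrate between routes stacked on top of each other.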
To fix the node system I started adding visibility checks and trying to pick nodes based on distance and Z-axis comparisons, but this just got messy. As the code got more complex I eventually decided it was time for something simple again. I marked most nodes with vertical layer numbers and designed three routes for each room: high, low, and ground level. The AI would read a node's vertical layer number and, if it was close to the Z axis of their final destination, they would use it. My test map grew with loads of different vertical designs until eventually the new system was producing too many wrong results and not looking very good.
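The layer check itself can be sketched as a simple filter. The layer heights and the tolerance below are made-up illustrative values; the point is just that a node is only accepted when its layer's height is close to the destination's Z:

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>

/* The three vertical routes per room described above. */
enum { LAYER_GROUND = 0, LAYER_LOW = 1, LAYER_HIGH = 2 };

/* Hypothetical world-unit height of each layer. */
static const float layer_z[3] = { 0.0f, 128.0f, 256.0f };

/* Hypothetical "close enough" tolerance in world units. */
#define LAYER_TOLERANCE 64.0f

/* Accept a node only if its vertical layer is close to the Z of the
 * AI's final destination. */
static bool layer_usable(int node_layer, float dest_z) {
    return fabsf(layer_z[node_layer] - dest_z) <= LAYER_TOLERANCE;
}
```

A filter like this works while the map's verticality matches the three fixed layers, but as the test map grew more varied vertical designs, more destinations fell between layers or near two at once, which fits the wrong results described above.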
Eventually the light bulb came on and I thought of a simple, robust solution that worked with all the existing systems (nodes and vertical layers). When I wandered around my test map I would come to a junction and ask myself, "is this route going in the right direction?" I was not asking how to get there exactly, but a high-level question, like reading a signpost on the road. I split my test map into volumes and then added checks to the nodes at junctions so that the AI could ask a simple yes/no question: "Is my final destination inside of this volume?" The signpost queries worked so well that the AI could navigate around complex room designs and, for the first time, actually looked like they knew where they were going!
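The signpost query reduces to a point-in-volume test per branch. A minimal sketch, assuming axis-aligned box volumes and a junction whose branches are each tagged with the volume they lead into (the structures and the -1 fallback are my illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* An axis-aligned box volume covering part of the map. */
typedef struct { float min[3], max[3]; } Volume;

/* The yes/no signpost question: is this point inside the volume? */
static bool volume_contains(const Volume *v, const float p[3]) {
    for (int i = 0; i < 3; i++)
        if (p[i] < v->min[i] || p[i] > v->max[i]) return false;
    return true;
}

/* At a junction, each branch is tagged with the volume it leads into.
 * Return the index of the first branch whose volume contains the final
 * destination, or -1 if none does (fall back to a default route). */
static int pick_branch(const Volume *branch_vols, int n, const float dest[3]) {
    for (int i = 0; i < n; i++)
        if (volume_contains(&branch_vols[i], dest)) return i;
    return -1;
}
```

The appeal of this design is that the per-junction question stays cheap and local, like a road signpost: the AI never plans the whole path, it just keeps picking the branch whose volume holds the destination, and the node and layer systems handle the rest.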