Getting some oversteer

This last month or so I’ve been working on squad control for Damzel. Squad control is perhaps the most important part of the game, because you spend most of your time doing it. It is important enough that it was worth spending those weeks trying out different control schemes and looking at the problem from different directions.

Reposted from www.mindflock.com

One aspect of squad control is how to actually move the squad around the world in a coherent manner. Luckily this is an area that has been studied in quite some depth in the general case of multi-agent steering, so there were plenty of ideas to draw on. However, there are fewer game-specific cases described in enough detail to take anything away from them, with the exception of Chris Jurney's article in AI Wisdom 4(?) on the squad movement he implemented in Company of Heroes.

I don't want to get into too much technical detail here, but if you are interested, you should be looking at keywords like steering behaviors, reciprocal velocity obstacles and Helbing's social forces model. That should get you started down the rabbit hole that is crowd simulation and navigation.

The problem, essentially, is that you want a squad of agents to be able to move freely around an arbitrary 3D world: a world that can have both static and dynamic obstacles, in various forms of representation. The whole area is a PhD in itself (well, several, going by the theses I've read so far, but I digress), but I prefer to think of things from a social modelling perspective. There's a paper by Helbing that describes a "social forces" model for navigation.
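The core idea of Helbing's model is that each nearby agent exerts a repulsive "social" force on you whose magnitude falls off with distance. A minimal sketch of that idea, with illustrative constants (the function name and parameters here are my own, not anything from the paper or from Damzel's code):

```python
import math

def social_force(pos, other_pos, strength=2.0, falloff=1.0):
    """Repulsive 'social' force another agent exerts on us.

    The magnitude decays exponentially with distance, following the
    shape of Helbing's model; the constants are purely illustrative.
    Returns a 2D force vector pointing away from the other agent.
    """
    dx = pos[0] - other_pos[0]
    dy = pos[1] - other_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)  # coincident agents: no defined direction
    magnitude = strength * math.exp(-dist / falloff)
    return (magnitude * dx / dist, magnitude * dy / dist)
```

Summing this over all neighbours, plus a driving force towards your goal, gives the basic social-forces motion model.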

For all intents and purposes this is what you get with Craig Reynolds' steering behaviors, which is what I've been partially implementing: partly because they are exceptionally simple, and partly because I wanted to test some ideas out. I added a separation force, which keeps agents apart while moving. Essentially each pair of agents repulses each other with a force inversely proportional to their distance, so the closer they come together, the more strongly they are pushed apart.
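A separation force along those lines can be sketched in a few lines; this is a generic Reynolds-style version with an assumed neighbourhood radius, not the actual Damzel implementation:

```python
import math

def separation_force(pos, neighbours, radius=2.0):
    """Sum of repulsive forces from nearby agents.

    Each neighbour inside `radius` contributes a force pointing away
    from it, with magnitude inversely proportional to distance, so
    closer neighbours push harder.
    """
    fx = fy = 0.0
    for nx, ny in neighbours:
        dx, dy = pos[0] - nx, pos[1] - ny
        dist = math.hypot(dx, dy)
        if 0.0 < dist < radius:
            weight = 1.0 / dist           # inverse-distance magnitude
            fx += weight * dx / dist      # normalised direction
            fy += weight * dy / dist
    return (fx, fy)
```

The resulting vector is typically added to the agent's other steering forces and clamped to a maximum acceleration.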

Separation isn't enough to get agents moving in any sort of human-like way, though. To do that, you have to consider the issue of avoiding other agents. I implemented a Reynolds-like avoidance method, which you can see in this video.


As you can see from the video, there's another problem at play here. At the moment I only ever avoid the single agent with the shortest time to collision with me. Ultimately this isn't enough, because when you actually move through a crowd you take into account ALL of the crowd you are aware of, as well as things like the density of the crowd in front of you, its social makeup in terms of friendly, enemy or unknown, plus the usual navigation concerns like how much you really want to go in that direction at all.
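The "closest in time to collide" test boils down to predicting when two constant-velocity discs first touch, then picking the neighbour with the smallest such time. A sketch of that calculation (names and the disc-radius parameter are my own, for illustration):

```python
import math

def time_to_collision(p, v, q, u, combined_radius):
    """Earliest time two constant-velocity discs touch, or None.

    Solves |(q - p) + (u - v) * t| = combined_radius for the smallest
    non-negative t, via the standard quadratic in t.
    """
    rx, ry = q[0] - p[0], q[1] - p[1]   # relative position
    vx, vy = u[0] - v[0], u[1] - v[1]   # relative velocity
    a = vx * vx + vy * vy
    b = 2.0 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry - combined_radius ** 2
    if a == 0.0:
        return 0.0 if c <= 0.0 else None  # same velocity: touching now or never
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                        # paths never come close enough
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None         # negative t means the approach is in the past

def most_imminent(p, v, others, radius=1.0):
    """Return (index, time) of the neighbour we'd hit soonest, or None."""
    best = None
    for i, (q, u) in enumerate(others):
        t = time_to_collision(p, v, q, u, radius)
        if t is not None and (best is None or t < best[1]):
            best = (i, t)
    return best
```

The avoidance steering is then applied against only that single most imminent threat, which is exactly the limitation visible in the video.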

There's a technique for handling many agents moving within a crowd that works quite well, called reciprocal velocity obstacles (RVO). It uses some relatively easy-to-compute mathematics to identify a region of velocity space in which you can achieve collision-free movement. However, RVOs only deal with avoidance and don't take into account social preferences or crowd density.
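The building block underneath RVO is the plain velocity obstacle: a candidate velocity is forbidden if, holding the other agent's velocity fixed, it leads to a collision at some future time. A sketch of that membership test (the reciprocal variant additionally splits the avoidance responsibility between the two agents, which I'm glossing over here; names are illustrative):

```python
import math

def in_velocity_obstacle(p, q, v_candidate, u_other, combined_radius):
    """True if v_candidate leads to a future collision with the other
    agent (assumed to keep velocity u_other), i.e. the relative
    velocity lies inside the velocity obstacle."""
    rx, ry = q[0] - p[0], q[1] - p[1]                      # other agent relative to us
    wx = v_candidate[0] - u_other[0]                       # our velocity relative
    wy = v_candidate[1] - u_other[1]                       # to the other agent
    c = rx * rx + ry * ry - combined_radius ** 2
    if c <= 0.0:
        return True                                        # already overlapping
    a = wx * wx + wy * wy
    if a == 0.0:
        return False                                       # no relative motion
    b = -2.0 * (rx * wx + ry * wy)
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False                                       # relative path misses the disc
    t = (-b - math.sqrt(disc)) / (2.0 * a)                 # time of first contact
    return t >= 0.0
```

Any velocity for which this returns False against every neighbour is collision-free under the constant-velocity assumption.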

Ultimately, I'm heading towards an implementation that uses a sampled RVO approach, because testing a pattern of sample velocities in front of the agent should be faster than calculating the ideal collision-free velocity exactly. In addition, the sampled approach adapts quite well to a social-forces modelling viewpoint, because you can bias the sample scores with social information (scaling down the preference for a sample that moves us towards an enemy rather than away from one, for instance).
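The sampling-plus-bias scheme can be sketched generically: score each candidate velocity by how far it deviates from the preferred one, reject samples that a collision test (such as a velocity-obstacle check against all neighbours) rules out, and add a social penalty. The function names and cost weighting below are assumptions for illustration, not the planned Damzel code:

```python
import math

def choose_sampled_velocity(preferred, candidates, collides, social_cost):
    """Pick the best velocity from a pattern of candidate samples.

    `collides(v)` is any caller-supplied collision predicate (e.g. an
    RVO test against all neighbours); `social_cost(v)` biases the
    choice with social information, such as penalising samples that
    head towards an enemy. Lower total cost wins.
    """
    best, best_cost = None, float("inf")
    for v in candidates:
        if collides(v):
            continue  # reject samples inside a velocity obstacle
        # deviation from where we actually want to go, plus social bias
        cost = math.hypot(v[0] - preferred[0], v[1] - preferred[1]) + social_cost(v)
        if cost < best_cost:
            best, best_cost = v, cost
    return best  # None if every sample collides
```

In practice the candidates would be a fan of velocities in front of the agent, regenerated each tick.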

Sounds like a complicated approach to take just to move agents around a world? That's because it is a pretty complex problem. But I think the complexity is required for the game to be at all convincing.

Currently I'm trying to be pragmatic about it and get some more "game" done before I go back and finally implement the sampled RVO method. I'm definitely pushing to get a number of weapon types implemented (and when I say weapon, I include devices like the persuadertron in this). With the added weapons, the game will start to feel a lot more like something you might want to play, with the choice of weapons and their subsequent behaviors opening up the design space and offering interesting avenues for the mission generator.

We’ll have to see how this next sprint goes.

Comments
Kamikazi[Uk]

Looks cool, I love seeing AI videos.

BrainCandy

Keep it up Zombapup, it looks both very interesting and promising. Just be careful that all the gameplay feedback you give the player is as clear as possible, especially if it is original (we have the same problem)!

werty2517

What is this?!
