|Thesis idea: offline collision detection|
|Aug 17 2014, 7:27am|
Hi! I'm currently working on my second degree in computer science, and I have a thesis idea - but neither I nor my advisor have enough experience in the field of collision detection to know whether it is relevant or interesting today.
|Aug 17 2014, 5:09pm|
The way I see it (and this is really just my personal opinion), artificial intelligence and physics are the two aspects of gaming that have progressed the least in recent years.
I mean look at the graphics. We're getting closer and closer to photo-realistic/cinematic graphics in realtime. A lot of ground has been covered on that front over the course of the last (console) generation.
Let's take a look at AI. I feel like you could take an NPC from a modern game, swap the model for an old one, plug it into an old game, and virtually nothing would change.
Physics? What was the last game that really revolutionized in-game physics? Half-Life 2, 10 years ago. Since then there have been the occasional games that support GPU-accelerated PhysX to make worlds come to life (at quite a cost, since graphics cards are usually at their limits with the visuals already). Other than that? I don't recall any.
Now to your idea:
Say your solution results in significant performance improvements, but requires ridiculous amounts of space. -> Not worth it.
If, on the other hand, you were able to save a significant number of CPU cycles, they could be used somewhere else.
Again, this is just MY opinion, and maybe you have thought it through already. It's an interesting idea nonetheless.
|Aug 23 2014, 5:01am|
Thanks a lot for your input,
Now for your first point - does it work in a lot of cases? You obviously have a good point, unfortunately, and what you call "non-linearity" is clearly the reason this method hasn't been applied so far. I'm not sure yet just how non-linear it is. The approach will naturally become a bit less restricted as disk sizes and network bandwidth improve, but not very significantly in the next few years. It's also restricted in other ways - my bodies must be rigid, and I might have to slightly correct their trajectories so that they match pre-calculated ones. It's clearly not going to revolutionize physics simulation by itself. It MIGHT be a tool in the toolbox, applicable and beneficial under certain conditions - where we're especially interested in precise collisions between a very few rigid, complex meshes (perhaps with more meshes around that collide in standard ways). Does that sound good enough to you?
BTW, I'd be very grateful if you could direct me to some of the currently-used algorithms for precise collision detection (either on CPU or GPU)!
|Aug 26 2014, 1:16pm|
Well, there's the QuickHull algorithm that Valve (and I think Havok) uses for building convex collision hulls. (Valve is building a new physics engine for Source 2, btw.) The Wikipedia article seems to be a bit short though.
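For anyone curious what the algorithm actually does: QuickHull builds a convex hull by recursively splitting on the point farthest from the current edge. A minimal 2-D sketch for illustration only - production physics code works in 3-D and handles tolerances and degenerate cases:

```python
# Minimal 2-D QuickHull sketch (illustrative; real engines compute 3-D hulls).

def cross(o, a, b):
    """Twice the signed area of triangle o-a-b; > 0 means b lies left of o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def quickhull(points):
    """Convex hull of 2-D points, clockwise from the lowest-leftmost point."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]                 # extreme points are always on the hull

    def build(p, q, candidates):
        # Recursively collect hull points strictly left of the edge p->q.
        left = [x for x in candidates if cross(p, q, x) > 0]
        if not left:
            return []
        far = max(left, key=lambda x: cross(p, q, x))   # farthest from line p-q
        return build(p, far, left) + [far] + build(far, q, left)

    return [a] + build(a, b, pts) + [b] + build(b, a, pts)

square = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(quickhull(square))   # [(0, 0), (0, 2), (2, 2), (2, 0)] - (1, 1) is inside
```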
Also: I don't think that physics over the network will work well enough, due to the latency.
(Good) games try to work with a frame time of less than 16 ms - that equals 60+ fps. 16 ms is a damn low latency budget. For reference: pinging my local router already takes 2-4 ms.
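The arithmetic behind those numbers, spelled out (the 3 ms ping is just the local-router figure quoted above):

```python
# Frame-time budget behind the "16 ms = 60+ fps" figure.
frame_budget_ms = 1000 / 60                 # ms available per frame at 60 fps
print(round(frame_budget_ms, 2))            # 16.67

# Even a 2-4 ms LAN round trip eats a sizeable slice of one frame;
# a typical internet round trip (tens of ms) blows past the budget entirely.
lan_ping_ms = 3
print(f"{lan_ping_ms / frame_budget_ms:.0%} of one frame")   # 18% of one frame
```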
|Sep 3 2014, 12:20am|
The sniper, I agree with everything you said brother.. great posts.. just one factual error.. the last game to revolutionize physics was MAX PAYNE 2 using the HAVOK engine.. the first game to use that REAL type of physics for all gaming items.. (I remember playing for two hours JUST shooting physics-driven objects around the room, it was such a departure from usual game physics) Half-Life 2 came out a year later.
To the guy who started this thread... I may be missing something here, I'm no genius by a long shot, I barely breathe without having to concentrate hard, but it does seem to me that you would run into a couple of problems.. two biggies being these..

1. FULL DATABASE!! - Say you do manage to work out a way to record the STAGGERING!!!! amount of computations needed to fill this database, how would you get the game to run through EVERY!!! scenario that COULD!! happen? You could run the game a billion times, recording every time, and still not get everything.. Even thinking on a small scale, a single ROOM will have billions of different outcomes.. times that by an entire level.. it's beyond my comprehension.. perhaps the whole thing is and I'm not making sense.. but I'll continue nonetheless..

2. SIZE MATTERS!! - Even if you do SOMEHOW manage the gothmogian task of recording this database, I have a feeling it will require a substantial amount of space to store.. this then brings up the matter of SEARCHING through it.. which, depending on the size, may add SO much overhead that it's simply faster to calculate in real time...

Anyways.. that's my two pence.. I'm not sure if I've made myself look an idiot.. nothing new if I have. It's a nice idea.. and I hope I'm wrong.. but after all said and done, sniper is correct.. the state of AI in games is still the same as 15 years ago.. which is a tragedy, considering if they had great AI, games would be 1000% more fun to play..
|Sep 3 2014, 2:20am|
It seems unfeasible to calculate every possible scenario for collision detection. Just think how many ways a single object can be rotated alone! Even if a database with all these calculations could be constructed, it would likely take longer to find the result in the database than to do the calculation. Most databases search in log(n) time; if n is sufficiently large, that lookup will take longer than even an O(n^2) collision detection algorithm with a small n. Though I'm not sure of the precise relationship between the number of nodes in a mesh and the number of ways in which it can collide - it may be viable for very complex meshes.
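To put rough numbers on that argument - comparison counts only, with an entirely hypothetical table size; real costs depend on constants and memory behavior:

```python
import math

# A log(n) tree-style lookup in a huge precomputed table vs. an O(k^2)
# pairwise check over k live objects. Numbers are illustrative only.

table_entries = 2 ** 40                                 # hypothetical table size
lookup_comparisons = math.log2(table_entries)           # tree-search depth
print(lookup_comparisons)                               # 40.0

live_objects = 6
pair_checks = live_objects * (live_objects - 1) // 2    # n choose 2
print(pair_checks)                                      # 15: fewer than the lookup
```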
Storing precalculated values for physics could still be useful, though. It may not be necessary to store every way in which a mesh can collide with another mesh, but rather some number of equidistant samples whose values can be easily interpolated for fast approximations. Other calculations that are done frequently in physics code could also be stored, for example rotations of an object. To avoid network delay, a small pool of these calculations could be kept client-side, representing the client's most common calculations at that time; if a calculation isn't used frequently enough, it can be removed from the pool and some other calculation brought in from the server.
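A sketch of what that client-side pool might look like - a simple least-recently-used cache; `fetch_from_server` is a hypothetical stand-in for the actual network request:

```python
from collections import OrderedDict

# Client-side pool of precomputed collision results with LRU eviction.
class CollisionCache:
    def __init__(self, capacity, fetch_from_server):
        self.capacity = capacity
        self.fetch = fetch_from_server
        self.pool = OrderedDict()          # key -> precomputed result

    def lookup(self, key):
        if key in self.pool:
            self.pool.move_to_end(key)     # mark as recently used
            return self.pool[key]
        result = self.fetch(key)           # cache miss: ask the server
        self.pool[key] = result
        if len(self.pool) > self.capacity:
            self.pool.popitem(last=False)  # evict least recently used entry
        return result

cache = CollisionCache(capacity=2, fetch_from_server=lambda k: f"result:{k}")
cache.lookup("crate-vs-barrel")
cache.lookup("crate-vs-wall")
cache.lookup("crate-vs-barrel")            # hit, refreshes recency
cache.lookup("ragdoll-vs-floor")           # evicts "crate-vs-wall"
print(list(cache.pool))                    # ['crate-vs-barrel', 'ragdoll-vs-floor']
```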
|Sep 3 2014, 5:12am|
@TheCreator - Try reading all the posts; all you did was echo exactly what I wrote.. And secondly.. the word "approximately" shouldn't be used when talking about collision detection, considering good/bad detection can mean the difference between a smooth, flowing feel and a buggy POS.
|Sep 3 2014, 7:38am|
1) The sniper - a very valid point about latency. My ping to Google, for example, is 100 ms round trip. Such a server would only be helpful for collisions that can be anticipated several frames ahead of time, which may or may not be very common, depending on the game and situation. If most objects are under predictable forces (e.g. gravity), I thought I'd check every pair of object trajectories (n choose 2) for predicted collisions between the bounding spheres every once in a while, and ask the server for detailed collision information about the first few. Once two objects have collided and changed their trajectories, I only have to check their new trajectories for collisions against the rest (about 2n checks) and again ask the server for info about the oncoming ones.
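That bounding-sphere pre-check has a closed form: two spheres on constant-velocity trajectories touch when their relative distance equals the sum of their radii, which is a quadratic in t. A minimal sketch, assuming straight-line motion between checks (a real implementation would re-run it as forces act):

```python
import math

# Earliest predicted contact time of two moving bounding spheres, or None.
def predicted_collision_time(p1, v1, r1, p2, v2, r2):
    """Solve |(p1 - p2) + (v1 - v2) t| = r1 + r2 for the earliest t >= 0."""
    dp = [a - b for a, b in zip(p1, p2)]       # relative position
    dv = [a - b for a, b in zip(v1, v2)]       # relative velocity
    r = r1 + r2
    a = sum(c * c for c in dv)
    b = 2 * sum(p * v for p, v in zip(dp, dv))
    c = sum(p * p for p in dp) - r * r
    if a == 0:                                 # no relative motion
        return 0.0 if c <= 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                            # paths never come close enough
    t = (-b - math.sqrt(disc)) / (2 * a)       # first root = first contact
    return t if t >= 0 else (0.0 if c <= 0 else None)

# Two unit spheres 10 units apart on the x axis, closing head-on at 2 units/s:
t = predicted_collision_time((0, 0, 0), (1, 0, 0), 1, (10, 0, 0), (-1, 0, 0), 1)
print(t)   # 4.0 - the 8-unit gap closes at 2 units/s
```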
2) Thanks for the reference to QuickHull - that puts things in perspective regarding current approaches. Do you have anything for non-convex objects? Or are these rarely dealt with, for computational reasons?
3) Now back to the size of the DB. Obviously I don't mean to save collisions for what you call "every" combination of the collision parameters. What's "every"? Every possible floating-point value stored in 4 bytes? No, I mean to calculate it for discrete values, and very few of them actually - more like 20 possible values for one angle, for example. That doesn't mean I can interpolate the collision data, however - unless, perhaps, we're dealing with convex objects, in which case such an interpolation might look okay (correct me if you have an idea for somehow using nearest-neighbour collision information for any given collision).
4) Regarding the search time in the DB. TheCreator, I'm curious - why do you say databases search in log(n) time? I was planning to use a hashmap, whose average lookup time is constant. I'd be glad to hear your reasoning.
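A sketch of what that constant-time hashmap lookup could look like with discretized parameters - the 20 angle buckets from point 3; the mesh names and stored values here are placeholders:

```python
# Hashmap of precomputed collision responses, keyed by quantized parameters.
ANGLE_STEPS = 20   # the "20 possible values for one angle" from point 3

def quantize_angle(theta_degrees):
    """Snap an angle to one of ANGLE_STEPS discrete buckets."""
    step = 360 / ANGLE_STEPS
    return round(theta_degrees / step) % ANGLE_STEPS

# Precomputed table: (mesh_a, mesh_b, angle_bucket) -> collision response.
# Placeholder values; real entries would hold contact points, impulses, etc.
table = {("crate", "pipe", b): f"response-{b}" for b in range(ANGLE_STEPS)}

key = ("crate", "pipe", quantize_angle(93.7))   # 93.7 degrees -> bucket 5
print(table[key])                               # average O(1) dict lookup
```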
Many thanks for your ongoing input, it helps a LOT.
|Sep 3 2014, 11:18am|
Your idea contradicts itself: you say all possible collisions will be available ahead of time, but when a bounding area is hit, the computer still has to calculate exactly HOW the objects will collide before it can load the right table entry. To pick that entry it must work out the result of all the variables in the collision, then play the data back. So it's figuring out the results twice: once in real time to know which collision to play, and once ahead of time to produce the pre-calculated animation for all the objects.
That has been done - it's been done for decades. Those are just pre-rendered animations: hit a guy in the arm, play animation arm_ouch; hit him in the head, play head_ouch. Except you're going to need many more animations, so instead of having 25 different pain animations, you want thousands.
With almost any engine since Quake that can be done; you just need to pre-render the animations and have the system auto-choose which ones to use. Many 3D modeling programs (some dating back at least to the late '90s) could even be scripted to pre-render all possible animations.
Because of the time it takes to author and check all this, it's been handed over to CPU/GPU processing. It's a lot faster and uses a lot less memory than doing it all by hand.
Go play some Quake 2: q2server.fuzzylogicinc.com
|Sep 3 2014, 4:20pm|
@rugrat - your idea sounds like a flight of fancy given the scope of modern games today.. having said that, every great idea or invention in the history of man was also (at one time or another) called a flight of fancy... so don't let mere conjecture put you off in any way. You seem like a smart guy; I sincerely hope you get something resembling a demo to show us all the thesis in motion. Either way, please keep us all updated on your progress. Cheers.
|Sep 4 2014, 12:00am|
I quote log(n) time for databases because I think it is the most common - it's the running time of most SQL implementations' searches. However, hash maps can produce constant-time lookups (usually at the cost of higher memory usage, though not in the best case). Either way, search time is only a real problem if the database is absurdly large (e.g. on the order of the number of possible values of a 16-byte quaternion). Also, I agree that the margin of error for interpolation would be larger for non-convex objects, though I don't know how big that margin is; it would vary based on the number of samples. It would be interesting to see how many samples are needed to produce a margin of error below human perception.
I'm curious as to what keys you would use to search the database. Something like (position, rotation, model) for each object, or something that can be used to predict collisions, like (velocity, position, angular velocity, angle, model)?
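One possible answer, sketched under assumptions: instead of absolute positions and rotations for both objects, the key could encode only the relative configuration in one body's frame, which collapses translated copies of the same encounter into a single entry. Shown in 2-D with an illustrative quantization grid; all names here are hypothetical:

```python
import math

# Quantized relative-frame key for a 2-D encounter between two bodies.
def make_key(model_a, model_b, pos_a, ang_a, pos_b, ang_b, grid=0.5, steps=20):
    """Build a hashable DB key from the configuration of B relative to A."""
    # Translate so body A sits at the origin, then rotate by -ang_a.
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    c, s = math.cos(-ang_a), math.sin(-ang_a)
    rx, ry = c * dx - s * dy, s * dx + c * dy
    rel_ang = (ang_b - ang_a) % (2 * math.pi)
    return (model_a, model_b,
            round(rx / grid), round(ry / grid),              # quantized offset
            round(rel_ang / (2 * math.pi / steps)) % steps)  # quantized angle

k1 = make_key("crate", "pipe", (0, 0), 0.0, (2, 0), math.pi / 2)
k2 = make_key("crate", "pipe", (5, 5), 0.0, (7, 5), math.pi / 2)  # same encounter, shifted
print(k1 == k2)   # True: translation-invariant keys collapse duplicate entries
```

Relative velocity and angular velocity would be rotated into the same frame and quantized onto the key in the same way.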