Suppose I have two 3D objects in my world space. How do I determine whether they intersect (collide) or not? I suppose I'd have to check whether each vertex of one object lies inside any triangle of the second object.

First you check their raw bounding boxes / spheres for a collision.

Then you refine this check...

For the refined check, you could check whether any of the line segments of either mesh intersect the faces of the other.

What you do is check whether the line segment intersects the infinite plane the triangle face sits on. If so, you get the point of intersection, then you check whether that point is inside the face. This link helps with the second part; I can't remember any links to the first, sorry.

http://nehe.gamedev.net/data/articles/article.asp?article=10
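A minimal sketch of that two-step test in plain Python (illustrative code, not the linked article's): first intersect the segment with the triangle's plane, then check that the hit point lies on the inner side of all three edges.

```python
# Sketch: segment-vs-triangle test.
# Step 1: intersect the segment with the triangle's infinite plane.
# Step 2: verify the hit point is inside the triangle (edge side tests).

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def segment_hits_triangle(p0, p1, a, b, c, eps=1e-9):
    n = cross(sub(b, a), sub(c, a))          # plane normal from vertex order
    d = sub(p1, p0)
    denom = dot(n, d)
    if abs(denom) < eps:                     # segment parallel to the plane
        return None
    t = dot(n, sub(a, p0)) / denom
    if t < 0.0 or t > 1.0:                   # plane not crossed within segment
        return None
    p = (p0[0]+t*d[0], p0[1]+t*d[1], p0[2]+t*d[2])
    # Inside test: p must lie on the same side of every edge as the triangle.
    for u, v in ((a, b), (b, c), (c, a)):
        if dot(cross(sub(v, u), sub(p, u)), n) < -eps:
            return None
    return p                                 # point of intersection
```

The same routine, run per triangle, covers the refined check described above.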

I concur with BogdanOntanu here.

Here's the scheme I use, feel free to adapt it.

When I load my meshes, I call CalculateBoundingSphere, which returns a radius value (and something else, but I don't want to confuse you).

This radius is the smallest that would form a sphere enclosing our entire mesh object. I keep this radius value for later, and so far this is stuff you might read in other places. Later, in realtime, we can simply take any two objects by their origins (locations) and quickly determine whether their spheres intersect, based on the distance between them (again, nothing new so far). What I do differently is that I check the radius at several MAGNITUDES (i.e. I multiply it by several values larger than 1) and use that as a trigger for proximity :) If the objects are proximate within a threshold, then I calculate the magnitude of their "intergravity" based on the inverse cube of the distance.

The resulting value is combined with a weighted need value, if applicable, and defines a 3D vector which is one component of the anticipated or "goal motivator" vector that determines the actions and motion of the AI, and the result of interaction of "regular" objects.
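A rough sketch of that broad phase (function names and magnitude values are illustrative, not the actual code): compare the squared distance against the combined radii scaled by each magnitude, so the cheap test fires at several ranges of "proximity".

```python
# Sketch: sphere-vs-sphere proximity trigger at several magnitudes.

def spheres_overlap(center_a, radius_a, center_b, radius_b, magnitude=1.0):
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    dz = center_a[2] - center_b[2]
    reach = (radius_a + radius_b) * magnitude
    # Squared comparison: no sqrt needed for a yes/no answer.
    return dx*dx + dy*dy + dz*dz <= reach*reach

def proximity_level(a, ra, b, rb, magnitudes=(1.0, 2.0, 4.0)):
    """Smallest magnitude at which the scaled spheres touch, or None."""
    for m in magnitudes:
        if spheres_overlap(a, ra, b, rb, m):
            return m
    return None
```

The magnitude returned can then drive whatever falloff you like, such as the inverse-cube "intergravity" described above.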

Thank you, guys.

You made it a bit clearer for me. I understand that I must check the bounding spheres first, and in some cases that would be enough for the AI to decide how to act and where to move. But what I can't understand is how to place an object (a game character) into the game environment. Let's say I made a scene: some hilly landscape with a few houses, in a single mesh. How do I make a character which is hundreds of times smaller than the whole scene walk there, COLLIDING with buildings rather than passing through them, and stand ON a hill (not a bit above it or a bit dipped into it)? What is the common technique in this case? Do I have to cut my landscape into smaller pieces and treat them as separate meshes (calculating a bounding sphere for each one), or do I have to check each landscape triangle for intersection with the character? In that case it takes lots of calculations, and they should be made after applying all transforms to all objects. If the transforms are applied by the vertex shader, I can't store the output to make all the collision-detection calculations. Do I have to transform before the shaders?

Yep, lots of questions. An answer to any one of them would be greatly appreciated.

:)

Most games cheat here just a little bit.

Imagine if you will your average Lara Croft player model made of a relatively low number of triangles (maybe 700 to 1000 polys).

Now imagine the world has maybe 150 zillion polys.

We can use DX functions to perform interection tests of the following three things, without any math of our own:

1- a ray and a mesh

2- a ray and a subset of a mesh (a material group)

3- a ray and a triangle

We can take any one triangle in our model and think of it as three vertices, and we can imagine further ways to cast those vertices as rays, but just think how stupidly inefficient it would be to test ALL vertices against ALL triangles!!

So we cheat. Most gamecoders will treat the player as A DOT ON A MAP when it comes time to check for collisions of the player and the terrain.

Think about standing on a cliff edge: in most 3D games you can turn without falling, but moving just a little will place the imaginary dot between your feet over the edge and you will fall, even though half of your body is actually now inside the cliff face. lol, bad.

Normally, most 3D coders will tell the guys who make their models that they want Y zero for the model to be placed under the model's feet.

This makes life very easy: if you imagine the world to be a flat plane with the ground at Y0, you can just set the player to Y0 and they stand on the ground.

Of course, cool games don't have a flat terrain, but keep reading please..

Now, in realtime, we cast a ray from our player toward negative Y and find the intersection of that ray and the terrain mesh; then we determine which triangle we have intersected and test the ray against that triangle, which gives us the exact amount we need to shift the model in Y to place it on the ground.

You may now be thinking: what if I want to test for collisions against walls and stuff, not just DOWN? Well, the same code is applied; the only thing that is different is the direction of the test ray - please consider the above demonstration as describing a "gravity" test alone. Your game will probably keep a 3D vector describing the direction of motion of a moving player, and gravity will probably already be a component of it, so you can normally just perform a single test using the player's motion vector as the test ray.
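Here is a pure-math sketch of that gravity ray (no DX calls; the helper names are made up). It uses a standard ray/triangle test in place of the DX intersection functions mentioned above, then shifts the player's Y so the feet (modelled at Y zero) rest on the nearest triangle below.

```python
# Sketch: cast a ray straight down from the player origin, find the
# nearest terrain triangle it hits, and snap the player onto the ground.

def ray_triangle_t(orig, direc, a, b, c, eps=1e-9):
    """Moller-Trumbore: distance t along the ray to the triangle, or None."""
    e1 = [b[i]-a[i] for i in range(3)]
    e2 = [c[i]-a[i] for i in range(3)]
    p = [direc[1]*e2[2]-direc[2]*e2[1],
         direc[2]*e2[0]-direc[0]*e2[2],
         direc[0]*e2[1]-direc[1]*e2[0]]
    det = sum(e1[i]*p[i] for i in range(3))
    if abs(det) < eps:                        # ray parallel to triangle
        return None
    s = [orig[i]-a[i] for i in range(3)]
    u = sum(s[i]*p[i] for i in range(3)) / det
    if u < 0 or u > 1:
        return None
    q = [s[1]*e1[2]-s[2]*e1[1], s[2]*e1[0]-s[0]*e1[2], s[0]*e1[1]-s[1]*e1[0]]
    v = sum(direc[i]*q[i] for i in range(3)) / det
    if v < 0 or u + v > 1:
        return None
    t = sum(e2[i]*q[i] for i in range(3)) / det
    return t if t >= 0 else None

def snap_to_ground(player_pos, terrain_triangles):
    down = (0.0, -1.0, 0.0)
    hits = [t for tri in terrain_triangles
            if (t := ray_triangle_t(player_pos, down, *tri)) is not None]
    if not hits:
        return player_pos                     # nothing below us
    t = min(hits)                             # nearest triangle below
    return (player_pos[0], player_pos[1] - t, player_pos[2])
```

A wall test is the same routine with the motion vector as `direc` instead of straight down.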

I'm currently writing a 3D version of Pengo called Revenge of the Snow Bees :)

Thank you very much, EvilHomer2k, for your detailed description.

I didn't know of those dx functions. Now it's all clear.

And it appears to be easier than I thought, lol.

'Revenge of the Snow Bees' sounds terrifying. :)

How far have you got with your work?

What is this game about? I've never heard of Pengo before.

Thanks again, and good luck with your snow bees. :)

Pengo was a silly 2D tile-based arcade game where you played Pengo the Penguin, whose sole reason for existing was to attempt to crush Snow Bees using organically grown snow cubes...

I'm not very far along with that one at all.

At the moment I'm concentrating on getting SkinMesh working correctly.

It's annoying me that I can be so close but can't see anything except messageboxes for now... progress is being made nonetheless.

Today I reworked the FrameLoader again so that it now expects the ParentFrame to be NULL for the first iteration. This tells the FrameLoader that the current FrameNode is in fact the Root and should be recorded as such in the containing SkinMesh structure. We no longer create a default RootFrame in SkinMesh_Create before calling the first iteration of FrameNode_Load. We have faith that the first object to be parsed will in fact always be a FrameNode.

As before, the FrameLoader / XFile Parser code seems to be 100%.

The problems arise when we attempt to "associate animation nodes with bones by name". I'll keep hammering away at it. Anyone want an update?

(giggles at the interection typo)

Another method is bounding volumes. In the game we're making at my work, we have tanks. The base of the tank has one box and the turret has another box. Those are our collision volumes. It is faster than trying to check against a complicated mesh. It is very common for games to use this technique. The volumes can also double as physics bodies. These objects are hidden.

BoundingBoxes (or "volumes", as you called them) are OK if you don't plan on rotating anything, or if you can apply them to the untransformed object(s), but they have problems when you rotate the object and not the bounding box... they will not detect the corners hanging outside the BB.

On the other hand, a BoundingSphere is the opposite: it takes in the complete volume of the object at any orientation, and as such is definitely not suited to certain jobs, like where the object has one dimension much longer than the others (long, thin objects).

The compromise is the use of "ellipsoids" rather than spheres.

Like a BoundingBox, the BoundingEllipse has three vector values.

They define three radii, and all other points on the ellipsoid can be calculated using (preferably spherical) interpolation (aka slerping).

Of course, if your object is reasonably spherical, it would be the obvious choice to use a plain boundingsphere (math is fastest).

Again, I draw your attention to the use of an extended bounding sphere as a means to probe for proximity, before falling back onto something more robust.

There's no reason not to begin checking for mesh intersections once we are certain that there's a chance of that being true - but we can leave it at the first check if it fails - objects that aren't close aren't going to intersect :)

Rotation is not a problem. Using your 3D app, link the bounding box to its parent object. Make sure the pivots are the same. In your engine, go through the object hierarchy; if the parent rotates or moves, apply the same transforms to its children. In our engine the tank turret does not rotate. There is a dummy object (a single point) that is the parent object. The tank turret and bounding box are linked as children to the dummy. They inherit the transforms applied to the parent (the dummy).

I thought that rotation was always a problem, because if you don't rotate, you just compare the point's coordinates to the coordinates of the sides of your bounding box. If the box is rotated, you have to do additional calculations to determine whether the point is inside or outside it.
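That unrotated comparison is just six range checks against the box's min and max corners (a sketch):

```python
# Sketch: point vs axis-aligned bounding box - two comparisons per axis.

def point_in_aabb(p, box_min, box_max):
    return all(box_min[i] <= p[i] <= box_max[i] for i in range(3))
```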

Rotations are never a problem for a boundingsphere.

As for boundingboxes, they can be handled too.

Imagine if you will that your object keeps track of its own rotation values.

Imagine further that we keep the object in an unrotated state.

We apply the rotation values using matrix math, right before we draw the object.

This way, our rotation is not being "compounded", our rotations are always based on the unrotated object. This means we are continually recalculating the local rotation matrix of the object. That being the case, we COULD apply the same rotation matrix to the boundingbox, but this IS NOT NECESSARY.

We can perform UNROTATED tests on the UNROTATED object in its UNROTATED bounding box, if we apply INVERSE ROTATION to the OTHER element being tested.

In a situation where we want to test a world-relative ray against a rotated object, imagine that we calculate the INVERSE ROTATION MATRIX for the object and apply it to the ray instead. Now we have a situation where the ray has been rotated about the object origin by MINUS the object's rotation angle(s), so we can perform the test without rotating the object (which might contain hundreds of thousands of faces). If we think about it more, we don't even need an inverse matrix either: if we only want to rotate a ray about an origin by known angles, we can use TRIG to do it, using the old-fashioned SINCOS rotation formula.
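A sketch of that trick for a single Y rotation (names are illustrative): rotate the test point by minus the object's angle with the plain sin/cos formula, then run the cheap axis-aligned check on the unrotated box.

```python
import math

# Sketch: inverse-rotation trick. Instead of rotating the box, rotate
# the test point by MINUS the object's Y angle, then test axis-aligned.

def rotate_y(p, angle):
    s, c = math.sin(angle), math.cos(angle)
    # Classic SINCOS rotation about the Y axis.
    return (c*p[0] + s*p[2], p[1], -s*p[0] + c*p[2])

def point_hits_rotated_box(p, box_min, box_max, y_angle):
    local = rotate_y(p, -y_angle)            # inverse rotation of the point
    return all(box_min[i] <= local[i] <= box_max[i] for i in range(3))
```

The same idea extends to rays: inverse-rotate the ray origin and direction, then test against the untransformed mesh or box.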

I love all these math tricks.:grin: :alright:

Another tip...

Generally, it's better to make your bounding boxes a bit smaller than your mesh geometry, rather than a bit larger.

Not for any mathematical reason, but because players get far more frustrated if they keep getting hit by shots that they know missed them, than if they keep getting missed by shots they thought were going to hit them for sure.

Interesting way of putting it :grin:

If you use a BoundingBox check for hit detection, it should be a primary check before performing an actual intersection check :) You have a good point: a ray might pass through the bounding box without hitting the mesh within it. Should we rely on the BoundingBox check alone? Nope :) We should use it as a trigger for our more accurate and cycle-expensive intersection tests.
