There was a mathematician named Hermann Minkowski who found a way to perform math operations on 3D convex shapes.

He is most famous for an operation called the Minkowski Sum, which works something like this:

For two bodies A and B:
    For X = each point in A
        For Y = each point in B
            Q = X + Y
            Add Q to list
        EndFor
    EndFor

For collision detection purposes, we are most interested in a related operation called the Minkowski Difference.

Instead of collecting a list of the sum of all combinations of two points, we collect a list of the differences:

Q = X - Y

Furthermore, having generated this 'cloud of points', we are actually only interested in those points which lie upon the 'surface', forming a 'convex hull'... this is known as the 'surface' of the Minkowski Difference.
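A quick sketch of those two nested loops in Python (illustrative names of my own, not from the original):

```python
# Illustrative sketch: build the Minkowski Sum and Difference "clouds of points"
# for two bodies given as lists of (x, y, z) tuples.

def minkowski_sum(A, B):
    """Every point of A plus every point of B."""
    return [(ax + bx, ay + by, az + bz)
            for (ax, ay, az) in A
            for (bx, by, bz) in B]

def minkowski_difference(A, B):
    """Every point of A minus every point of B."""
    return [(ax - bx, ay - by, az - bz)
            for (ax, ay, az) in A
            for (bx, by, bz) in B]

# Two unit segments along x, one starting at the origin, one at x = 0.5:
A = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
B = [(0.5, 0.0, 0.0), (1.5, 0.0, 0.0)]
diff = minkowski_difference(A, B)
# The segments overlap, so the origin falls within the span of the difference:
xs = [p[0] for p in diff]
assert min(xs) <= 0.0 <= max(xs)
```

Note that the real shapes have infinitely many surface points; the cloud built from their vertices is only the hull's skeleton, which is why GJK never builds this list explicitly.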

Please watch this video, I've watched it several times.

http://mollyrocket.com/849

This collision test can be optimized using our prior knowledge of the input geometries, it works for any pair of convex 3D objects at any orientation, and it's faster than it sounds.

This is gold.

The algorithm is based upon a particular property of the Minkowski Difference.

If the Origin of the World is inside the 3D convex hull we know as the Minkowski Difference, then the two input polygons are intersecting.

The GJK algorithm uses an incremental search to find a 3D tetrahedron which is formed from the points of the Minkowski Difference and which encloses the Origin.

If we can find a tetrahedron which encloses the Origin, then the two shapes intersect.

For each shape, we write a function which, given a direction, can find the bounding point furthest along in that direction. This is our 'Support' function. That is the core function upon which this algorithm is based.

In order to test for intersection of two bodies, it is sufficient to begin with any point that lies upon the surface of the Minkowski Difference - ie, "any point in A minus any point in B".

Luckily, I've already provided functions to transform vectors from worldspace to bodyspace and back again.

This means we can transform a WorldSpace direction into each BodySpace in order to find the maximal point in any given direction.

In bodyspace, how do we find the furthest point in a given direction?

We simply use a DotProduct (direction, point).

If it's negative, then the point is in the wrong direction (it's backwards), so it's not useful.

And if it's positive, we want the one that is most positive.

We don't need to normalize anything, we don't care about scale, just direction.

So the 'Support' function can be very optimized based on the shape.
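A sketch of exactly that idea (Python, names my own): the support point of a vertex cloud is simply the vertex with the most positive dot product against the direction, and no normalization is required.

```python
# Support point of a point cloud: the point with the largest dot product
# against the direction d. Scale of d doesn't matter, only its direction.

def support(points, d):
    return max(points, key=lambda p: p[0]*d[0] + p[1]*d[1] + p[2]*d[2])

cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
assert support(cube, (1.0, 1.0, 1.0)) == (1, 1, 1)
assert support(cube, (-1.0, 0.5, -0.5)) == (-1, 1, -1)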

In order to search for a tetrahedron which encloses the origin, we use a thing called a Simplex.

It is a set of 2, 3 or 4 points taken from the Minkowski Difference.

You can think of it as an Edge, a Triangle or a Tetrahedron.

This list of points is central to our search for a tetrahedron which encloses the Origin.

We use it to 'walk' the surface of the Minkowski Difference, always seeking the direction of the Origin from our current Simplex.

It can return the penetration / separation distance for the pair, which is very nice since we can then apply our Penetration Correction code in order to move the bodies back out of penetration based on their linear and angular motions, and accelerate our search for Time of Impact.

As I mentioned, we're interested only in the points of the Minkowski Difference which lie upon its (convex) surface (and not any points which fall inside its volume).

For two input bodies A and B, this means we're interested in the most extreme points of A and B that are in opposite directions in worldspace ... if we're given a direction D in worldspace, and we find the most extreme point on the (oriented) body A in that direction, then we must also grab the most extreme point on B in the MINUS D direction, then subtract B from A. This guarantees that the point we calculated is on the very outside boundary of the surface represented by the Minkowski Difference.

As Casey points out in his video presentation, it is sufficient to begin the GJK algorithm with any point on the surface of the Minkowski difference - ie, any point in A minus any point in B is good enough to start with.

But bearing in mind that our goal is to attempt to enclose the Origin with a tetrahedron, we can pick our starting point with a little more intelligence, by selecting a sensible Direction for our first call to our Support function.

For 'swept' GJK testing, we might use the vector of the relative motions of A and B.

For 'instantaneous' testing, simply the vector between their origins.

Again, as Casey points out, it shouldn't make a whole lot of difference, since this algorithm is very good at 'converging' upon a solution within a small number of iterations (search steps).

In regards to the Support function, I mentioned that we can optimize this function based on our prior knowledge of the specific geometry we're working with.

Let's have a look at my Support function for a Box:

;Given a Direction vector (in WorldSpace),
;this method returns the bounding point which is furthest in that Direction.
;The point is returned in WorldSpace.
;** Returns Vec3 on FPU
Method CollisionBox.Get_GJK_Support,uses esi,pvDirection_WorldSpace
    LOCAL vDir:Vec3
    SetObject esi
    ;Transform the Direction vector from WorldSpace to BodySpace
    mov edx,pvDirection_WorldSpace
    Mat33_TransMult_Vec3 [esi].OldState.Orientation,[edx].Vec3 ;unrotate
    Vec3_Stow vDir
    ;Select the Support vertex via the sign of the axes
    fld [esi].fHalfWidth
    .ifBitSet vDir.x, BIT31
        fchs
    .endif
    fld [esi].fHalfHeight
    .ifBitSet vDir.y, BIT31
        fchs
    .endif
    fld [esi].fHalfLength
    .ifBitSet vDir.z, BIT31
        fchs
    .endif
    ;Transform the 'best' point back into WorldSpace
    Vec3_Stow vDir
    OCall esi.Transform_Vec3_BodyToWorld, addr vDir
MethodEnd

We can see that I choose to transform the Direction vector into BodySpace so that I can more intelligently select the point which is furthest in that direction.

Once I've got the direction relative to the origin of the Box (rather than the World), I can just check the sign of each axis in the vector to determine which 'octant' we're pointing at, and therefore, which corner vertex we want.
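The same octant trick as a Python sketch (a hypothetical helper of my own, operating on a direction that has already been transformed into bodyspace):

```python
# Octant trick: once the direction is in bodyspace, the sign of each axis
# picks the correct corner of the box directly - no loop over vertices.

def box_support_bodyspace(half_w, half_h, half_l, d):
    return (-half_w if d[0] < 0 else half_w,
            -half_h if d[1] < 0 else half_h,
            -half_l if d[2] < 0 else half_l)

assert box_support_bodyspace(1.0, 2.0, 3.0, (0.3, -0.7, 0.1)) == (1.0, -2.0, 3.0)
```

In the real method the resulting corner would then be rotated and translated back into worldspace, as the assembly above does.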

For polygons whose geometry is arbitrary, we have another test.

Because we're keeping a copy of the WORLDSPACE (oriented and translated) bounding points for each Body (updated by the Integrate method), we're able to work directly in WorldSpace.

We will find the Dotproduct of the Direction and each Point.

We wish to find the Point which produces the most positive Dot, as it is the point furthest in that direction.

Points which produce a negative dot are 'behind' our direction, ie, in the wrong direction, and not interesting to us.

;Given a Direction vector (in WorldSpace),
;this method returns the bounding point which is furthest in that Direction.
;Point is returned in WorldSpace.
;** Returns Vec3 on FPU
Method CollisionPolygon.Get_GJK_Support,uses ebx esi edi,pvDirection_WorldSpace
    LOCAL fmax:real4
    LOCAL pmax
    LOCAL fCur:real4
    SetObject esi
    mov edi,[esi].pOwner
    xor ebx,ebx
    mov fmax,ebx
    .while ebx<[edi].CollisionBody.BoundingPoints.dCount
        mov eax,sizeof Point
        mul ebx
        add eax,[esi].pWorldBoundingPoints
        mov edx,pvDirection_WorldSpace
        Vec3_Dot [eax].Vec3,[edx].Vec3
        fst fCur
        .ifBitClr fCur,BIT31 ;negative is not useful to us - direction is backwards
            fsub fmax
            fstpReg edx
            .ifBitClr edx,BIT31 ;if cur-max is positive, then cur>=max
                mov pmax,eax ;we got a new 'best candidate'
                m2m fmax, fCur
            .endif
        .else
            fstp st(0) ;discard the negative dot, keeping the FPU stack balanced
        .endif
        inc ebx
    .endw
    mov eax,pmax
    Vec3_Load [eax].Vec3
MethodEnd

Even though the geometry of the convex polygon is not clearly defined, we are nonetheless able to find the maximal bounding point in a given direction :)

We can go ahead and write Support functions for all the geometric primitives that we might use, eg capsules, cones, ellipsoids etc, even though their surfaces are only described 'implicitly' and not as a set of bounding points. As long as we're able to return a point on the surface of the body in a given direction, we're happy :)
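For example, a sphere's support function needs no vertex list at all. A hedged sketch of my own (not from the original); note this is the one case where we do normalize, because we step a fixed radius along the direction:

```python
# Support function for an implicitly-described sphere: the farthest point in
# direction d is the centre plus the radius along the (normalized) direction.
import math

def sphere_support(center, radius, d):
    length = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
    return tuple(c + radius * di / length for c, di in zip(center, d))

assert sphere_support((0.0, 0.0, 0.0), 2.0, (0.0, 5.0, 0.0)) == (0.0, 2.0, 0.0)
```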

So, we now know how to write the Support function for each 3D shape.

If we're implementing GJK, we need a function that works for a pair of arbitrary bodies.

This new function must return a Support point for the Minkowski Difference of two bodies.

So, let's rephrase the problem.

Given two Bodies, and a Direction vector in worldspace, return a point on the surface of the Minkowski Difference (MD).

What we want is the support on A in Direction D, minus the support on B in Direction MINUS D.

vA = A.Support(D)    ... support on A, direction D
vB = B.Support(-D)   ... support on B, direction -D
vC = vA - vB         ... support on the Minkowski Difference, direction D

Now that we can see clearly how to find Supports on the MD surface, we can start to examine the rest of the algorithm.
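Those three lines can be sketched in Python like so (`cloud_support` is a little helper of my own that turns a point list into a per-shape Support function):

```python
# Support on the Minkowski Difference: support on A in direction D, minus
# support on B in direction -D.

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cloud_support(pts):
    return lambda d: max(pts, key=lambda p: dot(p, d))

def md_support(support_A, support_B, d):
    vA = support_A(d)                          # support on A, direction D
    vB = support_B((-d[0], -d[1], -d[2]))      # support on B, direction -D
    return (vA[0] - vB[0], vA[1] - vB[1], vA[2] - vB[2])  # support on MD

# A unit box at the origin and a unit box pushed out along +x:
A = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
B = [(x + 2.0, y, z) for (x, y, z) in A]
# Farthest MD point along +x: (max x of A) - (min x of B) = 1 - 2 = -1
assert md_support(cloud_support(A), cloud_support(B), (1.0, 0.0, 0.0))[0] == -1.0
```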

The complete GJK algorithm has three parts.

So far we've taken a close look at the Support function (for MD, and its subfunction for each shape).

The main part of the GJK algorithm is a closed loop with two exit conditions.

Let's take a look at it.

I'm writing the pseudocode for this; look below for a step-by-step explanation.

1.  S = Support(?)
2.  [] = S
3.  D = -S
4.  LOOP
5.      A = Support(D)
6.      IF Dot(A,D) < 0
7.          (exit - there is no intersection)
8.      ENDIF
9.      [] += A
10.     IF DoSimplex([],D) = TRUE
11.         (exit - there IS an intersection)
12.     ENDIF
13. UNTIL 0

1: find S = a Support on the MD in ANY DIRECTION.

2: add this Point to a container that will represent our current Simplex.

3: set D = -S ... given that S is a point on the surface of the MD, we can imagine a scaled vector from the World Origin in the direction of S... remember, GJK doesn't need normalized direction vectors; scale is not important. D is the direction that goes from S back toward the World Origin. From now on I'll just say 'Origin', meaning the World Origin.

4. Begin an infinite Loop.. we'll break out of it if we need to.

5. find A = a Support on the MD in the specific direction D

This is the point that is furthest away from S in the direction of the origin.

6, 7, 8: Use a dot product to compare our current search direction D with the direction (from the Origin to) A.

If it's negative, then S and A are on the same side of the Origin - we never actually reached the Origin, so we can't possibly enclose it... there can't possibly be any intersection of this pair of bodies, so we're done.

If it's positive, then S and A are on opposite sides of the Origin, and so we'll continue.

9. Add the new point to our Simplex

10, 11, 12: Perform the magical Simplex function, handing it our Simplex and D(irection).

If the result is TRUE, then bodies A and B are intersecting, we're done.

13. Repeat loop until the cows come home.
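To tie the loop together, here is a compact end-to-end Python sketch. The `do_simplex` in it is a minimal textbook version of my own making (the posts below develop the real cases one by one); it is NOT the author's code, and it ignores degenerate situations such as touching contact or zero-length directions:

```python
# Hedged sketch of the complete GJK boolean test for two convex point clouds.

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def neg(a):    return (-a[0], -a[1], -a[2])
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def triple(a, b, c):               # (a x b) x c
    return cross(cross(a, b), c)

def support(pts, d):               # per-shape support: point with max dot
    return max(pts, key=lambda p: dot(p, d))

def md_support(A, B, d):           # support on the Minkowski Difference
    return sub(support(A, d), support(B, neg(d)))

def do_simplex(simplex, d):
    """Update the simplex and direction; True once the origin is enclosed."""
    if len(simplex) == 2:                        # Edge [B, A], A newest
        B, A = simplex
        AB, AO = sub(B, A), neg(A)
        if dot(AB, AO) > 0:
            return False, [B, A], triple(AB, AO, AB)
        return False, [A], AO
    if len(simplex) == 3:                        # Triangle [C, B, A]
        C, B, A = simplex
        AB, AC, AO = sub(B, A), sub(C, A), neg(A)
        n = cross(AB, AC)                        # plane normal
        if dot(cross(n, AC), AO) > 0:
            if dot(AC, AO) > 0:                  # keep edge AC
                return False, [C, A], triple(AC, AO, AC)
            return do_simplex([B, A], d)         # fall back to edge AB
        if dot(cross(AB, n), AO) > 0:
            return do_simplex([B, A], d)         # fall back to edge AB
        if dot(n, AO) > 0:                       # origin in front of triangle
            return False, [C, B, A], n
        return False, [B, C, A], neg(n)          # behind: reverse the winding
    D_, C, B, A = simplex                        # Tetrahedron [D, C, B, A]
    AO = neg(A)
    for P, Q, R in ((B, C, D_), (C, D_, B), (D_, B, C)):
        n = cross(sub(P, A), sub(Q, A))          # face (A, P, Q)
        if dot(n, sub(R, A)) > 0:
            n = neg(n)                           # orient n away from R
        if dot(n, AO) > 0:                       # origin outside this face
            return do_simplex([P, Q, A], d)
    return True, simplex, d                      # origin enclosed!

def gjk_intersect(A, B, max_iters=32):
    S = md_support(A, B, (1.0, 0.0, 0.0))        # 1: support in ANY direction
    simplex, d = [S], neg(S)                     # 2-3
    for _ in range(max_iters):                   # 4
        P = md_support(A, B, d)                  # 5
        if dot(P, d) < 0:                        # 6-8: never reached the origin
            return False
        simplex.append(P)                        # 9
        hit, simplex, d = do_simplex(simplex, d) # 10-12
        if hit:
            return True
    return False                                 # sketch: give up, assume no hit
```

With two cubes given as corner lists, `gjk_intersect` returns True for an overlapping pair and False for a separated pair.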

In my next post, we'll look at the third and final part of this algorithm, the magical and mysterious Simplex function, where we'll learn exactly what a Simplex is, because we'll be using it.

For the purposes of GJK, a Simplex is a group of two, three, or four points which describe an Edge, a Triangle or a Tetrahedron respectively.

The GJK algorithm searches the surface of the MD for a Tetrahedron which encloses the Origin. It does this by building an edge that brackets the origin, then a triangle, then trying to find a fourth point to form a tetrahedron which encloses the origin, trying new points at the same level of complexity, or falling back to the next lower order of complexity, if the origin is not enclosed.

The Simplex describes the current test geometry during this search.

If you look back to the main algorithm in the previous post, you'll notice that when we first call the DoSimplex function, our Simplex contains two points.

In fact, it will never contain fewer than two points, and never more than four.

Rather than describing what DoSimplex does, I'll begin by describing its input and output.

Given a Simplex of 2,3 or 4 points and our current D(irection of search), it will return a new Simplex, and it may return a new D.

So, how does DoSimplex work, and what does it look like?

Basically, inside this function will be some code to handle each of the three levels of complexity of the input simplex ... ie, there will be code to deal with an input Edge, code to deal with an input Triangle, and code to deal with an input Tetrahedron.

I'll use Casey's naming convention, which is that 'Point A is the most recent point obtained from the Support function'.

Assuming we have just started the algorithm, the very first point we obtained is B, and the second point (obtained in the direction from B to the Origin) is A.

    |         |
    B---------A
  3 |    2    | 1

See that we can visualize three 'Voronoi Regions'.... it might not have occurred to you that even an Edge can be broken into Voronoi Regions, hehe...

We wish to determine which Region the Origin lies within.

It's not immediately obvious, but the Origin cannot possibly be in Region 3.

B is the first point we chose, and A was chosen to be in the direction of the origin.

So we know that the origin cannot be 'behind' B.

It might be somewhere between A and B (region 2), or somewhere on the other side of A (region 1). But we can totally forget about region 3.

In order to find out which region the origin is in, we'll cast a ray backwards from A to B... we'll compare the direction of AB to the direction of AO.

"Is the arrow from A to B pointing roughly the same way as the arrow from A to Origin?"

Our pseudocode for this test is:

IF Dot(AB, AO) > 0 (the direction from A to the Origin is similar to the direction from A to B, therefore the Origin is between A and B)
    D = AB x AO x AB (new direction is perpendicular to AB, in the plane of AB and AO, pointing toward the Origin)
    [] = [A,B] (no change to the simplex - we're happy with this Edge, and we want to try to add a third point to form a Triangle)
ELSE (direction AO is NOT similar to direction AB, so the Origin is NOT between A and B - it's somewhere beyond A; we failed to enclose the origin with this simplex)
    D = AO (new direction is from A toward the origin)
    [] = [A] (we reduced the simplex - we discarded the OLD point, and we want to try a new Edge starting at the newer point A)
ENDIF

OK, so if the Origin fell into region 2 (ie between A and B), we'll set a new search direction perpendicular to the existing edge, and when we return and add a new point in that direction, we are actually trying to form a triangle around the origin.

But if the Origin fell into region 1 (the far side of A), then edge BA did not reach the Origin yet, so we'll correct our direction to AO (from the new point toward the origin), and we'll dump our older point B from the simplex. When we return and add a new point in the new direction, we'll actually be forming a new Edge whose endpoints are more likely to fall on either side of the Origin (ie the previous case, region 2).

Note that we are once more using the sign of a dot product when comparing direction vectors, just like we do in the main algorithm.

So, there you have it, that is all we need to do for the Edge Simplex.
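The Edge pseudocode above translates almost line for line into Python (a sketch with helper names of my own):

```python
# Edge-simplex handler: given the old point B and the newest support point A,
# return the updated simplex and the new search direction.

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def edge_case(B, A):
    """A is the newest support point. Returns (new_simplex, new_direction)."""
    AB = sub(B, A)
    AO = (-A[0], -A[1], -A[2])
    if dot(AB, AO) > 0:
        # Region 2: origin is between A and B - keep the edge, and search
        # perpendicular to it, toward the origin
        return [B, A], cross(cross(AB, AO), AB)
    # Region 1: origin is beyond A - drop B, search straight toward the origin
    return [A], AO

# Origin sits between B and A, below the edge along y = 1:
simplex, d = edge_case((-2.0, 1.0, 0.0), (2.0, 1.0, 0.0))
assert len(simplex) == 2                 # edge kept
assert dot(d, (0.0, -1.0, 0.0)) > 0      # new direction points toward the origin
```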

You can now understand how the main algorithm will either try another edge (same complexity), or form a triangle (higher complexity).

My next post will be about handling the Triangle Simplex.

Below is a Triangle - excuse the ASCII art.

I'm going to draw it the same way that Casey did, however it should be mentioned that the conventional winding order for triangles is COUNTER-clockwise... this will matter when we need to know which is the 'front' of the triangle and which is the 'back' of it.

Anyway, here's our triangle. I guess we're looking at the back of it, because its winding order is clockwise...

  \ 6 /
   \ /
    C             1
    +  +         /
    +     +     /
 7  +  2,3   + A  - -  5
    +     +     \
    +  +         \
    B             4
   / 8 \

Point A is the most recently added point, and points B and C are the input edge (ie A and B in the previous posting).

Although there are eight Voronoi Regions for a Triangle, for this algorithm we can safely ignore regions 6, 7 and 8... if B and C are the points of our old edge, and A was selected in the direction of the origin, then the origin CANNOT be behind the BC edge - it's somewhere towards A.

So let's see the algorithm which determines which of the five remaining regions the origin is within... note that 'x' is a CrossProduct operator, and pairs of letters indicate EDGES, ie AB is EDGE AB = B - A, OK?

vABC = AB x AC                   ;Plane Normal
IF Dot((vABC x AC), AO) > 0      ;is the origin above edge AC?
    IF Dot(AC, AO) > 0           ;is the origin between A and C?
        [] = [A,C]               ;region 1
        D = AC x AO x AC
    ELSE                         ;origin is in region 4 or 5
        GOTO CASEY
    ENDIF
ELSE                             ;origin is below edge AC
    IF Dot((AB x vABC), AO) > 0  ;is the origin in region 4 or 5?
        GOTO CASEY
    ELSE                         ;origin is in region 2 or 3
        IF Dot(vABC, AO) > 0     ;is the origin in front of the triangle?
            [] = [A,B,C]         ;region 2
            D = vABC
        ELSE
            [] = [A,C,B]         ;region 3 (winding reversed)
            D = -vABC            ;note sign
        ENDIF
    ENDIF
ENDIF
GOTO DONE

CASEY:
IF Dot(AB, AO) > 0               ;is the origin between A and B?
    [] = [A,B]                   ;region 4
    D = AB x AO x AB
ELSE
    [] = [A]                     ;region 5
    D = AO
ENDIF

DONE:

Just like last time, we're performing dot products to find out which region the origin is within, and we're using cross products to generate new directions perpendicular to existing edges.
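The same logic as a Python sketch (my own translation; `edge_case` is the Edge handler from the previous post, reproduced here so the snippet stands alone):

```python
# Triangle-simplex handler: C, B, A with A the newest point.
# Returns (new_simplex, new_direction).

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def neg(a):    return (-a[0], -a[1], -a[2])
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def edge_case(B, A):
    AB, AO = sub(B, A), neg(A)
    if dot(AB, AO) > 0:
        return [B, A], cross(cross(AB, AO), AB)
    return [A], AO

def triangle_case(C, B, A):
    AB, AC, AO = sub(B, A), sub(C, A), neg(A)
    vABC = cross(AB, AC)                 # plane normal
    if dot(cross(vABC, AC), AO) > 0:     # origin above edge AC?
        if dot(AC, AO) > 0:              # region 1: keep edge AC
            return [C, A], cross(cross(AC, AO), AC)
        return edge_case(B, A)           # region 4 or 5 ("CASEY")
    if dot(cross(AB, vABC), AO) > 0:     # region 4 or 5 ("CASEY")
        return edge_case(B, A)
    if dot(vABC, AO) > 0:                # region 2: in front of the triangle
        return [C, B, A], vABC
    return [B, C, A], neg(vABC)          # region 3: behind, reverse the winding

# Origin directly in front of a triangle lying in the z = -1 plane -> region 2:
simplex, d = triangle_case((0.0, 1.0, -1.0), (1.0, -1.0, -1.0), (-1.0, -1.0, -1.0))
assert len(simplex) == 3
assert d[2] > 0      # new search direction points from the plane toward the origin
```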

In my next post, we'll look at the handler for tetrahedrons. Then we will have seen the complete GJK algorithm in pseudocode, and hopefully we'll have learned enough to implement it in our language of choice :)

OK, here's the test that Casey didn't provide..

Attached is an image which shows a tetrahedron.

The black triangle (BCD) at the bottom of the tetrahedron is the input triangle (simplex).

The red edges lead to a new point which is in the direction of the origin (from the triangle).

This construction has four triangular faces... one we've seen before, and three new ones.

They are ABC, ACD and ADB. We will use some point/plane tests to determine which region the origin lies within.

Yet again, there are subspaces where the origin simply cannot be.

We know that it can't be below triangle BCD because we already tested that.

So we only need to perform tests against the remaining three planes, ie, the new planes of the new triangles.

So far we've only been interested in the Surface Normal of a triangle.

This tells us the direction which the Plane is facing, but does not tell us where in space the Plane is located.

In order to find the Plane of a triangle We need to know the Plane Equation:

AX + BY + CZ + D = 0

Here (A, B, C) is the triangle's normal (not its points), (X, Y, Z) is any point on the plane, and D (for a unit-length normal) is the signed distance from the world origin to the plane.

Hey, D is exactly what we need .. Plane Distance to origin... we won't need to actually test point versus plane, we already have what we need.

We just need to rearrange it to get:

D = AX + BY + CZ, where (X, Y, Z) are the coordinates of any vertex of the triangle and (A, B, C) is the triangle's normal. (Strictly this is the plane equation's D negated, but only the sign matters for our test.)

If the sign of D is negative, then the origin is behind this triangle, and so potentially inside the tetrahedron.

But if the sign is positive, then the origin is outside the tetrahedron, we can return FALSE.

We perform this test upon all three new triangles, and if the origin is behind them all, then it is inside the tetrahedron.

If we keep the D values for these three tests, and find the D value for our OLD triangle, we can now determine which side of the tetrahedron the origin is closest to, which tells us the direction AND the depth of the penetration.

We can return TRUE, and we can return the penetration data, which we could use to perform a Penetration Correction that should bring our Bodies to the time of impact (TOI) without needing to search the timestep for it.... or at least to predict the TOI.
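
To make the idea concrete, here's a small Python sketch of the containment part of this test. It sidesteps the winding-order bookkeeping by comparing the origin against the opposite vertex for each face, and it doesn't keep the D values needed for penetration depth; all names are illustrative, not from the original posting:

```python
# Hypothetical sketch: is the origin inside tetrahedron ABCD?
# For each face we ask whether the origin lies on the same side of the
# face's plane as the remaining (opposite) vertex.

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(u, v):
    return (u[0]-v[0], u[1]-v[1], u[2]-v[2])

def same_side(p, q, r, ref, point):
    """True if `point` and `ref` are on the same side of plane (p, q, r)."""
    n = cross(sub(q, p), sub(r, p))
    return dot(n, sub(point, p)) * dot(n, sub(ref, p)) > 0

def origin_in_tetrahedron(a, b, c, d):
    o = (0.0, 0.0, 0.0)
    # Face BCD was already tested in the previous GJK step, but we include
    # it here so the sketch works for an arbitrary tetrahedron.
    return (same_side(a, b, c, d, o) and
            same_side(a, c, d, b, o) and
            same_side(a, d, b, c, o) and
            same_side(b, c, d, a, o))
```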

Now we have all the pseudocode to implement a GJK penetration detector which returns maximum penetration depth.

Rest assured I will post actual code for this, however I will forge on with these postings while I set about implementing and debugging the GJK code.

1. We integrated the system from time zero (T0) to the end of the timestep (T1).

2. We used a sphere sweep test to find the first (TS) and last (TE) contact times of the bounding spheres of body A and body B.

We now know that the SPHERES first touch at TS, and they separate from each other at TE.

Generally, the timeline looks like this:

T0 ...... TS(..penetrating..)TE ...... T1

The time at which the BODIES first make contact (TOI) is somewhere between TS and TE.

Let's use a little intuition here. At the halfway point between TS and TE (we'll call it TM), the bodies are probably at their worst possible penetration, basically overlapping in space... so the TOI is probably somewhere between TS and that midpoint.

We could employ a binary search for the exact TOI which would integrate the bodies to various times while 'shrinking the search window', but let's get serious.

#1 - this is for games, we only aim for plausible results, we're not trying to simulate reality. #2 - our timestep is only one tenth of a second (by default), and a fraction of one tenth of a second is hardly worth worrying about.

So we could just assume that TOI = ((TM-TS)/2) + TS

That would place TOI halfway between TS and TM.

We can be pretty damned sure that the bodies will be penetrating at this time, and we can also be pretty damned sure that the penetration is shallow (the bodies are well within the first fifty percent of the overlapping region of the projection of their bounding spheres).
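
The arithmetic above is tiny, but for completeness here it is as a Python helper (my naming, not the author's):

```python
def estimate_toi(ts, te):
    """Cheap time-of-impact estimate described above: halfway between TS
    and the midpoint TM of the sphere-overlap interval [TS, TE],
    i.e. TS plus 25% of the interval."""
    tm = ts + (te - ts) / 2.0   # deepest penetration is likely around here
    return ts + (tm - ts) / 2.0
```

For a timestep where the bounding spheres overlap from TS = 0.02 s to TE = 0.06 s, this estimates the TOI at 0.03 s.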

So, having found TOI quickly and within a reasonable accuracy, do we actually NEED a GJK collision test at all? After all, it can't actually tell us much about the COLLISION, it can only tell us about the PENETRATION, it can't tell us about multiple CONTACTS, just the maximum penetration depth and direction.

Well, we can still use it.

Given that we have a pretty good idea that the bodies are actually penetrating slightly at our estimated TOI, we can run one pass of our GJK, expecting it to always return TRUE (penetrating), and we can hand the returned max. penetration depth to our Penetration Correction mechanism.

My next post will discuss Penetration Correction.

So, we used GJK to test for penetration at estimated TOI, which is the time that is 50% between TS and TM.

We found a penetration, and measured its depth, and its direction.

How can we correct the error in our TOI estimate?

We need to examine the relative motion of the two bodies.

For each body, we need to find out how much the linear and angular components of its motion contributed to the penetration.

I've already provided code for this, see the Simulator.Correct_Penetration method.

This method will 'wind the bodies backward', so that the Bodies are now 'just touching', and we can perform collision response.

This results in the pair of bodies being glitched very slightly forwards in time (with respect to the rest of the simulation), and we COULD estimate the time correction and adjust for it, but I'm not going to bother since I don't think the order of microcollisions is going to be a huge concern for gaming purposes..

So, we have integrated our bodies to the estimated time of impact. Now we need to check the ACTUAL GEOMETRY for collision, and if the bodies are colliding, we want to generate one or more CONTACTS, where a Contact is a point of collision on both bodies, plus a Normal which describes the direction that will be used to separate the bodies at this contact point.

I'm going to restrict my discussion to Oriented Bounding Boxes for now.

There's a number of ways we could determine the set of contacts for a pair of boxes, and the most obvious is to determine which Points of each box are inside the space occupied by the other box... however, this method is inefficient because we're testing all points of each box against all planes of the other, which is 8 x 6 x 2 = 96 dotproducts, and we'll completely miss 'edge/edge' collisions!!

Another option is to test all Edges of each body against all Planes of the other body, which yields 12 x 6 x 2 = 144 dotproducts. Now we've caught the 'edge/edge' collisions, but we're performing even more tests than before!!

Obviously, we want to perform as few tests as possible, so how can we optimize this?

The answer is to determine the CLOSEST PAIR OF FEATURES from each Body, where a Feature is a Point, an Edge, or a Face.

So, given that our Primary Goal is to find the closest pair of features, then our Secondary Goal is to formulate an algorithm which works for arbitrary geometry rather than just for boxes - we'll just keep that in mind as we write our method for a pair of Boxes, so that we don't have too much trouble supporting other shapes.

In my next post, we'll take a look at how we might find the closest features for a pair of boxes, and how to use this knowledge to determine a set of contacts.

Request For Feedback:

Do you enjoy reading this thread? Is it too complex? Too simple?

Do you find this thread educational? Enlightening? Inspiring? Boring?

Do you feel I cover each topic thoroughly? Do I gloss over some things? Have I completely failed to mention anything?

Please let me know as I will attempt to address any shortcomings.

There's already a well-known algorithm for this, the Lin-Canny algorithm, which tracks the closest pair of features between two bodies... a naive implementation tests the distance between every possible pair of features, which is simply a brute-force approach, and we've already established that brute force is not the way we want to go... it's too slow, too costly.

So again I ask you, how can we optimize this?

Sometimes when a problem seems too difficult to solve, it can be useful to redefine the problem.

So let's take a look at the image I've attached... scary, isn't it? The image shows the Voronoi Regions around a 3D box... unfortunately we can't see the whole box, since it's just a 2D image, but we can imagine the Voronoi regions that we can't see. What matters most is that each Voronoi Region is associated with exactly one Feature (point, edge or face)... if we can find the Voronoi Region (VR) which encloses a test point, then we have found the closest feature to that test point.

We also already established that Voronoi Regions are not as scary as they appear.

Given that the Box is constructed from eight points, six faces and twelve edges, and there is one VR for each feature, that means there must be 26 VR's in total... that's just for this box.

If we were to test all possible combinations of VR's on TWO boxes, we would have 26^2 tests to perform, that's 676 tests... this SOUNDS like the worst method we've encountered so far!

I'm going to show you how we can cut that down to about 6 tests per box, by using the same technique we saw in the GJK simplex resolver.. this solution assumes that the two boxes are within the first 50 percent of the overlapping region of their paths (ie, the penetration, if any, is not extremely deep).

For each box, we'll use planes to cut away the subspaces that are 'not interesting' until we're left with the subspace that contains the origin of the other box. This will yield exactly one Feature for each Box, which are the closest pair of features for this pair of boxes.

In order to understand this better we'll go through the solution for a 2D box, and then build on that to create a 3D solution.

I'll go away and create the 2D art to accompany this, meanwhile I ask you to think about how we might solve it :)

Look at the attached images to understand the following random dribble.

Note that it is expressed in 'Box Space'.

We have a 2D Box which is intersecting some other body.

If we were to extend all the edges of the box, we'd get a grid of 3x3 cells, with the box occupying the middle cell, and the other eight representing voronoi regions for the corners and faces of the box... we wish to know which of the nine regions the ORIGIN of the OTHER body (our test point) is located within..

We begin by creating a vector from the origin of the Box to the origin of the other body.

We examine the components of this vector to determine which Axis has the greatest magnitude, in this example it is the X axis, and the Direction of X is Positive.

We then obtain A = Supporting Face in the +X Direction. I'll explain this more below.

We perform a Point/Plane test using the Plane of A and the "test point".

This carves off all of the VR's except for a few..

We now repeat this process for the remaining axis (or axes if 3D) of greatest magnitude.

Since the example is only 2D, the second round of the algorithm will be performed upon the Y axis.

The final image shows that the top face of the box was selected (by finding the support face in the +Y Direction).

Our 2D example ends here because we've identified the Voronoi region containing the test point: it's the top right one, which is associated with the top right point of the box... that's our closest feature.

But if the point had been below the Plane during this test instead of above it, we would have been forced to perform a third test against the Support face in the Minus Y direction.
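
Here's a minimal Python sketch of the 2D case in box space (an axis-aligned box centred at the origin with half-extents hx, hy). For simplicity it performs both point/plane tests rather than ordering them by axis magnitude; the names are mine:

```python
# Hypothetical 2D sketch: which Voronoi region of the box (the 3x3 grid
# described above) contains the test point, and hence the closest feature?

def closest_feature_2d(px, py, hx, hy):
    outside_x = abs(px) > hx          # point/plane test vs support face in +/-X
    outside_y = abs(py) > hy          # point/plane test vs support face in +/-Y
    sx = 1 if px > 0 else -1
    sy = 1 if py > 0 else -1
    if outside_x and outside_y:
        return ('corner', sx, sy)     # e.g. the top-right point of the box
    if outside_x:
        return ('face', 'x', sx)
    if outside_y:
        return ('face', 'y', sy)
    return ('inside',)                # middle cell: the box itself
```

A test point at (3, 2) against a unit box reports the top-right corner as the closest feature, matching the worked example above.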

The images in my previous post may have reminded you of the process of constructing a BSP Tree.

It is becoming common to generate a Tree which encodes the geometry of an input 3D model, and then using that tree to accelerate our collision detection. Several common trees being used for this include AABB, OBB and Sphere trees. I have decided to work with Sphere trees because spheres are rotationally invariant (as opposed to OBB's), meaning we eliminate rotational transformations when working with these trees, while requiring only a single test for intersection (as opposed to AABB's which need to perform 6 signed axial tests). A major advantage of using trees for collision testing is that we're no longer restricted to convex polytopes - we can use complex (or even illegal) 3D shapes of any kind.

Generating a sphere tree from an input 'triangle soup' can be done in various ways, which all fall under two categories: "top down", and "bottom up"... either way, the goal is to construct a Tree whose 'leaf' nodes contain just one triangle each.

Sphere trees can be generated using any kind of spatial partitioning algorithm, for example we can build an Octree of BoundingBoxes, and convert all the nodes into Spheres when we're finished.

I have decided to use the BSP Tree algorithm as the basis for construction of my Sphere Tree.

The goal of BSP construction is to find a Plane which cuts the input set of triangles into two child sets which contain similar numbers of triangles, while 'cutting' as few triangles as possible.

Most BSP generators will restrict the selection of Plane to the set of Planes represented by the input triangles, and will terminate construction of the tree when the number of triangles in a node is too low, or when the set of triangles forms a CONVEX SET.

This is not sufficient for us, we want to continue until each leaf contains a single triangle, and in the case where a suitable cutting plane cannot be found by conventional means, we'll need another means to generate our child nodes.

It is certainly worth noting that our trees are constructed in BODYSPACE... so when we need to perform collision testing in WorldSpace, we'll need to transform spheres from body B into the space of body A.

Our broadphase (coarse) collision testing has already identified pairs of bodies whose boundingspheres are intersecting. So we know that the spheres at the ROOT NODES of those sphere trees are intersecting.

In order to perform narrowphase collision testing between bodies A and B, and given that we are working in the BodySpace of body A, our algorithm needs to walk both trees at once, with tree A as our 'major' tree, and tree B as our 'minor' tree.. we will walk the nodes of B's tree, transforming their spheres into A's space, and we will test them for intersection against our current node in A's tree.

As long as we find intersection, we will walk the two trees, eventually finding pairs of leaf nodes whose spheres intersect.

These leaf nodes contain a single triangle, so we have detected pairs of triangles which MAY intersect...

Now we have to perform a triangle/triangle intersection test, which may generate intersection points.

Finally, we emit a Contact at each intersection point.

In this way we are able to build a full set of contacts, aka a Contact Manifold, between two arbitrary bodies.

Now we can pass the Contact Manifold to our existing collision response function ;)
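
A compact Python sketch of that dual-tree walk might look like this (the node layout, the `to_a` transform, and all names are my own assumptions, not the author's code):

```python
# Hypothetical sketch of walking two sphere trees at once. Tree A is the
# 'major' tree; every sphere from tree B is transformed into A's body space
# by `to_a` before the intersection test.

class Node:
    def __init__(self, centre, radius, triangle=None, children=()):
        self.centre = centre
        self.radius = radius
        self.triangle = triangle      # set only on leaf nodes
        self.children = children

def spheres_touch(ca, ra, cb, rb):
    d2 = sum((ca[i] - cb[i]) ** 2 for i in range(3))
    return d2 <= (ra + rb) ** 2       # compare squared distance to radii sum

def walk(a, b, to_a, pairs):
    if not spheres_touch(a.centre, a.radius, to_a(b.centre), b.radius):
        return
    if a.triangle is not None and b.triangle is not None:
        pairs.append((a.triangle, b.triangle))   # candidate triangle pair
    elif a.triangle is None:          # descend the major tree first
        for child in a.children:
            walk(child, b, to_a, pairs)
    else:                             # A is a leaf: descend B's tree
        for child in b.children:
            walk(a, child, to_a, pairs)
```

Each (triangle, triangle) pair that survives then goes to the triangle/triangle intersection test to produce contact points.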

In my next post, I'll describe an alternative sub-algorithm for the cases where standard BSP fails to identify a suitable cutting plane, and we'll lay out the complete algorithm for generating a binary sphere tree from a triangle soup :)

BSP's 'cutting-plane selection' sub-algorithm can fail under two circumstances.

The first is that the set of input triangles is completely convex (faces all point outwards) or completely concave (faces all point inwards)... for regular BSP, these are both terminal cases since they both identify a closed subspace (regardless of whether we mean 'all inside', or 'all outside').

The second is that there's only two faces to choose from.

The second case is easy: we just declare each of these triangles as a leaf node in our sphere tree.

But the former case requires more work.

What we'll do is generate all the possible planes that we can construct from a combination of three points, and test all of them to determine the optimal cutting plane using our existing bsptree plane evaluating function.

Given the example of a convex body such as a 3D box, it is clear that none of the faces have a plane which evenly divides the triangles, but we can imagine a diagonal plane which cuts the triangles in half while splitting ZERO triangles (the plane passes through the box diagonally, not cutting any faces).

I'm not particularly interested in optimizing this because it's a PREPROCESSING STEP: it doesn't happen at runtime, so the time it takes is not critical. What matters is that doing more analysis here means LESS analysis is done at runtime - make sense?
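
Since it only runs at preprocessing time, a straightforward Python sketch of the exhaustive search is fine. The scoring weights here are my own guess at what 'optimal' should mean (balance first, few cut triangles); the helpers and names are illustrative:

```python
# Hypothetical sketch: brute-force search for a cutting plane through any
# three distinct vertices, scored by balance and by how many triangles it cuts.

from itertools import combinations

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(u, v):
    return (u[0]-v[0], u[1]-v[1], u[2]-v[2])

def tri_side(tri, p, n, eps=1e-9):
    """+1 in front, -1 behind, 0 if the plane cuts or contains the triangle."""
    s = [dot(n, sub(v, p)) for v in tri]
    if all(x > eps for x in s):
        return 1
    if all(x < -eps for x in s):
        return -1
    return 0

def best_cutting_plane(triangles):
    verts = sorted({v for tri in triangles for v in tri})
    best, best_score = None, None
    for p, q, r in combinations(verts, 3):
        n = cross(sub(q, p), sub(r, p))
        if n == (0, 0, 0):
            continue                          # collinear points: no plane
        sides = [tri_side(t, p, n) for t in triangles]
        score = abs(sides.count(1) - sides.count(-1)) + 3 * sides.count(0)
        if best_score is None or score < best_score:
            best, best_score = (p, n), score
    return best                               # (point on plane, normal)
```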

Now we can begin to lay out the complete algorithm for generating our sphere tree.

I'm pretty certain this algorithm is already in use so I won't give it a name, please tell me what it's called if you recognize it !!

Here is the initializing work we must do before generating our tree:

-Associate each Triangle with its Plane

-Create a Root Node with an empty list of Triangles

-Shove all Triangles into the Root Node

-Call recursive function to generate tree

Here is the recursive algorithm for generating the tree:

-From the input set of triangles, find the triangle whose Plane best divides the remaining triangles.

-Divide the triangles into two child subsets based on which side of the plane they fall within. If a triangle is coplanar, or is cut by the dividing plane, send it to both output lists.

-Place each subset of triangles into a child node... if the child node contains a single triangle, then it is a Leaf node and we don't need to recurse into it... otherwise,

-Recurse each child node until they contain a single triangle, or we reach a special case.

-Calculate the bounding sphere for the current node as the last step before returning to our caller.

The special cases have already been defined, as has the algorithm we will execute in these cases.

It is worth mentioning that we don't need the 'complete' BSP implementation because we NEVER SPLIT TRIANGLES - we end up with a tree where we are guaranteed that each Leaf contains a single triangle, although there may be more than one path which leads to that triangle (i.e., a triangle may exist in more than one leaf node).
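The whole recursion laid out above can be sketched in Python - a hypothetical rendering, not the actual implementation, assuming a simple imbalance-based plane scorer and a crude centroid-based bounding sphere; the leftover convex-set case is left as a stub for the proximity-split algorithm:

```python
# Hypothetical sketch of the sphere-tree builder described above.
# Coplanar and cut triangles go to BOTH child lists, per the algorithm;
# the convex-set special case is stubbed out.
import math
from dataclasses import dataclass

@dataclass
class Node:
    triangles: list
    front: "Node" = None
    back: "Node" = None
    center: tuple = (0.0, 0.0, 0.0)
    radius: float = 0.0

def plane_of(tri):
    p0, p1, p2 = tri
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    d = sum(n[i] * p0[i] for i in range(3))
    return n, d

def side_of(plane, tri, eps=1e-9):
    """-1 = behind, 1 = in front, 0 = coplanar or cut (sent to BOTH lists)."""
    n, d = plane
    s = [sum(n[i] * p[i] for i in range(3)) - d for p in tri]
    if all(abs(v) <= eps for v in s): return 0
    if all(v >= -eps for v in s): return 1
    if all(v <= eps for v in s): return -1
    return 0

def pick_plane(triangles):
    """Find the triangle whose plane best divides the set (least imbalance),
    skipping planes that leave the whole set on one side."""
    best, best_score = None, float("inf")
    for tri in triangles:
        plane = plane_of(tri)
        front = sum(1 for t in triangles if side_of(plane, t) >= 0)
        back = sum(1 for t in triangles if side_of(plane, t) <= 0)
        if front == len(triangles) or back == len(triangles):
            continue                      # fails to divide this set
        if abs(front - back) < best_score:
            best, best_score = plane, abs(front - back)
    return best                           # None => a special case applies

def bounding_sphere(triangles):
    """Crude bounding sphere: mean of all vertices plus max distance."""
    pts = [p for t in triangles for p in t]
    c = tuple(sum(p[i] for p in pts) / len(pts) for i in range(3))
    return c, max(math.dist(c, p) for p in pts)

def build(triangles):
    node = Node(list(triangles))
    if len(triangles) > 1:
        plane = pick_plane(triangles)
        if plane is not None:
            node.front = build([t for t in triangles if side_of(plane, t) >= 0])
            node.back = build([t for t in triangles if side_of(plane, t) <= 0])
        elif len(triangles) == 2:         # two-face special case: two leaves
            node.front = build(triangles[:1])
            node.back = build(triangles[1:])
        # else: convex set -> handled by the proximity-split special case
    node.center, node.radius = bounding_sphere(triangles)  # last step
    return node
```

Note how the skip condition in pick_plane guarantees both child sets are strictly smaller than their parent, so the recursion always terminates.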

We're now ready to see some actual code for implementing this algorithm!

Just before we dive into code, it would be good to understand how we can use these Sphere Trees to accelerate our narrowphase collision detection :P

Our primary goal is to find pairs of Leaf Nodes (from two input trees) whose spheres are intersecting, and thus identify and test pairs of proximate Triangles for intersection.

Our secondary goal is to traverse the two Trees as efficiently as possible, thus performing as few tests as possible.

Our broadphase collision detector finds pairs of bodies whose bounding spheres intersect.

We can therefore safely assume the following:

For any pair of bodies which is handed to the narrowphase test, the Root BoundingSpheres of their Sphere Trees must be intersecting... so we don't need to test this pair of spheres.
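The sphere overlap test that both phases rely on is just a distance comparison - a minimal sketch:

```python
# Two spheres overlap iff the distance between their centers is
# no greater than the sum of their radii.
import math

def spheres_intersect(center_a, radius_a, center_b, radius_b):
    return math.dist(center_a, center_b) <= radius_a + radius_b
```

In practice you'd compare squared distance against the squared radius sum to avoid the square root, but the plain form is clearer here.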

We need a recursive function which attempts to walk BOTH TREES AT ONCE for as long as this is possible, thus eliminating as many BRANCH PAIRS as possible, as early as possible.

The recursive function looks something like this..

Given current nodes of Bodies A and B:

If current node of BodyA is NOT A LEAF
    If current node of BodyB is NOT A LEAF
        Recurse A.Front, B.Front
        Recurse A.Front, B.Back
        Recurse A.Back, B.Front
        Recurse A.Back, B.Back
    Else
        Recurse A.Front, B
        Recurse A.Back, B
    EndIf
ElseIf current node of BodyB is NOT A LEAF
    Recurse A, B.Front
    Recurse A, B.Back
Else
    Test A.Triangle, B.Triangle
EndIf

It works like this: if the current nodes of both trees are non-leaf, walk both trees... if only one node is a non-leaf, walk that tree... if both nodes are leaves (each contains a single triangle), test the triangles for intersection and generate contacts for the intersecting case.

The tricky part which I haven't mentioned is that each of these trees is valid only in the BodySpace of its owner - so we need to choose a common space for the actual tests, and transform the test geometry into that space (I choose to test in the space of Body A).
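Here's a hypothetical Python sketch of that dual walk, with the sphere-overlap test added as the pruning step at each node pair; the actual triangle-triangle intersection test and the transform into Body A's space are left out, and the Node layout is my own illustration:

```python
# Hypothetical sketch of the dual tree walk, pruning any node pair
# whose bounding spheres don't overlap.
import math

class Node:
    def __init__(self, center, radius, triangle=None, front=None, back=None):
        self.center, self.radius = center, radius
        self.triangle = triangle            # set only on leaf nodes
        self.front, self.back = front, back

    @property
    def is_leaf(self):
        return self.front is None and self.back is None

def walk(a, b, out):
    """Collect every pair of leaf triangles whose spheres overlap."""
    if math.dist(a.center, b.center) > a.radius + b.radius:
        return                              # spheres don't touch: prune
    if not a.is_leaf:
        if not b.is_leaf:
            walk(a.front, b.front, out); walk(a.front, b.back, out)
            walk(a.back,  b.front, out); walk(a.back,  b.back, out)
        else:
            walk(a.front, b, out); walk(a.back, b, out)
    elif not b.is_leaf:
        walk(a, b.front, out); walk(a, b.back, out)
    else:
        out.append((a.triangle, b.triangle))  # two leaves: narrow test here
```

Since the broadphase already established that the two root spheres overlap, the very first distance check is redundant, but keeping it in one place makes the pruning uniform at every level.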

There's just one problem left to tackle.

The BSP Tree generating algorithm recursively divides the input 'triangle soup' into a binary tree whose leaf nodes contain 'convex sets' of triangles.

In this context, 'convex' means 'a set of triangles whose surface normals (planes) all point inwards, or all point outwards, with respect to each other'. There is no plane (in the limited set of planes we can extract from the input triangles) which can be used to divide a convex set... and we want our leaf nodes to contain no more than one triangle each, so when our BSP generator 'discovers' a convex set, we need to switch to a new recursive algorithm in order to continue bisecting that convex set.

I've devised my own algorithm for this; however, it's probably not new.

It's a 'top down' algorithm like BSP... at first it might sound like 'a recursion within a recursion' but it is really just a natural extension to the existing BSP generator's recursion. Basically, it works by grouping the triangles according to their proximity to two expanding spheres.

The recursive function for handling convex clusters of triangles in the leaf nodes of a BSP-based tree is as follows:

First, I find the two triangles whose Centroids are furthest apart, and I move each of them into a new collection - let's call them List_A and List_B.

Now I process the remaining triangles with this algorithm:

-While the number of input triangles is greater than zero

--Find the extents and origin of List_A

--Find the triangle whose centroid is closest to List_A

--Move this triangle into List_A

--If the number of input triangles is still greater than zero

----Find the extents and origin of List_B

----Find the triangle whose centroid is closest to List_B

----Move this triangle into List_B

--EndIf

-EndWhile

Now I move the two Lists into two new Child nodes, and I recurse any Child nodes which contain at least 2 triangles.
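A hypothetical Python sketch of this proximity split, interpreting "closest to a List" as closest to the mean of that List's triangle centroids (my reading of the "extents and origin" step):

```python
# Hypothetical sketch of the proximity split: grow two lists from the two
# most distant seed triangles, alternately claiming the nearest remainder.
import math
from itertools import combinations

def centroid(tri):
    return tuple(sum(p[i] for p in tri) / 3.0 for i in range(3))

def origin(tris):
    """Mean of the triangles' centroids, standing in for 'extents and origin'."""
    cs = [centroid(t) for t in tris]
    return tuple(sum(c[i] for c in cs) / len(cs) for i in range(3))

def proximity_split(triangles):
    """Split a convex cluster into two lists grown from the two seed
    triangles whose centroids are furthest apart."""
    seed_a, seed_b = max(combinations(triangles, 2),
                         key=lambda p: math.dist(centroid(p[0]), centroid(p[1])))
    remaining = [t for t in triangles if t is not seed_a and t is not seed_b]
    list_a, list_b = [seed_a], [seed_b]
    while remaining:
        # Grow List_A by the remaining triangle closest to its origin...
        o = origin(list_a)
        t = min(remaining, key=lambda tri: math.dist(centroid(tri), o))
        remaining.remove(t); list_a.append(t)
        if remaining:
            # ...then grow List_B the same way.
            o = origin(list_b)
            t = min(remaining, key=lambda tri: math.dist(centroid(tri), o))
            remaining.remove(t); list_b.append(t)
    return list_a, list_b
```

Each returned list would then go into its own child node, recursing on any child that still holds at least two triangles, exactly as described above.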

Together we've reached a milestone of well over 4000 views in three months - a personal record, and proof that this board is alive and well :)

Attached is my current version of D3D_CollisionMesh.

Derived from my CollisionBody class, it's an extended version of OA32's D3D_Mesh object.

It implements the algorithms I've posted for generating a Sphere Tree from arbitrary input geometry via the 'Extended BSP Tree Generator Algorithm' (I still haven't found this algorithm published elsewhere; however, I still strongly suspect that this particular hybrid algorithm is not new).

Like D3D_Mesh, the class I've provided is designed to act as a 'reference object' from which many 'living instances' can be created... D3D_PhysicalEntity is the class which implements such instances from an existing D3D_CollisionMesh.

Now that we've baked a nice cake, it's time for the frosting.

In my next post, I'll provide collision detection code for pairs of D3D_PhysicalEntities, based upon the previously posted algorithm for dual tree-walking.

If you have any questions at all, please feel free to ask... either in this thread, or via Private Message.

(...)and proof that this board is alive and well :)

...And proof that you can be a lecturer or at least a tutor! ^^