Hi Homer,

With all the files I was able to build the project, and it's running

But I still don't understand how you can use atofp.asm in an unchanged form.

When I don't remove the "END" directive from it, the linker produces the "unresolved external symbol _WinMainCRTStartup" complaint I mentioned earlier.

Only if I remove it am I able to build the project.

What strikes me when I look at the BoneBoxes is the following:

They appear to be longer than the mesh they encompass.

Look at the upperarm, when Tiny is waving.

When the lower arm has an angle of about 90 degrees, the bonebox around the upperarm is much longer than the upperarm itself. It is sticking out.

And the same is true for all other boxes.

I guess this is what you meant by the following:

Basically I need to create a BoundingBox for each Bone.

The orientation of each box is that of the Bone it encompasses.

To be more precise, for each Bone, find the set of vertices affected by this Bone, and find the boundingbox of the set of affected vertices. Then expand the box to include its connection points with other Bones if necessary.

Does this have a special purpose? Why aren't they exactly the same size as the maximum width, height and depth of the mesh belonging to the bones? Why would we want to expand the boxes?

Friendly regards,

mdevries.


Yeah, I agree, the boxes don't seem to be as great a fit as they could be.

Maybe this is due to numerical error introduced when we transform the box points from bonespace to modelspace?

My code does what I described - it finds the min,max coordinates of the boundingbox by examining the affected mesh vertices (and since this model is smooth-skinned, some vertices are affected by more than one bone, and so our boxes overlap, which is fine).
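For readers following along, the min/max scan described here can be sketched in C roughly like this (the Vec3 type and bone_bbox name are mine for illustration, not from the actual demo code):

```c
#include <float.h>

typedef struct { float x, y, z; } Vec3;

/* Compute the min/max corners of the bounding box of the set of
   vertices affected by one bone, by scanning each coordinate. */
void bone_bbox(const Vec3 *verts, int count, Vec3 *outMin, Vec3 *outMax)
{
    Vec3 mn = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
    Vec3 mx = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
    for (int i = 0; i < count; ++i) {
        if (verts[i].x < mn.x) mn.x = verts[i].x;
        if (verts[i].y < mn.y) mn.y = verts[i].y;
        if (verts[i].z < mn.z) mn.z = verts[i].z;
        if (verts[i].x > mx.x) mx.x = verts[i].x;
        if (verts[i].y > mx.y) mx.y = verts[i].y;
        if (verts[i].z > mx.z) mx.z = verts[i].z;
    }
    *outMin = mn;
    *outMax = mx;
}
```

Since smooth-skinned vertices belong to more than one bone, the same vertex would simply be fed into more than one of these scans, which is why the resulting boxes overlap.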

Maybe my math for checking the relative positions of child bones is wrong, causing the boxes to be expanded more than necessary, but the code appears ok to me..

Anyway, it's pretty close to what I imagined it would look like, and the box hull doesn't need to be a perfect fit, especially if we use the box hull as only a preliminary hit-detection test, and perform secondary detection with the subset of faces implied by a subset of affected mesh vertices..

We don't really need to expand the boxes, we're just making sure that for a given box, the points where it joins its parent and child(s) are all inside the box.

The box is represented by a frame; it's a frame of reference.

Normally, these connector points will already fall inside the box without us intervening.

Why do they HAVE to be inside the box, why expand it if we have to do so in order to fulfil the above conditions?

I don't actually know why, it's probably not at all necessary for our implementation, but the whitepaper I'm basing this stuff on mentioned it, so I did it too.

My guess is that the author wanted to ensure that the boxes overlapped at the joints even where a joint is defined outside of the mesh - a perfectly legal strategy - there's no golden law which states that all bones/joints have to be INSIDE the mesh ;)

Here's another sore point.

The whitepaper handles boneboxes by their centers, even though the origin of a given box (eg for rotation) is in fact its connection point with its parent (if any), which is offset some distance from the center of the box.

There's no reason why we have to do this, it just makes the physics code a bit trickier since our boxes are no longer rotating about their own center of mass, and thus we have to make some per-box-point physics calcs that we'd otherwise avoid..we wind up with a physics model that is somewhat less accurate in terms of rotational forces as a result.


When the lower arm has an angle of about 90 degrees, the bonebox around the upperarm is much longer than the upperarm itself. It is sticking out.

Probably because the vertices which would occupy that empty space at the elbow end of the upper arm have been bent around the corner, since they are partly affected by the lower arm ;)

In the BindPose, the model has its arms sticking out horizontally, with no bend in the arm.

This model is smooth-skinned, not rigid-skinned..

If the model was rigidly skinned, each mesh vertex would be affected by exactly one bone, and the bends would be sharper, and the boxes would overlap less or not at all.

Hi Homer,

The whitepaper you're basing your code on - is it J. Adams' book "Advanced Animation with DirectX 9.0" you provided a link for in an earlier post? Or do you mean another source? And if so, is it available on the web?

Friendly regards,

mdevries.


Yes, it's that document I linked to earlier that I'm basing most of this stuff on.

There's little in the way of actual code contained in that document, so I'm basically winging it and using the document as a reference.

I've written other physics simulators so I'm pretty confident that even if I deviate a little from the system described in the document that I can get it working to my satisfaction.

In fact, I'm already overhauling the ragdoll bone structs as I begin to implement the code for initializing the physics system based on the prev and current animation frames, so expect a few changes..

What I have in mind:

I've spent some effort to get everything into "model space" for a reason..

Now I can extract my initial physics values in a common spatial context.

I probably should go further, and get everything into "world space", but I'll get it working first and come back to that when there's more stuff to collide with ;)

Anyway, now I can deduce my physics initial values, which is the hard bit.

Well, it doesn't have to be that hard, there's room to fudge things a little, but doing it wrong will mean the model will jerk harshly when we switch modes.

After that we can relax and just call the Integrate method to update the physics over time, which in turn leads to the boneframe.matCombined being updated.

This means that the standard rendering code is fine to use in ragdoll mode.


Today I wanna talk about building the "inertia tensors".

We'll need the size of each Box as a Vec3, and the Mass as a float.

I'll talk about how to calculate the Mass in a separate post, its easy.

Take the Box's size in x, y and z and use them to obtain "moment of inertia scalars":

xs = vecSize.x * vecSize.x
ys = vecSize.y * vecSize.y
zs = vecSize.z * vecSize.z

Now take the moi scalars and the Mass and use them to obtain the axial moi values:

ixx = Mass * (ys + zs)
iyy = Mass * (xs + zs)
izz = Mass * (xs + ys)

Build your Moment of Inertia Tensor (a 3x3 matrix) as follows:

ixx  0   0
 0  iyy  0
 0   0  izz

Finally, we'll also need the inverse of that matrix, which is:

1/ixx   0     0
  0   1/iyy   0
  0     0   1/izz

We should build these tensors for each Box during the box generation phase, since it's the kind of thing we do once, and it means we won't have to keep the box size vector around, which would otherwise just bloat our struct.
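As a sketch, the whole tensor build described above might look like this in C (struct and function names are illustrative; note that a textbook solid-box tensor would carry an extra 1/12 factor, which the formulas above omit, so this sketch omits it too):

```c
typedef struct { float m[3][3]; } Mat3;

/* Build the diagonal Moment of Inertia Tensor and its inverse from the
   box size (sx, sy, sz) and Mass, following the formulas in the post. */
void build_inertia(float sx, float sy, float sz, float mass,
                   Mat3 *tensor, Mat3 *inverse)
{
    /* moment of inertia scalars */
    float xs = sx * sx, ys = sy * sy, zs = sz * sz;

    /* axial moi values */
    float ixx = mass * (ys + zs);
    float iyy = mass * (xs + zs);
    float izz = mass * (xs + ys);

    Mat3 t   = {{ { ixx, 0, 0 }, { 0, iyy, 0 }, { 0, 0, izz } }};
    Mat3 inv = {{ { 1.0f/ixx, 0, 0 },
                  { 0, 1.0f/iyy, 0 },
                  { 0, 0, 1.0f/izz } }};
    *tensor  = t;
    *inverse = inv;
}
```

Because the matrix is diagonal, inverting it really is just taking reciprocals of the three axial values, which is why it's so cheap to precompute both at box-generation time.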


I'm trying to decide which is the best way to determine the initial angular velocity.

Please correct me/feel free to jump in/if you have something to contribute.

The problem is that since we defined the Moment of Inertia about the 'regular cartesian axes in bonespace', we're now forced to use these standard axes to define the other angular properties - we can't use an arbitrary rotation axis, we must find (for any arbitrary orientation) the rotations about the standard axes which would achieve it.

We have two Orientations of the BoneBox in BoneSpace, kept as Matrices.

We need to find the angular DIFFERENCE between the two arbitrary 3D orientations as a set of 'euler angles' (think yaw-pitch-roll).

I'm pretty sure that performing matrix subtraction (to measure the angular difference) is a no-no, especially since I'm not sure that the orientation matrices are pure rotation matrices, and anyway, it doesn't help us to obtain the change in angle around x, y and z in bonespace..

We're left with no alternative but to do the following:

-decompose both Matrices to be sure we have pure rotation matrices

-extract a set of Euler angles from each rotation matrix

-perform a vector subtraction on the two sets of Euler angles to find the angular change around X,Y,Z axes

Quite expensive really (even if it does only happen when we enable the physics code), anyone have other ideas?

Reading Chris Hecker's physics articles for the Nth time paid off.

I've always found his stuff a bit dry, but there's gold in them thar hills.

His fourth article in particular is great because the material is presented in two parts : Kinematics and Dynamics.

Dynamics is about how Forces move and rotate our 3D objects, it's what our RagDoll physics is all about.

Kinematics is about calculating the Forces required to reach a particular position/orientation.

It should be obvious that our current problem (initializing the physics state to suit the animation state) is a Kinematics problem, not a Dynamics problem - we know how and where things moved, and we need to calculate the implied Forces involved.

Chris mentions somewhere that "if we define the Angular Velocity as 'the current instantaneous axis of rotation, multiplied by the Rotation Speed', then we now have all we need to calculate Angular Momentum and Torque with respect to the axis of rotation."

Since, according to Chris, it's not necessary that the vector axes of the Angular Velocity and of the orientation matrix be the same, it's perfectly ok to calculate the rotation forces with respect to an arbitrary axis - a statement which is in direct conflict with my previous posting.. I recant my posting and bow to a man with a greater understanding than I.

Well, we're not a heck of a lot better off than we were several posts ago, we've just got a new problem to solve.. our animated orientations are matrices, but we need to convert them to an axis/angle representation in order to calculate the instantaneous angular velocity.

The problem for us is that D3D doesn't supply this functionality - we'll need to code our own "rotation matrix to angle/axis" conversion function.

Note that we'll only need to use this when we first enable the bone physics.. after that, we'll use quaternions to represent orientations (as an intermediate during calculations, with the final representation of course being matrix once more).


OK, it's time for me to pick this project up again, and so I'd like to talk about my intentions.

In the current demo, we are able to draw our animated boneboxes in modelspace because we:

1- defined our box points in bonespace

2- already transformed those back into modelspace ('bindpose').

3- are transforming those 'bindpose boxes' using the same 'final transform matrices' that we used to manipulate the mesh.. note: these are currently NOT stored within bones or boneframes, they need to be in a linear array, although we COULD keep pointers.. meh.

It's important to note that these matrices we're using actually define the position, rotation and scale of each box in modelspace.. which is good news for us !! After all, perhaps the most important thing we need right now is a way to calculate the bone orientations at the critical moment when we switched off Animation and enabled our Physics.. and those matrices are our ticket to ride.

The following stuff is done just once, at the "critical moment" described earlier..

We can Decompose those matrices to separate the translation, rotation and scale components, and then we have our Orientation matrix, yay :D

If we want, we can obtain some modelspace Euler angles now, but I'm not so sure we need to.. We are able to convert Orientation matrices directly into Orientation quaternions, and I'm pretty sure that if we do all of the above for the current and previous FinalMatrices (so we have the state at two moments in Time), we should be able to extract change in rotation, position and time, and thus obtain an instantaneous set of acceleration values, and finally, momentum (both linear and angular) due to Mass.

Having done ALL of that, we'll be ready to "unleash the beast" and let our physics simulation run on its own (will it blow up? I'll discuss different kinds of Integration soon..)


I'm not sure exactly how many different Integration algorithms exist, I am neither a mathematician nor a physicist.. however, in terms of physics simulations, I have encountered just three, meaning I can talk about all of them briefly without losing my mind, or your patience.

1A. EULER INTEGRATION : This is the easiest to implement. All others are variants of this.

The Swiss mathematician Leonhard Euler (pronounced 'oiler' - let's at least SOUND like we know what we're talking about) lived from 1707 to 1783..

Leonhard Euler was one of the top mathematicians of the eighteenth century and the greatest mathematician to come out of Switzerland. He made numerous contributions to almost every field of mathematics and was the most prolific mathematics writer of all time. It was said that "Euler calculated without apparent effort, as men breathe...." He was dubbed "Analysis Incarnate" by his peers for his incredible ability.

Unfortunately, his integration algorithm leaves a lot to be desired.. the larger the "timestep" between calculations, the greater the degree of error, with error being compounded over time.. and conversely, if the "timestep" is infinitely small, the error is infinitely small as well...

1B. EULER MIDWAY : This variation on Euler integration works by finding the midpoint between the start and end of the timestep, calculating values for the midpoint and endpoint, and then averaging out the overall error. It effectively halves the amount of error for a given timestep.

2. RUNGE-KUTTA INTEGRATION : This integrator is similar to the Midway variant, but instead of merely calculating one extra point in the middle of each timestep, we now calculate THREE points distributed between the start and end of the timestep, so that we have FOUR lots of calculations per timestep (including the endpoint).. that's why it is sometimes called RK4.

Bumping things up a notch, the values calculated at each point are fed as inputs into the next, so that "the corrections to the error are propagated along the curve" (noting that in physics, almost everything is a curve of some kind). This is a really nice integrator, but it takes 4 times as much calculation as simple Euler, so it's 4 times as slow.. and it basically hates you changing the size of the timestep, so you have to get used to "fixed timesteps"..

3. VERLET INTEGRATION : Developed by French mathematician Loup Verlet in 1967 (relatively recently!), this kind of integrator is often used for "molecular dynamics" simulators.. It has some advantages and it has some drawbacks, and is appearing more and more in games because it's cheaper to implement than RK4, while being almost as accurate (eg it was used for the ragdolls in the recent game 'Hitman').

Among its advantages, it's quite easy to apply Constraints to the "particles" which comprise your objects, and quite easy to "go backwards in time".

Among its disadvantages, you basically need a way to obtain the Position of your object at some small delta into the past or future in order to use it, since Velocity is never calculated in the Current TimeStep.. weird huh..

Like RK4, the Verlet integrator HATES VARIABLE TIMESTEPS.

So, the number one reason why people don't talk about Verlet integration is because this integrator effectively needs you to be able to define the Position of your object in at LEAST TWO moments in time, otherwise the bloody thing is useless..but hey, guess what? Our ragdoll models are suitable !! WE CAN USE THIS !!

Verlet is the only kind of integrator that I personally have NEVER written, and I am tempted to: because of that reason, and also because it represents the best tradeoff between speed and accuracy that we can obtain.
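For concreteness, here is roughly what a single Euler step and a single plain (non-time-corrected) Verlet step look like for one coordinate; the names are illustrative, not from the demo:

```c
/* Explicit Euler: integrate acceleration into velocity, then velocity
   into position. Simple, but the error compounds over time. */
void euler_step(float *x, float *v, float a, float dt)
{
    *v += a * dt;
    *x += *v * dt;
}

/* Basic Verlet: no explicit velocity at all - it is implied by the
   difference between the current and previous positions. */
float verlet_step(float x_curr, float x_prev, float a, float dt)
{
    return x_curr + (x_curr - x_prev) + a * dt * dt;
}
```

The Verlet form is what makes the "two positions in time" requirement mentioned above both a curse and a blessing: you must know x_prev, but you never have to store or initialize a velocity.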

You may be wondering - so where is all this math? Why are you waffling about stuff I can't see again, Homer?

I didn't want to burden you with N sets of exceedingly similar equations.. that would be needlessly scary and/or confusing, I figured it was better to describe the algorithms in historical order, so that you might understand why people have been tinkering with Euler's work ever since the late 1700s :)

I'd also prefer Verlet calculation.

Euler's work is constantly a subject at university - Higher Maths 1-3, then Numerical Methods - usually calculating roughly (and very quickly) complex equations, iirc.

Actually I guess Homer isn't presenting the Maths involved, since it will turn you off immediately unless you've studied it at university. The math is actually simple (a lot simpler than what I studied at school...). Anyway, a quick reference to Chris Hecker's articles is at http://www.d6.com/users/checker/dynamics.htm#articles (the 3 pdfs)

And, time-corrected Verlet is


x[2] = x[1] + (x[1] - x[0]) * (dt[1] / dt[0]) + ax * dt[1] * dt[1];

"float x[3];" - the x coordinate of the object's position. In 3 points: previous position, current, next.

"float dt[3];" - the time passed between the 3 frames.

"float ax" - the acceleration's x

Likewise, we compute the Y and Z coordinates of the object position.
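Putting those pieces together, a time-corrected Verlet update for one coordinate might be sketched in C like so (parameter names are mine):

```c
/* Time-corrected Verlet for one coordinate: the (x_curr - x_prev)
   "velocity" term is rescaled by the ratio of the current and previous
   frame times, so variable timesteps hurt far less than in plain Verlet. */
float tc_verlet(float x_curr, float x_prev,
                float dt_curr, float dt_prev, float ax)
{
    return x_curr
         + (x_curr - x_prev) * (dt_curr / dt_prev)
         + ax * dt_curr * dt_curr;
}
```

With equal timesteps the ratio is 1 and this reduces to the plain Verlet step.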

Though, I still haven't seen code on computing the rotation, constraints and collision-response.. I guess they're just like in Euler-based dynamics engines.

Hmm, the only thing I don't like about usages of Verlet is the proposed Constraints for collision response:


dx = x2-x1;
dy = y2-y1;
dz = z2-z1;
d1 = sqrt(dx*dx+dy*dy+dz*dz);
d2 = 0.5*(d1-r)/d1;
dx = dx*d2;
dy = dy*d2;
dz = dz*d2;
x1 += dx;
x2 -= dx;
y1 += dy;
y2 -= dy;
z1 += dz;
z2 -= dz;

This 0.5 value... they're basically assuming that both objects have the same mass... no... the same inertia.

We don't want our game having a tennis ball move a skyscraper that it hit ^^.

Instead, we should provide dx (and dy, dz) ourselves - the distance to move so that the objects don't penetrate - and Force1 & Force2. Force1 and Force2 (both are Vec3) are derived from mass and velocity. So far, I've done this only with mass involved, on slow-moving objects ^^".

Hmm Homer, maybe it'll be better (than bounding rotated boxes) to use spheres around points of our meshes' bones? And an array of spheres (belonging to one line) when the bone is long? Then, if some sphere of the arm is penetrating the chest (ragdoll fallen sideways on ground), we move both chest and sphere until they don't penetrate. And meanwhile we realign the arm's spheres.

Optimizing this case (ragdoll fallen sideways on ground/object) is rather interesting to me (mostly because of CounterStrike:Source, I guess ^^).

We probably should also cache the group of objects that are in such a complicated situation, and iterate calculations on it, until the level of error is acceptable.

Yeah, I do mention Chris Hecker's work in at least one previous post.. it's almost obligatory that I do, since almost everything ELSE I've read quotes his work and credits him :)

Please note that the 'constraint' you posted is an EXAMPLE constraint; those formulae are NOT written in stone. In fact, as gamedevs, we are encouraged to CHEAT wherever we can get away with it - ie, as long as things LOOK believable, we've succeeded.

I imagine that constraint you posted was developed for a "pool physics" demo, where we have N spheres of equal mass.

Actually, it appears again in this article by Thomas Jakobsen from IO Interactive, the guy who developed the physics behind Hitman, and this article is one hell of a good read.. http://www.teknikus.dk/tj/gdc2001.htm

He doesn't mention time correction in this article, but he does cover a lot of ground and there's an abundance of useful information : it's probably the best article on the subject I have ever read.

I took away a few new ideas from that article, one of which renders your Spheres idea redundant.. he talks about the cheapest way of resolving object interpenetration, and shows how to use point constraints to achieve it... effectively, we are performing sphere tests using the vertices of the bounding hull..

I agree that a bunch of boxes generated at runtime is neither an elegant nor an efficient solution.. my own idea of a "desirable scenario" involves making the 3D artist responsible for creating the bounding hull and storing it as a separate (textureless) mesh, but still bound to the skeleton and thus animated along with the mesh it surrounds.. my idea of a "best case scenario" is that bounding hull being nothing more than a subset of the actual mesh vertices, just enough of them to form the most simple animated hull possible, with the tightest fit possible.. note that means zero overhead for deriving the animated vertex positions of the hull at runtime.

Nonetheless, I will persevere with the existing framework simply because it's easy to understand.. I can rework my implementation later, this is a public demo/educational project ;)

You were right about angular dynamics under Verlet, it's the same, that's why it's never discussed in any of the material I've seen. Verlet is merely an INTEGRATION METHOD, ie, an algorithm for getting from physics state A to physics state B - the standard physics formulae (Euler, Newton et al) are still at work behind the scenes.. they always are, which is why Euler's stuff is standard fare at most colleges around the world..

Anyhow, there are just one or two more things I'd like to say regarding my ever-growing fondness for Verlet: I am no longer required to calculate either the initial momenta OR the initial velocities (angular and linear). The only requirement for initializing the physics state is that we can describe the Position (and Orientation!!) of the object at two moments in time : the current animation frame and the previous one provide everything we want :)

Don't be fooled into thinking that Verlet only works "if the object was already moving" - it's perfectly OK for the physics state to describe 'no change in orientation / position' :)
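To make the "two samples are the whole state" point concrete, here's a tiny sketch: seed the integrator from two animation-frame samples, and the implicit velocity (including zero, for an object at rest) falls straight out of the pair. The names are my own invention:

```python
# Verlet state = (current position, previous position); no velocity stored.

def seed_state(frame_now, frame_before):
    """Hand an animated object over to physics: two sampled positions suffice."""
    return frame_now, frame_before

def implicit_velocity(pos, prev_pos, dt):
    # The velocity is implied by the pair, never stored explicitly.
    return (pos - prev_pos) / dt

pos, prev = seed_state(1.5, 1.2)        # object was moving
v = implicit_velocity(pos, prev, 0.1)   # roughly 3.0 units/sec
pos_r, prev_r = seed_state(4.0, 4.0)    # equal samples: object at rest
v_rest = implicit_velocity(pos_r, prev_r, 0.1)
```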

I think my mind is made up :)

Please note that the 'constraint' you posted is an EXAMPLE constraint - those formulas are NOT written in stone. In fact, as gamedevs, we are encouraged to CHEAT wherever we can get away with it - ie, as long as things LOOK believable, we've succeeded.

I imagine that constraint you posted was developed for a "pool physics" demo, where we have N spheres of equal mass.
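For reference, the equal-mass version of such a constraint looks roughly like this under Jakobsen's relaxation scheme: each particle absorbs half the correction precisely because the masses are equal - the "pool physics" assumption. A sketch with my own names:

```python
import math

# One relaxation pass of a distance constraint between two equal-mass
# particles. Both particles absorb half the error, so equal masses are
# baked into the 0.5 factors.

def satisfy_distance(p1, p2, rest_length):
    dx = [p2[i] - p1[i] for i in range(3)]
    dist = math.sqrt(sum(d * d for d in dx))
    diff = (dist - rest_length) / dist      # relative error
    for i in range(3):
        p1[i] += 0.5 * diff * dx[i]         # equal masses: split 50/50
        p2[i] -= 0.5 * diff * dx[i]

a, b = [0.0, 0.0, 0.0], [4.0, 0.0, 0.0]
satisfy_distance(a, b, 2.0)                 # too far apart: pulled together
```

After the call both particles have moved an equal amount toward each other, and the pair sits exactly at the rest length again.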

Maybe this info could be of some interest to you too:

Just out of curiosity, I installed the HL2 Source SDK and took a peek at the models' physics definitions. It turned out that they (the physics layer of .mdl) consist entirely of convex solid triangular meshes, usually 15 of them (for human models). Each phys object:

- has 20-70 triangles

- has a bounding box (automatically generated at compile time) [no AABB]

- X, Y, Z constraints: min:max

- mass: 90.0 is the default value

- friction: 1.0 up to 1000.0

- mass bias: 0.0 to 10.0, 1.0 is the default

- inertia: 0.0 to 10.0, 10.0 is the default

- damping: 0.0 to 1.0, 0.01 is the default

- rotation damping: 0.0 to 10.0, 1.5 is the default

- material: a string, something like "flesh" or "metal"

Meanwhile, the real (drawn) submeshes, attached to the bone, have their own hitboxes, too :/
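Collected into one record, the per-object properties listed above would look something like this. The defaults are the ones quoted in the list; the class and field names are my own paraphrase, not the actual .mdl/.qc syntax:

```python
from dataclasses import dataclass

# Hypothetical record of the HL2-style phys-object properties described
# above. Types are my own guesses from the quoted values.

@dataclass
class PhysObject:
    mass: float = 90.0
    friction: float = 1.0        # quoted range: 1.0 .. 1000.0
    mass_bias: float = 1.0       # quoted range: 0.0 .. 10.0
    inertia: float = 10.0        # quoted range: 0.0 .. 10.0
    damping: float = 0.01        # quoted range: 0.0 .. 1.0
    rot_damping: float = 1.5     # quoted range: 0.0 .. 10.0
    material: str = "flesh"

ragdoll_part = PhysObject(mass=7.5, material="flesh")
```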

Sorry if this steers you away from your planned code ^^"

That's quite interesting :)

I guess you could argue endlessly about which kind of theoretical geometric primitives to use for the collision hull. I don't know if you bothered to read the article I linked to in my previous post, but it describes using the simplest 3D geometric primitive, ie tetrahedrons (four-pointed pyramids). The reason that tetrahedrons were selected is because in a Verlet rigidbody simulation you must use point constraints between various pairs of points in the body, and ideally between each point and all other points.. a tetrahedron's four points require just SIX pairwise constraints.

The more points we add, the more constraints must be enforced, and the more constraints we enforce, the more likely it is that at least one constraint is going to be violated in some collision situation.
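The growth is easy to quantify: constraining every pair of n particles takes n(n-1)/2 distance constraints, so the count grows quadratically with the particle count - which is why few-point bodies like tetrahedra are attractive. A one-liner to illustrate:

```python
# Pairwise constraint count for a fully-constrained n-particle body.

def pairwise_constraints(n):
    return n * (n - 1) // 2

# tetrahedron, box corners, and a 20-point blob:
counts = [pairwise_constraints(n) for n in (4, 8, 20)]
```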

Still, I don't feel that tetrahedrons are useful for defining volumes of space (ie collision hulls), and neither did the author of that article, because he also talks about using cylinders as collision hulls (which opens the possibility of using tapered cylinders, elliptical cylinders, etc). Perhaps these offer more flexibility than spheres or ellipsoids.

Jakobsen's Verlet physics is applied neither to any 3D primitives nor to the mesh.. it is applied to a point representation of the model (it looks like a stick figure). This stick figure is manipulated under physics, and it is used to control the bones, which in turn control the collision hull and the mesh... under collision response, it's vice versa: the collision hull controls the bones, which control the stick figure.. with me?

I was thinking I might be able to define such a stick figure at runtime by examining the bone endpoints for limbs..
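A sketch of how that runtime derivation might look: walk the bones, emit one particle per bone endpoint (merging shared joints), and one distance constraint per bone. The data layout here is my own invention, not a real engine type:

```python
# Build a "stick figure" particle set from a skeleton at runtime.
# bones: list of (head_xyz, tail_xyz) tuples, one per bone.

def build_stick_figure(bones):
    particles, constraints, index = [], [], {}

    def particle_for(p):
        key = tuple(round(c, 6) for c in p)   # merge shared joints
        if key not in index:
            index[key] = len(particles)
            particles.append(list(p))
        return index[key]

    for head, tail in bones:
        i, j = particle_for(head), particle_for(tail)
        rest = sum((tail[k] - head[k]) ** 2 for k in range(3)) ** 0.5
        constraints.append((i, j, rest))      # one constraint per bone
    return particles, constraints

# Upper arm and forearm sharing an elbow joint:
bones = [((0, 0, 0), (1, 0, 0)), ((1, 0, 0), (1.8, 0, 0))]
particles, constraints = build_stick_figure(bones)
```

The shared elbow endpoint collapses into a single particle, so two bones yield three particles and two constraints.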

Yes, of course I read the article (re-read 3+ times until completely understood ^^"), and many related materials - I am keen on using that approach, too :D

I still haven't found any suitable source code demonstrating the Verlet technique - how about you? Everything I found so far has been devoted to cloth simulation (think of 3D flag demos).

Worse (imho) is that all the literature I've read has stated that "orientation is arbitrary and must be extracted from the particle position data", ie, it seems a lot of people are 'cheating' with regard to rotational dynamics, even going so far as to "implement constraints which roughly emulate an angular inertia tensor".. basically what they are saying is that we should just move all the points about in space, with rotations being implicit rather than explicit (eg, we should build our orientation matrix at runtime by analyzing the positions of our particles in relation to one another and in relation to the body's local origin).
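For what it's worth, the "extract orientation from the particle positions" trick usually amounts to Gram-Schmidt on a couple of edge directions. A purely illustrative sketch (my own names; it assumes the three chosen particles stay roughly rigid relative to each other):

```python
# Rebuild an orientation matrix from three particles of the body:
# the origin particle plus one particle along each of two body axes.

def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def orientation_from_particles(origin, px, py):
    """Rows of the result are the body's x/y/z axes in world space."""
    x = normalize([px[i] - origin[i] for i in range(3)])
    y_raw = [py[i] - origin[i] for i in range(3)]
    d = sum(x[i] * y_raw[i] for i in range(3))
    y = normalize([y_raw[i] - d * x[i] for i in range(3)])  # Gram-Schmidt
    return [x, y, cross(x, y)]

# Unrotated body: should recover the identity matrix.
R = orientation_from_particles([0, 0, 0], [2, 0, 0], [1, 3, 0])
```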

I was initially led to believe (and still do believe) that Verlet is MERELY an integration algo and that we can and SHOULD implement an impulse-driven rotation scheme as usual.

Just a quick reply on a quick idea to solve rotation - make a "heavy" particle that sits at the mass center of the object. Of course, constrain all/most other particles to it. This "heavy" particle would be the hardest to move on an indirect collision (at a nonperpendicular angle), and would be as "light" as the other particles on a direct hit (when the mass center, collision point, and collider's acceleration vector are aligned). This might work for the first frame after a collision, if the collider isn't rotating (or its friction = 0).
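That "heavy particle" behaviour falls out naturally if the distance constraint splits its correction by inverse mass: the heavy centre particle barely moves, the light outer particle does most of the moving. A sketch using standard inverse-mass weighting (the names are mine):

```python
import math

# Mass-weighted distance constraint: each particle moves in proportion
# to its inverse mass, so a heavy particle resists being dragged around.

def satisfy_weighted(p1, m1, p2, m2, rest_length):
    w1, w2 = 1.0 / m1, 1.0 / m2
    dx = [p2[i] - p1[i] for i in range(3)]
    dist = math.sqrt(sum(d * d for d in dx))
    diff = (dist - rest_length) / (dist * (w1 + w2))
    for i in range(3):
        p1[i] += w1 * diff * dx[i]
        p2[i] -= w2 * diff * dx[i]

centre, outer = [0.0, 0.0, 0.0], [4.0, 0.0, 0.0]
satisfy_weighted(centre, 10.0, outer, 1.0, 2.0)  # heavy centre, light outer
```

The pair still ends up exactly at the rest length, but the light particle has absorbed ten times more of the correction than the heavy one.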

No, I haven't found anything about rotation with Verlet :(

How to handle rotations in regards to Verlet:

Basically, imagine the rigid body as a "particle cloud".

Each particle wants to continue moving in the general direction it already was moving (conservation).

If our "particle cloud" bumps into something, we correct the position of the first offending particle, and we correct the positions of all other particles in the body via our distance constraints, with respect to the offending particle.. which causes rotation to occur without any mention of forces or orientation.

It's really braindead, but the degree of error isn't really that noticeable (it sure worked well in Hitman and its sequels).

I disagree with all the above nonsense.

We can treat rotation the same way as we do position under the Verlet scheme.. I think :)

That is to say, perhaps we can implement a Verlet-style 'velocity-less' algorithm which calculates more accurate force-based (impulse-driven) collision responses? :)
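To close the loop on the "particle cloud" description above, here's a minimal end-to-end toy: Verlet-integrate every particle, clamp any particle that penetrates the floor (the cheapest possible correction), then re-satisfy the distance constraint. The body rotates after contact even though no orientation or force variable exists anywhere. All names are mine; this is a sketch under those assumptions, not the planned implementation:

```python
import math

def step(ps, prevs, dt, gravity=-9.81):
    """Position Verlet over every particle; gravity acts on the y axis."""
    for p, q in zip(ps, prevs):
        for i in range(3):
            new = 2 * p[i] - q[i] + (gravity if i == 1 else 0.0) * dt * dt
            q[i], p[i] = p[i], new

def clamp_floor(ps, floor_y=0.0):
    for p in ps:
        if p[1] < floor_y:
            p[1] = floor_y          # cheapest possible penetration fix

def satisfy(ps, i, j, rest):
    """Equal-mass distance constraint between particles i and j."""
    dx = [ps[j][k] - ps[i][k] for k in range(3)]
    dist = math.sqrt(sum(d * d for d in dx))
    diff = (dist - rest) / dist
    for k in range(3):
        ps[i][k] += 0.5 * diff * dx[k]
        ps[j][k] -= 0.5 * diff * dx[k]

# A tilted two-particle "stick" dropped above the floor:
ps    = [[0.0, 0.5, 0.0], [1.0, 1.5, 0.0]]
prevs = [[c for c in p] for p in ps]        # equal samples: starts at rest
for _ in range(120):                        # two seconds at 60 Hz
    step(ps, prevs, 1.0 / 60.0)
    clamp_floor(ps)
    satisfy(ps, 0, 1, math.sqrt(2.0))
```

Once the lower particle hits the floor, the constraint drags the upper one around it and the stick swings - rotation emerging purely from position corrections, exactly as described above.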
