As I write the animated SkinMesh loader, it occurs to me that I may have wasted a little time. Ultimately I want to be able to dismember my models!!
The article on subdivided meshes sheds some light on the issue, but says nothing about maintaining a pseudo-skinmesh.
The problem I'm outlining here is that the skinmesh format itself makes no provision for multiple meshes within a single skinmesh - hierarchy or not. Am I correct?
However, a solution is revealed within the skinmesh loader code: we are already re-skinning the model from the mesh and bone data provided in a skinmesh. What's needed, then, is a coherent way to keep the bodypart meshes separate, yet treat their combined vertex data as a single skinmesh. It seems to me that we could create a single indexed vertex buffer at runtime, with the indices stored in slabs in the nodes of the "body hierarchy". In any event, it seems that a carefully constructed scene file could be the answer. Any thoughts?
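To make the "slab" idea concrete, here's a minimal sketch of what a node in such a body hierarchy might hold - nothing from D3DX, just my own made-up names. The point is only that each node owns a contiguous range of the shared index buffer rather than a mesh of its own.

#include <windows.h>   // for DWORD

// Hypothetical node in the "body hierarchy". A node owns no vertex data;
// it only records which slab of the single shared index buffer belongs to
// this bodypart, and where the part sits in the hierarchy.
struct BodyPartNode
{
    char          name[32];      // e.g. "LeftForearm"
    DWORD         firstIndex;    // start of this part's slab in the shared index buffer
    DWORD         indexCount;    // number of indices in the slab (3 per triangle)
    bool          visible;       // cleared once the part has been severed
    BodyPartNode* parent;        // up the body hierarchy
    BodyPartNode* firstChild;    // down the hierarchy
    BodyPartNode* nextSibling;   // across the hierarchy
};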
Posted on 2002-10-26 09:43:46 by Homer
As Metal Mickey would say, "boogie boogie boogie!!"

If D3DXSkinMesh inherits from D3DXMesh, then the attribute buffer should contain an array of DWORDs in the same order as the face indices in the index buffer, one DWORD per face in the mesh (face = triangle).
Each should be numbered according to the zero-based index of the Material associated with that face. That is to say, if there's only one material, they'd all be zero. But if there's more than one material in the mesh, the attribute buffer will reflect the material each face should be rendered with.
If we assign one material per bodypart, we should end up with nonzero values in the attribute buffer. If not, then D3DXSkinMesh does not inherit from D3DXMesh, and we have to perform the same tasks using the attribute buffer associated with the original Mesh (and not the SkinMesh).
Now we can call OptimizeInplace or Optimize to sort the attribute values into contiguous runs of the same value (subset lists), re-ordering the index buffer to suit. Then we merely need to associate the start address and size of each subset in the buffer with a node in our hierarchy, since each subset now represents the faces associated with a bodypart!!!
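Here's a rough, untested C++ sketch of that step against the D3DX8 mesh interface. I'm assuming pMesh is an ID3DXMesh and pAdjacency is the adjacency array belonging to that same mesh, and the error handling is stripped to the bone.

#include <d3dx8.h>

// Sort the faces of a mesh by attribute (material) ID so that each subset is
// a contiguous run of faces, then read back the subset table.
HRESULT SortFacesByMaterial(ID3DXMesh* pMesh, const DWORD* pAdjacency)
{
    // D3DXMESHOPT_ATTRSORT re-orders the index buffer so that faces with the
    // same attribute ID end up grouped together.
    HRESULT hr = pMesh->OptimizeInplace(D3DXMESHOPT_ATTRSORT,
                                        pAdjacency,   // adjacency in
                                        NULL,         // adjacency out (not needed)
                                        NULL,         // face remap (not needed)
                                        NULL);        // vertex remap (not needed)
    if (FAILED(hr)) return hr;

    // After an ATTRSORT optimize, the mesh exposes one D3DXATTRIBUTERANGE per
    // subset: the AttribId plus the FaceStart/FaceCount of its slab.
    DWORD numSubsets = 0;
    pMesh->GetAttributeTable(NULL, &numSubsets);

    D3DXATTRIBUTERANGE* table = new D3DXATTRIBUTERANGE[numSubsets];
    pMesh->GetAttributeTable(table, &numSubsets);

    for (DWORD i = 0; i < numSubsets; i++)
    {
        // table[i].AttribId  : material index, i.e. the bodypart tag
        // table[i].FaceStart : first face of the subset in the index buffer
        // table[i].FaceCount : number of faces in the subset
        // ...exactly the start/size pair to store in the bodypart node.
    }

    delete[] table;
    return S_OK;
}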
We've cracked it !!!
(untested theories are the best kind - they keep u warm at night)
Posted on 2002-10-27 10:23:31 by Homer
I'm still having some major headaches getting OptimizeInplace to work!
For now what I'm gonna do is try the following:
I'm going to create a simple skinmesh in Maya 4, apply multiple materials to it, and optimize it within Maya. Then I'll load it with my loader, examine the attribute buffer of the SkinMesh, and make sure the materials are being referenced per triangle in that data.
For the moment I imagine I'm failing because I'm assuming I can use the adjacency data from the SkinMesh when optimizing the Mesh, when I should probably fetch the adjacency data of the Mesh itself when optimizing the Mesh.
I assumed that SkinMesh inherited from Mesh and that I could cross over.
I further assumed that the adjacency data for a skinmesh would be identical to the adjacency data for the "original mesh" of that skinmesh.
To assume is to make an ass out of u and me.

I'd like to point out that if this experiment works, it means that Materials can be applied totally arbitrarily with respect to the submeshes making up a skinmesh.
I mean, you can have the "breakpoints" where limbs will be severed at places other than the ends of the submeshes. That means you can chop off an arm just a few inches below the shoulder, for example, and the stump will still be animated!
lmao I honestly hope this works because I see it as a positive.
Posted on 2002-10-28 23:46:04 by Homer
Having ascertained some more facts about SkinMesh, I'd like to lay them out here.

After we call LoadSkinMeshFromXof, we are returned a pointer to a D3DXSkinMesh interface. This interface contains all the information about the SkinMesh, the bone hierarchy, etc., but nothing to help us actually render the skinmesh to the screen. For that, we're going to need a pointer to a D3DXMesh interface. More on that in a moment.
Having loaded the SkinMesh, we now have access to the SkinMesh's data, as I mentioned. The faces of the skinmesh have been sorted into subsets associated with bones. In an ideal world, each subset would also use a unique Material.
These subsets of the skinmesh were defined as "groups" of surfaces at design-time. MilkShape's skinmesh modeller makes this clearer by explicitly demanding one material per Group.
Now then, back in the real world, we wish to render the skinmesh using subsets of faces sorted by material, so that we have the fewest possible texture changes during rendering. If we design the model "correctly", specifying one Material per subset... we should not need to perform any more sorting (optimizing)!!
All we should have to do is walk the attribute data of the mesh, count the number of faces in each subset, and record that in each node of the bone hierarchy. The face indices "should" already be sorted!!
We can even verify this while parsing the attribute data, because successive material indices should stay the same or increase, and never decrease.
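That verification pass is cheap enough to sketch, assuming the pre-sorted case. LockAttributeBuffer hands back one DWORD per face; we just confirm the IDs never decrease and total up the faces per subset. Untested, and the output array is my own convention.

#include <d3dx8.h>

// Walk the per-face attribute IDs of an ID3DXMesh, verify they are already
// sorted (non-decreasing) and count the faces in each subset as we go.
// Returns false if the data would still need an attribute sort.
bool CountSubsetFaces(ID3DXMesh* pMesh, DWORD* pFacesPerSubset, DWORD maxSubsets)
{
    DWORD* pAttribs = NULL;
    if (FAILED(pMesh->LockAttributeBuffer(D3DLOCK_READONLY, &pAttribs)))
        return false;

    for (DWORD i = 0; i < maxSubsets; i++) pFacesPerSubset[i] = 0;

    bool  sorted   = true;
    DWORD numFaces = pMesh->GetNumFaces();
    DWORD previous = 0;

    for (DWORD face = 0; face < numFaces; face++)
    {
        DWORD id = pAttribs[face];           // material index for this triangle
        if (id < previous) sorted = false;   // went backwards - not pre-sorted
        if (id < maxSubsets) pFacesPerSubset[id]++;
        previous = id;
    }

    pMesh->UnlockAttributeBuffer();
    return sorted;
}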

So let it be Written, so let it be Done.

(Wonder if anyone else is looking at this stuff - very little feedback :confused: )
Posted on 2002-10-31 22:26:41 by Homer
I'm reading your posts, EvilHomer2k.

It sounds VERY interesting and it's the sort of thing I want to get into, although it's completely over my head at this stage. I'm trying my best to learn DX8, but I don't have a great deal of time at the moment.

Keep up the great work, though!

:alright:
Maelstrom
Posted on 2002-10-31 22:49:27 by Maelstrom
Hi ! EvilHomer2k :)
I'm also reading your posts.
It's very technical, which I think is why there's not much reaction.
But your work looks substantial, and I encourage you to continue :alright:
Friendly......Gges
Posted on 2002-11-01 02:55:21 by Asmgges
More hardcore SkinMesh facts.
Your SkinMesh should be created as a single Mesh entity, and not as a hierarchy or group of Meshes. The reason is that LoadSkinMeshFromXof will assume that the first mesh it encounters in the xfile is the skinmesh, and will load that mesh and its material(s) only. So it should probably be the ONLY mesh in the xfile.

In my implementation, I have decided to create within my SkinMesh some surfaces which are normally hidden completely inside the mesh. These surfaces will be revealed when bodyparts are "severed" during rendering. When a hidden surface is revealed, it is because the surfaces which were obscuring it are currently invisible. This is how I will dismember my models.

The surfaces in my "subdivided skinmeshes" must be divided into groups of common Material, and we must never, ever re-use a Material; if we need to, we should duplicate the Material instead. The reason is that we are in fact using the Materials to associate a group of surfaces with a bodypart. This way, if that bodypart is "switched off", we know exactly which surfaces NOT to draw, without having to do any sorting at all at runtime. If we used the same Material to draw more than one bodypart, we would find it difficult to separate the two subsets of surfaces from one another. If they use different Materials, we can use the material indices to identify the surfaces in each subset easily during loading.
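As a small sketch of the "one Material = one bodypart" rule at load time: since a Material is never shared between bodyparts, the material list itself can double as the bodypart list, with the zero-based material index serving as the subset tag. The record layout and names are my own, and how the part names are pulled from the xfile is left open here.

#include <windows.h>
#include <string>
#include <vector>

// One record per Material, which means one record per bodypart. Every face
// whose attribute ID is i belongs to bodypart i - no runtime sorting needed.
struct BodyPartRecord
{
    std::string name;     // however the Material name is recovered from the xfile
    DWORD       attribId; // zero-based material index == subset ID
    bool        visible;  // cleared when the bodypart is severed / switched off
};

// numMaterials is the material count returned alongside the loaded skinmesh.
std::vector<BodyPartRecord> BuildBodyParts(DWORD numMaterials)
{
    std::vector<BodyPartRecord> parts(numMaterials);
    for (DWORD i = 0; i < numMaterials; i++)
    {
        parts[i].attribId = i;   // the material index doubles as the bodypart tag
        parts[i].visible  = true;
    }
    return parts;
}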

Thanks for the feedback...
As usual, any comment, suggestions, ideas or other feedback is greatly appreciated. Don't be afraid to look stupid, I'm doing a fine job of that, and I haven't had a load of napalm land anywhere near me thus far ...
Posted on 2002-11-01 08:43:01 by Homer
I neglected to mention that I am planning to implement the following logic to handle treating severed limbs as world objects.

They will be a special kind of instanced world object, with a structure containing pointers back to the "invisible parent" for the mesh subset, but with its own unique local data (position, orientation, etc.).

This way I can in theory make a pile of arms and legs, and another pile of skulls.
I can also chop off a hand, see it fall to the ground, and then kick it across the room. Instanced objects are a great idea, and this situation lends itself well to instancing.
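Something like this, although all the names are placeholders - the point is that the instance owns no mesh data of its own, just a pointer back to the shared parent data plus its own transform and motion state.

#include <d3dx8.h>

struct BodyPartNode;   // the bodypart node inside the (now invisible) parent model

// A severed limb promoted to a world object. It references the parent model's
// bodypart subset for its geometry, and carries only the per-instance state
// needed to simulate and draw it independently.
struct SeveredPartInstance
{
    const BodyPartNode*  sourcePart;   // which subset of the parent skinmesh to draw
    D3DXVECTOR3          position;     // world position of this instance
    D3DXQUATERNION       orientation;  // world orientation
    D3DXVECTOR3          velocity;     // so it can fall, bounce, be kicked...
    float                spawnTime;    // when it was severed (for fading / cleanup)
    SeveredPartInstance* next;         // simple linked list of live instances
};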
Posted on 2002-11-01 08:53:01 by Homer
I've come to use the following system to create my obscured faces inside the skinmesh (the "stump" faces)...

I start with a polygonal skinmesh model, and I create a transparent blue material and apply it to the entire model.
I create a solid material in another color.
I now select all the faces of the part of the body I'd like to chop off (say, a hand), and apply the solid material to that part. Because the rest is transparent, I can see inside the mesh everywhere except the hand, which makes the border edges much easier to identify. Using the Create Polygons tool with Snap To Points enabled, I create a new face inside the mesh. I then duplicate this face and reverse the normal of the duplicate, so I now have two faces, one for each severed bodypart; I don't bother assigning them materials at this point. Now I move up the model, repeating this process with a unique material for each subsection of the model. This results in a large number of submeshes.
In order to force the meshes to collapse into one, I then export all the meshes in OBJ format and import them back in, with "create multiple objects" disabled in the Import options. Now I have a single mesh which contains internal double-sided faces, but I've lost my Materials, so I have to repeat the step of assigning every subsection of the body a unique Material.
Once we've performed this step, we can assign the Material textures. This time we're also going to assign Materials to the internal faces. I'll point out that there's no reason why one Texture can't be shared by multiple Materials, so you could still have them all use one or two textures rather than one unique texture per Material. That's what I'm going to do: use just two textures, one for the Body and one for the Head. I'm using the Materials to tag the triangles in a bodypart, while maintaining texture integrity.
Now that every surface has a Material and every Material has a Texture, we can perform UV texture mapping.
At this point we can create some UV templates as JPEGs and paint textures in an art package, overdrawing the UV boundaries in the image. Here we get to draw the yummy stump detail :)
Finally we can load the textures in and export to xfile, confident that our skinmesh will retain the material and texture associations needed for it to be ripped into bloody pieces by a hideous monster for our amusement.
Posted on 2002-11-03 07:48:34 by Homer
A couple of visual aids ...
Posted on 2002-11-04 02:15:47 by Homer
and...
Posted on 2002-11-04 02:16:27 by Homer
Now you can see clearly how I am using Materials to group surfaces in the model.
If we can control visibility per Material group, you can see how the capping ("stump") surfaces become visible when they are no longer obscured.
It's important to note that although the Materials specify textures, they are more importantly being used to associate surfaces with a bodypart.
The bodypart hierarchy is not described explicitly anywhere in the file data, and does not necessarily conform to the Bone hierarchy.
That is to say, a bodypart is defined by Material and nothing else.
The name of the Material is the name of the bodypart.
The hierarchy of the body is discovered by walking the bone hierarchy and creating bodyparts named for each Material when a reference to a Material is discovered.
When rendering, we will walk our Body Hierarchy, where each BodyPart node contains a visibility flag and a pointer to a buffer of face indices belonging to that bodypart.
This system makes much more sense.
We'll still have to walk the bone hierarchy for transforming the mesh, but we'll encapsulate that code in another procedure.
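A minimal sketch of that render-time walk. It assumes the bodypart nodes carry the visibility flag and subset (attribute) ID described above, that pDrawMesh is the renderable mesh whose subsets line up one-to-one with the material indices, and that severing a part clears the visible flag on the whole severed subtree. Apart from DrawSubset, the names are mine.

#include <d3dx8.h>

// Trimmed-down bodypart node; see the fuller version sketched earlier.
struct BodyPartNode
{
    DWORD         attribId;     // subset / material index for this bodypart
    bool          visible;      // false once severed
    BodyPartNode* firstChild;
    BodyPartNode* nextSibling;
};

// Recursively walk the body hierarchy, drawing only the visible bodyparts.
void DrawBody(ID3DXMesh* pDrawMesh, const BodyPartNode* node)
{
    for (; node != NULL; node = node->nextSibling)
    {
        if (node->visible)
        {
            // The Material/texture for node->attribId would be set on the
            // device here, before drawing the subset.
            pDrawMesh->DrawSubset(node->attribId);
        }
        DrawBody(pDrawMesh, node->firstChild);
    }
}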
I feel so alone - has anyone anywhere in the world attempted what I'm describing? The only people who seem to understand are C++ coders, and they tell me I'm doing it wrong, but I fail to see why I should follow blindly when they can't give me any reason why my concept is unsound.
Posted on 2002-11-05 01:12:21 by Homer
Although it's completely over my head, your system sounds alright to me. I say keep pushing forward and ignore the naysayers. Anyway, you'll learn more by doing than by following, and who's to say your method isn't better?

Are you going to support regional damage? If so, couldn't you use the material id of the impacted poly to determine the impact animation?

Keep up the great work!

:alright:
Maelstrom
Posted on 2002-11-05 06:32:29 by Maelstrom
I intend to implement nonlinear blending of clips (nonlinear keyframed animation) to provide for exactly that. Yes, the system will support regional damage, and regional animations will be blended with the current major animation.
However, that's still a little way off; for now I'll think big and start small.
Right now I'm just glad to be on track, having convinced at least myself that I am not going to hit any walls...
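Roughly, the per-bone blend I have in mind boils down to slerping the rotation and lerping the translation between the two clips' sampled poses with the stock D3DX helpers. This is an untested sketch; the BonePose struct and how the clips get sampled are placeholders.

#include <d3dx8.h>

// A bone's pose sampled from one animation clip at the current time.
struct BonePose
{
    D3DXQUATERNION rotation;
    D3DXVECTOR3    translation;
};

// Blend one bone between the major clip and a regional (e.g. hit-reaction)
// clip. weight = 0 gives the major clip only, weight = 1 the regional clip.
void BlendBonePose(const BonePose& major, const BonePose& regional,
                   float weight, D3DXMATRIX* pOut)
{
    D3DXQUATERNION q;
    D3DXVECTOR3    t;

    D3DXQuaternionSlerp(&q, &major.rotation, &regional.rotation, weight);
    D3DXVec3Lerp(&t, &major.translation, &regional.translation, weight);

    // Rebuild the bone's local matrix from the blended rotation + translation.
    D3DXMatrixRotationQuaternion(pOut, &q);
    pOut->_41 = t.x;
    pOut->_42 = t.y;
    pOut->_43 = t.z;
}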
Posted on 2002-11-05 21:12:46 by Homer
I love technical writing. It's amazing what you're trying to do. I mean, I honestly don't know about 3D programming, but this really helped a bit.

But isn't all this just a long way of saying "I want a game engine that allows arms, legs and heads to get blown off"? hehe

Can I ask you a stupid simple question? Does DirectX 8 handle all that? That is (I used 3ds max), when I make a model in my rendering program it's easy to make it move and dance like a monkey, but can I just export the mesh with bones, use it as-is in DirectX, and make it dance like a monkey in real time?

(Though I'd probably have to trim down the 30k triangles the monkey is made of :P)

Big fan of Monkey Island;)
Posted on 2002-11-30 16:09:53 by WarlockD
Ultimately, that's the idea.
I use Maya, but the package you create / animate the model with is irrelevant, provided that you export it to a standard format, like X.
Yep, a long way of saying I wish to blow my models into bits.
I could cheat and create a set of matching bodypart meshes for each model, but then that would be resource-hungry, and wouldn't allow for animating the blown up bits to make them look more grotesque, would it ? :)

SkinMeshes were devised to get away from the idea of seams between moving parts of an animated mesh. Therefore, skinmeshes don't provide for being segmented in the regular sense. What I'm doing is bastardising the principle on which skinmeshes were founded, whilst exploiting the system of index-to-bone association that skinmesh uses in its bone attributes.

It's not as simple as just loading the xfile and calling some magic function that draws your animated model on the screen, unfortunately.
The animations you made are just a whole bunch of 4x4 matrices used for transformations. You have to load the bone data from the xfile into a hierarchy in memory, load the bone animation matrices, and then, in your program, apply the appropriate matrices to the appropriate vertices in the mesh according to the passage of time, and finally draw the mesh faces in realtime from the modified vertex data... not so simple after all!! But certainly doable...
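To show how un-magical that loop is, here's a rough sketch of the two pieces: combining each bone's animated local matrix with its parent's on the way down the hierarchy, then transforming a vertex by its bone's combined matrix. A real skinmesh blends several weighted bones per vertex (and factors in each bone's offset matrix); this is the bare single-bone version, and the structure names are made up.

#include <d3dx8.h>

// Hypothetical bone node: local animation matrix sampled for the current
// time, plus the combined (object-space) matrix we compute each frame.
struct Bone
{
    D3DXMATRIX localAnim;   // from the animation keys for the current time
    D3DXMATRIX combined;    // localAnim * parent's combined, computed below
    Bone*      firstChild;
    Bone*      nextSibling;
};

// Walk the bone hierarchy, accumulating each bone's transform with its parent.
void UpdateBoneMatrices(Bone* bone, const D3DXMATRIX& parentCombined)
{
    for (; bone != NULL; bone = bone->nextSibling)
    {
        D3DXMatrixMultiply(&bone->combined, &bone->localAnim, &parentCombined);
        UpdateBoneMatrices(bone->firstChild, bone->combined);
    }
}

// Transform one vertex by the bone that owns it (single-bone case; a real
// skinmesh blends several weighted bone matrices per vertex instead).
D3DXVECTOR3 SkinVertex(const D3DXVECTOR3& modelSpacePos, const Bone& bone)
{
    D3DXVECTOR3 out;
    D3DXVec3TransformCoord(&out, &modelSpacePos, &bone.combined);
    return out;
}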
Posted on 2002-11-30 23:32:55 by Homer
I am wondering, is there a way you can just do a generic disintegration?

That is, you're hit in one part, and you watch the entire mesh get blown apart? I'm thinking along the lines of getting hit, say in the chest, and you see the mesh rip apart around the object.

What I'm thinking is: how easy could it be? I mean, from what I've read, you could make a recursive algorithm that starts at the closest vertex, distorts the faces it's connected to, then breaks them apart. Then it continues to the next few vertices it's connected to, etc.

How resource hungry is that?

Hummm

It just occurred to me: could you set it up so that when you're hit, the recursion runs until it reaches points that are flagged? When it hits those flagged points, those points form a face and split off from the existing mesh.

Er, hmm. Maybe you already discussed this :P It's just 2 am over here, and I only just noticed the reply :)

But I am curious about the processing needed to do that kind of calculation. I want to make spaceships blow up without resorting to a generic fire bmp :P

(PS: Not used to MAXScript yet, so I'm not sure if I could implement such an idea in any of the models I've got :P)
Posted on 2002-12-02 00:56:16 by WarlockD
Actually, it's VERY doable, and furthermore, it's much cheaper than you think.
I use linked lists a lot.
My linked lists can contain just about anything.
I keep the skeleton joint hierarchy in one linked list, and I keep the bodypart hierarchy in another linked list.
Nodes in a Linked List contain pointers to previous and subsequent objects in hierarchical fashion.
We can simply walk back up the bodypart hierarchy, create a runtime list of what needs to be drawn, and tag each item with an incremental time at which it should occur... now the parts furthest from the blast will be disintegrated last.
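In sketch form, with the same kind of linked bodypart nodes as before (all names made up): starting from the part that was hit, spread outwards through the hierarchy and give each part a disintegration time proportional to how many links it is from the blast.

// Trimmed-down bodypart node, linked into a hierarchy as described earlier.
struct BodyPartNode
{
    bool          visible;
    float         disintegrateAt;  // absolute time at which this part blows apart
    BodyPartNode* parent;
    BodyPartNode* firstChild;
    BodyPartNode* nextSibling;
};

// Schedule every part in the subtree rooted at 'node', adding one extra
// 'delayPerLink' for each additional link away from the blast.
static void ScheduleSubtree(BodyPartNode* node, float when, float delayPerLink)
{
    for (; node != 0; node = node->nextSibling)
    {
        node->disintegrateAt = when;
        ScheduleSubtree(node->firstChild, when + delayPerLink, delayPerLink);
    }
}

// The part that took the hit goes first, then the effect spreads down into
// whatever hangs off it and back up towards the root, one increment per link,
// so the parts furthest from the blast are disintegrated last.
void ScheduleDisintegration(BodyPartNode* hitPart, float now, float delayPerLink)
{
    hitPart->disintegrateAt = now;
    ScheduleSubtree(hitPart->firstChild, now + delayPerLink, delayPerLink);

    float when = now;
    for (BodyPartNode* up = hitPart->parent; up != 0; up = up->parent)
    {
        when += delayPerLink;
        up->disintegrateAt = when;
    }
}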

Should we wish to further fracture the mesh into flying faces, we can do that too, but do it in terms of bodypart facegroups.

Heh. I like it.

The game I'm developing has no conventional weaponry.
I decided to go for magic, seeing as tommyguns and bazookas have been done to death in recent times.

Disintegration would make a lovely effect.
I was planning on treating bodyparts as particle meshes, under my particle engine. Think I still will do that.

He Who Dares, Wins...
Posted on 2002-12-02 01:06:23 by Homer