I've completed an importer for OBJ files.
Since OBJ is an ASCII file format, the importer is essentially a glorified plaintext parser.

During loading, I take the time to build an array of "unique 3D values".
For vertex positions and vertex normals, I first add each value to this "global array of uniques", noting the returned index (new or existing), which I associate with the "raw 3D value" before stowing it as well.
e.g. I use a Vec4 to store a Vec3, with the W field containing the unique index of that Vec3 value.
After loading is completed, I go through the FACES, replacing the existing "raw" indices with the "unique" indices I stored earlier. Finally, I throw away the "raw" arrays for positions and normals.
Thus I have eliminated duplication of Vec3 values by reindexing into an array free of duplicates, and have done so with the absolute minimum of effort.
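
Here's a minimal sketch of the idea in C (just an illustration with made-up names and a fixed-size table, not my actual code):

typedef struct { float x, y, z; } Vec3;
typedef struct { float x, y, z, w; } Vec4;     /* W holds the unique index */

static Vec3 uniques[65536];
static int  uniqueCount = 0;

/* Return the index of v in the uniques table, adding it if unseen. */
static int AddUnique(Vec3 v)
{
    for (int i = 0; i < uniqueCount; ++i) {
        if (uniques[i].x == v.x && uniques[i].y == v.y && uniques[i].z == v.z)
            return i;                          /* existing value */
    }
    uniques[uniqueCount] = v;                  /* new value */
    return uniqueCount++;
}

/* While loading, stow each raw value together with its unique index. */
static Vec4 StowRaw(Vec3 v)
{
    Vec4 r = { v.x, v.y, v.z, (float)AddUnique(v) };
    return r;
}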

In my testbed example, I have a 3D textured cube with some 300 indexed faces, comprising 152 raw vertices and 216 raw normals. Seems already optimized? Guess again. Of these 368 values, only 158 are unique. After reindexing, my highest index is 157 (0-based).
We could, if we wanted to, save the optimized mesh at this point, and it would always be valid.

We can load more models and achieve further savings, but care must be taken.
We must note that this optimisation is runtime-only.
If we load several models in this fashion, any values they have in common will be merged out of existence, but the results are only valid in the context of each other - we shouldn't save models while more than one model is loaded. We can't rely on the uniques array in such conditions, as we can't be sure that the indices will always be sane...

I've uploaded a copy of the binary and OBJ files to http://homer.ultrano.com/Upload/BSPGenerator.zip
After running the executable, a text file is created that logs runtime events.
Compare it to the OBJ file :)
I will provide full source on request.

Posted on 2005-03-10 20:26:45 by Homer
I've added code to calculate SurfaceNormals from Triangle Vertices, as well as a "ClassifyPoint" procedure and a loop to test each triangle's plane against an arbitrary 3D point.
ClassifyPoint compares a 3D point to a Plane (given as a PointOnPlane and SurfaceNormal).
It tells you whether the point is behind, in front of, or exactly on the Plane.
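
In C it could look something like this (a rough sketch only - the epsilon and the names are illustrative, not my exact procedure):

typedef struct { float x, y, z; } Vec3;

enum { BEHIND = -1, ON_PLANE = 0, IN_FRONT = 1 };

#define EPSILON 0.0001f    /* tolerance for "smack on" the plane */

/* Classify point p against the plane given by a PointOnPlane and its SurfaceNormal. */
static int ClassifyPoint(Vec3 p, Vec3 pointOnPlane, Vec3 normal)
{
    Vec3  d    = { p.x - pointOnPlane.x, p.y - pointOnPlane.y, p.z - pointOnPlane.z };
    float dist = d.x * normal.x + d.y * normal.y + d.z * normal.z;
    if (dist >  EPSILON) return IN_FRONT;
    if (dist < -EPSILON) return BEHIND;
    return ON_PLANE;
}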
I've written the driver procedures for my BSPTree Generator.
The Generator relies heavily on the ClassifyPoint procedure, so it was important to test it well.
Another procedure used heavily by the Generator is "SplitPolygon".
Its name is misleading: it classifies a Triangle against a Plane by calling ClassifyPoint for each Triangle Vertex.
It can do more, but I'll keep this simple.
The Generator uses a procedure called "SelectPartitionFromList" to choose the best Splitting Plane for each node of the BSP Tree. In essence this amounts to classifying every triangle against the plane of every other triangle, and deciding which triangle's plane creates the fewest splits while also dividing the geometry most evenly across the plane.
Having decided which triangle's plane is the best to use, the world is split across that plane into two subworlds (lists of triangles), and this is repeated until no triangles remain.
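
Roughly, the selection step could look like this in C (a sketch only - the scoring weight and the names are made up, not my actual code):

#include <stdlib.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 v[3]; } Triangle;

#define EPSILON 0.0001f

static Vec3  Sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    return r;
}

/* Return the index of the triangle whose plane makes the best partition. */
static int SelectPartitionFromList(const Triangle *tris, int count)
{
    int best = -1, bestScore = 0x7fffffff;
    for (int i = 0; i < count; ++i) {
        /* candidate plane: a point on triangle i plus its surface normal */
        Vec3 n = Cross(Sub(tris[i].v[1], tris[i].v[0]), Sub(tris[i].v[2], tris[i].v[0]));
        Vec3 p = tris[i].v[0];
        int splits = 0, front = 0, back = 0;
        for (int j = 0; j < count; ++j) {
            if (j == i) continue;
            int pos = 0, neg = 0;
            for (int k = 0; k < 3; ++k) {                  /* classify each vertex */
                float d = Dot(Sub(tris[j].v[k], p), n);
                if (d > EPSILON) ++pos; else if (d < -EPSILON) ++neg;
            }
            if (pos && neg) ++splits;                      /* spans the plane: would be cut */
            else if (pos)   ++front;
            else            ++back;                        /* behind or coplanar */
        }
        int score = splits * 3 + abs(front - back);        /* weight cuts heavier than imbalance */
        if (score < bestScore) { bestScore = score; best = i; }
    }
    return best;
}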

Posted on 2005-03-14 09:31:58 by Homer
Some time ago you asked in another thread what 3D formats we use for animation. I've been researching this a lot and have come to some conclusions (which might not be correct; I'm a newbie in 3D), and I might not be able to express myself fully. But hey, here's my 2 cents:

.obj (WaveFront) :
Pros:

  • Easy to convert to your own binary format - though it's not very nice to have to make your own binary format

  • Supports non-textured polys (a big pro for my software engine btw)


Cons:

  • The syntax can become hell when lots of options are added to faces - vertex normals, texture coords, more than 3 vertices... Extensibility is not always a good thing.

  • Not all 3D modelers generate an .mtl file to define materials, so you may have to write one manually. Actually, I've only ever seen this file generated by a non-modeler app (a 3D converter)!

  • Animation is impossible - unless you divide your model into different meshes and then bind them together with some bones in your converter... I don't think many people would try this.

  • Texture blending (additive/normal/subtractive...) - I have never seen it available here



.md2 (Quake2) :
Pros:

  • Extremely easy to convert/import

  • If you don't mind spending a few more kB of RAM in exchange for saving lots of CPU, this is your choice. Textures take much more memory anyway.

  • Normals are already computed (though you might need to convert them a bit at load time)

  • Animation can be very fluid if you want. You might be able to interpolate between keyframes, I think. Unusual stuff (usually morphing) is really easy to implement here - like a ball morphing into a cube, a ship exploding into tiny polys... you name it

  • Support for texture-blending commands (iirc, but have never seen it explicitly)

  • Almost all modelers, converters and animators support this - either natively or via a plugin


Cons:

  • RAM usage - not really a problem anymore; no gamer has less than 256MB anyway, and I doubt game developers will soon use thousands of different animated models, so it hardly makes a difference

  • Animation is not well-suited for organic or humanoid models. It'll be a feat to make fluid, realistic-looking human actions

  • No support for non-textured polys, though that's not very useful these days anyway, and you can work around it with UV-mapping tricks




.mdl (halflife) :
note: I am not sure, but I think .x files are really similar. I have to check this soon - I looked at .x files long before I learnt what a mesh is, lol
Pros:

  • Fluid, realistic animation at your fingertips. This is the best thing this format can give you, and it's the best format to give you this feature ^_^

  • Less RAM (not much of a difference, though)


Cons:

  • Lots of CPU needed for animation, compared to the other methods. This is the drawback of getting such good animation

  • Complexity of the animation part of your engine

  • No freeware converter/modeler/animator that I know of exports to this format. MilkShape3D is perfect for the job, but online shopping is not easily accessible to some people (like me), especially if the author(s) reject wire transfers

  • uhm...has anyone here got the .mdl file format description?




It'd be awesome to have a freeware/open-source animator that imports .obj/.md2 and saves to a file format that is a hybrid between md2 and mdl in the following way:
- assume we don't use face-normals.
- all is in one mesh
- each vertex is bound to exactly one bone (not 0, and not more than 1), forming mesh sub-groups (like the head, or the chest). These sub-groups act like separate meshes, each as if imported from an .obj file.
- submeshes don't share vertices (just to make that clear from the start), though they can/will be connected by faces (that's why I'd like to put it all in one mesh)

(basically, we skip the weights and the binding of vertices to multiple bones)
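
In C, the in-memory layout and the per-frame animation step could look something like this (just a sketch of what I mean - the structures and names are made up):

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 position;      /* bind-pose position */
    int  boneIndex;     /* exactly one bone - no weights */
} Vertex;

typedef struct {
    float rot[3][3];    /* current orientation of the bone */
    Vec3  origin;       /* current position of the bone */
} Bone;

/* Animate: each vertex gets a single rigid transform from the bone it is bound to. */
static void SkinMesh(const Vertex *in, Vec3 *out, int count, const Bone *bones)
{
    for (int i = 0; i < count; ++i) {
        const Bone *b = &bones[in[i].boneIndex];
        Vec3 p = in[i].position;
        out[i].x = b->rot[0][0]*p.x + b->rot[0][1]*p.y + b->rot[0][2]*p.z + b->origin.x;
        out[i].y = b->rot[1][0]*p.x + b->rot[1][1]*p.y + b->rot[1][2]*p.z + b->origin.y;
        out[i].z = b->rot[2][0]*p.x + b->rot[2][1]*p.y + b->rot[2][2]*p.z + b->origin.z;
    }
}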


pros:
Faces that connect the sub-meshes can easily act as "organic glue".
The engine is as easy as it can get, and animating from the current state to a state we have to transform to smoothly is also a piece of cake.
This animation format saves a lot of CPU, but some ugly distortions will arise when the rotation angle between two glued sub-meshes (their bones, actually) is more than pi/6. Take a human model's neck: if we rotate the "head" submesh too much, the faces that connect it to the chest will deform (imagine turning his head 180 degrees to the right) - they basically collapse into a fan. To fix this, we add a bone whose parent is the "head" bone, and in the animator we rotate that bone _manually_. We cut the neck polys in half horizontally to get more vertices there (and twice the polys), then bind the new vertices to the new bone and to the manual animation ^^".
cons:
Can distort the object in unusual cases, unless we manually prevent this at the modeling stage of creating the animation.
Animation is not as fluid as in .mdl

From Scorpie's last topic I think MilkShape has support for such a format, yet I don't know what this format is called - and my trial period for that app is over now ^^"

My conclusion: if you can get your hands on an mdl-exporting animation app, mdl is worth trying.
For garage games on my PDA, I'd use md2 (and .obj for static meshes).

Comments and heavy criticism welcome :)
Posted on 2005-03-14 14:55:07 by Ultrano
I've decided that I no longer need to be concerned about not being able to export to my favorite (or the latest) file formats directly from my favorite 3D modeller/animator tool.
I only have to be able to load the simplest and most universal of file formats; then I can write my own exporters, and/or my own file formats, in my own code.

I stole this idea from the Quake console's ability to import non-native geometry files at runtime.

If I recall correctly, I posted a document describing the md2, md3 and mdl file formats in detail and contrasting their differences.
Basically, md2 is OK for static objects, but it doesn't do animation very well:
it creates "seams at the joints" of your animations. It does, however, allow for a fully-segmented body that is easily "blown to bits".
Md3 goes some way towards remedying this by using bone animation to animate 2 or 3 body parts, so the seams between joints are reduced to the joins between body parts (legs, torso, head).
But we lose the ability to blow the body into lots of pieces.

The mdl format was derived from the short-lived md4 (which I describe in the post linked below) by the guys who developed Half-Life (Valve based their engine on the original Quake engine).
http://www.asmcommunity.net/board/index.php?topic=20031.0

Here's a detailed exposé on the mdl format; I assume it's the same one, but I haven't verified:
http://astronomy.swin.edu.au/~pbourke/geomformats/mdl/

Glad to see you showing interest in this stuff at last :)


Posted on 2005-03-14 21:06:06 by Homer
My OBJ importer code currently does not handle Quads.
It expects that:
1> The model has been Triangulated (contains triangles only)
2> The model has PerVertex Normals (for really nice lighting)

That means face descriptions look like:
f 1/1/1 2/2/2 3/3/3 (position/uv/normal for each triangle vertex)
and of course these are 1-based Indices into the respective arrays.
My loader compresses the position and normal arrays by creating a single array free of duplications and reindexing the faces (0-based) to match the indices in the new array.

I could easily add support for Quads in the Loader; I just wanted instant gratification.
f 1/1/1 2/2/2 3/3/3 4/4/4 really represents two triangles as follows:
f 1/1/1 2/2/2 3/3/3
f 1/1/1 3/3/3 4/4/4
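
A rough C sketch of how such a face line could be handled (names are illustrative, and error handling plus the later reindexing step are left out):

#include <stdio.h>

typedef struct { int pos, uv, nrm; } FaceVert;    /* 1-based OBJ indices */

/* Stand-in for appending a triangle to the face list. */
static void EmitTriangle(FaceVert a, FaceVert b, FaceVert c)
{
    printf("tri %d/%d/%d %d/%d/%d %d/%d/%d\n",
           a.pos, a.uv, a.nrm, b.pos, b.uv, b.nrm, c.pos, c.uv, c.nrm);
}

static void ParseFaceLine(const char *line)
{
    FaceVert v[4];
    int n = sscanf(line, "f %d/%d/%d %d/%d/%d %d/%d/%d %d/%d/%d",
                   &v[0].pos, &v[0].uv, &v[0].nrm,
                   &v[1].pos, &v[1].uv, &v[1].nrm,
                   &v[2].pos, &v[2].uv, &v[2].nrm,
                   &v[3].pos, &v[3].uv, &v[3].nrm) / 3;    /* 3 = triangle, 4 = quad */
    EmitTriangle(v[0], v[1], v[2]);
    if (n == 4)
        EmitTriangle(v[0], v[2], v[3]);                    /* fan the quad into a second triangle */
}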

I don't actually use the PerVertex Normals for anything yet (no render code), but I load them anyway.
I calculate PerSurface Normals from the Triangle Edges rather than averaging the PerVertex Normals, because it's more accurate (dividing by three is nasty).
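
That calculation in C (assuming counter-clockwise winding; the cross product of two edges gives the surface normal, which is then normalised):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 SurfaceNormal(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };          /* edge a->b */
    Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };          /* edge a->c */
    Vec3 n  = { e1.y * e2.z - e1.z * e2.y,                  /* cross product */
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x };
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}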

I'd like to get the BSPGen stuff working, then incorporate it into my existing Console framework.
Basically I'm worrying more about the static part of the World at this time, rather than the (animated or not) object models which will eventually populate it.
Posted on 2005-03-14 21:17:10 by Homer