Added (but did not yet hook up) a new object to abstract keyboard input.
It maps between Virtual KeyCodes and "GameEngine User Input Codes" using the "xlatb" opcode (yeah, pointless, we can do our own addressing, why DOES this opcode exist? Is it faster? I just felt like using it).
This will facilitate user-generated keyboard mappings.
Both forward and reverse lookups are supported, so I can display a list of key bindings, and more importantly, quickly determine which commands are NOT bound (possibly due to user remapping).
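In C-ish terms, the shape of it is roughly this (a sketch, not the actual engine code - a single xlatb is the one-instruction version of the vkToCmd[] load, and slot 0 is reserved to mean 'unbound'):

    #include <cstdint>
    #include <cstring>

    class KeyMap {
        uint8_t vkToCmd[256];   // forward: Virtual KeyCode -> engine input code
        uint8_t cmdToVk[256];   // reverse: engine input code -> Virtual KeyCode
    public:
        KeyMap() {
            std::memset(vkToCmd, 0, sizeof(vkToCmd));   // 0 = unbound
            std::memset(cmdToVk, 0, sizeof(cmdToVk));
        }
        void Bind(uint8_t vk, uint8_t cmd) {
            vkToCmd[cmdToVk[cmd]] = 0;   // release the key this command had
            cmdToVk[vkToCmd[vk]] = 0;    // release the command this key had
            vkToCmd[vk] = cmd;
            cmdToVk[cmd] = vk;
        }
        uint8_t Command(uint8_t vk) const { return vkToCmd[vk]; }  // forward
        uint8_t Key(uint8_t cmd) const    { return cmdToVk[cmd]; } // reverse
        bool IsUnbound(uint8_t cmd) const { return cmdToVk[cmd] == 0; }
    };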

I also added code to allow the user to skip the intro movie by pressing space.
Those things can be annoying after the 50th time.
Might force the user to watch it once via a regkey, but after that, meh.
Posted on 2009-09-25 07:05:03 by Homer

It maps between Virtual KeyCodes and "GameEngine User Input Codes" using the "xlatb" opcode (yeah, pointless, we can do our own addressing, why DOES this opcode exist? Is it faster? I just felt like using it).


It exists because it is a useful atomic operation in some OS internals. Together with the lock prefix, you can do a simple table lookup that is thread-safe in a multi-CPU environment.
Posted on 2009-09-25 07:45:14 by Scali
Well, I might need to do my own addressing eventually if I want to support Chinese keyboards, and will deal with mutexing at that time. Already have some weapons pointed that way.
For now, I will be masking out VK codes to 6 bits, and using the upper two bits to encode the shift and control keys (SYSKEYS).
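In other words, something like this (a hedged sketch of the packing; the names are invented):

    #include <cstdint>

    // Bits 0-5: masked Virtual KeyCode. Bit 6: Shift. Bit 7: Control.
    inline uint8_t PackKey(uint8_t vk, bool shift, bool ctrl) {
        return static_cast<uint8_t>((vk & 0x3F)
                                  | (shift ? 0x40 : 0)
                                  | (ctrl  ? 0x80 : 0));
    }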

I can detect pretty much all the keys that a user might press, as well as the mouse buttons...

I won't be able to distinguish left shift from right shift, or uppercase from lowercase.
However, I accept this limitation for the time being and move on.


Posted on 2009-09-25 09:12:26 by Homer
About skinmesh animation: I had some problems weighted-mixing anims (in how the resulting model looked and performed):
- lerp of matrices was of so-so quality: candy-wrapper artifacts, and strange incompatibility with the dual-quat skinning shader on the GPU.
- lerp of "vec3 pos" and "vec3 rot", constructing a final matrix per bone via a quat, was unexpectedly horrible.
Most engines use quat slerp, but I'd rather not spend several thousand trigonometry instructions per mixed anim.

DualQuats saved the day, but I don't see anyone using them much yet. Specifically, the ability to weighted-accumulate (DQuat1 += DQ2*0.3) looks under-utilized. Is there something one should avoid about DQs?
My current code computes the final local-space bones via DQs, then converts each DQ to a Mat4, appends those Mat4s for the hierarchy, multiplies by the inverse Mat4 of the bind-pose, and finally converts the resulting Mat4s to an array of DQs for use in the shader. Is there a way to do the hierarchy transforms purely via DQs, instead of temporarily converting to Mat4?
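For reference, the weighted accumulation I mean is roughly this (a C++ sketch of dual quaternion linear blending, not my actual code):

    #include <cmath>

    struct Quat     { float x, y, z, w; };
    struct DualQuat { Quat real, dual; };  // real = rotation, dual encodes translation

    // acc += dq * weight, with a sign flip so all contributions lie on the
    // same hemisphere (shortest rotation arc) - no trig anywhere.
    void AccumulateDQ(DualQuat& acc, const DualQuat& dq, float weight) {
        float dot = acc.real.x*dq.real.x + acc.real.y*dq.real.y
                  + acc.real.z*dq.real.z + acc.real.w*dq.real.w;
        float w = (dot < 0.0f) ? -weight : weight;
        acc.real.x += dq.real.x*w;  acc.real.y += dq.real.y*w;
        acc.real.z += dq.real.z*w;  acc.real.w += dq.real.w*w;
        acc.dual.x += dq.dual.x*w;  acc.dual.y += dq.dual.y*w;
        acc.dual.z += dq.dual.z*w;  acc.dual.w += dq.dual.w*w;
    }

    // Normalize once, after all weighted contributions are summed.
    void NormalizeDQ(DualQuat& dq) {
        float inv = 1.0f / std::sqrt(dq.real.x*dq.real.x + dq.real.y*dq.real.y
                                   + dq.real.z*dq.real.z + dq.real.w*dq.real.w);
        dq.real.x *= inv; dq.real.y *= inv; dq.real.z *= inv; dq.real.w *= inv;
        dq.dual.x *= inv; dq.dual.y *= inv; dq.dual.z *= inv; dq.dual.w *= inv;
    }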
Posted on 2009-09-25 13:30:48 by Ultrano

Do your animation keyframes only drive rotation and translation (ie, no scale, no shear)?
If so, I can see absolutely no reason to convert to matrices, given that you are not passing matrix data to your shader. You can concatenate the frame transformations as quaternions (orientation) and vectors (translation). Don't use axis/angle orientation, use proper quaternions.
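Concatenating two (rotation, translation) frames without matrices looks roughly like this (a sketch under the no-scale/no-shear assumption above; the types and names are illustrative):

    struct Quat  { float x, y, z, w; };
    struct Vec3  { float x, y, z; };
    struct Frame { Quat q; Vec3 t; };   // per-bone orientation + translation

    Quat Mul(const Quat& a, const Quat& b) {   // Hamilton product
        return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                 a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
                 a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
    }

    Vec3 Rotate(const Quat& q, const Vec3& v) {   // v' = q v q*, no trig
        // optimized form: v + 2*cross(q.xyz, cross(q.xyz, v) + q.w*v)
        Vec3 c{ q.y*v.z - q.z*v.y + q.w*v.x,
                q.z*v.x - q.x*v.z + q.w*v.y,
                q.x*v.y - q.y*v.x + q.w*v.z };
        return { v.x + 2.0f*(q.y*c.z - q.z*c.y),
                 v.y + 2.0f*(q.z*c.x - q.x*c.z),
                 v.z + 2.0f*(q.x*c.y - q.y*c.x) };
    }

    // child-to-world = parent-to-world composed with child-to-parent
    Frame Concat(const Frame& parent, const Frame& child) {
        Vec3 rt = Rotate(parent.q, child.t);
        return { Mul(parent.q, child.q),
                 { parent.t.x + rt.x, parent.t.y + rt.y, parent.t.z + rt.z } };
    }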

Careful painting of skin weights will reduce artifacts due to animation blending.
These most often result from inconsistent skin weights exaggerated by unintended bone influences.

Try not to use too many #influences; you generally DON'T need 4 or more bone influences.
Often 3, or even 2, is enough. More is quite unnecessary.

Posted on 2009-09-25 20:20:51 by Homer
Thanks, now I'm sure I can get rid of Mat4 in the animations :D . I'll get to implementing it at some point; the code already removes all cos/sin/acos/etc instructions from the whole animation, and the final mix looks great ^_^ .
Yeah, the shader code supports up to 4 weights via dynamic branching (ironically, it's just a few more instructions on a G80+ than four Mat4x3*vec3 multiplies).
Simplistic IK for the feet of blended animations now seems much more feasible to implement (via selective blending of pre-baked rotations).
Posted on 2009-09-26 00:23:53 by Ultrano
Happy to help.
I spent six YEARS trying to make skinmesh work, one attempt per year!
They are not the most simple thing in the world, but they are made of simple things.

Today I hooked up the Keyboard Abstraction Layer, it works like a charm.
I tried binding some new keys to existing command codes such as 'terminate application', and proved to myself that the User (of my Engine) can easily map and unmap even the default key bindings without problems.

I also reworked the code for the Console Exec(ute ScriptFile) command so that it resembles a primitive interpreter instead of a pile of switch logic.
That wasn't completed, but one side effect of this work was the creation of a new object called TextFile, which derives from DiskStream, supports a 'GetNextLine' method, and returns meaningful errorcodes.

The Console tracks which textfiles are currently open, so our scriptfiles can recurse into other scriptfiles, but only if those files are not already open - this eliminates the possibility of reentrancy (see the sketch below).
The interpreter does not exist yet, I will break at this time and speak to Biterider about implementing a serious script engine with access to class templates and such.
This will allow the user to write scripts which can manipulate the fields of object instances, and much, much more.
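The guard is roughly this (a C++ sketch of the idea, not the OA32 code):

    #include <set>
    #include <string>

    class Console {
        std::set<std::string> openScripts;   // scriptfiles currently executing
    public:
        bool Exec(const std::string& path) {
            if (openScripts.count(path))
                return false;                // already open: refuse to recurse
            openScripts.insert(path);
            // ... TextFile.GetNextLine loop, interpret each line;
            //     nested Exec calls land back here ...
            openScripts.erase(path);
            return true;
        }
    };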

Script engines are a wonderful and powerful tool, but can easily be abused.
My first advice is to not script anything that happens once per frame.
Their best use is as event sources and event sinks outside of the main loop.
Posted on 2009-09-26 11:36:12 by Homer
Upgrading the Physics Simulator.

CollisionBody class now works like D3D_Mesh and D3D_SkinMesh - it's a Reference object which keeps a collection of its own Instances.

Simulator stores CollisionBody objects, and CollisionBody stores CollisionBodyInstance objects.

The Simulator.CreateInstance method was modified to accept a given (possibly Derived) instance, which is what happens in D3D_MeshManager and D3D_SkinMeshManager.

Several methods of Simulator were modified to take (1) a pointer to an array of instance pointers, and (2) the #instances.
And this is important.
Although instances are internally managed under their ref objects, it is ASSUMED that you are keeping a list of instance pointers OUTSIDE THE SIMULATOR.
This allows the user to easily PARTITION THE SIMULATION by keeping several lists of instances.
I chose to pass pointers to raw arrays of pointers, with a counter, so that the Simulator object remains more friendly to non-OA32 callers... more on that later.

My world is space-partitioned.
What partitioning scheme I used is not relevant.
If I make a list of physics entities that are inside each partition, I can drive the simulation on a per-partition basis, using sublists of CollisionBodyInstance objects.
And if all the instances in a given partition are 'asleep', I can cull that partition from the simulation.

I cannot overstate how important this is, it can VASTLY improve the efficiency of the simulation.
Physics is very cpu-intense stuff, and my algorithms use exhaustive tests of pairs of candidates for collision testing, so the cost grows quadratically with the number of bodies.
The smaller your lists of potentially colliding bodies, the quicker the simulation will run, leaving more processor time for other stuff.
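In rough C++ terms, the calling convention and the per-partition culling look like this (a sketch; the real Simulator is an OA32 object):

    struct CollisionBodyInstance { bool asleep; /* ... */ };

    class Simulator {
    public:
        // (1) pointer to an array of instance pointers, (2) the #instances
        void Step(CollisionBodyInstance** list, unsigned count, float dt) {
            // exhaustive pairwise collision tests over 'count' bodies...
            (void)list; (void)count; (void)dt;
        }
    };

    // The caller owns one instance list per spatial partition, and a
    // partition whose bodies are all asleep never reaches the Simulator.
    void StepPartition(Simulator& sim, CollisionBodyInstance** list,
                       unsigned count, float dt) {
        for (unsigned i = 0; i < count; ++i)
            if (!list[i]->asleep) {
                sim.Step(list, count, dt);   // at least one body is awake
                return;
            }
        // all asleep: this partition is culled from the simulation
    }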


Posted on 2009-09-27 00:42:00 by Homer
Doing more NNAI research, man I cannot wait to put together a training ground and unleash two sets of AI critters - zombie antagonists, and screaming human prey!
The time is coming!
Posted on 2009-09-27 02:51:16 by Homer
As mentioned, I've been spending a few days cleaning up the object associations in order to cleanly implement the physics engine.
Here's a picture that roughly describes the associations between the local Player object and the GameEngine.
Attachments:
Posted on 2009-09-29 08:17:44 by Homer
I've chosen an interesting way to bring together the Visual and Physical components of the Player.

As you can see in my previous post, the Player class is Derived (inherits) from the CollisionBodyInstance class.
That means it can be driven directly by the Physics engine, which like our other Manager classes, supports the managing of user-supplied (and possibly Derived) instances of any managed reference object.

We can also see that the Player class embeds the SkinMeshInstance class.
The embedded skinmesh instance is managed by SkinMeshManager.
That means we don't ever need to animate or draw the player.... the manager will do it.

The Player class applies User Input to manipulate the Physical state, and ensures that the Visual object remains synchronized to the Physical object.

The physics engine drives the position and orientation of the Player, and as mentioned, animation and rendering is automatic, since the Visual object is also Managed.
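In skeleton form, the arrangement is roughly this (paraphrased C++, not the actual OA32 class definitions):

    struct Transform { float pos[3]; float rot[4]; };

    class CollisionBodyInstance {   // driven by the physics engine
    public: Transform physical;
    };

    class SkinMeshInstance {        // animated and drawn by SkinMeshManager
    public: Transform visual;
    };

    class Player : public CollisionBodyInstance {  // physics drives the Player
        SkinMeshInstance skin;                     // manager animates/draws it
    public:
        void Sync() { skin.visual = physical; }    // keep Visual == Physical
    };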

The less that the application programmer needs to do, the better.
Posted on 2009-09-29 10:15:15 by Homer
Now things get a little more interesting, because we need to start culling stuff.
For example, we don't want to animate every skinmesh instance in the world on the client.
Most of them are not even NEARLY in view.
But we can't just ignore them, because D3D doesn't let us Set the AnimationController time.
We can only advance by some amount.
So we need to track, per UNRENDERABLE SKINMESH INSTANCE, how long the instance has been offscreen, and account for that time when the instance DOES become visible.
OR, we could keep track of how long we've been rendering altogether, and advance by THAT much when the instance is newly visible. See how much more efficient that is?
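The second scheme boils down to this (a sketch; 'controller' stands in for the D3DX animation controller, with its AdvanceTime call commented out to keep the snippet self-contained):

    struct AnimInstance {
        double lastAdvanced = 0.0;   // global time at our last advance
        void Tick(double globalTime, bool nowVisible) {
            if (nowVisible) {
                double delta = globalTime - lastAdvanced;  // includes all
                                                           // offscreen time
                // controller->AdvanceTime(delta, NULL);
                lastAdvanced = globalTime;
            }
            // while invisible we do nothing - one advance catches up later
        }
    };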

And we can't just drive our camera (or its parent) if we are in a multiplayer game.
The client needs to tell the server what 'player controls' are active so that the Server (not the client) can determine adjustments to the physical state of the whole world, and then alert the player(s) about changes in the physical state of the world.
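The message the client sends could be as small as this (a hypothetical wire format, purely illustrative):

    #include <cstdint>

    #pragma pack(push, 1)
    struct PlayerControlsMsg {
        uint32_t sequence;    // for ordering / acknowledgement
        uint8_t  playerId;
        uint8_t  controls;    // bitmask: forward, back, left, right, jump...
        int16_t  yaw, pitch;  // quantized look direction
    };
    #pragma pack(pop)
    // The server applies these to its authoritative physical state, then
    // broadcasts the resulting changes back to the players.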

As I said, things are more interesting now.
I really need some way to interface the "game logic" to my game engine, rather than hardcoding it.
The obvious way is through a set of standard application interfaces, such as COM.
Perhaps there is a better way through carefully marshalled use of an external script engine.

Posted on 2009-09-30 09:47:13 by Homer
Well, I suppose the simple answer is: don't rely on D3DX, but roll your own.
It's always going to be a tad difficult to have completely arbitrary animation relative to time... Your animation will usually consist of a set of interpolation control paths. So when you specify a random time, you have to figure out which part of the path you are in before you can decide what and where to interpolate to get the exact position from the control points.
So to avoid completely reinitializing the interpolator every time, you usually want some sort of stepping mechanism.
At the very least, you could do a simple check to see if you are still in the same bit of control-path as you were last time (simple bounds-checking should do), so you only re-init when you actually make a switch.
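Something along these lines (a sketch with a scalar track for brevity; real tracks would hold vectors or quaternions):

    #include <cstddef>
    #include <vector>

    struct Key { float time, value; };   // assumed sorted by time, >= 2 keys

    struct Interpolator {
        std::vector<Key> keys;
        std::size_t seg = 0;   // cached segment: keys[seg] .. keys[seg+1]

        float Sample(float t) {
            // bounds check: still in the same bit of control path?
            while (seg + 2 < keys.size() && t > keys[seg + 1].time) ++seg;
            while (seg > 0 && t < keys[seg].time) --seg;
            const Key& a = keys[seg];
            const Key& b = keys[seg + 1];
            float u = (t - a.time) / (b.time - a.time);
            return a.value + u * (b.value - a.value);   // lerp within segment
        }
    };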
Posted on 2009-09-30 10:23:57 by Scali
Today I'm thinking about what an interface between GameEngine and the Application might look like.
I have some experience with this problem due to my work developing OA32's networking engine.
Primarily, we want an Application Event Sink so that GameEngine can push event notifications to the application.

The question is: what kind of GameEngine events should a Game Application care about?
By creating a list of these, I can A) design the aforementioned event sinking interface, and B) create a Design Document to guide further development of GameEngine with the end-user in mind.
So far, development of GameEngine has been heavily based upon the constraints imposed by OA32's D3D application framework, which does support event virtualization via custom WMs, but I don't want to be posting events via the WM queue - I would much rather use a direct callback interface in order to avoid 'WM queue flooding'.
Besides, it's about time I got my sh*t together and put together a battle plan. GameEngine has become large and complex, and I want to hide all that from the end-user. I also want to work to a schedule rather than the ad-hoc approach I've taken so far - that works for small projects, and this is not a small project any more.
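As a first stab, the sink could look something like this (illustrative method names, not a finalized interface):

    class IGameEventSink {
    public:
        virtual void OnFrame(float dt) = 0;             // once per frame
        virtual void OnInputCommand(unsigned cmd) = 0;  // mapped key/mouse code
        virtual void OnCollision(void* bodyA, void* bodyB) = 0;
        virtual void OnNetMessage(const void* data, unsigned len) = 0;
        virtual ~IGameEventSink() {}
    };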

Please note that I DO have working demos, I am not pipe dreaming about some code, I am talking about taking control of the erstwhile-chaotic design and implementation pipeline.



Posted on 2009-10-03 01:44:58 by Homer
Spent one day extending NetComEngine to support all the socket types defined under Windows.
Tested the changes using an updated 'NetComEngine Client Demo', and the OLD 'NetComEngine Server Demo', just to prove that my changes are sane and workable.

As mentioned in another thread, I did this in a unified way, by replacing WSASend/WSARecv calls with WSASendTo/WSARecvFrom calls (the latter work for any kind of socket, the former are only good for tcp streams).
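The unification works because, on a connection-oriented socket, WSASendTo simply ignores the destination address, so one code path serves every socket type (a minimal illustration):

    #include <winsock2.h>

    int SendUnified(SOCKET s, WSABUF* buf, const sockaddr* to, int toLen,
                    WSAOVERLAPPED* ov) {
        DWORD sent = 0;
        // 'to' may be NULL (toLen 0) for connected TCP sockets; for UDP and
        // raw sockets it carries the destination address.
        return WSASendTo(s, buf, 1, &sent, 0, to, toLen, ov, NULL);
    }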

Actually it took a lot less than one day, more like 15 minutes.
I had predicted the changes would take 10 minutes, so I must have been moving too slow :P

Anyway, the result is that OA32's networking engine now supports all defined socket types:
-Raw (IP layer packet interface)
-UDP (Unreliable, Unordered Messages)
-UDP (Reliable, Unordered Messages)
-UDP (Reliable, Ordered Messages)
-TCP (Reliable, Ordered Byte Stream)

And if at some future time Microsoft decides to support new socket types (such as SOCK_SCTP) the engine is ready to support them too.
Posted on 2009-10-05 01:56:30 by Homer
I'll be spending a day or two moving GameEngine into a DLL build.
This will give me a solid reason to implement the Application Event Sink I mentioned previously, as well as making GameEngine immediately more appealing to programmers from all walks of life.
Better still, it allows me to cleanly separate 'game logic' from 'engine proper'.
Posted on 2009-10-06 02:27:20 by Homer
Well, I've finished shovelling GameEngine into a DLL, next I have to write a small exe which uses the DLL, and remove any non-essential democode from GameEngine's core. That's why I estimated it will take a day or two.

The DLL exports just one function, which normally does not return until the user quits the app.
You might wonder where the opportunity for executing user code is, and what the point of the DLL is.
The user's executable supports an event-sinking interface whose methods are called by GameEngine.
It is within these methods that the user implements the 'game logic'.
This will typically involve calls (from the EXE's event sink) back into the DLL (GameEngine methods), and to methods of any GameEngine subengines and other components.
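So the whole contract between EXE and DLL boils down to something like this (placeholder names - the real export is not necessarily called this):

    class IGameEventSink;   // the event-sinking interface mentioned earlier

    extern "C" __declspec(dllimport)
    int GameEngine_Run(IGameEventSink* sink);   // blocks until the user quits

    // In the user's EXE:
    //   MyGameLogic sink;              // implements IGameEventSink
    //   return GameEngine_Run(&sink);  // engine calls back into our methods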

Since GameEngine looks after all aspects of the 'application framework', the user/developer can concentrate on just writing the logic to drive the game, and not worry about anything else.
I keep saying "the less the user has to know and do, the better"...
Posted on 2009-10-06 06:36:34 by Homer
Found a pretty serious problem with OA32's marshalling of 'Window Messages' to virtual event handler methods. Well, I think I did - I hope I am just doing something terribly wrong!

Basically, when a WM fires, OA32's application framework marshals a call to a standard method which takes three parameters (one of which is hidden from the coder).
The first parameter of the call SHOULD be a pointer to the Object Instance.
I have discovered that, instead, it is pointing to the Object Template, ie, TPL_CLASSNAME.
That works fine if the template is stored in the executable file, and/or we are working directly with that template rather than Instancing (which, granted, is a workaround that I could use...)
But if the template is stored in a DLL, and given that the Instance is in heap memory, we will be addressing DLL memory instead of the heap-based memory of the object instance that we made.

This problem does not affect applications that directly use a templated class containing such event handlers - but it does affect applications that instance such classes. We could not use this for a Dialog class, for example, so it goes a lot deeper than just posing a problem for DLL implementations.

I hope Biterider is watching this :)

Now - since each Instance is associated with exactly one Window, the solution would seem to be to store the instance pointer as a WindowLong data member, which can be easily retrieved when marshalling WMs.
Unfortunately, that can't be done until the Window has been created, so ideally this would be done automatically for the user in one of the chain of ancestral Init methods, WinPrimer.Init being the obvious candidate.
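In raw Win32 terms, the fix looks like this (a sketch of the WindowLong scheme, not OA32's actual marshalling code):

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wp, LPARAM lp) {
        if (msg == WM_NCCREATE) {
            // CreateWindowEx's lpParam carries the instance pointer
            CREATESTRUCT* cs = reinterpret_cast<CREATESTRUCT*>(lp);
            SetWindowLongPtr(hWnd, GWLP_USERDATA,
                             reinterpret_cast<LONG_PTR>(cs->lpCreateParams));
        }
        void* instance = reinterpret_cast<void*>(
            GetWindowLongPtr(hWnd, GWLP_USERDATA));
        // marshal the WM to 'instance', the heap object - not the template
        (void)instance;
        return DefWindowProc(hWnd, msg, wp, lp);
    }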

Posted on 2009-10-06 08:47:44 by Homer
Hi Homer
Windows messages are sent to the corresponding object instance using the object's Dispatch method, which is defined, for example, in the SDI object. As you pointed out, the Startup method is only capable of initializing the object template. This is why the Self pointer points to the object template. As soon as you have created a window, the message pump sends the corresponding messages to the created object instance.

In your case, the DLL builds a second template chain, which must be initialized the same way as the main app, using SysInit when the DLL is attached and finally SysDone when it is detached. An example can be seen in \ObjAsm32\Code\COM\COM_DLL.inc.

Since the New and Destroy macros need to know about the object templates and their addresses, it may be necessary to implement specific (de)allocation stubs for the objects residing in your DLL.
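In outline, the DLL's entry point does the same bracketed setup and teardown (a C-style outline; the SysInit/SysDone comments mark where the OA32 macros go):

    #include <windows.h>

    BOOL WINAPI DllMain(HINSTANCE hInst, DWORD reason, LPVOID) {
        (void)hInst;
        switch (reason) {
        case DLL_PROCESS_ATTACH:
            // SysInit: build this module's own object template chain
            break;
        case DLL_PROCESS_DETACH:
            // SysDone: tear the template chain down again
            break;
        }
        return TRUE;
    }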
Posted on 2009-10-06 13:59:53 by Biterider
Thanks for the reply.
Yep I already solved the problem.
For the benefit of those who did not completely understand that response: basically, the problem was determined to be that the Window was being Created within the Startup method, which is a NO-NO.
We should create our window within the Init method.
This 'feature' was present in legacy code, unfortunately.
It won't happen again, at least not to me :P
Posted on 2009-10-07 10:48:49 by Homer