Spent a couple of hours over the weekend changing all my physics and related math support code, replacing all references to 'Vec3' with a new structure called 'Vector', whose definition can be either single or double precision. Also, all references to 'real4' and 'real8' have been replaced with 'SCALAR':


;We can use either single or double precision...
ifndef USE_DOUBLE_PRECISION
  SCALAR typedef real4
else
  SCALAR typedef real8
endif

;The 3D Vector struct used for Physics depends on our chosen Precision
Vector struct
x SCALAR ?
y SCALAR ?
z SCALAR ?
w SCALAR ? ;This element is not normally used by 3D vectors, but acts as Alignment Padding
Vector ends
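
For anyone following along in a higher-level language, a minimal C sketch of the same idea (assuming a USE_DOUBLE_PRECISION switch supplied on the compiler command line, analogous to the assembler symbol above):

/* Compile-time precision switch, mirroring the MASM definitions above. */
#ifdef USE_DOUBLE_PRECISION
typedef double SCALAR;
#else
typedef float SCALAR;
#endif

typedef struct Vector {
    SCALAR x, y, z;
    SCALAR w;     /* unused by the 3D math; alignment padding, as above */
} Vector;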


This is probably overkill, but it does extend double-precision all the way through to the Matrix support code, which will be important for some parts of the new physics engine, especially for calculating the inertia tensor of an arbitrary mesh entity.
I also noted that the Bullet physics engine uses the inertia tensor of a Box to approximate basically all of its shapes, even when the shape has a known tensor. Why? The author claims that A) it doesn't make much difference to the simulation, and B) because his engine uses a 'margin' to expand shapes (used in the EPA fine collision algorithm, I believe), it's difficult to calculate tensors for composite shapes (even for something as simple as a capsule). I don't accept either of these excuses - what the hell does an expanded collision hull have to do with the dynamics of the PHYSICAL BODY???
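
For reference, here is roughly what's at stake, using the standard closed-form tensors from any mechanics text (these are the well-known formulas, NOT Bullet's code):

/* Principal moments of inertia about the center of mass.
   I[0], I[1], I[2] are the moments about the x, y and z axes. */

/* Solid box with full extents w (x), h (y), d (z) and mass m. */
static void box_tensor(double m, double w, double h, double d, double I[3])
{
    I[0] = m * (h*h + d*d) / 12.0;
    I[1] = m * (w*w + d*d) / 12.0;
    I[2] = m * (w*w + h*h) / 12.0;
}

/* Solid cone with base radius r, height h, axis along y, mass m. */
static void cone_tensor(double m, double r, double h, double I[3])
{
    I[1] = 0.3 * m * r*r;                              /* symmetry axis    */
    I[0] = I[2] = m * (3.0*r*r/20.0 + 3.0*h*h/80.0);   /* transverse axes  */
}

If the box is taken to be the cone's bounding box, then for a unit-mass cone with r = h = 1 the box formula gives a transverse moment of about 0.417 versus the true 0.1875 - more than double, which is exactly the kind of error that shows up as wrong tumbling behaviour.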

Posted on 2010-09-12 10:35:04 by Homer
It seems like "most" of the people who work with physics and who understand the math on a far deeper level than myself are not actually programmers.. let's see why.

The typical algorithm for multibody physics with collision detection goes something like this:


#1 - Integrate the entire system of bodies forward in Time by one whole TimeStep, thus calculating a "proposed new state" for the entire system.

#2 - Check for Collisions, and assuming there are any, find the earliest time of impact between any two bodies.

#3 - Integrate the entire system forwards again - but this time, to the earliest time of impact, which is somewhere within the current TimeStep.

#4 - Resolve the Collision and terminate, returning the actual time we advanced by (ie, our zero-based time of impact)... or alternatively, repeat the sequence from Step #1 with whatever "remaining time" we have left in the current TimeStep, rather than a WHOLE TimeStep.
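
To make the steps concrete, here's a minimal C sketch of that conventional loop. Every name in it (World, integrate_all, find_earliest_impact and so on) is a hypothetical stand-in for whatever your engine provides, not anyone's actual API:

typedef struct World World;            /* your body container (assumed)     */
typedef struct { int a, b; } Pair;     /* the two bodies involved in a hit  */

World *world_clone(const World *w);    /* hypothetical helpers...           */
void   world_copy(World *dst, const World *src);
void   world_free(World *w);
void   integrate_all(World *w, double dt);
int    find_earliest_impact(const World *before, const World *proposed,
                            Pair *hit, double *toi);   /* toi in [0,1]      */
void   resolve_collision(World *w, const Pair *hit);

void step_conventional(World *w, double dt)
{
    double remaining = dt;
    for (;;) {
        World *saved = world_clone(w);       /* remember the pre-step state */
        integrate_all(w, remaining);         /* #1: propose a whole step    */

        Pair hit; double toi;
        if (!find_earliest_impact(saved, w, &hit, &toi)) {
            world_free(saved);               /* #2: no impacts - done       */
            return;
        }
        world_copy(w, saved);                /* discard the proposal        */
        integrate_all(w, toi * remaining);   /* #3: whole system, to TOI    */
        resolve_collision(w, &hit);          /* #4: resolve, then repeat    */
        remaining -= toi * remaining;        /*     from #1 with what's left*/
        world_free(saved);
    }
}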



This seems to be a safe, sane, rational algorithm, right?
Well it is, and for LOW NUMBERS OF BODIES, it works just fine.
But it's quite inefficient for higher numbers of bodies - we're re-integrating the entire system whenever a collision is detected, but did we really need to? I don't think so.

In fact, we only needed to adjust the bodies involved in the collision, and then re-test those bodies for SUBSEQUENT collisions resulting from the resolution of the initial collision... allowing the REST of the Bodies in the simulation to theoretically cruise to the end of the timestep, as they would have if no collisions had occurred. Steps #1 and #2 remain as they were; the rest of the sequence becomes:



#3 - Integrate the COLLIDING PAIR of Bodies forwards to their earliest time of impact.

#4 - Resolve the Collision, then advance the offending pair of Bodies AGAIN forwards, this time to the END of the current TimeStep (thus calculating a new "proposed state" for these two Bodies) - and then test them for collisions against the entire system (preferably, via accelerated broadphase testing).

#5 - If there is a subsequent collision resulting from our impact being resolved, goto STEP #2
Otherwise, we are DONE - our system is safely integrated to the End of the current TimeStep !!!
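
Here's the matching C sketch of the revised scheme, with the same flavour of hypothetical helpers plus pair-local variants. Note that after the first whole-step proposal, only the offending pair is ever rewound or re-integrated (and for clarity, toi is treated as normalized to the whole step):

typedef struct World World;
typedef struct { int a, b; } Pair;

void integrate_all(World *w, double dt);
void rewind_pair(World *w, const Pair *p, double dt);    /* pair only, back */
void advance_pair(World *w, const Pair *p, double dt);   /* pair only, fwd  */
int  find_earliest_impact(World *w, Pair *hit, double *toi);
void resolve_collision(World *w, const Pair *hit);

void step_pairwise(World *w, double dt)
{
    integrate_all(w, dt);                     /* #1: whole-step proposal    */

    Pair hit; double toi;
    while (find_earliest_impact(w, &hit, &toi)) {   /* #2 (and #5)          */
        double left = (1.0 - toi) * dt;
        rewind_pair(w, &hit, left);           /* #3: the PAIR only, to TOI  */
        resolve_collision(w, &hit);           /* #4: resolve the impact...  */
        advance_pair(w, &hit, left);          /*     ...then re-propose the */
                                              /*     pair to the step's end */
    }
    /* no subsequent impacts: the whole system is at the end of the step    */
}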




Thank you for flying Homer Brainwaves !!

Posted on 2010-09-20 08:33:24 by Homer

There's only one catch with this scheme, and it relates to the "swept tests" that I use as part of my broadphase testing: they work on the assumption that both objects being tested for collision are moving over the same interval of Time. Let's see an example.

We have two objects, A and B, and we consider their motion from Normalized time T0 thru T1.
Object A is involved in a collision at T=0.3, and we resolve that collision, thus object A has T=0.7 worth of time remaining to move AFTER the collision was resolved.
Object B was not going to collide with anything, but since object A has changed direction, we must now perform a test for collision between A from its collision point (T0.3 thru T1) versus B (T0 thru T1).

If we perform a "normal swept test" using the conventional algorithm for that, we will in fact be testing A from T=0.3 thru T=1.3 ... versus B from T=0.0 thru T=1.0

There are several solutions that are immediately obvious (to me).
Firstly, we could (theoretically, not Actually) move A backwards along its new trajectory by T=0.3, which may well place it in a bad position versus other Bodies.
Secondly, we could Scale the derivatives of A by (1.0 - 0.3 = 0.7), for the purposes of the swept test.
This means we really need to INTEGRATE it again just for the test, which we would rather avoid.
Finally, we can attempt to modify the swept test such that the quanta of B remain "normal", while the quanta of A are scaled down appropriately.

I don't know of any example of anyone having tried to do this, but it sounds like a fairly common sort of problem in linear algebra. I am not a very good mathematician, but tonight I will try to modify the formula for a swept test of two spheres, where one sphere has a Time Dilation - effectively a constrained motion relative to the other.
I'm not certain that 'time dilation' is the correct term for this problem; I simply don't know a better or more appropriate way to describe the constrained-time problem.

Perhaps a visual aid is in order.


Posted on 2010-09-21 02:35:00 by Homer

OK, at the simplest level, we have two swept tests which we're required to modify for this 'time bubble' scheme to work.
#1 is the Sphere/Plane Sweep
#2 is the Sphere/Sphere Sweep

For anyone who hasn't yet read this, see Page One and Two of this amazing article: http://www.gamasutra.com/view/feature/3383/simple_intersection_tests_for_games.php

MODIFYING TEST #1: Swept Test for SUBSEQUENT collisions with Planes
Now, please bear in mind that we are working under the assumption that a collision has already occurred during the current timestep, and that we have adjusted the state of the offending Body to the time of impact (Ui).
The first test involves finding the distance from the Plane to the Sphere Origin at times T0 and T1.
In our special case, the time at T0 is NOT ZERO - but the time at T1 is still the end of the current timestep. And C0 represents the position of the body at the collision we already resolved, rather than its state at the START of the timestep. Let's say that d0 and d1 are the respective distances from the plane at the two moments in time that we care about, and r is obviously the sphere radius:

Ui = (d0 - r) / (d0 - d1)
Ci = ((1-Ui) * C0) + (Ui * C1)

The second equation here is a straightforward LINEAR INTERPOLATION of the trajectory.
Note that because d0 is measured at Ts (the normalized time already spent) rather than at zero, the Ui we just computed comes out normalized to the REMAINING window [Ts, 1] - so the interpolation itself needs no change. What we must adjust to account for Ts is the conversion of Ui back into absolute timestep time:

Ti = Ts + (Ui * (1 - Ts))

That wasn't too hard, was it?
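
To make it concrete, a C sketch of the modified sphere/plane sweep - Gomez's test with the Ts remapping described above (helper names are mine, not from any library):

typedef struct { double x, y, z; } Vec3;

static double vdot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 vlerp(Vec3 a, Vec3 b, double u)
{
    Vec3 r = { a.x + (b.x-a.x)*u, a.y + (b.y-a.y)*u, a.z + (b.z-a.z)*u };
    return r;
}

/* c0 : sphere center at Ts (the already-resolved impact time)
   c1 : proposed center at the end of the timestep (T=1)
   n  : unit plane normal, pd: plane distance, so dist(p) = dot(n,p) - pd
   r  : sphere radius; Ts : normalized time already spent, 0 <= Ts < 1
   On a hit, *ti receives the ABSOLUTE normalized impact time in [Ts,1],
   and *ci the center at impact. */
static int sweep_sphere_plane(Vec3 c0, Vec3 c1, Vec3 n, double pd, double r,
                              double Ts, double *ti, Vec3 *ci)
{
    double d0 = vdot(n, c0) - pd;
    double d1 = vdot(n, c1) - pd;
    if (d0 < r)  { *ti = Ts; *ci = c0; return 1; } /* already touching     */
    if (d1 >= r) return 0;                         /* never reaches plane  */

    double u = (d0 - r) / (d0 - d1); /* Ui, normalized to the [Ts,1] window */
    *ci = vlerp(c0, c1, u);          /* interpolation unchanged             */
    *ti = Ts + u * (1.0 - Ts);       /* remap Ui into absolute time         */
    return 1;
}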


Next post we'll look at how to modify the sphere/sphere sweep to account for time dilation.
You should already see a clue given by Gomez in his article, where he states something like "since Time is constant for both bodies, U will have the same value for both bodies". It should be clear already how we will proceed. But I don't want to work on the presumption that one of the two bodies has never been involved in a collision during this timestep - it's possible that BOTH bodies were already involved in collisions, and have different "starting times". This will allow us to consider "shock propagation" examples which would otherwise be HUGELY expensive to resolve (using a typical iterative solver).
This stuff clearly relates to Stacking and MicroCollisions, topics which are avoided by other authors due to the notorious stability issues that can arise (friction can appear to be ignored, for example, if we don't properly handle subsequent collisions within the same timestep).
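
As a teaser for that next post, here's one possible C sketch of a sphere/sphere sweep where EACH body has its own starting time - it realizes the 'scaled quanta' idea by converting both trajectories to velocities in absolute timestep time and clipping to a common start. This is my own formulation of the idea above, a sketch rather than tested code:

#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 vsub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static Vec3 vscale(Vec3 a, double s) { Vec3 r = {a.x*s, a.y*s, a.z*s}; return r; }
static Vec3 vmad(Vec3 a, Vec3 v, double t) { Vec3 r = {a.x+v.x*t, a.y+v.y*t, a.z+v.z*t}; return r; }
static double vdot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* a0/b0: centers at each body's own start time (TsA/TsB - its last resolved
   impact, or 0); a1/b1: proposed centers at T=1; ra/rb: radii.
   On a hit, *ti receives the ABSOLUTE normalized impact time. */
static int sweep_spheres(Vec3 a0, Vec3 a1, double ra, double TsA,
                         Vec3 b0, Vec3 b1, double rb, double TsB, double *ti)
{
    /* constant velocities in absolute timestep time ("scaled quanta") */
    Vec3 va = vscale(vsub(a1, a0), 1.0 / (1.0 - TsA));
    Vec3 vb = vscale(vsub(b1, b0), 1.0 / (1.0 - TsB));

    /* advance both bodies to a common start time t0 */
    double t0 = (TsA > TsB) ? TsA : TsB;
    Vec3 pa = vmad(a0, va, t0 - TsA);
    Vec3 pb = vmad(b0, vb, t0 - TsB);

    /* from t0 onward, time flows equally for both:
       the standard relative-motion quadratic applies over [t0, 1] */
    Vec3 s = vsub(pb, pa);
    Vec3 v = vsub(vb, va);
    double r = ra + rb;
    double c = vdot(s, s) - r*r;
    if (c < 0.0) { *ti = t0; return 1; }   /* already overlapping at t0    */
    double a = vdot(v, v);
    double b = vdot(v, s);
    if (a < 1e-12 || b >= 0.0) return 0;   /* not approaching each other   */
    double disc = b*b - a*c;
    if (disc < 0.0) return 0;              /* closest approach misses      */
    double t = t0 + (-b - sqrt(disc)) / a;
    if (t > 1.0) return 0;                 /* impact beyond this timestep  */
    *ti = t;
    return 1;
}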

Posted on 2010-09-21 22:19:31 by Homer
Semi-related to this topic:
Bullet has released version 2.77, which includes some early OpenCL and DirectCompute physics effects:
http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?t=5681

What I find especially interesting is how they tried to integrate the CPU and GPU routines into a single framework in a very seamless way. They can basically 'hot-plug' CPU or GPU routines into any stage of the physics simulation.
Posted on 2010-09-29 02:08:48 by Scali

That's interesting, since they deprecated the gpu stuff in Bullet 2.75.
Reading back from the gpu is prohibitively slow in a realtime 3D simulation, because the gpu needs to be completely idle, effectively stalling any pending rendering operations. And using the gpu to accelerate physics in a realtime 3D simulation is only viable when the gpu is not doing much - again, not suitable for Bullet's stated purpose as a game-oriented physics engine.

I am sure that the gpu is valuable for heavy-duty number crunching, but Bullet is NOT a high-quality physics engine; its main aims are speed and 'believable' results. So I don't see a lot of Bullet users opting to use the gpu for this purpose.

In regards to Bullet's continued support of the Open Source community, I note that the sourcecode for the CUDA module is not Erwin's work - its copyright notice states that it belongs to Sony Corporation!!
We've learned recently that 'Open-Sourced' has a very rubbery meaning in the corporate sector.
As far as I know, it's the ONLY submission to come from a corporate entity.

In v2.75, this module was located in the Extras folder - is it still in that folder?



Today I'm trying to make hard decisions about NarrowPhase collision detection and contact generation. Bullet typically uses a GJK variant to generate exactly ONE contact per frame, so it can take FOUR FRAMES to generate enough contacts for stable stacking, for example.
This is not so bad, since Bullet uses variable timesteps (the next collision dictates the size of the next timestep), but it's a poor approximation of a true Contact Manifold... it would be better if it attempted to detect ALL contacts during a frame, not just "the deepest contact from the earliest collision pair".
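
To see why it takes several frames, here's a rough C sketch of the kind of incremental four-point manifold cache that a one-contact-per-frame narrowphase implies (hypothetical types and eviction rule - NOT Bullet's actual code):

#define MAX_CONTACTS 4

typedef struct { double px, py, pz; double depth; } Contact;
typedef struct { Contact pts[MAX_CONTACTS]; int count; } Manifold;

static double dist2(const Contact *a, const Contact *b)
{
    double dx = a->px - b->px, dy = a->py - b->py, dz = a->pz - b->pz;
    return dx*dx + dy*dy + dz*dz;
}

/* Called with AT MOST ONE new contact per frame: stable stacking has to
   wait until the cache fills up, i.e. up to four frames for four points. */
void manifold_add(Manifold *m, Contact c, double same_point_eps)
{
    /* refresh a cached point if the new one is (nearly) the same contact */
    for (int i = 0; i < m->count; i++) {
        if (dist2(&m->pts[i], &c) < same_point_eps * same_point_eps) {
            m->pts[i] = c;
            return;
        }
    }
    if (m->count < MAX_CONTACTS) { m->pts[m->count++] = c; return; }

    /* cache full: evict the shallowest point to keep the deepest four */
    int shallowest = 0;
    for (int i = 1; i < MAX_CONTACTS; i++)
        if (m->pts[i].depth < m->pts[shallowest].depth) shallowest = i;
    if (c.depth > m->pts[shallowest].depth) m->pts[shallowest] = c;
}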
And when it's not using a GJK-based narrowphase, it's performing a bruteforce test of ALL VERTICES in a pair of Bodies, ignoring any chance to optimize the testing based on knowledge of the geometry and orientations.
Furthermore, as previously mentioned, Bullet uses the inertia tensor of a Box for ALL SHAPES !!!
This is just wrong, and leads to quite incorrect dynamics for things like Cones (or any shape which is not "well-balanced").
The system I'm working on supports COMBINING rigid bodies into aggregates with accurate tensors.
Bullet is simply not capable of this kind of thing... it's a "BSpace-based" simulator, which expects that its tensor is Diagonal in bodyspace (the center of mass of any Body coincides with its bodyspace origin, and the body is oriented such that its mass is equally distributed over its major axes).
Broadphase testing is accelerated via a dynamic aabb tree, but the heuristic used guarantees that the tree is a LEAFY, BINARY one, with no mechanism for maintaining any semblance of balance; it can and does lead to results that are WORSE than simply bruteforcing every possible pair!
I'm using an Icoseptree heuristic, based on some exhaustive testing and comparison of various possible heuristics... any Node can contain bodies, so we don't end up with large bodies in the leaves along with smaller ones, and when traversing (querying) the tree, we encounter larger proximate bodies earlier in the testing.
And I'm leaning toward using knowledge of the shapes to improve the narrow phase, with an EPA-based iterative solver used *only* on COMPLEX POLYTOPES such as PAIRS of Mesh entities.
For example, to intelligently test for collision of a Sphere and a Mesh, we can transform the Sphere into the BodySpace of the Mesh, use the GJK Support function to find the closest point on the Mesh to the origin of the Sphere, then perform a cheap distance-based test of that point against the sphere.
In this case, we are transforming the more simple shape into the space of the more complex one, but other pairs can most efficiently be handled by other means.
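
A C sketch of that sphere/mesh idea, with the sphere center assumed to be already transformed into the Mesh's bodyspace. The closest-point query is shown as a naive scan over vertices purely for illustration - in the real thing, that is where the GJK support/closest-point machinery goes:

#include <math.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { const Vec3 *verts; int nverts; } Mesh;  /* bodyspace verts */

/* 'center' must already be in the Mesh's bodyspace.
   Returns nonzero on contact; *closest receives the closest point found. */
static int sphere_vs_mesh(Vec3 center, double radius, const Mesh *m,
                          Vec3 *closest)
{
    double best = INFINITY;
    for (int i = 0; i < m->nverts; i++) {
        double dx = m->verts[i].x - center.x;
        double dy = m->verts[i].y - center.y;
        double dz = m->verts[i].z - center.z;
        double d2 = dx*dx + dy*dy + dz*dz;
        if (d2 < best) { best = d2; *closest = m->verts[i]; }
    }
    /* NB: the closest VERTEX only approximates the closest point on the
       surface; a proper implementation refines it via GJK iteration.    */
    return best <= radius * radius;     /* the cheap distance-based test */
}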



Posted on 2010-09-29 03:33:59 by Homer
I'm strongly considering moving all the hardcore physics stuff onto the Game Server, making the server authoritative over collision detection.
The client would only need to make eulerstep-based 'predictions' about relative motions, and would rely totally on notifications from the Server for collision events, which seems fair since the server is already responsible for issuing 'physics state correction' packets for the entities on a client's screen...

This may mean writing SEPARATE PHYSICS ENGINES for the client and server, with the clientside engine being a stripped-down version whose major purpose is to generate nice interpolations of the renderables with respect to Time, noting that rendering framerate is always much higher than physics framerate. The server needs to ship with the game for the purposes of SinglePlayer and Hosted games, but when playing MultiPlayer (which is typical) we don't need this module to be loaded at all.
By offloading more work onto the Server, we reduce the opportunities for CHEATING, as well as freeing up valuable clientside cpu and gpu time for other stuff - like candy for the eyes and the ears ;)

It's also HIGHLY NOTABLE that under this proposed scheme, the Server doesn't actually need to render ANYTHING AT ALL (it doesn't even need an application window, in theory), so we *CAN* take advantage of gpu-based acceleration in this case!


Posted on 2010-09-29 03:45:02 by Homer

Reading back from the gpu is prohibitively slow in a realtime 3D simulation, because the gpu needs to be completely idle, effectively stalling any pending rendering operations.


I'm not sure that is still the case. nVidia did a lot of work on making concurrent kernels work more efficiently, and they also use a unified memory model now.
Aside from that, I think you can avoid reading back from the GPU by simply using the shared pool, letting the GPU output its data directly to system memory. Since physics doesn't require such high-bandwidth output, I guess that would be a very nice option.

And using the gpu to accelerate physics in a realtime 3D simulation is only viable when the gpu is not doing much - again, not suitable for Bullet's stated purpose as a game-oriented physics engine.


Again, see above regarding concurrent kernels.
The key point here is that a GPU has about 10 times the processing power of a high-end CPU.
So that would more or less mean that if you sacrifice 10% of graphics performance, you already match the physics processing power of a high-end CPU (a CPU dedicated to physics, that is - in practice it also needs to drive the other game logic, as well as the graphics, audio and other hardware).

Take that further, and go with 80% graphics and 20% physics, and you've reached the point where your physics are better than possible on any CPU, while still having acceptable graphics performance... etc.

At any rate, nVidia's PhysX work in games so far has already convinced me that GPU-accelerated physics works, and works better than a CPU-based solution (especially since I only have a dual-core... and the only thing it can't really handle is these physics in games).

In regards to Bullet's continued support of the Open Source community, I note that the sourcecode for the CUDA module is not Erwin's work - its copyright notice states that it belongs to Sony Corporation!!


Erwin Coumans works for Sony Corporation. Bullet is also primarily being developed for the PS3, which is also its biggest market. It plays virtually no role in PC games.
Posted on 2010-09-29 05:53:32 by Scali
Your post jogged my memory - I do now recall that Erwin works for Sony. I don't know why that small fact had slipped my mind, other than there being a complete lack of mention of Sony on the Bullet site.
And what you say about the gpu being 10x as powerful as the cpu bothers me - where did you pull that number from? Personal experience, or some benchmark?

Regardless, as I stated in my smaller secondary post, the gpu IS VIABLE, and especially so for games which rely on the Server to provide the 'muscle'. I'll remain dubious about using the GPU for single-player gaming until I can perform my own testing to prove or disprove that it's viable, but clearly it's ideal for a client/server topology.

I'm still inclined to use Spheres rather than Boxes as the broadphase primitive for collision detection, noting that I then adopt a dynamic aabb tree, where each BoundingSphere is wrapped in a box of fixed size... best of both worlds, methinks.
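
In C terms that wrapping is trivial - something like the following, where 'slack' is the tunable margin that lets the sphere move a little before its tree node has to be updated (my own sketch):

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 min, max; } AABB;

/* Wrap a bounding sphere in a slightly 'fat' fixed box for the dynamic
   tree: the box only needs rebuilding when the sphere escapes it, not
   every frame. */
static AABB aabb_from_sphere(Vec3 c, double r, double slack)
{
    double e = r + slack;
    AABB b = { { c.x - e, c.y - e, c.z - e },
               { c.x + e, c.y + e, c.z + e } };
    return b;
}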

And I still lean toward detecting as many contacts per frame as possible, and also handling multiple simultaneous collision pairs within a single frame.

Bullet is used by a more impressive list of game developers than PhysX, for good reasons, but more notably, it is used by Disney Studios for movies !!! In fact, Disney wrote a Bullet plugin for MAYA, to allow Maya users to replace the physics engine when rendering dynamic animated scenes.
I think you'll find that there's no PS3 version of Maya for developers, so they're rendering cutscenes on a PC or Mac, and just using the PS3 to play the movie... not leveraging the PS3 for physics in this case.

For the record, I think that PhysX is absolute junk; there's no way I would consider using it for commercial stuff, and nVidia certainly have not tried very hard there... even if the sourcecode WAS available, I doubt that there's anything in there I'd be interested in.
For me, the main value of Bullet is not the engine, nor the fact that its sourcecode is open, but that it attracts the attention of so many likeminded programmers and gets them talking to each other, trading ideas both on the Bullet forum and even more outside of it. I've learned tonnes more from these simple exchanges of information than I have from any books, whitepapers or sourcecode.
Posted on 2010-09-30 01:41:40 by Homer

And what you say about the gpu being 10x as powerful as the cpu bothers me - where did you pull that number from? Personal experience, or some benchmark?


Those are the theoretical FLOPS ratings.
A high-end 6-core Intel CPU will rate around 135 GFLOPS.
The GTX480 is rated at about 1.3 TFLOPS, and the Radeon 5870 is rated at 2.7 TFLOPS(!).

Also, recently Intel made a study trying to debunk these claims of GPU manufacturers, and their conclusion was "They're only 14x faster than a CPU in most practical situations" (practical being cherry-picked by Intel obviously). After Intel realized what they'd just said, they felt pretty stupid about publishing the study :)
So even Intel agrees, GPUs are much faster than CPUs :)
See here: http://www.tomshardware.com/news/GPU-CPU-Kernel-CUDA-App,10735.html

Regardless, as I stated in my smaller secondary post, the gpu IS VIABLE, and especially so for games which rely on the Server to provide the 'muscle'. I'll remain dubious about using the GPU for single-player gaming until I can perform my own testing to prove or disprove that it's viable, but clearly it's ideal for a client/server topology.


Why? When things like PhysX and now Bullet already show that you can easily combine graphics and physics workloads on the same GPU?

Bullet is used by a more impressive list of game developers than PhysX


Is it? I can't name even ONE game on PC that uses Bullet.
I own a few games that use PhysX though.
I can't name any PS3 games either, because I'm not into consoles anyway... but that's beside the point. On the PC platform, Bullet is used by pretty much nobody, whereas PhysX is used by quite a few development studios, including what is possibly the biggest game engine at the moment: UnrealEngine3.

For the record, I think that PhysX is absolute junk; there's no way I would consider using it for commercial stuff, and nVidia certainly have not tried very hard there...


Can we please drop the fanboy dribble? It's bad enough that I have to listen to this nonsense on enthusiast forums... I would think that assembly programmers were a bit more levelheaded than that... Especially ones that are working with physics themselves.
Posted on 2010-09-30 02:19:40 by Scali
I have already qualified these statements, Scali - you can search and see, if you like.
There are valid reasons for the things I said; it's not just fanboy dribble, especially since I don't espouse any successor, or even a contender. I said it's not good enough and I outlined why, almost two years ago I think.

Posted on 2010-09-30 06:52:18 by Homer
Erwin said he would address the tensor issue if anyone could prove it made a difference - I mentioned cone bodies, which is one of the shapes his engine supports...
If nVidia wanted to impress me, they lost their opportunity with the demos that ship with it.
I saw a couple of hundred spheres interacting, and spheres disappearing from the system because it would cost frames to process them, and I said that is not good enough - and it's not.
Posted on 2010-09-30 07:02:07 by Homer

I have already qualified these statements, Scali - you can search and see, if you like.
There are valid reasons for the things I said; it's not just fanboy dribble, especially since I don't espouse any successor, or even a contender. I said it's not good enough and I outlined why, almost two years ago I think.


And where can I find that qualification?
I doubt it would change my opinion though. I mean, PhysX may not be perfect, but I really haven't noticed huge differences between the three major APIs, being PhysX, Bullet and Havok. I also really don't think that your hobby project will be significantly better than any of them, so I don't really think you're in a position to call any of them 'junk'.
Aside from that, saying "nVidia didn't really try hard" simply shows me that you don't have a clue what PhysX is, and where it came from.
It started as a CPU-only library under the name of NovodeX. Ageia acquired it, added support for their PPU to it, and renamed it to PhysX. Then nVidia acquired it, and added support for their GPU to it. Neither nVidia nor Ageia gave the CPU code all that much attention, but the NovodeX code wasn't all that bad to start with (and at least you can't argue about the lack of SSE, because I haven't seen a single line of SSE code in anything you posted).
nVidia is actually actively trying to improve the CPU code. They've made SSE the default now, with the 2.8.4 SDK, and for 3.0 they're planning automated multithreading of PhysX workloads (currently the library is completely thread-safe, but the thread-management itself has to be done by the developer).
But I would say that the fact that PhysX works well enough for major games, even without multithreading and SSE is a sign that the algorithms themselves are designed pretty well.
Posted on 2010-09-30 12:04:29 by Scali
Regarding SSE: my engine *DOES* support SSE optimizations and multithreading; that part is not open-sourced.
I have stated my intentions to write a COMMERCIAL physics engine, I have clearly defined goals.
The open-source components of my engine will be posted to the ObjAsm library, but I'm keeping a few things to myself, as I believe I've got a few interesting algorithm variants of my own.
I've invited a few of these people to discuss my ideas in public, but they have not yet chosen to do so, and that is not my problem.
I'm quite keen to get some GPGPU-based implementation working, but it's less important to me than the work of improving the algorithms themselves. Only in the past few days has it occurred to me that I can move the entire workload (more or less) onto the server architecture and thus make it work for me (without it costing 20, 10 or any other percentage of gpu time on the client).
The algorithms employed by PhysX are simply not good enough for anything more than 600 objects, under my tests. Now you're going to say that a game usually doesn't need to track 600 physical objects, and that most games only integrate the onscreen entities, yes? Those days are over! That was the state of the art at the GDC 2001 conference (which I attended) - and that's a decade old! You are SO RIGHT about the gpu being a viable option, but I think *NOT* for singleplayer games, and I will not change my mind about PhysX, regardless of my high opinion of nVidia (despite the lack of competition, I actually do like this company and what they do, and wish I had more hours per day to keep up with them).
I have been working with physics ever since that GDC in 2001, and I am still learning every day; I will never stop looking for improvements to the accepted algos. I think we can always do better, and every ten years or so someone proves this to be true. I'm going for my PhD shortly, so you can expect to hear some crazy stuff coming from me over the next few months as I prepare my thesis. Some of my ideas are quite new, others are just variations on existing themes, but I do not see my physics stuff as being some hobbyist work - sorry, you are wrong. I take it seriously, and I aim to make money from it.
Posted on 2010-10-01 06:02:34 by Homer
I am a huge fan of Havok and Crysis engines, by the way.
Erwin worked on Havok before he joined Sony :)
Much respect for this man, despite his relaxed attitude toward accuracy in game-oriented physics engines.

PS: I will make my SSE math function replacement code available too, just not my hard-won work on optimized collision detection and contact generation. I give away 99% of my code, because I believe in open source. But I need to eat too.
Posted on 2010-10-01 06:10:29 by Homer

I'm quite keen to get some GPGPU-based implementation working, but it's less important to me than the work of improving the algorithms themselves. Only in the past few days has it occurred to me that I can move the entire workload (more or less) onto the server architecture and thus make it work for me (without it costing 20, 10 or any other percentage of gpu time on the client).


Well let me point out the obvious then: there's no such thing as a free lunch. Instead of taking up GPU time, you're going to be sending things over the network, having it calculated elsewhere, and then having the results sent back.
That is likely going to cause considerably more latency than any readbacks from videomemory. The end-result is the same: the CPU and GPU are waiting for some work to be completed, and cannot reach higher framerates.

The algorithms employed by PhysX are simply not good enough for anything more than 600 objects, under my tests. Now you're going to say that a game usually doesn't need to track 600 physical objects, and that most games only integrate the onscreen entities, yes?


Hahaha, you don't seem to understand me and my personality AT ALL.
I'm the biggest technology enthusiast and boundary-pushing advocate you could possibly find (I'm an assembly programmer, a REAL assembly programmer. Someone who doesn't stop pushing until he has scraped every last bit of performance from his machine in every way possible).
No, what I'm going to say is that you're absolutely right, we need more detail in our physics.
I'm also going to say that you are absolutely WRONG about PhysX.
On my machine it can manipulate hundreds of thousands of objects in realtime. Which is why it can also accurately model things like soft bodies, cloth and fluids, by using very detailed particle simulations. Something that CPUs simply cannot do in realtime, no matter how optimized your routines are...
Sadly outside of PhysX, nobody seems to do anything other than simple rigidbody and ragdoll physics. That's so 2001.

In fact, pushing boundaries is exactly why I like PhysX. I've been a fan of it ever since they introduced the PPU.

(despite the lack of competition, I actually do like this company and what they do, and wish I had more hours per day to keep up with them).


Well, I think you should at least pay a LITTLE attention to nVidia's Cuda/PhysX work. Because they've already shown some nice examples of what their GPUs can do in single-player games.


I have been working with physics ever since that GDC in 2001, and I am still learning every day; I will never stop looking for improvements to the accepted algos. I think we can always do better, and every ten years or so someone proves this to be true.


I agree, there's always something that can be done better, you should never stop searching, else you're not going to find it.

but I do not see my physics stuff as being some hobbyist work - sorry, you are wrong. I take it seriously, and I aim to make money from it.


Fair enough, but at the very least, it's going to be difficult as a single developer to compete with a team of specialists at nVidia, Intel or Sony.
Posted on 2010-10-01 06:19:30 by Scali
With respect to network latency in physics simulations, see the work of Glenn Fiedler (an Australian game programmer based in LA), who has written extensively on this very subject, and who proved to me that network latency is not the huge bottleneck it appears to be, since the client can predict physics and be sent periodic corrections in order to remain synchronized and to prevent cheating.

In a game simulation where there is an authoritative server, there will always be "heartbeat packets" to keep the clients in sync. Glenn suggested a prediction/correction paradigm; I am willing to extend his ideas with respect to Visibility on the Client. I suggest that A) euler-based predictions are good enough, given that we'll receive at least three corrections per second, and B) the client only cares about what it can actually see, so the Server can perform visibility determination and send (delta-compressed) corrections for the onscreen entities only... I've never heard of anyone else doing anything like this, and I think it warrants closer inspection. The idea came from a cheap and nasty physics simulator called Cyclone, which only integrates the onscreen elements (obviously not suitable for most games).
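
For the curious, a minimal C sketch of the clientside half of that prediction/correction idea (field names and the blend factor are mine, purely illustrative):

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 pos, vel; } BodyState;

/* Between server updates: dead-reckon with a cheap Euler step. */
static void client_predict(BodyState *s, Vec3 accel, double dt)
{
    s->vel.x += accel.x * dt;  s->vel.y += accel.y * dt;  s->vel.z += accel.z * dt;
    s->pos.x += s->vel.x * dt; s->pos.y += s->vel.y * dt; s->pos.z += s->vel.z * dt;
}

/* When a (delta-compressed) correction arrives: blend toward the server's
   state instead of snapping, so the error is absorbed over a few frames. */
static void client_apply_correction(BodyState *s, const BodyState *server,
                                    double blend /* e.g. 0.1 per frame */)
{
    s->pos.x += (server->pos.x - s->pos.x) * blend;
    s->pos.y += (server->pos.y - s->pos.y) * blend;
    s->pos.z += (server->pos.z - s->pos.z) * blend;
    s->vel = server->vel;   /* velocity can usually be snapped outright */
}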


I don't plan on competing directly with any of these companies, because I don't intend to license the physics engine alone to third parties... I'm picking some of the brightest brains on the planet with respect to physics programming (Drasco, Van den Bergen, Mirtich, Shagam, Chaney, Ericson, Johnson and others).
These are the people that are breaking new ground over the past decade, and it is their algorithms that are being adopted by nVidia, Sony and others.
There has been an enormous paradigm shift in physics simulation over the past ten years at the algorithmic level, and it's great stuff. Not all of it is completely original (example: EPA is just an extension to GJK, but solves a completely different problem), however there is a lot of great recent work which makes the work of people like Euler look like it is 300 years old (because it is).
There has not been much REASON to expand on theoretical physics in the past century or so, because existing models were "good enough", and speed was never a consideration like it is today.
It is only since the advent of ROBOTICS that we divided physics into its component fields, for example, calling numerical integration a field of mathematics in its own right.
So "modern physics" can be dated back to around 1968, and I say again its just the past DECADE that we've seen an explosion of interest and innovation in the fields of dynamics and "computational geometry" (another relatively new term!).
We are witnessing the greatest advance in physics for 300 years, its exciting times!
There is still MUCH work to be done, it's wide open for the discovery of new techniques and advancing existing ones. With respect to the paradox of the expert, there's more chance of ME finding such things than those names I mentioned! And it costs me nothing to try, and it's a lot more fun than playing Sudoku :P
Posted on 2010-10-01 08:27:23 by Homer
Just stumbled upon an idea while discussing WebGL with a friend of mine:
The problem with WebGL is that it's too difficult for the average web programmer to use. They may be able to write simple JavaScript, but they have no idea about 3D graphics, the underlying math etc.

And there currently is little or no middleware available for WebGL. I think there may be a business opportunity in developing a physics library for WebGL. Perhaps something for you to consider?
Posted on 2011-07-01 10:27:11 by Scali