That's why it's pointless to count in fps instead of milliseconds here: the nonlinearity, the big numbers, the OS and app logic overhead, etc. Can't be bothered to look for good articles describing this.
FPS can have its use, but more in the 2-120 range I'd say, and still it's important to keep nonlinearity in mind.
You can easily calculate the frame time from fps, so it's no big deal really. (I've seen the fps argument before, and always thought it was nonsensical. I guess it's another internet hype. You're talking to a guy who's been doing gfx for over 20 years; you develop a natural feel for these things over time.)
3000 fps: 1000/3000 = 0.33 ms per frame
5400 fps: 1000/5400 = 0.19 ms per frame
6000 fps: 1000/6000 = 0.17 ms per frame
10000 fps: 1000/10000 = 0.10 ms per frame
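The conversion above is just frame time (ms) = 1000 / fps. A minimal Python sketch of that table (function name is my own, purely for illustration):

```python
# Frame time in milliseconds for a given frames-per-second value.
def frame_time_ms(fps):
    return 1000.0 / fps

for fps in (3000, 5400, 6000, 10000):
    print(f"{fps} fps: {frame_time_ms(fps):.2f} ms per frame")
```

Note how the gap from 3000 to 5400 fps is only about 0.15 ms, which is the nonlinearity at work.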
Doesn't really change the initial impression you got from fps.
Not sure what your point is about OS/app overhead, since this should show EXACTLY that (the GPU was the same in all cases, so we can assume that the actual render time is constant).
The code is identical for all platforms. Windows XP runs much faster than Windows 7 simply because there's less overhead (probably mostly because it doesn't have Aero).
And on OS X we can clearly see how X11 has a lot of additional overhead compared to native OpenGL as well.
In a way it's amazing to see just how fast a GPU has become these days. This scene has more polygons and more complex rendering than an entire Quake level. Yet the performance is determined more by the OS overhead than by the actual GPU rendering time.
But as I said, when you get into more 'realistic' framerates, these differences won't matter much.
For example, if you want to run at 100 fps, your frame time is 1000/100 = 10 ms. A ~0.3 ms margin between the OSes is negligible... Say you get 10.3 ms frametime instead, that would be 97 fps instead. Not a big deal really.
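That arithmetic can be sketched in a few lines of Python, using the same ~0.3 ms margin from above as an assumed fixed per-frame overhead (the helper name is hypothetical):

```python
# Effective framerate when a fixed per-frame overhead is added
# on top of a given target framerate.
OVERHEAD_MS = 0.3  # assumed fixed OS overhead per frame, from the ~0.3 ms margin above

def fps_with_overhead(target_fps, overhead_ms=OVERHEAD_MS):
    return 1000.0 / (1000.0 / target_fps + overhead_ms)

for fps in (100, 1000, 5000):
    print(f"{fps} fps -> {fps_with_overhead(fps):.0f} fps with +{OVERHEAD_MS} ms overhead")
```

At 100 fps the overhead costs you about 3 fps; at 5000 fps the same 0.3 ms cuts the framerate by more than half, which is exactly why huge fps numbers exaggerate the differences.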
However, I've seen cases that were far worse. Take for example my Intel X3100 IGP... It gets 360 fps in DX9 if I use software VP. It gets 185 fps in DX9 with hardware VP. It gets 130 fps with OpenGL using vertex/fragment programs, and it gets about 80 fps with GLSL.
These are differences in overhead you're going to notice. Intel's OpenGL implementation is just very spotty, even on Windows.
But the difference between OS X and Windows here is not such a big deal (at least, on an nVidia card), and that's good to know.