Seems to work.


Works on 10.7/Lion.


Nice.
I've managed to get it to build in Xcode as well now. I believe that Xcode can actually build the .app for you automatically, but so far it only builds the binary successfully, and drops it in the build/Debug or build/Release dir.
Posted on 2011-07-12 19:34:07 by Scali

You can easily calculate the frame time from fps, so it's no big deal really (I've seen the fps argument before and always thought it was nonsensical. Guess it's another internet hype. You're talking to a guy who's been doing gfx for over 20 years. You develop a natural feel for these things over time).
3000 fps: 1000/3000 = 0.33 ms per frame
5400 fps: 1000/5400 = 0.18 ms per frame
6000 fps: 1000/6000 = 0.16 ms per frame
10000 fps: 1000/10000 = 0.10 ms per frame


Yes, you can calculate it, but it's 1/x, and yes, when I see one number I too try to think of the other one.
I acknowledge that you're extremely skilled (no irony here, and I'm not trying to compete in any way). But just because you have developed the ability to convert (and even feel the meaning) does not mean it's not a more relevant way of doing things.

* You can add milliseconds. You can't add fps. *
(or subtract)
Addition is more easily wired in the brain.
It's harder to "see" OS or app loop overhead in an fps number (or any other kind of overhead in the render).
*** You can't say "this optimisation won me x fps". You can say it won you x milliseconds. ***
(or this stack of effects will cost me x)
Papers describing effects and techniques now use milliseconds.

Fps has its use in the 2-120 range because it expresses motion smoothness. It's an important measure once you've finished your app, but during development I think it may be the other way around.
Besides, I still think a scrolling graph showing the last few hundred frame times (or fps if you want :) ) would be a far better indicator than a numeric fps counter that either refreshes a few times a second, or moves so fast you can't see what's going on.
I suspect big modern engines would show quite a lot of awful, ugly spikes (or drops) on such a graph, but that's just me.
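Something like a rolling frame-time history doesn't take much code either. A minimal C++ sketch (class and function names are purely illustrative, and the drawing of the graph itself is left out):

// Minimal sketch of a rolling frame-time history. Each frame you push the
// measured frame time in ms; the buffer keeps the last N samples so they
// can be drawn as a scrolling graph or scanned for spikes.
#include <vector>
#include <cstddef>

class FrameTimeHistory
{
public:
    explicit FrameTimeHistory(std::size_t capacity)
        : samples(capacity, 0.0), head(0) {}

    void push(double frameTimeMs)
    {
        samples[head] = frameTimeMs;          // overwrite the oldest sample
        head = (head + 1) % samples.size();   // advance the ring-buffer head
    }

    // Oldest-to-newest iteration, e.g. for drawing one bar per sample.
    template <typename Fn>
    void forEach(Fn fn) const
    {
        for (std::size_t i = 0; i < samples.size(); ++i)
            fn(samples[(head + i) % samples.size()]);
    }

private:
    std::vector<double> samples;
    std::size_t head;
};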


In a way it's amazing to see just how fast a GPU has become these days. This scene has more polygons and more complex rendering than an entire Quake level. Yet the performance is determined more by the OS overhead than by the actual GPU rendering time.
But as I said, when you get into more 'realistic' framerates, these differences won't matter much.
For example, if you want to run at 100 fps, your frame time is 1000/100 = 10 ms. A ~0.3 ms margin between the OSes is negligible... Say you get a 10.3 ms frame time instead; that would be 97 fps. Not a big deal really.


Exactly. But you're in the 2-120 range.

Anyway, I'm not saying it's a huge deal, but it's still better to know it and use it.
Regards
Posted on 2011-07-12 19:52:08 by HeLLoWorld
I glanced at the thread. Of course you know what I mean. Didn't mean to hijack it with know-it-all off-topic things, hope that's ok.

'night.
Posted on 2011-07-12 20:16:36 by HeLLoWorld

But just because you have developed the ability to convert (and even feel the meaning) does not mean it's not a more relevant way of doing things.


I never claimed otherwise.


* You can add milliseconds. You can't add fps. *
(or subtract)


Which I didn't do.


Fps has its use in the 2-120 range because it expresses motion smoothness. It's an important measure once you've finished your app, but during development I think it may be the other way around.


The fps counter is mainly there for that reason. It's an easy indication of how smoothly my app will be running.


Besides, I still think a scrolling graph showing the last few hundred frame times (or fps if you want :) ) would be a far better indicator than a numeric fps counter that either refreshes a few times a second, or moves so fast you can't see what's going on.
I suspect big modern engines would show quite a lot of awful, ugly spikes (or drops) on such a graph, but that's just me.


I'm not sure about that, to be honest.
I mean, if you have spikes, they are either annoying enough that you don't need some kind of graph to point them out... or they are too small to notice in practice.
In the former case, you probably already know why they're there, and how to solve them (if at all possible... eg if you have to load new content at runtime, you will always have some kind of spike).
In the latter case, who cares?


Exactly. But you're in the 2-120 range.


Why do you think I picked 100 fps?
Anything over 100-120 fps is pretty much irrelevant. In fact, these days, with crappy flat panels, most computers can't even go over 60 Hz anymore, so anything over 60 fps would be irrelevant.
I mean, it's nice as a measure of which OS has the least overhead in OpenGL, but other than that it's completely meaningless whether you get 3000 fps or 10000 fps.
Right now I'm just aiming to make the framework as efficient as possible, then I will build on that.
When I'm doing a complete scene with all effects enabled, I'm only worried about getting smooth framerates. Not the highest possible framerates... as long as the framerate is over ~100, that means you can add more eyecandy. Eyecandy matters, not framerate.


Anyway, I'm not saying it's a huge deal, but it's still better to know it and use it.


I disagree... I can only put one counter in my application (especially if I put it in the window title... but even in the console it'd be harder to follow if a lot of numbers float by... and I can't really be arsed to put an entire separate window up with a graph etc... besides, that will have a considerable effect on performance). FPS is more 'intuitive' to most people (pretty much all games use it too). Besides, I only care about whether my stuff can hit smooth framerates.
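For reference, the kind of once-per-second counter I mean is trivial. A minimal sketch assuming classic GLUT (glutGet(GLUT_ELAPSED_TIME) returns milliseconds since glutInit); the title text and variable names are just illustrative, not necessarily how the actual framework does it:

// Sketch of a once-per-second fps counter in the window title.
#include <GL/glut.h>
#include <cstdio>

static int frameCount = 0;
static int lastUpdate = 0;   // ms timestamp of the last title refresh

void updateFpsCounter()
{
    ++frameCount;
    int now = glutGet(GLUT_ELAPSED_TIME);
    if (now - lastUpdate >= 1000)   // refresh roughly once per second
    {
        double fps = frameCount * 1000.0 / (now - lastUpdate);
        char title[64];
        std::snprintf(title, sizeof(title), "BHM3DSample - %.1f fps", fps);
        glutSetWindowTitle(title);
        frameCount = 0;
        lastUpdate = now;
    }
}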
Posted on 2011-07-13 03:20:01 by Scali


(like, you know, fraps)
(maybe an almost imperceptible overhead but well nothing is free)

I did this years ago on the "engine" I did for mobile phones. I was proud of it! I think I could even see the periodic spikes of my occasional renormalisation square root. Or maybe it was the garbage collector waking up, who knows. I stand my ground :). Microstutters, you know. You don't always really notice them, but they still reduce the quality of the animation.
Besides, for benchmarking purposes, when the engine crawls (or not), it's way more descriptive than average/min/max fps over a dozen seconds.
Posted on 2011-07-13 18:56:10 by HeLLoWorld
Another concern I had: are the milliseconds measured in software the real ones on the display?

First, we know that LCD panels buffer, delay and postprocess frames, which could perhaps introduce jitter.

Second, graphics cards are more and more pipelined and desynchronised.
I read once that the video blit, even when it returns to the app, is now JUST A POINT IN A COMMAND BUFFER processed by the drivers or video hardware.
How precise is that stuff, I ask?
It could just as well deliver bursts of 10 frames, 10 times slower.

I also heard SLI introduced microstutters with a period of 2 frames, or maybe much more.

So I thought of something great:
Like fraps, but instead of displaying an fps counter, you blink a little square black and white in sequence.
Then you put a little device with a high-speed luminosity sensor on that part of the screen, and display a nice logging graph on a little LCD. I bet this would be awesome.
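The blinking-square part could look something like this (a sketch; legacy immediate-mode GL purely for brevity, and the square size and corner are arbitrary):

// Toggle a small corner quad between black and white every frame, so an
// external light sensor aimed at that spot can log when frames actually
// reach the display.
#include <GL/gl.h>

void drawFrameMarker(int frameIndex, int windowWidth, int windowHeight)
{
    float c = (frameIndex & 1) ? 1.0f : 0.0f;   // alternate white/black

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, windowWidth, 0, windowHeight, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glColor3f(c, c, c);
    glBegin(GL_QUADS);                           // 32x32 px square, bottom-left corner
        glVertex2f(0.0f,  0.0f);
        glVertex2f(32.0f, 0.0f);
        glVertex2f(32.0f, 32.0f);
        glVertex2f(0.0f,  32.0f);
    glEnd();

    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}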

Tell me if I should stop hijacking your thread.
Posted on 2011-07-13 19:18:42 by HeLLoWorld

First, we know that LCD panels buffer, delay and postprocess frames, which could perhaps introduce jitter.


I think it's safe to assume that this delay is constant.


Second, graphics cards are more and more pipelined and desynchronised.
I read once that the video blit, even when it returns to the app, is now JUST A POINT IN A COMMAND BUFFER processed by the drivers or video hardware.
How precise is that stuff, I ask?
It could just as well deliver bursts of 10 frames, 10 times slower.


Again, the delay should be pretty much constant. Effectively you may be measuring how quickly the buffer is emptied by the GPU, rather than how quickly it is filled, but that doesn't really matter that much.
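(For what it's worth, if you really want to time the GPU side rather than command submission, a GL timer query can do it. A minimal sketch, assuming a driver that exposes ARB_timer_query (core in OpenGL 3.3) and that the functions have been loaded through your extension loader:)

// Measure the GPU time of a frame with a timer query, as opposed to the
// CPU-side frame time, which only measures how fast commands are submitted.
double timeFrameOnGpuMs(void (*drawScene)())
{
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    drawScene();                         // issue the frame's draw calls
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 elapsedNs = 0;              // blocks until the GPU has finished
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);

    glDeleteQueries(1, &query);
    return elapsedNs / 1.0e6;            // nanoseconds -> milliseconds
}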


I also heard SLI introduced microstutters with a period of 2 frames, or maybe much more.


SLI and CrossFire just suck. But there's not much you can do about it as a programmer. It's all hardwired in the drivers, you have no control over it.

I think what you are looking for is more of a profiler than any kind of frame counter. Something like VTune is more useful for that than any kind of timer you'd write manually.
Posted on 2011-07-14 01:40:54 by Scali
I got rid of the CML dependency for GLUX now.
Posted on 2011-07-14 05:26:45 by Scali

I still have an old Celeron Northwood-based laptop with a Radeon IGP340M. I figured it'd be a nice idea to try and run it there.


Just did a test with the 0.1a release on this old laptop... I had merged the CPU fallback path into the BHM code before releasing, but never actually tested it.
I'm happy to report that the CPU fallback path is still in place and working like a charm (including the full skinning animation, visually pretty much identical to the GLSL and ASM versions, although the lighting is per-vertex rather than per-pixel obviously). I even get 63 fps out of the old machine.
(if anyone has been paying attention to the README.txt, I've also rewritten the required OpenGL functionality bit... This time it only requires multitexturing, anything else is optional... so basically it's OpenGL 1.2 spec I guess).
Posted on 2011-07-19 17:33:53 by Scali
I've had some fun with my OpenGL code...
Firstly I've hacked GLUT so that I could force an OpenGL 3.0+ core profile, which disables any deprecated functionality.
Then I modified my code so that it works only with non-deprecated functionality.

After doing that, the code was portable to OpenGL ES.
So I tried to port it to the iPhone, and here it is:
Posted on 2011-09-07 04:43:21 by Scali
I have come to a decision...
The current BHM3DSample is written with a lot of OpenGL legacy code. The advantage is that it can run on a wide variety of hardware. The disadvantage is that the code cannot run in an OpenGL ES environment, or with an OpenGL 3.0+ core profile.

I have already cleaned out the legacy code for the OpenGL ES version on iPhone.
I will use this version for the next version of the BHM3DSample. I will give up backward compatibility, but the code will be smaller and cleaner than it is now.
The backward compatibility is not really an issue: the current code is already in the SVN, so people can still access it. It is also in the earlier release.
So I will just add a section to the readme file that points people to the older version, if they are having trouble running the current one on their system.

Another thing I will be doing is to make a cleaner separation between GLUT and my own code. Since the iPhone doesn't have GLUT, I had to use an alternative framework there. By making clean entry points for the init() and renderFrame() functions, it will be much easier to adapt the code to run inside any OpenGL wrapper.
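In rough terms, the separation would look something like this (a sketch; only init() and renderFrame() are the actual entry-point names, the GLUT glue around them is illustrative):

// Engine side (no GLUT dependency); definitions live in the engine code:
void init();          // create shaders, load content, set GL state
void renderFrame();   // render one frame of the scene

// GLUT-specific glue:
#include <GL/glut.h>

static void displayFunc()
{
    renderFrame();
    glutSwapBuffers();
    glutPostRedisplay();   // keep rendering continuously
}

int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("BHM3DSample");

    init();                       // engine setup, independent of GLUT
    glutDisplayFunc(displayFunc);
    glutMainLoop();
    return 0;
}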
I may want to ditch GLUT anyway, since the real GLUT is old and abandoned, and the newer implementations I've tried weren't that great.
I was told to check out GLFW, so I might just give that a whirl. Would be a good test to see if it really works without GLUT.

Another thing that the iPhone doesn't have is GLU. I only used the gluPerspective() function, but that was partly because of laziness. There should really be a proper function for that in the GLUX library. GLU and GLUT are just more pieces of legacy that should be stripped from my framework.
Posted on 2011-09-10 16:53:43 by Scali
Yes :)
Posted on 2011-09-11 03:39:08 by Homer
Code is now updated on sf.net.
If you are looking to replace some legacy GL math, you may want to look at some of the code here:
http://bhmfileformat.svn.sourceforge.net/viewvc/bhmfileformat/trunk/BHM3DSample/GL/GLUXMath.cpp?revision=52&view=markup

I have drop-in replacements for glFrustum(), gluLookAt() and gluPerspective(), as well as variations that resemble the D3DX equivalents (for one, using radians instead of degrees... who ever uses degrees anyway? It's also weird from a computer engineering point of view: FPUs only work with radians (as do most programming languages)... using degrees means you will need to convert to radians at some point, which is just useless extra overhead).
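As an illustration, a radian-based gluPerspective() replacement boils down to something like this (a sketch of the general approach; the actual GLUXMath code is at the URL above and may differ in details):

// Build a column-major perspective projection matrix from a vertical FOV
// in radians, suitable for glLoadMatrixf() or a shader uniform.
#include <cmath>

void perspectiveRad(float fovyRad, float aspect, float zNear, float zFar,
                    float m[16])
{
    float f = 1.0f / std::tan(fovyRad * 0.5f);

    for (int i = 0; i < 16; ++i)
        m[i] = 0.0f;

    m[0]  = f / aspect;                             // column 0
    m[5]  = f;                                      // column 1
    m[10] = (zFar + zNear) / (zNear - zFar);        // column 2
    m[11] = -1.0f;
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar); // column 3
}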

Edit: To clarify the above 'backward compatibility'. That only goes for things below OpenGL (ES) 2.0 spec. So it's still quite compatible. Basically as long as you have GLSL support, it should work.
It still works on my Intel X3100 IGP, which is exactly OpenGL 2.0.
It just won't work on hardware that only supports fixed function or assembly programs. The framework now forces you to use GLSL for everything, and unlike the old framework, mixing GLSL with legacy code is no longer guaranteed to work, and is not recommended.
Posted on 2011-09-11 12:22:04 by Scali
Had a quick look at GLFW... Doesn't seem to solve my main problem with classic GLUT: I cannot specify anything about the GL context it creates. This means I cannot force core 3.0+ profiles.
Some modern GLUT-ports do have that functionality. I had hacked it into the original GLUT source code, for the main reason that GLUT seems to perform much better than freeglut.
Posted on 2011-09-23 17:08:38 by Scali
I have my own version of GLEXT which I grow on demand.
It has little to no legacy junk.
And our framework allows you to request major and minor version :P
There's only two developers so we cater to ourselves, within reason.
So far, so good :)
Posted on 2011-10-18 04:06:51 by Homer
Yes, if you want something done right, you have to do it yourself :)
My problem is in the fact that I was aiming for platform-independent code.
GLUT is available on a wide variety of platforms, so people could easily recompile my code for their platform of choice.
If I make my own version, then I will have to maintain support for all platforms myself (or people will have to write their own port of it).

So in that sense it's a bit of a shame that there doesn't seem to be a nice standard framework that already does what I want.
However, freeglut introduces some extensions that do what I want (just not entirely in the way that I would want it). So perhaps I could just check for the presence of these extensions, and use them if available. And I could adjust my hacked version of regular GLUT to be compatible with freeglut's API, but without freeglut's apparent performance issues (as mentioned earlier: http://www.asmcommunity.net/board/index.php?topic=29830.msg211342#msg211342).
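The freeglut extensions in question are presumably glutInitContextVersion()/glutInitContextProfile() from freeglut_ext.h; requesting a core profile through them looks roughly like this (a sketch; the version numbers are just an example):

#include <GL/freeglut.h>

int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitContextVersion(3, 2);               // request an OpenGL 3.2...
    glutInitContextProfile(GLUT_CORE_PROFILE);  // ...core profile, no deprecated calls
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("core profile test");
    // ... set up callbacks and enter glutMainLoop() as usual ...
    return 0;
}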
Posted on 2011-10-19 05:40:19 by Scali
Sounds workable :)
I've just implemented a scouring parser to snarf all the inputs to the VS against a runtime-extendable list of typedefs, rawkin it!
We're looking at the possibility of using Behavior Tree logic to drive the shader execution pathway! And stuff!
Posted on 2011-10-21 09:51:19 by Homer
The BHM3DSample is now also ported to Android:
http://scalibq.wordpress.com/2012/02/04/porting-bhm3dsample-to-android-some-well-a-lot-of-stressful-development/

Posted on 2012-02-04 11:45:52 by Scali
Good work! - what context? lol
Posted on 2012-02-08 23:26:53 by Homer
Oh, I think the blog actually mentions that... OpenGL ES 1.0, since the emulator does not support anything else.
But I get your point from the other thread.
And OpenGL ES is a lot better than regular OpenGL in that respect.
I'll have to port the OpenGL ES 2.0 code as well, but I'd need access to an actual phone to test it.
Posted on 2012-02-09 04:45:48 by Scali