Some questions of a more management-like nature...
Would anyone like to be more actively involved in the development of this project? The eventual goal is an open-source, multiplatform CPU and GPU benchmark (and indirectly also a compiler and platform benchmark, since you can recompile and port the source code), more or less in the style of 3DMark and Unigine Heaven.
Speaking of which, does anyone know a good way to develop a simple GUI in a multiplatform way? Just a simple configuration/info dialog... more or less like what my D3D engine has... so you can select simple things like resolution, AA/AF settings and that sort of thing (and present benchmark results back to the user).
One idea I have is to use .NET/Mono. I'm not sure how well Mono would integrate with a native application, though... but if all else fails, the GUI could be separate from the actual benchmark. So, for example, the GUI writes the configuration into an XML file and then executes the benchmark, passing the XML as a parameter. The benchmark then writes the results into an XML file, which the GUI can read back.
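To make the idea concrete, a minimal sketch of what such a handoff file might look like (all element and attribute names here are made up for illustration, not from any existing tool):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical config written by the GUI, passed to the benchmark -->
<benchmarkConfig>
  <resolution width="1920" height="1080" fullscreen="true"/>
  <antialiasing samples="4"/>
  <anisotropicFiltering level="16"/>
</benchmarkConfig>
```

The benchmark could then write a similar results file (average/min/max framerate, score) which the GUI parses and presents back to the user.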
You can make a separate config GUI app which simply saves an INI file, or something like that and then start your main app.
It's apparently not common knowledge that Mono supports VB, so I thought I'd mention it.
But to answer your actual question, I quote:
Windows.Forms
This is part of the standard Mono distribution.
This is a work-in-progress effort to implement the Microsoft System.Windows.Forms API. The API is not complete enough for many tasks, so developers (in particular third-party developers that provide custom controls) resort to use the underlying Win32 subsystem on Windows to provide features which are not exposed by Windows.Forms.
In some cases it is necessary to provide support for the Wndproc method and the various messages associated with it, and support extra functionality available in Win32 through P/Invoke, as well as exposing and sharing various kinds of handles (fonts, drawing contexts, drawing surfaces, windows).
Apparently, not very well... even on Windows, it can't support Windows GUI stuff very well. Depressing, but it's early days.
Yes, I know WinForms has its limitations (which will probably never be resolved, since it seems to have been superseded by WPF, which doesn't work on Mono at all, afaik)... but I might be able to get away with it. I don't need much... just a few textboxes, some checkboxes, perhaps some dropdown lists.
Alternatively I could use something more cross-platform... but what? Qt? wxWidgets? Java?
Come to think of it... Java wouldn't be all that bad, really. I'd have a good IDE to design the UI visually (NetBeans), and I know that it will look the same on every OS. I guess it'd be better than .NET/Mono at any rate.
Any other ideas?
GTK.NET? :)
Hum... I don't know... somehow I get more of a warm fuzzy feeling with Java than with GTK.NET, but that could just be me.
I think the Oracle takeover has not quite sunk in yet either... else I would be getting cold chills from Java now.
So yeah... you really think I should go GTK.NET? Or maybe just C++ GTK? Not sure how nicely that works on Windows... as long as it's portable, it's fine by me. It doesn't have to be bytecode-based like .NET or Java.
Haven't looked at GTK.NET myself, but afaik it should work just fine on Windows - and thus it should be a viable option for portability. Whether it sucks, I don't know :)
Well, to be honest, I'm more concerned about the .NET part than about the GTK part.
I did some more work on the XML processing, and now it can play back the original Croissant 9 scenes:
http://www.youtube.com/watch?v=0stuufHTAyI
I now need to make the materials and textures OpenGL-compatible in some way.
Then I'll have to make a music player, and implement some basic benchmarking functionality... and then I can go from there... making the lighting and shading more up-to-date.
Haven't done too much work on any of the 3D stuff lately... but I did port the OpenGL replay code back to the D3D framework as well.
It was nice to see that the D3D framework is now API-agnostic enough that after I got the scenes playing in D3D11, recompiling for D3D10 and D3D9 worked right away.
I'd still like to abstract some things a bit better. I may decide to abstract it enough so that the OpenGL and D3D frameworks can be merged. However, currently the different matrix order is still a bit of a problem. I could avoid this problem completely if I were to work 100% shader-based. In that case, the matrices will be little more than variables passed to a shader, as far as the API is concerned, and I can do all matrix-math the same for all APIs (with the exception of the dreaded difference in clipspace coordinates, but that may be solvable inside the shader as well).
However, so far I'm leaning more towards just having 'API-oriented' mathematics for both D3D and OpenGL, and abstracting it at a higher level, e.g. by having a set of functions which will generate/process matrices in the right way for each API.
Before I do any of that, I'd need to make my OpenGL framework a tad more object-oriented. After all, D3D is fully object-oriented, so you will need access to the D3DDevice object and such, for a lot of things. Since OpenGL is a global state machine, you don't need to pass a pointer around. But I might just do that anyway, because it makes the two codebases more alike. And it will make the OpenGL codebase more consistent as well, since I have wrapped most OpenGL resources into reference-counted classes, but not the main OpenGL state.
Anyway, it's fun to see that D3D9 is still king when it comes to replaying these simple old scenes. Apparently after all these years, it is still the fastest API around.
Right, I've done a bit of work on cleaning up both the OGL and D3D code, and made a start with wrapping the GL/GLU/GLUT code into an object. In theory I can now make the basic interfaces for my objects the same for OGL and D3D.
But from a more practical point-of-view, the following things need to be implemented, for a complete demo:
- Ogg replayer
- Material/texture/shader managing system
- Shadowmapping implementation (okay, the original Croissant 9 didn't have that, but I did add it to my Java engine later, and I also had it in my D3D9 engine. I think it's time for a more up-to-date version now, for both OGL and D3D).
I'll probably think of more as I go along, but for now it should give me some direction.
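For the shadowmapping item, the core of a basic implementation is just a depth comparison in light space; a minimal GLSL fragment-shader sketch (uniform/varying names are made up, and a real version would add filtering):

```glsl
uniform sampler2D shadowMap;
varying vec4 lightSpacePos;   // vertex position * light view-projection

float shadowFactor()
{
    vec3 p = lightSpacePos.xyz / lightSpacePos.w;  // perspective divide to NDC
    p = p * 0.5 + 0.5;                             // NDC -> [0,1] texture space
    float stored = texture2D(shadowMap, p.xy).r;   // depth the light saw
    float bias = 0.002;                            // avoid shadow acne
    return (p.z - bias > stored) ? 0.3 : 1.0;      // in shadow -> darken
}
```

The D3D version would differ only in the texture-space mapping (and the clip-space z range), which is exactly the kind of thing the higher-level abstraction should hide.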
I think the projection matrix is pretty much the only matrix that is not the same in D3D and OpenGL, because of the different 'device space' coordinate systems.
I'd like to add another difference: the texture coordinate space differs as well. The direction of the V-axis (or rather the T-axis, as the OpenGL world prefers (S,T) naming over (U,V), I believe) is inverted. So basically, where you'd use V in D3D, you'd use 1.0 - V in OpenGL, and vice versa.
Right, slight change of plans.
Firstly, I have teamed up with an old friend of mine, who is a VJ. He is working on his own VJ software, as he is not satisfied with the software on the market today. He wants to be able to make non-linear projections, but without having to pre-render everything for a specific configuration. So the solution to that is mapping the video content onto 3D meshes.
We've decided to try and adapt my 3D renderer to that need. I have made a small proof-of-concept where I use DirectShow to stream a video onto a texture and map it onto a rotating plane.
He will be developing the user interface for the VJ tool, and I will assist with the renderer and other backend stuff (we will probably want to have a distributed network system, so we can add extra machines to drive more screens/projectors).
I also spoke to Maali (the artist who did the original design and content for the Artnouveau and Croissant 9 demos), and he would like to do a new demo. But what he would like is a more WYSIWYG-approach to demo making. A sort of demo editing tool (more or less like demopaja, or even flash, if you will).
I hope to combine the two efforts, by designing the VJ user interface in a way that is flexible enough to also cater for the needs of demomaking. The two should have a reasonable overlap.
Ogg replayer works again. Pretty funny to hear the original soundtrack while the original scenes are playing. It has a very familiar feeling to it :)
Ogg replayer stopped working again. Not my fault (although...)... I had updated to the latest FMOD Ex, and for some reason it now complains about the format of my croissant9.ogg file. The file hasn't changed since 2003; it's still the original file used in the demo. It also still plays in Windows Media Player, so it's not a configuration issue.
I've tested with another .ogg file, and that one played. I could also play .mp3 files. So at this point it looks like the new FMOD introduced an incompatibility with certain .ogg files.
I guess I'll have to file a bug report.
The past few days I've been trying for a different kind of 'remake'...
A quick-and-dirty port of the original Java code to Android. Since Android is not Java (it only implements a subset of the Java API), I still had to rewrite quite a bit of code. Both the audio and the framebuffer-related code had to be rewritten from scratch.
Once that was done, I could run the software renderer on top of it with only a few changes.
After some more hacking, I was able to play Croissant 9 on my Samsung Galaxy S Plus, with only a few parts still being buggy: http://youtu.be/JnZ8aLJbCGA
(Apparently Android does not load the textures for the particles correctly, so you just get some random blended squares instead of nicely alpha-blended images.)
The performance is quite depressing, I think.
The demo was originally written for a ~1.5 GHz single-core PC with 512 MB. I tried it on my old Celeron 1.6 GHz (Pentium 4-based) laptop with 512 MB (minus 16 MB video memory), and it still runs quite smoothly on that, at the original resolution of 512x256.
The video I took was of a 1.4 GHz phone with 512 MB, and I had it running at 400x200 resolution.
Despite the lower resolution, it runs considerably worse. Aside from the average framerate being quite low, it also has very inconsistent framerates, and it clearly freezes every 1.5-2 seconds. That is probably the garbage collector being run.
For the Java version I already did a lot of optimization for best possible garbage collection, so I don't think there's a lot of room for improvement there. I think it's mainly the Dalvik VM that isn't doing a very good job, not even compared to the JVMs we had on Windows 10 years ago (when this demo was developed).
I also had to manually increase the stack size of the render thread to 128 KB for the Android version.
I also tried to port Artnouveau, but that demo uses a LOT of textures/backgrounds, and so far I have been unable to fit it into 512 MB.
Aside from that, I have lost some of Artnouveau's code over the years, so at this point the few scenes that I can fit into memory look a lot less like the original demo than the Croissant 9 conversion does so far.