And over 85% of Steam users have an OpenGL 3.2-compatible system :P
Posted on 2010-02-01 13:00:39 by Ultrano
.... <- sound of crickets chirping.
Posted on 2010-02-01 13:11:16 by Scali
*silent clap* :)
Posted on 2010-02-01 13:43:19 by Ultrano
So, how do you do a custom MSAA resolve in DX10.0 :S?
(I mean multisampled depth readback.)
Posted on 2010-02-01 18:09:35 by Ultrano

So, how do you do a custom MSAA resolve in DX10.0 :S?
(I mean multisampled depth readback.)


That's what DX10.1/11 is for. I don't understand the question.
Posted on 2010-02-02 01:50:13 by Scali
Hmm, aren't there well over 200 million DX10.0 cards? (Around 25 million are sold per quarter worldwide.) And nVidia has over 66% market share. Only the upcoming GF100 supports 10.1, thus you'll limit your audience to ATi users for now, just because of two notable features. This seems contrary to the development you've been doing recently on merging your 9/10/10.1/11 content paths.
Multisampled depth access is quite an optimization, IMHO, for e.g. a light pre-pass (I like flexibility in content delivery).
Posted on 2010-02-02 14:12:21 by Ultrano

Hmm, aren't there well over 200 million DX10.0 cards? (Around 25 million are sold per quarter worldwide.) And nVidia has over 66% market share. Only the upcoming GF100 supports 10.1, thus you'll limit your audience to ATi users for now, just because of two notable features. This seems contrary to the development you've been doing recently on merging your 9/10/10.1/11 content paths.
Multisampled depth access is quite an optimization, IMHO, for e.g. a light pre-pass (I like flexibility in content delivery).


Blahblahblah...
There's a difference between the API and the hardware requirements.
The DX10.1 or DX11 API can be used on DX10.0 hardware; DX11 even works on DX9 hardware. I ran my DX11 engine on a Radeon X1800 card in downlevel 9.3 mode.
nVidia supports the use of multisample readback on their hardware.
Yet again you are caught on a lack of knowledge about Direct3D.
Posted on 2010-02-02 14:23:35 by Scali
Hold your guns, I specifically asked HOW it is done. Any link?
http://www.humus.name/index.php?page=3D&ID=81
There are a lot of discussions online about this, and all concluded that multisampled depth is inaccessible in DX right now. Those posts may be old, obsolete info (like your knowledge of GL), but there are no indications that things have changed.
Posted on 2010-02-02 15:02:59 by Ultrano

Hold your guns, I specifically asked HOW it is done. Any link?
http://www.humus.name/index.php?page=3D&ID=81
There are a lot of discussions online about this, and all concluded that multisampled depth is inaccessible in DX right now. Those posts may be old, obsolete info (like your knowledge of GL), but there are no indications that things have changed.


You didn't ask; you had already tried and convicted me ("Only the upcoming GF100 supports 10.1, thus you'll limit your audience to ATi users for now", which is wrong anyway, as nVidia does have a few DX10.1 GPUs on the market, such as the 210, GT220 and the 300 series). Now don't go crying because you blew your mouth off before bothering to check the facts.
Again here, you judge my knowledge of GL... I don't even see the relevance (did I ever claim to know anything about OpenGL? Are we even discussing OpenGL?).
But if you MUST know, via NVAPI you can enable most DX10.1 features on all nVidia DX10.0 hardware. You can just write DX10.1 code and use SM4.1 shaders.
I guess you missed the commotion that nVidia caused by implementing this form of MSAA in Batman: Arkham Asylum, but enabling it only on nVidia hardware.
Far Cry 2 is another well-known game that uses this nVidia extension, but unlike Batman, there it also works on any regular DX10.1 hardware.
http://www.anandtech.com/video/showdoc.aspx?i=3334&p=7
We know that both G80 and R600 supported some of the DX10.1 feature set. Our goal at the least has been to determine which, if any, features were added to GT200. We would ideally like to know what DX10.1-specific features GT200 does and does not support, but we'll take what we can get. After asking our question, this is the response we got from NVIDIA Technical Marketing:

"We support Multisample readback, which is about the only dx10.1 feature (some) developers are interested in. If we say what we can't do, ATI will try to have developers do it, which can only harm pc gaming and frustrate gamers."


You seriously need to curb your ego.

This post may not be too clear about it: http://www.asmcommunity.net/board/index.php?topic=29455.msg208792#msg208792
But still, it alludes to the fact that I used only DX10.1 back then, having already dropped support for vanilla DX10.0, even though I was still on my GeForce 8800GTS (and Intel X3100).
If you would look at the code I released at that time, you'll see that it is indeed DX10.1 code, not DX10.0.
My current DX11 codebase will try to run on DX9 hardware if possible. It should run on any SM2.0 hardware or better. With DX11 covering SM2.0+ hardware on Vista/Windows 7, DX10/DX10.1 no longer serves a purpose for me, other than for Vista users who haven't updated their system to DX11 yet. The actual DX9 codepath only serves a purpose for Windows XP. On Vista/Windows 7, I would recommend using DX11.
Posted on 2010-02-02 15:21:09 by Scali
My silence is deafening.
I've been busy :)
I will be moving away from Microsoft and DirectX.
Before I learned DX, I learned OpenGL.
Now I am interested in cross-platform programming without the extra step of porting source code.
OpenGL is ready for me, and DirectX never will be, as it is tied to one company's operating systems.
I feel that all my work in Direct3D etc. has been a good learning experience, as the programming style is not very different; however, I have wasted enough time on this roundabout.
I feel like I am coming home.
Posted on 2010-02-11 04:45:45 by Homer
That makes two of us then :)
I made a start earlier this week, with an OpenGL engine to act as an example for the BHM file format project. Since BHM itself is intended to be completely neutral to OS or hardware, it makes sense to write multiplatform examples for it as well. Aside from that, I think D3D may be a tad too complicated to serve as just an example.
I got distracted by the recent problems with the Intel Q35 chipset, but now I can go back to OpenGL full-time. I will probably be doing some, if not all, of the actual development on my FreeBSD system.
Posted on 2010-02-11 06:22:09 by Scali
I'd like to know how you guys are thinking about managing the OpenGL extensions?

- implement multiple versions of functions that do and don't use them, and set the correct pointer at run-time?
- or use an OO approach: polymorphism with child classes whose virtual methods are chosen based on extension availability?
- or just use conditional logic and have one version of the function that branches?
- or do you use a clever approach I haven't considered?
Posted on 2010-02-12 12:48:54 by r22
I lean toward the OOP approach, however a centralized enumerator has merit too (FSM).
Posted on 2010-02-12 16:59:03 by Homer

I'd like to know how you guys are thinking about managing the OpenGL extensions?

- implement multiple versions of functions that do and don't use them, and set the correct pointer at run-time?
- or use an OO approach: polymorphism with child classes whose virtual methods are chosen based on extension availability?
- or just use conditional logic and have one version of the function that branches?
- or do you use a clever approach I haven't considered?


Now that my OpenGL framework design is pretty much 'final' (see this thread, it's open source, so you can freely browse my OpenGL code and borrow whatever you like from it), I can answer that from my perspective at least.
I mostly use an OO approach, where different child classes implement the same functionality using different extensions.
Obviously there will have to be some condition logic somewhere which creates the correct object based on the available extensions.
You also need some conditional logic inside functions here and there... namely, for some operations you specifically need to DISABLE certain extension state when the extension is available... and obviously that disable call is neither possible nor needed if the extension isn't supported in the first place.
Since that is generally just one or two lines of code, I haven't bothered creating completely separate functions or objects for such cases.
Posted on 2010-05-02 08:30:02 by Scali
Thanks, I'll take you up on that as soon as I'm done with this "writing my own assembler" circus.
It's something I feel I need to do.
Posted on 2010-05-14 11:11:28 by Homer