Then in such a case, as I was trying to get across earlier, you implement each of those "massively different" parts as individual business classes which overload a generic that defines the requirements of any interface components and the outputs (if any) of the storage components.
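Roughly like this, as a sketch in C++ (the names here are invented for illustration, not from any real engine):

    // Hypothetical sketch: the 'generic' defines what the interface components
    // require and what the storage components output; each "massively different"
    // backend is just another business class implementing it.
    #include <cstddef>
    #include <vector>

    class IGeometrySource
    {
    public:
        virtual ~IGeometrySource() {}
        virtual const float* Vertices() const = 0;   // what the interface components need
        virtual std::size_t VertexCount() const = 0;
        virtual void Commit() = 0;                    // hand the data to whatever storage backs it
    };

    // Software renderer: geometry lives in system memory, no extra rules.
    class SystemMemoryGeometry : public IGeometrySource
    {
    public:
        const float* Vertices() const { return m_data.empty() ? 0 : &m_data[0]; }
        std::size_t VertexCount() const { return m_data.size() / 3; }
        void Commit() {}                              // nothing to do
    private:
        std::vector<float> m_data;
    };

    // Hardware T&L: same generic, but Commit() fills an API-managed buffer.
    class HardwareBufferGeometry : public IGeometrySource
    {
    public:
        const float* Vertices() const { return m_staging.empty() ? 0 : &m_staging[0]; }
        std::size_t VertexCount() const { return m_staging.size() / 3; }
        void Commit() { /* copy m_staging into a vertex buffer through the API */ }
    private:
        std::vector<float> m_staging;
    };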


Well, my point is that you can't know everything about that beforehand.
For example, if you design a software renderer, then all geometry can be stored in system memory, and you have total control over how you store your vertices, topology, meshes etc.
When you go from full software to hardware-accelerated drawing, you still store your geometry in system memory, but you will need to lay it out in such a way that it can be handled by the hardware.
With the introduction of hardware T&L, you no longer have the luxury of storing the geometry yourself. Instead you have to create a storage buffer through the API and play by its rules.
Which all gets back to my point: you can't just design 'the rendering engine'. Different types of renderers will have different rules on what you store, and where, etc.
Before the days of hardware T&L, geometry was never stored in video card memory, and as such, graphics APIs did not provide any storage buffers whatsoever. So it is unlikely that whatever storage classes you had designed for your rendering engine up to that point would have been adequate for hardware T&L without modifications.


Maybe I'm not reading you right, or maybe you're not reading me right. But that's kinda the point of the 3-Tier model. From what you just said, the business tier of your mesh varies from version to version, but as long as you use a well defined generic structure for your business tier, porting between the versions should just be a matter of rewriting the business tier and hot-plugging them as needed.


I guess not, since I'm not arguing against the 3-tier model.
If you look at my rendering engines over the years, you'll find that I have been able to re-use a part of the code and design through various reincarnations, from software renderers to raytracers, OpenGL and various versions of D3D, in various languages, from Java to C++ to asm.
But there are also parts that need to be completely reimplemented.

When you start talking about "stored (or transmitted) data", that's a sign that you need a persistence tier. Say for example you wanted to write a chat server. The chat server would essentially handle the inputs the same way, and it would interface with your system logger or stdout in the same way; but say that a change occurs. Say that some of your users have odd firewall configurations and they can't send certain types of packets. The business tier isn't just a "storage container"; it acts as an abstract translation tier on top of whatever input/output medium you are using. So you could have your clients chatting over HTTP, SSH, FTP, or even custom protocols, but the business tier wouldn't care: it would still receive the same data, because the persistence tier would translate as needed.
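A rough sketch of what I mean, in C++ (names made up, not from any real chat server):

    // Hypothetical sketch: the persistence tier translates whatever protocol the
    // client uses into one plain message type, so the business tier never sees
    // HTTP vs SSH vs custom packets.
    #include <string>

    struct ChatMessage                  // what the business tier works with
    {
        std::string sender;
        std::string text;
    };

    class ITransport                    // persistence tier: one implementation per protocol
    {
    public:
        virtual ~ITransport() {}
        virtual bool Receive(ChatMessage& out) = 0;        // HTTP, SSH, FTP, custom... all end up here
        virtual void Broadcast(const ChatMessage& msg) = 0;
    };

    class ChatServer                    // business tier: identical for every transport
    {
    public:
        explicit ChatServer(ITransport& transport) : m_transport(transport) {}

        void Pump()
        {
            ChatMessage msg;
            while (m_transport.Receive(msg))
                m_transport.Broadcast(msg);                // the chat logic itself never changes
        }

    private:
        ITransport& m_transport;
    };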


I think what you're missing here is that the API already acts as a persistence tier. You are not programming directly on the hardware. The API is already an abstraction of the hardware, and as such it already forces you to store data inside its native objects in a certain way, and have them communicate together in a certain way.
The problem here is that different APIs solve the same problems in a different way. So you need to know how the API works before you can make a good design around it. That was my point. I mean, if you learnt a basic OpenGL engine design at school, and you are required to design a D3D engine at work... do NOT make the mistake of just blindly taking the OpenGL 'textbook' design, and trying to retrofit D3D into it, because it will not work.
This is what I see happening too often. People try to use 'textbook' solutions to problems that may appear to be similar, but the similarity is too superficial.


So I guess what I'm not understanding here is why, given that you create the vertex descriptor as a separate model whose presentation tier is connected to the business persistence tier of the mesh model, you can't just update the persistence tier of the mesh model to accommodate the changes made to the internal format of the vertex descriptor.


Well, I don't think I can quite explain that any more clearly than I already have...
I'll just reiterate:
- In D3D9, you can create a vertex descriptor as a 'standalone' object.
- In D3D10, you can only create an input layout object during shader compilation.
In other words: in D3D9 I can create the object at any time I require it. In D3D10 I *must* have a shader first, and compile it. This creates a completely different use case.
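Roughly, in code (just a sketch, error handling omitted; the device pointers and the compiled shader blob are assumed to come from elsewhere):

    #include <d3d9.h>
    #include <d3d10.h>

    void CreateLayouts(IDirect3DDevice9* dev9, ID3D10Device* dev10, ID3D10Blob* shaderBlob)
    {
        // D3D9: the vertex declaration is a standalone object, no shader required.
        D3DVERTEXELEMENT9 elements[] =
        {
            { 0, 0,  D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
            { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
            D3DDECL_END()
        };
        IDirect3DVertexDeclaration9* decl = NULL;
        dev9->CreateVertexDeclaration(elements, &decl);

        // D3D10: the input layout is validated against the input signature of an
        // already compiled shader, so the shader has to exist (and be compiled) first.
        D3D10_INPUT_ELEMENT_DESC layout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
        };
        ID3D10InputLayout* inputLayout = NULL;
        dev10->CreateInputLayout(layout, 2,
            shaderBlob->GetBufferPointer(), shaderBlob->GetBufferSize(), &inputLayout);
    }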

Yeah, maybe it's my lack of game development knowledge, but is there a reason that you can't just call down to the shader from the vertex model's business view when the mesh tries to read in the vertex model? This would just be a small change to the vertex model from what I'm reading.


Don't worry about it, it was an example, not a problem I haven't solved yet, let alone a problem that I don't know how to solve.
Ah, so it does use Microsoft's 3-tier model. It might be, just a shot in the dark here, that D3D9 wasn't built using the 3-tier design style, which is why Microsoft did a ground-up redesign.


Well, yes and no.
Since it's just a hardware abstraction layer, it doesn't actually 'do' anything, so there are some persistence-related objects, and a smidge of business-related objects, but not really anything else.
The actual presentation-tier will come from your application, along with the actual business logic and its own persistence-tier, which will be making use of D3D's more primitive objects.
The thing is just that the way you implement these tiers will depend on what API you use.
To get back to software rendering/software T&L... it was not unusual to read back the geometry directly, and perform operations on it. But with hardware T&L, it is very expensive to read back data from buffer objects, so generally they are created as write-only or even immutable objects.
This means that you must introduce two different types of storage: immutable and dynamic (making everything dynamic is not efficient, nor is having local copies of everything and recreating immutable objects on modification).
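In D3D10 terms it looks something like this (sketch, error handling omitted):

    #include <d3d10.h>

    void CreateVertexBuffers(ID3D10Device* device, const void* vertices, UINT vertexBytes)
    {
        // Immutable: filled once at creation time, never read back or written again.
        D3D10_BUFFER_DESC staticDesc = {};
        staticDesc.ByteWidth = vertexBytes;
        staticDesc.Usage = D3D10_USAGE_IMMUTABLE;
        staticDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;

        D3D10_SUBRESOURCE_DATA initData = {};
        initData.pSysMem = vertices;

        ID3D10Buffer* staticVB = NULL;
        device->CreateBuffer(&staticDesc, &initData, &staticVB);

        // Dynamic: CPU-writable (write-only), for geometry that changes often.
        D3D10_BUFFER_DESC dynamicDesc = {};
        dynamicDesc.ByteWidth = vertexBytes;
        dynamicDesc.Usage = D3D10_USAGE_DYNAMIC;
        dynamicDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
        dynamicDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;

        ID3D10Buffer* dynamicVB = NULL;
        device->CreateBuffer(&dynamicDesc, NULL, &dynamicVB);
    }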

Likewise, with fixed-function shading, it was relatively easy to change the shading by toggling some render states. With shaders, you have to swap entire shaders around.
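E.g. in D3D9 terms (just a sketch; the shader objects would have been created elsewhere):

    #include <d3d9.h>

    // Fixed function: a different shading model is a couple of render state toggles.
    void UseFixedFunctionSpecular(IDirect3DDevice9* device)
    {
        device->SetVertexShader(NULL);
        device->SetPixelShader(NULL);
        device->SetRenderState(D3DRS_LIGHTING, TRUE);
        device->SetRenderState(D3DRS_SPECULARENABLE, TRUE);
    }

    // Shaders: the 'shading model' is baked into the shaders, so you swap them wholesale.
    void UseShaderSpecular(IDirect3DDevice9* device,
                           IDirect3DVertexShader9* vs, IDirect3DPixelShader9* ps)
    {
        device->SetVertexShader(vs);
        device->SetPixelShader(ps);
    }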

Again, the point of the 3-tier design is to make individual solutions to each of these differences in a way that they interconnect together in a seamless fashion which doesn't disturb the other models in the architecture.


Sure, but that's not my point. My point is that there are plenty of examples where you really can't anticipate the changes from one API to the next. Yes, in the end they're all still rendering triangles... but sometimes you need a completely different approach to a certain problem. Things may move from one part of the pipeline to another... etc.

So yes, in theory the 3-tier approach is nice... but in practice you never know exactly how far to abstract your design.
As you've said yourself, sometimes you WILL need to change your design around.
Posted on 2011-07-24 13:56:40 by Scali
Sure you can abstract all properties of the underlying rendering API into an abstract container for arbitrary renderers. Ogre is a good clean example of a DX9, DX10, DX11, OGL renderer, with its pluggable renderer layer and its silent handling of matrix conversions where applicable.
Posted on 2011-07-25 09:00:56 by Homer

Sure you can abstract all properties of the underlying rendering API into an abstract container for arbitrary renderers. Ogre is a good clean example of a DX9, DX10, DX11, OGL renderer, with its pluggable renderer layer and its silent handling of matrix conversions where applicable.


Sure, it's possible, that's pretty obvious, but my point is that you'd have to be familiar with all APIs before you can design a proper abstraction layer. You can't make the design first and THEN look at the rendering backends. You'd paint yourself into a corner too much.

Ogre doesn't fully abstract, though: you still need to code your shaders in API-specific form. It's possible to do that too, but I've never seen anyone actually do that. Then again, it's more trouble than it's worth, really.
Anyway, that's getting way off the original topic.
Posted on 2011-07-25 09:37:16 by Scali
The latest version of Ogre (see SVN) has its own shader compiler, its own shader language(!), it does indeed abstract shaders to that level.
I haven't looked at it closely, learning yet another shader language doesn't really do much for me.
Posted on 2011-07-26 03:10:20 by Homer

The latest version of Ogre (see SVN) has its own shader compiler, its own shader language(!), it does indeed abstract shaders to that level.
I haven't looked at it closely, learning yet another shader language doesn't really do much for me.


Yes, I had thought about doing something like that myself...
First time was when I was switching from fixed-function to shaders... You could create an assembly-like scripting language to describe the texture stage states for fixed function; the pixel processing was technically already a simple assembly language with a handful of instructions and a few registers.
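For reference, this is the sort of D3D9 fixed-function state such a little language would describe (sketch):

    #include <d3d9.h>

    // Stage 0: texture colour modulated with the vertex (diffuse) colour.
    void SetupModulate(IDirect3DDevice9* device)
    {
        device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
        device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
        device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
        device->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
        device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
    }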

Second time was when I was considering to merge my OpenGL and D3D codebases.

But both times I concluded that it was more trouble than it's worth, really.
For something like Ogre it makes more sense, as it is middleware. It's supposed to abstract away all the hairy implementation details for the end-users.
My experiences with Ogre at work weren't all that good, partly because of these shader problems. The guys who wrote the shaders started with GLSL. Then, when we ran into problems with testing because OpenGL drivers tend to suck (they don't all support the same version of GLSL, and some don't support GLSL at all)... they advised switching to D3D9 mode. Except that not all D3D shaders were up-to-date, so it still didn't work well.
So then I thought 'hmmm, we are experiencing exactly the sort of problems that Ogre is supposed to solve'. If you're using API-independent middleware, you don't want to have to worry about shader details.

At least within D3D itself, Microsoft has handled it quite nicely. The latest D3D compiler can still generate D3D9 shaders. So as long as you stick to SM3.0 functionality and lower, you can use the same shader code for D3D9, 10 and 11.
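E.g. with the D3DCompile API, the same source can be built for both targets (sketch, error handling omitted):

    #include <d3dcompiler.h>

    void CompileForBothTargets(const char* hlslSource, SIZE_T sourceLength)
    {
        ID3DBlob* sm3Code = NULL;   // SM3.0 bytecode, usable with D3D9
        ID3DBlob* sm4Code = NULL;   // SM4.0 bytecode, usable with D3D10/11
        ID3DBlob* errors = NULL;

        // Same HLSL, different target profiles.
        D3DCompile(hlslSource, sourceLength, NULL, NULL, NULL,
                   "main", "ps_3_0", 0, 0, &sm3Code, &errors);
        D3DCompile(hlslSource, sourceLength, NULL, NULL, NULL,
                   "main", "ps_4_0", 0, 0, &sm4Code, &errors);
    }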
With OpenGL things are not that simple. The functionality level of your hardware is not necessarily exposed fully in GLSL. E.g. I have a DX10-capable (SM4.0) Intel IGP in my laptop, but its GLSL is only at the OpenGL 2.1 spec, which doesn't even include a lot of features supported in SM2.0/SM3.0.
And the DX9-capable Intel IGPs with SM2.0 don't have GLSL support at all, only the vertex/fragment program assembly language. I wonder if Ogre will try to solve that as well. Some kind of GLSL-to-assembly compiler would be nice. Microsoft did the same for D3D9, because even SM1.x can be programmed with HLSL.
Speaking of which, SM1.x was never supported at all in a standard OpenGL extension.
Posted on 2011-07-26 03:51:30 by Scali
I updated my blog, since I found this blog: http://myossdevblog.blogspot.com/2009/03/premature-generalization-is-root-of-all.html
It pretty much describes the same thing as I did, so I put in a link to it as well.

Edit: Getting back to the discussion with Synfire earlier... I guess the thing is: what I described is a problem I encountered during implementation, not during design (it seems this is the crucial bit of information that we both didn't quite see during the discussion). As the above blog also points out: you can't understand the full domain and all of its problems beforehand.
So regardless of whether you use a 3-tier approach or whatever other design methodologies: you can't account for everything, you're not psychic.

However, since my design methodology takes into account that I indeed am not psychic, I just build a prototype implementation first, and see what problems I run into, and THEN I build my design around that. That's the key difference: a lot of people think they can build the design first, and THEN they run into problems.
Posted on 2011-07-26 13:04:16 by Scali
Oh, another interesting blog I just found, Joel on the Law of Leaky Abstractions: http://www.joelonsoftware.com/articles/LeakyAbstractions.html

I guess Ogre and the GLSL/HLSL issues are an example of such a leaky abstraction. You're not supposed to care about the underlying API, but when shaders stop working, you have to get down to API-details to figure out what the problem is, so it leaks through the abstraction that Ogre tries to provide.
Posted on 2011-07-27 07:30:27 by Scali