Humm... you're still confusing a few things, hutch, but never mind that.

While I haven't looked too much at the lower details of COM (that is, the marshalling and stuff - I'm rather familiar with vtables etc), I wouldn't say it's a crappy design. It doesn't allow the same level of OOP as eg C++ (multiple inheritance etc), but it seems rather reasonable. For things like DirectX, the main advantage is you have access to all versions of DirectX even with the latest version installed - which makes me wonder at your "stability being subject to Microsoft changing it every week or so". It takes quite a bit longer than a few weeks for changes to DX to appear, and even when they do, you still have access to v5,7,8 even with 9.0b installed. That's a much higher degree of compatibility than you see with linux's shared library hell ^_^. Furthermore, the COM layer isn't really slowing things down for DirectX - it's not like you're outputting individual vertices.

As far as clustering goes, even the high-end hardware does that. The Onyx4 uses multiple video chips in parallel. The top-500 supercomputer list is full of clusters, including the non-x86 machines. So clustering itself is clearly not a bad thing.

You also still seem to confuse realtime visualization (Onyx4, HP sv7) with "offline" (movie-style) rendering, where you can be pretty sure neither OpenGL nor DirectX (nor GPU hardware acceleration of any kind) is used. Also, you seem to be only considering 'supercomputers' used for graphics work - of course there are other SCs available, and (according to top500) x86 does well here. Just like it must be doing well for non-realtime rendering, otherwise I doubt Pixar would have chosen an x86 cluster.

I don't really get the whole "taiwanese terror" thingy. Onyx4 and sv7 use hardware acceleration chips - the Onyx4 seems to be using proprietary silicon, while the sv7 uses parts from NVIDIA. Both are 'big systems'. So what is particularly 'terror' about the sv7?

It's easy enough to try to get your point across by mixing and matching a lot of more or less unrelated things and taking things out of context. Patronizing hutch, hot-headed bruce, and nobody else (recently) adding anything of value.

*sigh*
Posted on 2003-12-10 19:11:45 by f0dder
f0dder,

I will respond to you as you seem reasonable in your approach. Apart from our friend's problems, the distinction between DirectX and OpenGL is one of competing software systems performing overlapping tasks, as both are methods of interfacing with video hardware.

Now I am sure you understand how vtables hang together, so the idea of multiple levels of indirection should make sense. Microsoft have chosen to implement video control through a COM interface, which is by its nature a dedicated Windows technology that does add extra code to get to the functions.
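To make the indirection concrete: at the binary level a COM interface is just a pointer to a vtable of function pointers, so every method call goes through the table. A rough sketch in C-style C++ (the interface and its names are invented purely for illustration):

    // A COM interface pointer points at an object whose first member
    // is a pointer to a table of function pointers (the vtable).
    struct IExampleVtbl;

    struct IExample {
        const IExampleVtbl *vtbl;
    };

    struct IExampleVtbl {
        // every COM method takes the interface pointer as a hidden first arg
        long          (__stdcall *QueryInterface)(IExample *self, const void *iid, void **out);
        unsigned long (__stdcall *AddRef)(IExample *self);
        unsigned long (__stdcall *Release)(IExample *self);
        long          (__stdcall *DoWork)(IExample *self, int arg);
    };

    long call_it(IExample *obj)
    {
        // two pointer loads and an indirect call:
        // load obj, load obj->vtbl, call [vtbl+12]
        return obj->vtbl->DoWork(obj, 42);
    }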

OpenGL is ported to many different systems, so it cannot be a proprietary interface like DirectX but must be an open standard, as none of the other systems are bound to a COM interface. Now, on any given hardware where they both address the same task - which means essentially x86 running Windows - OpenGL has been around longer and is more stable, where DirectX sells its virtues as being leading edge.

I suggest that DirectX is still a fudge to overcome a fundamental design decision that Microsoft took in the mid 90s, when they brought in a pile of VAX hacks to design a system that apparently they could not do themselves. Virtual hardware sounds nice and makes interconnectivity fun, but it does not perform, and this was evident in the performance of GDI back then on what was then current hardware.

DirectX is a big fudge by Microsoft from when the virtual hardware concept failed to perform and the games guys were using 32-bit DOS extenders to run the DOOM era of games.

Now, having seen DirectX games in Windows, it is in fact a much better system than GDI for gaming, and its performance is improving as the hardware gets better, but this does not make it a model for other hardware and operating systems.

The linux guys would die laughing at the idea that they need to implement a COM interface in linux so they can emulate Windows, when they already have the low-level access that Windows did not allow without DirectX.

Much the same comment applies to the much bigger systems that run Unix or their own OS design. I think I can fairly say that Microsoft function design is regularly crappy, inconsistent, at times unreliable and poor in terms of backwards compatibility, and this is before you get to the lousy documentation.

People who write code for Windows are used to it, and Microsoft don't give a stuff as they have control of the x86 PC market, but many know of better software design that is compatible across different hardware platforms.

Now, down at the hardware level of video chips and video memory, the concept of writing high-demand software directly into video hardware makes sense, along with having enough high-speed memory to store and reuse code without fetching back to the main processor. The options seem to be writing OpenGL, DirectX or lower-level capacity directly into the video chip.

I would favour writing lower-level capacity, as it is more flexible for any higher-level system and it makes designing dedicated systems for image work less OS-dependent. Now, even if the PC market end of video cards ends up primarily DirectX, it is still hardware access, and the linux guys and anyone else who uses an x86 PC without running Windows can still access the capacity directly in hardware, so I doubt that it will force them to install Windows to run video code.

Regards,
Posted on 2003-12-10 20:55:39 by hutch--

I will respond to you as you seem reasonable in your approach.

Thanks. I'm not doing this to troll or to bash you, but because I find the topics interesting.

Humm... I haven't seen COM anywhere but Windows, but can't say I have looked. There's nothing in its design that's tied to either Windows or x86, though (obviously not x86-bound, as it has to run on any Windows platform). Other OSes have similar technologies - like the CORBA stuff in *u*x.

It's not too high-level for a high-performance video interface either, IMHO, as you tend to use "high-level" calls to the video hardware to get any reasonable performance - you're not outputting individual pixels or vertices (just like nobody with any form of self-respect has coded bitblt as a for-x/for-y loop doing PutPixel calls :)). I'm not saying COM is the holy grail by any means, just that it isn't really a bad thing.

OpenGL has been around longer, yes. But more stable? I haven't seen any stability issues with DirectX (apart from using beta drivers, but that has nothing to do with the API, nor even Microsoft - that's the responsibility of the video card vendor). It's true that there have been changes to DX, but you still have access to the old interfaces. OpenGL has had changes too, but this has happened through extensions, which are only slowly becoming standardized - without these extensions, a lot of interesting features aren't available.
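For instance, even on a box with a recent DX runtime you can still create the original DirectDraw interface and then ask for a newer revision - roughly like this (a sketch from memory, error handling omitted):

    #include <ddraw.h>

    // Even with DirectX 9 installed, the old interfaces are still there -
    // you just ask for the version your code was written against.
    IDirectDraw  *dd  = 0;
    IDirectDraw7 *dd7 = 0;

    DirectDrawCreate(NULL, &dd, NULL);                    // original interface
    dd->QueryInterface(IID_IDirectDraw7, (void**)&dd7);   // newer revision
    dd->Release();                                        // keep only dd7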

Hum, DirectX a big fudge... I think you will find similar layers on any modern OS that has high-performance APIs available. OpenGL isn't that different from DX really - it's just higher level and gives less hardware control. Linux also has a form of (very rudimentary) HAL, I guess - at least some of the PCI drivers work on both x86 and other platforms. Abstractions are necessary under most real operating systems, even if you're working with rather tightly specified platforms - the only place you can really avoid HALs would be console programming.

Btw, Doom can be implemented for win32 using GDI and WaveOut without too much bother - in terms of required OS support, it's very simple. You basically need a framebuffer, PCM audio output, and some keyboard/mouse input. For fun, some historical info: the original DPMI32 DOS version of Doom used a 16-bit IPX driver, passed the address of this 16-bit memory buffer to the DPMI exec on the command line, and constantly did 32<>16-bit switching because of this. It worked fine, but was just as dirty as win9x :)
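The framebuffer part really is just something like this under GDI (minimal sketch, no error handling):

    #include <windows.h>

    // Doom-style framebuffer flip under GDI: keep a 32-bit pixel buffer
    // in system memory and blit it to the window each frame.
    void present(HWND hwnd, const void *pixels, int w, int h)
    {
        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = w;
        bmi.bmiHeader.biHeight      = -h;      // negative = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        HDC hdc = GetDC(hwnd);
        StretchDIBits(hdc, 0, 0, w, h, 0, 0, w, h,
                      pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);
        ReleaseDC(hwnd, hdc);
    }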


The linux guys would die laughing at the idea that they need to implement a COM interface in linux so they can emulate Windows, when they already have the low-level access that Windows did not allow without DirectX.

You don't need to do a full COM implementation to support DirectX - you can do a very thin wrapping layer. No need for marshalling etc anyway - I think it will take some years before you run a game on a remote box but have the video commands sent over a network :P. Besides, they don't really have the kind of access that DirectX offers, yet. They have OK-performing OpenGL drivers for a (very) few cards, and there are some initiatives to do a proper accelerated kernel-mode graphics API, but it's still lagging behind.


Much the same comment applies to the much bigger systems that run Unix or their own OS design. I think I can fairly say that Microsoft function design is regularly crappy, inconsistent, at times unreliable and poor in terms of backwards compatibility, and this is before you get to the lousy documentation.

Have you ever worked on a unix system and had to rely on manpages? :) If you know your way around, things are documented fairly okay as long as you stick mostly to ANSI C. Other than that, you're referred to various pieces of source code, obsolete information, etc. There are some quirks here and there with win32, but as long as you stick to what MSDN says it works rather well, and you have a centralized source of programming information. I (fortunately) don't know how the switch from win16 to win32 was, but it has been painless for me to switch from 95 to 98se to win2k. Oh, and I only have some experience with linux programming; the bigger (commercial) unices might have better documentation.


the concept of writing high-demand software directly into video hardware makes sense, along with having enough high-speed memory to store and reuse code without fetching back to the main processor.

This sounds like what Transform & Lighting + shaders are all about. You (generally try to) upload your geometry and textures to the GPU once, then throw transform matrices and lighting shaders (etc) at it. Both OpenGL and DX support both T&L and shaders; unfortunately shader support will not be standardized in OpenGL before 2.0, and it will only cover vs/ps 2.0, not the (much more commonly available) 1.x versions. Shaders are really nice because they can approximate, in realtime, things that would previously have taken "a fair amount of time" to render. Not a substitute for "the real deal" (raytracing with more complex formulas etc), but nice for previews or realtime stuff.
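In D3D terms, the "upload once, then only send matrices" idea looks roughly like this (a sketch against the DX8 interfaces; 'dev', 'numVerts' and 'worldMatrix' are assumed to be set up elsewhere, error handling omitted):

    #include <d3d8.h>

    struct Vertex { float x, y, z; DWORD color; };
    const DWORD FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

    // At load time: push the geometry across the bus once.
    IDirect3DVertexBuffer8 *vb = 0;
    dev->CreateVertexBuffer(numVerts * sizeof(Vertex), D3DUSAGE_WRITEONLY,
                            FVF, D3DPOOL_DEFAULT, &vb);
    // (Lock, copy the vertices in, Unlock - done once.)

    // Per frame: only the small transform matrices travel to the card.
    dev->SetTransform(D3DTS_WORLD, &worldMatrix);
    dev->SetVertexShader(FVF);
    dev->SetStreamSource(0, vb, sizeof(Vertex));
    dev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, numVerts / 3);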


I would favour writing lower-level capacity, as it is more flexible for any higher-level system and it makes designing dedicated systems for image work less OS-dependent.

The problem with this is that you'd tie yourself to a single vendor - and possibly even a single system from a single vendor. I don't think it's very necessary either, as you generally don't want to issue commands that are too "low-level" - ie, you'd rather say "draw this array of 4000 vertices" instead of outputting 4000 single vertices, as the former is much more speed-efficient.
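In OpenGL terms, the difference is roughly this (sketch; 'verts' is assumed to hold 4000 xyz triples filled in elsewhere):

    #include <GL/gl.h>

    float verts[4000 * 3];   // assumed filled with geometry

    // Slow: one function call per vertex, 4000 times per frame.
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < 4000; i++)
        glVertex3fv(&verts[i * 3]);
    glEnd();

    // Fast: hand the driver the whole array in a single call.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLES, 0, 4000);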

I think the line between "PC video hardware" and "big iron video hardware" is becoming somewhat blurred. Looking at Onyx4 and sv7 (the only two examples posted), I got the impression that the individual chips used (proprietary SGI, I guess, and some professional NVIDIA chip) aren't too different performance-wise - not going by the raw numbers, anyway. Feature-wise, the NV chip might even have more to offer? (At least the gaming NVIDIA cards offer shaders - dunno about the proprietary SGI chip.) The big difference between whatever supercomputer and the x86 PC systems, I guess, would be that you're usually limited to a single chip on a single AGP card, instead of some more tightly integrated solution with a bunch of clustered graphics chips (like Onyx4 or sv7 seem to be using).

Of course I can only look at the published specs and make a few (hopefully educated) guesses, as I have never had access to this kind of machinery. But it would seem that x86 _can_ be a reasonable choice for realtime visualization; otherwise I doubt HP would bother building these boxes. Not saying it's the best choice, as I haven't been able to find any price tags - not that I searched extensively.

As for my original speculation idea thingamajig whatever, there are certainly political and historical reasons why OpenGL is more widespread than DirectX - this should be obvious to both linux and MS zealots. I still haven't seen any purely technical reasons though, especially not as the DX API gives you a higher degree of control than OpenGL.
Posted on 2003-12-10 21:54:01 by f0dder
What I have defended is simple: the big end of town runs OpenGL, the small end of town plays with the rest.


No, you said the big end COULD not use DirectX or x86, because there were TECHNICAL limitations.
This is an entirely different statement. How stupid do you think we are?
Or how stupid are you? You can't read back and see what your original statement was?

Clustering, with either processors or video chips, does not solve the problem for our friend, as the big end of town have been doing it longer and keep selling it to high-end customers.


Erm... Why does clustering not help when the 'big end of town' also does it?
Does it not help them then? If not, why are they doing it?
Your logic is severely flawed, you know that?

What our friend has yet to learn is that his opinion does not change hard facts, and these are that the top end of town does not run Windows, Windows-size boxes or DirectX.


Even though Windows 2000 runs on both x86 and Itanium2, and is in the top500 of supercomputers, right.
Come up with something better, we've proven long ago that Windows powers supercomputers. Just because you cannot accept this doesn't make it go away. Grow up.

Does linux need a COM layer when it does its own hardware access? I suggest the linux guys would not see it this way. Perhaps our friend is yet to grasp that a software system like DirectX does not run without an operating system, and that operating system, Windows, does not run on high-end hardware.


Excuse me, but most linux guys don't even know what OOP is... Besides, linux guys don't even have a hardware driver system that would make something like DirectX possible at this time. OpenGL is just a hack by the grace of one or two hardware manufacturers that happen to build linux drivers. It's not part of linux in any way.
If you had actually used linux or OpenGL once, you would have known this.
And the COM thing you say is VERY naive. As if an indirect call more or less will affect hardware access or performance in any way. God, how clueless.

Does clustering/strapping together/paralleling chips from NVIDIA make a PC high-end hardware? Seems not, as HP builds middle to high-end hardware that is NOT PC-based with the HP sv1 our friend made reference to. There is no trickle-down here; an HP sv1 is by no means a PC.


It's sv7, and no it's not a PC, but that's not the point, is it? The SGI systems aren't PCs either. The point is that the sv7 uses stock NVIDIA accelerators that can also be found in PCs. And it actually BEATS SGI on its own turf with it. THAT is the point. Tough, huh?
Clearly you have no clue about the scale of performance of the SGI graphics chips, else you would have known that a single one of theirs is no better than a single PC chip. And you would not have started this entire discussion, and been shown up as an ignorant fool. It's so obvious you know, give up.

The shift our friend is trying to pull is that you can call Windows / x86 / DirectX / taiwanese terrors high end, but the obvious response is high end """ PC """ systems, not high end systems running OpenGL under linux.


What kind of crap is that?

The argument about components our friend has repeatedly tried to introduce does not work for him either. It really does not matter if there are some common components between PCs and big stuff, be it resistors, capacitors or chips used by both; the difference is SCALE, something our friend does not appear to be able to grasp.


No, what YOU don't appear to be able to grasp is this:
Big hardware clusters multiple processors to scale performance. While most of them use non-PC hardware, the idea works equally well with PC hardware. And whether it runs linux or Windows, or OpenGL or DirectX, is not really relevant. There are no big technical differences that would make one or the other impossible.
We have proof that google and Pixar use x86 systems on a large scale... We have proof that x86 systems are way up in the top500 supercomputer list, and we have proof that HP beats SGI on their own turf with standard NVIDIA graphics processors. So what is your point, really, other than that you have NO clue whatsoever about big hardware, or the possibilities of x86, Windows and DirectX?

Limitations in x86 hardware also seem past our friend's grasp, and the technical data seems to be too complicated for him, but to reduce it down to the kiddie level for our friend: Intel's response to the limitations of x86 hardware is called the "Itanium(2)".


That's nice and all, but the cold hard facts, namely SpecCPU2000, clearly show that their x86 and their Itanium2 are not far apart in performance. One is slightly better in integer operations, the other in float operations.
So, I don't see any limitations there. If there are any, maybe it's time you should name them... Oh wait, that's right. You can't, because you don't know what you're talking about... We asked this for about 15 pages... Silly me. Why do you even respond still? You like to be shown up as an idiot or something?

Noting that the big end of town can afford either x86 or Itanium, their choice, on a large scale, to parallel Itanium processors demonstrates that they agree with Intel here.


Wrong. There are too many x86 supercomputers to just ignore them; there are many more x86 supercomputers than Itanium2 ones as well. At least inform yourself before you speak. Maybe you'll come off as less of an idiot then.
Or you won't speak at all, either way works for me.

Feel free to tell Intel they are wrong and that x86 is superior technology but do not hold your breath waiting for them to agree with you.


Someone should ban you for stuff like this. I never said x86 is superior technology. I don't deny that Itanium2 could be the future, and that x86 will eventually disappear. But we live NOW, and NOW x86 is performing very well, is very cheap, and is very popular with supercomputers.

Is our friend a troll?


You're the troll here. You can't make any technical arguments whatsoever, you only talk the same crap over and over again, despite the evidence, and resort to personal attacks.
You should have been banned long ago.

The linux guys would die laughing at the idea that they need to implement a COM interface in linux so they can emulate Windows, when they already have the low-level access that Windows did not allow without DirectX.


Excuse me, but do you think OpenGL is like accessing hardware on a low level? Or COM for that matter?
No, both OpenGL and DirectX are user-level interfaces to a driver. In fact, in Windows they interface with the SAME driver, and parts are shared between OpenGL and DirectX. COM is just nicer for a programmer than procedural interfaces: the programmer gets nicely encapsulated objects and reference counting, so he doesn't have to manage so much himself. It has NOTHING to do with hardware access; that is done in the driver, not in the API. Do you really know so little about the subject?
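Reference counting just means lifetime is managed by whoever holds a pointer - roughly like this (a DX8-style sketch, 'dev' assumed set up, error handling omitted):

    // The object stays alive as long as ANYONE holds a reference.
    IDirect3DTexture8 *tex = 0;
    dev->CreateTexture(256, 256, 1, 0, D3DFMT_A8R8G8B8,
                       D3DPOOL_MANAGED, &tex);

    dev->SetTexture(0, tex);   // the device takes its own reference
    tex->Release();            // drop ours - the texture is freed only
                               // when the device lets go of it too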

Much the same comment applies to the much bigger systems that run Unix or their own OS design. I think I can fairly say that Microsoft function design is regularly crappy, inconsistent, at times unreliable and poor in terms of backwards compatibility, and this is before you get to the lousy documentation.


You've never worked with linux, have you?
And instead of this useless troll, can't you give some concrete facts? Preferably about DirectX itself, it might help the discussion.

People who write code for Windows are used to it, and Microsoft don't give a stuff as they have control of the x86 PC market, but many know of better software design that is compatible across different hardware platforms.


Windows is also compatible across different hardware platforms... Besides, if this is still about supercomputers and large visualization systems, I find this whole portability talk rather naive... Do you really expect that people write portable software on a high-end system? No they don't. They write specific code, for their hardware... Even if they run linux, their code will never be portable to a standard linux PC, because they will use custom OpenGL extensions, or custom APIs for faster parallel CPU usage etc. You really have never seen such a large system up close, have you? Funny, it's my job, being in computer graphics and all. Forget it.

I would favour writing lower-level capacity, as it is more flexible for any higher-level system and it makes designing dedicated systems for image work less OS-dependent. Now, even if the PC market end of video cards ends up primarily DirectX, it is still hardware access, and the linux guys and anyone else who uses an x86 PC without running Windows can still access the capacity directly in hardware, so I doubt that it will force them to install Windows to run video code.


This sounds extremely vague and nonsensical... Do you even know how to get a triangle on screen with either API?
In fact, why don't you prove it? Paste some of your Direct3D and OpenGL source code, please. Else I can no longer take you seriously regarding knowledge of computer graphics... You really don't seem to know the first thing about it.
Posted on 2003-12-11 04:58:33 by Bruce-li
thread closed

sorry for those who wanted a good discussion about this subject. Better luck next time :)
Posted on 2003-12-11 06:11:24 by Hiroshimator