david, 'lowlevel' coding will be here for quite a while yet - migrating to .net (if that happens) is not going to happen overnight... will take years even after longhorn or whatever is released. And even then, the OS will still be able to run legacy code. Don't worry =)
Posted on 2003-12-01 16:34:57 by f0dder
david,

Never lose sight of the fact that operating systems come and go and what was the flavour of the moment is the history of the next. The low level coding you have learnt will always be useful to you as you will write better high level code with this knowledge.

Keep in mind that DOS assembler written in 1980 is still readable 23 years later where in that space of time, a multitude of languages and operating systems have come and gone. The BASIC built into an 8088 PC can still be read by any BASIC programmer and C written back that early is still readable by any C programmer of today.

I am illiterate in Pascal so I cannot comment there. When you work on current stuff, you can usually assume that it will be different or obsolete in years to come, but the lower level knowledge of data, algorithms, registers and fundamental logic operations will stick with you over time and different hardware and operating systems.

Regards,
Posted on 2003-12-01 18:39:28 by hutch--
For our friend there is a basic distinction that he needs to get the swing of, and it's timescale. In the computing world you have what has been, what is and what may happen in the future.

When you talk about stuff that is already known, you are handling the past to the present; when you talk about what will be, you are uttering what is usually called vapourware. Aspiration is not a substitute for what is already known and while I am a fan of original ideas and non-standard thinking to find newer and better solutions to existing problems, it cannot be done in a vacuum based on imagination alone.

The notion of scale also needs to be understood: if a kid wants to watch a skyrocket, he has no reason to enlist NASA to bring back Saturn 5 rockets to do the job. When you talk of high end computer hardware, whether it's SGI, NEC, IBM or others who build dedicated image manipulation systems, you are talking about a scaling difference to personal computers that is even larger than the skyrocket to Saturn 5 distinction.

Let's face it, does a kid playing a DirectX game on his PC need to have a pixel production rate capable of updating every computer screen on the planet in real time? It's much the same problem as a kid with a download problem being connected to an OC192 and wondering why his PC cannot handle the 1 gig/sec transfer rate.

What I am guilty of pointing out is something that everyone and their dog already knows: x86 architecture is ancient, circa 1978, and on the basis of that knowledge a very large number of people have continued to work to unplug bottlenecks inherent in the architecture.

Perhaps the mistake I have made in dealing with our friend is to assume that he shared the level of common knowledge that most assembler programmers are already familiar with, that is, the limitations of this ancient architecture.

Appealing to what "could" happen in the future as a means of plugging up shortfalls at the present is a well known problem, the one called "contrary to fact" or "counterfactual" conditionals.

A statement like "x86 could become the base for the next supercomputer" is like saying "Bill Gates could become the next chairman of the Free Software Foundation" or "George Bush could join the Taliban after the next election".

Regards,
Posted on 2003-12-01 19:01:52 by hutch--
hutch--,

Once again you are making some nice escapes, but let's focus on what really happened here...
You have made some statements, somewhere at the start of this thread... I have added them to my signature, so you can see what we're really talking about.

Basically all I have done is this: I asked you to clarify the statements you've made. I also supplied some information that makes your statements less likely.
So far you have basically been unable to give any technical arguments whatsoever concerning x86's or DirectX's technical limitations. Since these are your own statements, you have not exactly made yourself convincing.

Now, you can talk about SGI systems all you like, and patronize people with non-info and such, as you have been doing the entire thread, but you have never actually clarified your statements. Instead, you constantly tried to turn things around... Trying to portray me as an x86-lover who thinks x86 is the perfect solution for everything... Fact is, I have never said that... All I have done is give examples of where x86 is used, on various scales. You cannot know my opinion on x86, since I don't think I have actually given it.

Just look at what you are doing:

A statement like "x86 could become the base for the next supercomputer" is like saying "Bill Gates could become the next chairman of the Free Software Foundation" or "George Bush could join the Taliban after the next election".


Firstly, nobody claimed that x86 could become the base for the next supercomputer, instead... YOU claimed that x86 COULDN'T be the base for supercomputers. Secondly, since there is an x86-based supercomputer ACTUALLY AT NUMBER 4 IN THE TOP500 OF SUPERCOMPUTERS!!!, the entire statement is rubbish. There is PROOF that x86 can indeed make a very decent supercomputer. So why do you still even bother writing this nonsense?
It is beside the point.
The point is that you should list some technical limitations of x86, that explain why it couldn't be the base for supercomputers... And on top of that, you should explain how an x86-based supercomputer can still be in the 4th place on the supercomputer-list, regardless of the technical limitations you have listed.
Same for DirectX.

So really, everyone already thinks that you don't know what you are talking about anyway, look at HeLLoWorld's post for example. That's what you get when you don't back up your own statements.
You cannot win this discussion anyway, since you're basically discussing against yourself. I just kept throwing your own statements into your face.

So I am tired of all this nonsense. Basically you have 2 choices:
1) You admit that you cannot provide arguments to the statements you've made, and give up.
2) You surprise us all and actually come up with proper technical arguments to the statements you've made.

Let me give you an example.
This is wrong:
"Car A is red. Car B is blue. Car A is faster than car B."

This is right:
"Car A is lighter than car B and has a stronger engine and better roadholding. Car A is faster than car B."

That's what we're looking for here.

I'm not going to get into your nonsense posts anymore, I'll just remind you of what your options are.
Posted on 2003-12-02 02:05:29 by Bruce-li

by Donkey:
To this day the GDI makes almost no use of even the simplest hardware acceleration.

yes it does ;)
I had posted my research on it in the Game forum. I'm proud that BitBlt gives me (and my customers) 180 fps blitting an 800x800 bitmap to a 32-bit 1024x768 screen, while the CPU stays at less than 8% usage.
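The core of it is nothing exotic - just a memory DC and one BitBlt per frame. A minimal sketch (assuming the 800x800 bitmap already exists as hBmp, ideally a device-dependent bitmap so the driver can keep it in video memory; names and error handling are made up / left out):

#include <windows.h>

// Sketch: blit an 800x800 bitmap to a window once per frame via plain GDI.
void BlitFrame(HWND hWnd, HBITMAP hBmp)
{
    HDC hdcScreen = GetDC(hWnd);                   // target: the window DC
    HDC hdcMem    = CreateCompatibleDC(hdcScreen); // source: a memory DC
    HGDIOBJ old   = SelectObject(hdcMem, hBmp);    // select the bitmap into it

    // the driver can hardware-accelerate this copy when the bitmap is resident in video memory
    BitBlt(hdcScreen, 0, 0, 800, 800, hdcMem, 0, 0, SRCCOPY);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
    ReleaseDC(hWnd, hdcScreen);
}

Call that from your message loop (or a timer) and measure the frame rate yourself.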
Posted on 2003-12-02 05:24:05 by Ultrano
Ah,

The limpid logic of a hick who has yet to see that the debate he started is all over bar the recriminations.

I very rarely quote myself but our friend has yet to learn what quoting in context is about.

There are a number of things here to address. OpenGL was originally developed on Silicon Graphics hardware which in the graphics area is orders of magnitude more powerful than x86 PC architecture. Cross platform code is very hardware dependent and while DX may look good on x86 hardware, translate it to another hardware platform and it will have to compete with far superior image manipulation software.

I have consistently sold the view that there has been, is and will be better image manipulation hardware and software than you can handle on an x86.

The criteria are historical, current and what is in the pipeline in the big stuff, and x86 and DirectX under Windows do not figure in any of them.

With your own list you could not name any of the top supercomputers that ran Windows and DirectX. Noting that the benchmark was not specifically image manipulation based, which whether you understand it or not is directly relevant to the discussion, it's fair to say that x86, Windows and DirectX are simply not in the class of big iron when it comes to high end image manipulation.

I see your response to historical and current limitations in x86 hardware as a sign of ignorance.

The position you have persisted in arguing is one that is well demonstrated by the fishbowl argument. A fish swimming in its own fishbowl with the idea that it is master of the universe well suits the view that you have tried to sell. The problem is that there is a big wide wonderful world out there that the fish has never experienced, so the fish does not really have much to say about it as it does not know about it.

However hard you try, you cannot strap together enough current x86 processors to compete with the big league in image manipulation. This is like strapping together a large number of fireworks skyrockets, they will never match a Saturn 5.

Explain to us how you pump 10,000 x86 outputs through one of these fancy Taiwanese terrors you are so sure are upmarket? This would really be worth hearing. When you learn some manners and humility you will have something to say instead of making smartarse wisecracks to someone you don't even know.

I will let you in on a little secret: there are guys in this forum who have forgotten more than you know about image manipulation among many other things, but they just cannot be bothered with the whinging.

Have PHUN.
Posted on 2003-12-02 08:19:26 by hutch--
Hutch, perhaps you could clarify a few things?

1) What's in a supercomputer? Does a place in the top-whatever list qualify? What about google and pixar?

2) What does DX/GL have to do with supercomputers?

3) What does DX/GL have to do with non-realtime image rendering? (the "heavy" style image manipulation; say pixar, industrial light and magic, etc.)

4) What ties DX to x86 and windows? (hint: nothing - it runs on Itanium2 and G5)

5) What does the computer type have to do with 3D hardware acceleration? DX/GL is really about driving 3d graphics hardware - "taiwanese terrors", if you want.

6) Which architectural benefits does GL have over DX? I'm talking the API, not the platforms that can run either.

Would be interesting if you could actually answer these questions, instead of patronizing and uttering non-info?
Posted on 2003-12-02 08:43:18 by f0dder
Ah, hutch-- decides to again evade the discussion, and throw on some extra arrogance...

"Cross platform code is very hardware dependent" is ofcourse the most obvious piece of crap of this entire thread. It is a contradiction in terms of the purest category. Namely, cross-platform implies that there are different platforms, aka different types of hardware. So to say that cross-platform code would depend on the hardware, is in direct conflict with what cross-platform itself means.

I see your response to historical and current limitations in x86 hardware as a sign of ignorance.


Well, if even 'experienced asm programmers' such as yourself can't list them, how could I possibly know?

And, while you have again reiterated that there are differences between Windows/x86/DirectX/OpenGL/SGI/supercomputers, whatever, you fail to mention any of them, or the effect they would have on performance.
We answered your questions, try answering ours. The ones that f0dder listed seem to cover it nicely...
Oh, and by the way, the only Taiwanese GPU designer is XGI, as far as I know... ATi and Matrox are Canadian, and if I'm not mistaken, 3DLabs and NVIDIA are US.

Answer f0dder's questions, or admit that you cannot back up your statements, regardless of whether they are right or not (actually, did I ever even say that they aren't?).
Posted on 2003-12-02 09:01:22 by Bruce-li

It is a contradiction in terms of the purest category.

More limpid logic. :grin: I wonder if you understand "contradiction", "terms", "purest" and "category".

When it's obvious that you are dealing with someone with a high level background under the Windows platform, I wonder what the point is in repeating things like bus limitations, memory address range, instruction variation from model to model, high clock frequencies to cover up old architecture etc ...

If you don't understand such limitations, I suggest it is because you don't understand enough about the hardware. There are a lot of people in this forum who actually do know what these things mean so it's not like it's a big deal.

I don't in fact undertake to educate someone who enters a debate with smartarse wisecracks that reveal his own lack of very basic knowledge about hardware. What you have continued to do is be beguiled by hype about how leading edge DX is, and while it works well on later hardware with a late enough video card, it's powers off the pace compared to the high end stuff, the type of stuff that you appear to have aspirations about.

If you don't comprehend the difference in scale from PCs to mainframes and other high end hardware, then go out and get an education if you have some need.

Like it or lump it, it's a matter of fact that high end graphic manipulation systems don't use Microsoft Windows, x86 and DX. It does not matter what might, ought, could or should be, it's what the situation IS at the moment. Vapourware is no substitute for the hard evidence in output terms.

As before, have PHUN.
Posted on 2003-12-02 09:26:37 by hutch--
When it's obvious that you are dealing with someone with a high level background under the Windows platform, I wonder what the point is in repeating things like bus limitations, memory address range, instruction variation from model to model, high clock frequencies to cover up old architecture etc ...


Excuse me, but your original statement was not about Windows at all, only about x86 itself; read my signature, or go back to the original post... And the limitations you list here apply mostly to the standard IBM-compatible PCs alone... 32 bit versions of these, even.
Bus limitations... What bus? AGP? Doesn't count. It's not part of x86, as mentioned before (XBox, remember?).
Memory address range... Would that be the 32 bit thing that you tried to sell before? There are 64 bit x86s, so give it up already.
Instruction variation from model to model... Excuse me, but I fail to see how differences between models affect the maximum performance one can get from a specific model. So I don't see how it relates to performance of a specific x86 CPU. x86 also differs from other architectures. Does this make the other architectures slower as well, for some reason?
High clock frequencies... Since when are those a disadvantage?

while it works well on later hardware with a late enough video card, it's powers off the pace compared to the high end stuff


I think your original statement was more that it would be impossible to implement DirectX on high end stuff, not whether standard PC hardware could measure up to the fastest machine in the world.
And I think we have asked why exactly DirectX cannot... and OpenGL can... What is the key difference?
Re-read f0dder's questions and see if you can answer them, it would help the discussion.

If you don't comprehend the difference in scale from PCs to mainframes and other high end hardware, then go out and get an education if you have some need


Again, completely beside the point. You never mentioned PCs as such. You only stated that it would be technically impossible to build mainframes from x86... for reasons you never mentioned... even though such machines are proven to exist...

Like it or lump it, it's a matter of fact that high end graphic manipulation systems don't use Microsoft Windows, x86 and DX. It does not matter what might, ought, could or should be, it's what the situation IS at the moment. Vapourware is no substitute for the hard evidence in output terms.


Then again, the discussion was not about Windows+x86+DX vs the world.
The discussion was supposedly about the technical limitations of x86 and DirectX that would prohibit them from ever being used for high-end graphics systems *cough*Pixar*cough*. Limitations that you have yet to mention.
Re-read f0dder's questions and see if you can answer them, it would help the discussion.
Posted on 2003-12-02 09:38:33 by Bruce-li
farfetch farr farr
Posted on 2003-12-02 10:01:28 by Hiroshimator
Hmmmm,

It seems, with the argument over, that the recriminations have got into full swing.

With reference to well known limitations of x86 hardware and the techniques to get around them, refer to my earlier posts; I cannot be bothered retyping them.

With regard to your apparent lack of low level coding experience, feel free to get an education. Same for the apparent lack of architectural and scaling comprehension.

With your apparent swallowing of the hype around directX, at least learn why it was written and where it came from.

With video output from big iron, look up SGI, NEC and IBM, there will of course be others.

BTW, have PHUN.
Posted on 2003-12-02 18:26:13 by hutch--
Hmmmm,

There was something that I should have tacked on to one of the previous posts. Among the reasons why Microsoft started developing the DirectSound/Video/whatever-else line of software was that the fundamental design of Windows excluded direct anything access, which ruled out games like DOOM and others that worked in 32 bit DOS.

Now contrary to the view our friend has put forward about the platform and OS independence of the DirectX stuff, it was developed because the OS design has what has been later called a "Hardware Abstraction Layer" (HAL) and this made sound, image and other hardware related stuff really slow.

Direct(whatever) is actually a fudge by Microsoft to fix a shortfall in performance in their Windows operating system. Under the previous system, you had to write an OS version dependent device driver to do any of these things, where at least with the direct(whatever) line you had improved access to hardware.

Now the problem our friend has with his view of platform independence with direct(whatever) is that if a system is not running Windows, it does not need Windows specific fudges to write directly to hardware.

Now here are a few more suggestions for direct(?) software.

directComPort
directPrinterPort
directAnyOtherPort
directDisk
directKeyboard
directMouse

All other suggestions welcome. :grin:

When you are posting in an assembler forum that is full of people who know how to write to video memory, port addresses, sound cards, hard disks etc .... you are not preaching to the faithful with fudges to get around the limitations of Windows.

As always, Have PHUN.
Posted on 2003-12-02 21:34:09 by hutch--
Perhaps you have missed the questions that f0dder posted, so I'll repeat them here:

Hutch, perhaps you could clarify a few things?

1) What's in a supercomputer? Does a place in the top-whatever list qualify? What about google and pixar?

2) What does DX/GL have to do with supercomputers?

3) What does DX/GL have to do with non-realtime image rendering? (the "heavy" style image manipulation; say pixar, industrial light and magic, etc.)

4) What ties DX to x86 and windows? (hint: nothing - it runs on Itanium2 and G5)

5) What does the computer type have to do with 3D hardware acceleration? DX/GL is really about driving 3d graphics hardware - "taiwanese terrors", if you want.

6) Which architectural benefits does GL have over DX? I'm talking the API, not the platforms that can run either.

Would be interesting if you could actually answer these questions, instead of patronizing and uttering non-info?

PS: I wonder how you think you know so much about my asm skills, or lack thereof. I don't believe we've even talked about asm at all. Besides, how is that relevant to the discussion anyway? Even people that don't know asm can use x86, DirectX, OpenGL and supercomputers, you know... Happens all the time. On the other hand, knowing asm does not guarantee knowledge of any of these things.

PPS: These questions are f0dder's, not mine... Perhaps you can answer them then, or does f0dder also lack knowledge of asm and is it therefore beneath you to actually answer questions and get into a technical discussion, so you just go calling names and other rude nonsense?

PPPS: You don't need DirectKeyboard or DirectMouse, there's DirectInput for that... Do I sense a certain lack of knowledge there? :)
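For reference, reading the keyboard through DirectInput is only a handful of calls - a rough sketch (assuming the DirectX 8 SDK headers and dinput8.lib/dxguid.lib; error handling left out):

#define DIRECTINPUT_VERSION 0x0800
#include <dinput.h>

// Sketch: poll the keyboard once per frame via DirectInput 8.
bool ReadKeyboard(HINSTANCE hInst, HWND hWnd, BYTE keys[256])
{
    static IDirectInput8*       pDI  = 0;
    static IDirectInputDevice8* pKbd = 0;

    if (!pDI)  // one-time setup
    {
        DirectInput8Create(hInst, DIRECTINPUT_VERSION, IID_IDirectInput8,
                           (void**)&pDI, NULL);
        pDI->CreateDevice(GUID_SysKeyboard, &pKbd, NULL);
        pKbd->SetDataFormat(&c_dfDIKeyboard);
        pKbd->SetCooperativeLevel(hWnd, DISCL_FOREGROUND | DISCL_NONEXCLUSIVE);
        pKbd->Acquire();
    }

    // keys[DIK_ESCAPE] & 0x80 tells you whether that key is currently down
    return SUCCEEDED(pKbd->GetDeviceState(256, keys));
}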
Posted on 2003-12-03 03:12:18 by Bruce-li

Now contrary to the view our friend has put forward about the platform and OS independence of the DirectX stuff, it was developed because the OS design has what has been later called a "Hardware Abstraction Layer" (HAL) and this made sound, image and other hardware related stuff really slow.

It was written because there was no facility in the win32 API for high-performance multimedia stuff - the HAL doesn't have anything to do with this, actually you have the HAL to thank for stuff running as well as it does. Believe it or not, DX uses a HAL too. Before DX, you had hardware accelerated (where the hardware supported it, of course) blits, lines, etc - what GDI has to offer. This just wasn't enough for high-performance gaming (something as trivial as doom could easily have been written to run reasonably well without any DX, though - and I believe such a doom version has already been written.)
Oh, and are you saying that OpenGL is not a form of HAL? *giggle*


Direct(whatever) is actually a fudge by Microsoft to fix a shortfall in performance in their Windows operating system. Under the previous system, you had to write an OS version dependent device driver to do any of these things, where at least with the direct(whatever) line you had improved access to hardware.

It's not a fudge, it's access to hardware accelerated functions in an orderly and controlled manner, to keep programs written by know-better people from bringing the entire OS down. Furthermore, DX means you don't have to write a device driver per hardware device you want, per major OS version. Doesn't this sound like a lot less work for the programmers? Well-defined interfaces giving you decent control of your resources, and working across a wide range of devices...

It's true that x86 is an ancient design and something cleaner would be nice - but that really has nothing to do with whether it's able to perform supercomputing tasks or not. At the end of the day, it all comes down to what the cost of reaching a certain performance is - perhaps you have to cluster together a couple hundred times more x86 processors, but if the end result is cheaper - who cares? Something indicates that x86 can be cost beneficial; need I mention pixar, google, and the top-whatever supercomputer lists?

You still seem to be confusing DX/GL (realtime hardware accelerated rendering) with the "non-realtime rendering" (in lack of a better term) used to render movies and high-quality images. Neither DX nor GL is used here, as none of those APIs support the necessary features (raytracing, global illumination, very advanced materials and shading models). Both DX and GL can be used for previews in your applications, though, and there's no reason you can't use DX here instead of GL.

Your assumption that DX wouldn't do well on non-x86 hardware just because it was written on x86 is silly - at least since you're not saying GL should do badly on all non-SGI hardware. In fact, the DX API is less bound to the original architecture than GL... To do anything interesting with GL, you have to use vendor-specific (or the sort-of-standardized ARB) extensions. Surprise surprise, to get at these extensions, you have to use platform-specific code. On windows, this is wglGetProcAddress. If you don't use vendor extensions, you're (at best) limited to using individual glVertex calls to get your geometry on screen - this can be sped up somewhat by using display lists (sorry if that's the wrong term - it's been a while. I can look up the exact term needed). Needless to say, all of this is inefficient in speed compared to vertex+index buffers... which GL does support (through ARB_* extensions iirc, so at least somewhat standardized), but still: you need platform-specific code for that. Oh, there's glDrawElements and friends, but they draw from main memory - the idea today is to transfer your geometry to the GPU and have that do the transform and lighting, instead of continuously wasting CPU processing power and AGP (or PCI or PCI Express or whatever bus you use) bandwidth.
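To illustrate the "platform-specific code" point, this is roughly what getting at ARB_vertex_buffer_object looks like on windows - a sketch only (the typedefs and constants normally come from glext.h, which uses its own PFN...PROC names; no error handling):

#include <windows.h>
#include <GL/gl.h>

// Entry point types for ARB_vertex_buffer_object (simplified versions of the glext.h ones)
typedef void (APIENTRY *PFNGLGENBUFFERSARB)(GLsizei n, GLuint *buffers);
typedef void (APIENTRY *PFNGLBINDBUFFERARB)(GLenum target, GLuint buffer);
typedef void (APIENTRY *PFNGLBUFFERDATAARB)(GLenum target, ptrdiff_t size, const void *data, GLenum usage);

#define GL_ARRAY_BUFFER_ARB 0x8892
#define GL_STATIC_DRAW_ARB  0x88E4

// Sketch: upload vertex data to a GPU-side buffer. The interesting part is that the
// entry points have to be fetched through wglGetProcAddress - i.e. win32-specific code.
GLuint UploadVertices(const float *verts, int bytes)
{
    PFNGLGENBUFFERSARB glGenBuffersARB = (PFNGLGENBUFFERSARB)wglGetProcAddress("glGenBuffersARB");
    PFNGLBINDBUFFERARB glBindBufferARB = (PFNGLBINDBUFFERARB)wglGetProcAddress("glBindBufferARB");
    PFNGLBUFFERDATAARB glBufferDataARB = (PFNGLBUFFERDATAARB)wglGetProcAddress("glBufferDataARB");

    GLuint vbo = 0;
    if (glGenBuffersARB && glBindBufferARB && glBufferDataARB)
    {
        glGenBuffersARB(1, &vbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        // geometry now lives GPU-side instead of being pushed across the bus every frame
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, verts, GL_STATIC_DRAW_ARB);
    }
    return vbo;
}

On linux you do the same dance with glXGetProcAddressARB instead - the GL API itself doesn't give you a portable way in.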

Oh, while we're at it: GL lacks support for a number of things DX already has support for. Vertex/pixel shaders (currently available as VERY vendor-specific, and being standardized... will only support vs/ps 2.0 and not 1.x as most cards have, and guess what? DX already supports both). You can't change screen resolution through GL (you have to rely on platform specific code), et cetera. I'm sure if I asked scali, he could mention a number of other things GL lacks. Sure, DX is only available on windows (on x86 and itanium), on Mac OSX through MacDX (http://www.coderus.com/), and on linux with WineX. The API is better than GL, though. I already gave a number of reasons why - you just used fluffy empty words.
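(For the resolution point: in a fullscreen GL app on windows you end up calling plain win32, something like the sketch below - none of it is GL. With DX it's just part of setting up the device.)

#include <windows.h>

// Sketch: switch the desktop to 1024x768x32 for a fullscreen OpenGL app.
// Note that this is pure win32 - "platform specific code" - not OpenGL.
bool SetMode1024x768(void)
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize       = sizeof(dm);
    dm.dmPelsWidth  = 1024;
    dm.dmPelsHeight = 768;
    dm.dmBitsPerPel = 32;
    dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

    return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
}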

Also, while nobody has said 3D accelerated hardware should be used for final product rendering (see reasons given above), it can be used for previews, and some pretty impressive stuff. You might want to have a look at the following two URLs to see some examples:
http://www.daionet.gr.jp/~masa/rthdribl/
http://www.debevec.org/RNL/
The second shows that a standard middle-end Radeon 9700 card can be used for a real-time (and decent frames per second) approximation of something that took "a while" to render on a high-performance cluster.

PS: you might want to ask discreet (3d studio max) and/or Alias|wavefront (maya) what they use to do their final rendering - betcha it's not going to be either DX or GL. Hell, I'll eat my hat and take pictures of it if proven wrong. (DX/GL is obviously used for accelerating the working environment, though).

:rolleyes:
Posted on 2003-12-03 05:26:39 by f0dder
PS: http://www.imada.sdu.dk built a cluster of 512 2.0GHz P4 machines, giving a nice 2 teraflops. But perhaps that's not good enough to qualify as a supercomputer. For those interested, they have a nice article about the P4 thermal throttling here: http://www.imada.sdu.dk/~hma/intel/ .
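(For what it's worth, that figure matches the theoretical peak: assuming the P4's SSE2 unit retires 2 double-precision flops per cycle, 512 nodes x 2.0GHz x 2 flops/cycle = 2.048 teraflops.)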


PPS: if you couldn't find it, the top500 supercomputer list is at http://www.top500.org/list/2003/11/ . Check entries 4, 6 and 7. Would be interesting to know the full price of the systems.
Posted on 2003-12-03 05:41:09 by f0dder
Bruce,

As a matter of fact I ignored f0dder's questions as the argument is dead in the water. Without a Windows operating system, direct(anything) to defeat the lack of direct hardware access in Windows is useless. Put simply, you can forget DirectX on anything that does NOT run Windows.

Now with your 1st postscript, I can in fact only take what you say as I know nothing else about you, but if you keep saying you know nothing about the limitations of x86 hardware and the stuff made to use with it, then either you are not speaking truthfully or you simply don't understand the technical data.


PPPS: You don't need DirectKeyboard or DirectMouse, there's DirectInput for that... Do I sense a certain lack of knowledge there?

Probably, I will take your word for it as it is a member of the class of direct(anything) to get around the HAL.

Now to come back to the original assertion I made, DirectX is limited to Windows based systems and that arbitrarily excludes it from high end graphics. OpenGL can be run on high end graphics boxes so it is a superior graphics manipulation system for high performance output. Further, x86, for being useful and cheap hardware in many user based applications, is a long way off the pace in terms of high end hardware because its limitations are well known by, among many others, its inventors.

Now you may succeed in strapping a whole pile of them together if you have the dedicated hardware to do it but you will not achieve a high end graphics box for the effort. There is a bit more to dedicated hardware than bundling a mass of processors together.

I mean seriously, how many 8088 processors would it take to build a supercomputer? Same question with anything in between that and the current Pentiums. What you have to address is how to stream the output of so many processors to run real time image manipulation. The big boys do it with expensive dedicated hardware.

Now there is another factor again that you both have missed. A cursory glance at a board of 10 years ago showed that it was more or less full of chips; look at one 5 years old and it had a lot fewer. Current ones pack a mountain more functionality into even smaller surface mounted packages.

Over time processors have become far more powerful as is consistent with the change in technology but there is another factor again that is shaping the current batch of computers and that is the worldwide downturn in IT. In these economic conditions, processor development slows down because the demand cannot pay for it. Look at the corporate return of both Intel and AMD over the last couple of years to comprehend that.

The changes in the pipeline that are being held up because of the downturn are things like better wafer technology than 25 year old silicon, better track technology with lower voltages, far higher speed memory and a mountain of other things.

Strapping lots of cheapies together does give more computing power but not on the scale of later, smarter hardware. This is where the next generation of supercomputers will come from, not regurgitating 25 year old architecture.

Do have PHUN. :tongue:

Posted on 2003-12-03 08:14:25 by hutch--
As a matter of fact I ignored f0dder's questions as the argument is dead in the water. Without a Windows operating system, direct(anything) to defeat the lack of direct hardware access in Windows is useless. Put simply, you can forget DirectX on anything that does NOT run Windows.


Firstly, I don't think your original point said anything about whether the system should run Windows or not. It merely said something like "translate it to another hardware platform ". You never said the hardware platform could not run Windows.
Secondly, you are ignoring f0dder's post that pointed out without a doubt that DirectX was available on Apple Macintosh... Which as you know does NOT run Windows.
Thirdly, how is that different with OpenGL? Is OpenGL impossible on anything other than SGI as well? Why, or why not?

Now with your 1st postscript, I can in fact only take what you say as I know nothing else about you, but if you keep saying you know nothing about the limitations of x86 hardware and the stuff made to use with it, then either you are not speaking truthfully or you simply don't understand the technical data.


You constantly keep reversing the statements. I don't think I actually said that I don't know anything about the limitations... You just implied it a few times (which is rather rude anyway). I only asked you to name and explain these limitations, which you still have not, by the way.

Now to come back to the original assertion I made, DirectX is limited to Windows based systems and that arbitrarily excludes it from high end graphics. OpenGL can be run on high end graphics boxes so it is a superior graphics manipulation system for high performance output. Further, x86, for being useful and cheap hardware in many user based applications, is a long way off the pace in terms of high end hardware because its limitations are well known by, among many others, its inventors.


Firstly, as you could see, Windows 2000 can also run on powerful multi-CPU Itanium2 systems (like the ones SGI produces), and it supports clustering, so "Windows based boxes" can also be high-end, and are by no means limited to x86 based PCs, as you seem to imply constantly.
Secondly I think you originally did not state that DirectX was limited to Windows (which it isn't, again MacDX), but rather that DirectX had architectural limits that would make it impossible to run on anything but Windows. Yet, you never mentioned these limits when I asked about them.
Thirdly, if you say x86 cannot be used for high end hardware, you still have to explain why eg Pixar and Google use them, and why there are so many x86-based systems in the top500 list of supercomputers.

Now you may succeed in strapping a whole pile of them together if you have the dedicated hardware to do it but you will not achieve a high end graphics box for the effort. There is a bit more to dedicated hardware than bundling a mass of processors together.


One word: Pixar

I mean seriously, how many 8088 processors would it take to build a supercomputer? Same question with anything in between that and the current Pentiums. What you have to address is how to stream the output of so many processors to run real time image manipulation. The big boys do it with expensive dedicated hardware.


If there is a technical difference between how x86 outputs its data and any other processor used in these super computers, why have you not explained it yet?
I see no difference between x86-based supercomputers and non-x86-based supercomputers. Both use a large number of CPUs to get their performance. Where exactly is the difference? The SGI-systems that you pointed out, were based on off-the-rack Itanium2 processors... Could it possibly be that the 'dedicated hardware' as you call it, is not in the CPU itself, but rather on the mainboard? And could it possibly be that this 'dedicated hardware' could also be designed to handle x86 CPUs? Or is there a specific technical problem with x86 that makes this 'dedicated hardware' impossible? If so, what is it?

So really, instead of posting endless unrelated ramblings, why don't you support your original points instead?
Or are you ready to admit that your original points were indeed wrong, since you never actually backed them up, but constantly shifted to slightly other points?

And still, I think f0dder's questions would greatly help this discussion, at least, the original form of this discussion, so I'll repeat the questions here, should you want to answer them now.
Otherwise, you might as well give up. You are only making a fool out of yourself, with your silly insults and arguments of things beside the point.

Hutch, perhaps you could clarify a few things?

1) What's in a supercomputer? Does a place in the top-whatever list qualify? What about google and pixar?

2) What does DX/GL have to do with supercomputers?

3) What does DX/GL have to do with non-realtime image rendering? (the "heavy" style image manipulation; say pixar, industrial light and magic, etc.)

4) What ties DX to x86 and windows? (hint: nothing - it runs on Itanium2 and G5)

5) What does the computer type have to do with 3D hardware acceleration? DX/GL is really about driving 3d graphics hardware - "taiwanese terrors", if you want.

6) Which architectural benefits does GL have over DX? I'm talking the API, not the platforms that can run either.

Would be interesting if you could actually answer these questions, instead of patronizing and uttering non-info?
Posted on 2003-12-03 08:49:37 by Bruce-li
Same response as before, read my previous postings for the technical data but the argument you started is over. If you have not spoken truthfully about your own knowledge, finally that's your problem, not mine.

Same old stuff, direct(whatever) was developed to get around limitations of Microsoft OS design. Name me a high end supercomputer from your own list that is running the combination of Windows, x86 and DirectX. It actually does not matter if it has been ported to a Mac or Sega or anything else, most vendors want to cash in on the available games, nothing more or less.

Where high end graphics boxes do run OpenGL and don't run the Windows/directX combination, it is fair to say that OpenGL is a superior technology for high end applications.

Are all these recriminations PHUN ?
Posted on 2003-12-03 09:07:08 by hutch--
Same response as before, read my previous postings for the technical data but the argument you started is over.


Firstly, you never posted any proper technical data that could indisputably support your statements. You just hacked at the previous generation of 32 bit x86 CPUs, and tried to use some PC-specific limitations as generic x86 limitations.
Secondly, the argument never even started, if you ask me.
I only asked you to clarify some of your statements. Instead I get pages full of rubbish and insults.
The only one you're arguing with is yourself, since you made those original statements that you are now trying to weasel your way out of. So if you insist that the argument is over, then your original statements are automatically invalidated, since they were never properly argued.

Same old stuff, direct(whatever) was developed to get around limitations of Microsoft OS design.


Then why can you not name a single one of these limitations, and explain how OpenGL is different in that respect? Does OpenGL not get around the same limitations? At least, in the Windows-version? How is this different from OpenGL on other platforms exactly?
Hum this is starting to sound a lot like f0dder's questions again. Perhaps you should reconsider answering them.

It actually does not matter if it has been ported to a Mac or Sega or anything else, most vendors want to cash in on the available games, nothing more or less.


The motive is not important. You claimed that there were technical reasons why it would be impossible to implement DirectX on anything other than Windows + x86. These non-x86, non-Windows implementations prove clearly enough that it is not impossible, right?
So what are you trying to say exactly?

Where high end graphics boxes do run OpenGL and don't run the Windows/directX combination, it is fair to say that OpenGL is a superior technology for high end applications.


It is not actually. Recall the 100 mbit vs the 1 gbit example I gave a few posts ago.
Besides, if it is superior technology, then surely you can give some technical arguments why OpenGL is in fact superior for high end applications (what exactly do you mean by high end anyway? Another one of those questions that you never bothered to answer, but which would clearly help the discussion... Allow me to point you back to f0dder's questions).
This statement of yours makes no more sense than the following: "Japan is a superior country for running supercomputers, because the fastest supercomputer is located there."
Then you go on to make even more bizarre claims: "If you would move the Earth Simulation Center to another country, for example Germany, it will no longer be the fastest supercomputer in the world, because it's not in Japan."
Posted on 2003-12-03 09:20:17 by Bruce-li