I have tended to leave this thread alone as it turned into a pissing competition after our friend lost his argument about what performs off the shelf and what performs on the specialty end of the market.

It's easy to make reference to SGI boxes as they have been around for a long time, and to find the current stuff you only need to look up their web site at www.sgi.com to see what their current toys are.

If you knew your way around the IBM or NEC sites, or a plethora of others, you could keep up with their high end stuff as well. I think most who have had the patience to read this thread get the idea that I don't give a FLYING PHUK about the latest video card cheapies from Taiwan or similar, as they change on an almost daily basis in terms of what pops the most pixels, polygons etc ....

Our friend seemed to miss the point when I mentioned, slightly tongue in cheek, video capacity powerful enough to control 3-dimensional holographic images in real time. While he may be able to produce game-quality image data fast enough, he has missed the scale of processing power and image manipulation required to manage 3-dimensional images at high precision. This IS the territory of big iron at the moment, and the scale I mentioned involved hundreds of thousands of these 3D objects being manipulated in real time.

You could add "smellovision", localised surround sound for each object and individual monitors for the heart rate of each ancient warrior as well, apart from their blood pressure and respiratory rate, but the general idea is that ANY given hardware capacity will probably be surpassed at some stage in the future as the demand for extra performance increases.

What our friend has still not connected is SCALE of operation and costs. x86 boxes have become a lot cheaper and a lot faster, but so has the BIG end of town. A mainframe in the mid-60s was about the cost of a skyscraper; now it's about the cost of the foyer in a modern city building, and this is a bigger scale of cost reduction than has happened in the x86 PC market.

For under a million dollars US you can get more bang for your buck than kiddies playing around with PCs could imagine and this is an awful lot cheaper than it used to be many years ago. When our friend learns what SCALE is about, he will stop treading in doggy poo of his own making and subsequently putting that foot back in his mouth.

As usual, have PHUN. :tongue:

Regards,
Posted on 2003-12-09 21:04:17 by hutch--
"Insane" amount? No, they just like to insure all possible input values map to correct output.


If you use 10 subpixel bits, you basically have more accuracy INSIDE each pixel than you have pixels across the screen at 1024x768 or lower. I'd call that insane, yes.
And you didn't get what I said. I said that no 3d card ever is one pixel off with the vertices.
They may have less precision, but it does not affect vertex placement, as Mirno thinks.
So, you should be talking to Mirno, not me.
I know how T&L pipelines and rasterizers work, it's my job.
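To put rough numbers on that subpixel point, here is a quick sketch in C++ (purely illustrative fixed-point snapping with an assumed 10 fractional bits, not any particular chip's rasterizer):

#include <cstdio>

// Sketch: snap a screen-space X coordinate to a fixed-point grid with
// SUBPIXEL_BITS fractional bits, the way a rasterizer setup stage might.
// 10 bits gives 1024 positions inside each pixel, i.e. as many sub-positions
// per pixel as there are whole pixels across a 1024-wide screen.
int main()
{
    const int SUBPIXEL_BITS = 10;                 // assumed, per the point above
    const int STEPS         = 1 << SUBPIXEL_BITS; // 1024 sub-positions per pixel

    float x       = 123.4567f;                    // some vertex X in screen space
    int   fixedX  = (int)(x * STEPS + 0.5f);      // snap to the subpixel grid
    float snapped = (float)fixedX / STEPS;        // back to float for comparison

    printf("%d sub-positions per pixel (a 1024x768 screen is 1024 pixels wide)\n", STEPS);
    printf("x = %f snaps to %f (error %g of a pixel)\n", x, snapped, snapped - x);
    return 0;
}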

DX is designed to exploit the reduced precision of the mainstream cheaper cards, whereas OpenGL is not. This is good or bad depending on your viewpoint.


What kind of nonsense is that? How exactly does DX exploit this reduced precision then? And why would OpenGL not do the same?
I mean, they DO run on the same hardware, you know.
Posted on 2003-12-10 04:54:05 by Bruce-li
If you knew your way around the IBM or NEC sites, or a plethora of others, you could keep up with their high end stuff as well. I think most who have had the patience to read this thread get the idea that I don't give a FLYING PHUK about the latest video card cheapies from Taiwan or similar, as they change on an almost daily basis in terms of what pops the most pixels, polygons etc ....


But you haven't commented on the HP sv7 yet, which uses "the latest video card cheapies from Taiwan".
Why is your head in the sand? Is it an Australian thing?

Our friend seemed to miss the point when I mentioned, slightly tongue in cheek, video capacity powerful enough to control 3-dimensional holographic images in real time. While he may be able to produce game-quality image data fast enough, he has missed the scale of processing power and image manipulation required to manage 3-dimensional images at high precision. This IS the territory of big iron at the moment, and the scale I mentioned involved hundreds of thousands of these 3D objects being manipulated in real time.


Perhaps you would care to explain how 3d holographic graphics actually work then? I'm sure it has nothing to do with 2d rasterization and common 3d accelerators.

What our friend has still not connected is SCALE of operation and costs. x86 boxes have become a lot cheaper and a lot faster, but so has the BIG end of town. A mainframe in the mid-60s was about the cost of a skyscraper; now it's about the cost of the foyer in a modern city building, and this is a bigger scale of cost reduction than has happened in the x86 PC market.


HP sv7, Pixar, your comments please.
Or have you still not connected the x86 and 'game' cards and the BIG end of town?

Comment on these issues, or be silent.
You just keep reiterating the same nonsense that you have before. Are you stupid? Or just blind?
PLEASE COMMENT ON THE HP SV7 SYSTEM AND THE PIXAR CLUSTER, WHICH BOTH USE 'PC' PARTS TO DRIVE THE CUTTING EDGE OF GRAPHICS.
That better?
Posted on 2003-12-10 04:59:37 by Bruce-li
It seems our friend has missed the point again. My reference to technology that is barely running at the moment, that is, computer controlled 3-dimensional holographic displays, was stated in the context of what may happen in the future, so to put it in terms simple enough for our friend: NO, you cannot buy an x86 off-the-shelf card to do that yet.

The purpose of the comment was to demonstrate that video output bandwidth will never be high enough for the leading edge.

Now as far as the HP box you have mentioned, I in fact have nothing against HP stuff and still have a 1982 HP calculator that is still using its original batteries. What I would be more inclined to see is the numbers for it in the context of other big iron. I bothered to produce the specs for an off the shelf SGI box but I never did see the alternative being offered. When it comes to dedicated bigger hardware, we will have to start talking about NASA size computers and similar high end users.

Now where you have shifted the argument again is from PCs to PC parts as it appears that clustering PC x86 hardware by itself did not demonstrate what you were after.

Now here is an argument of the same type you are using: a 1980 Z80 computer used capacitors on its circuit board, and the fastest supercomputer in the world uses capacitors, so the Z80 is as fast as the fastest supercomputer in the world.

I will leave you to have PHUN skittering around this piece of genius.

Regards,
Posted on 2003-12-10 06:15:02 by hutch--
It seems our friend has missed the point again. My reference to technology that is barely running at the moment, that is, computer controlled 3-dimensional holographic displays, was stated in the context of what may happen in the future, so to put it in terms simple enough for our friend: NO, you cannot buy an x86 off-the-shelf card to do that yet.


So what's your point? You cannot buy non-x86 cards off the shelf to do that yet either. What's an x86-card anyway? All current cards used in x86 are just standard AGP cards, which work equally well in Apples, HP Itanium2 workstations, and other things... The CPU architecture has nothing to do with it.

The purpose of the comment was to demonstrate that video output bandwidth will never be high enough for the leading edge.


Not really, since the technique for rendering such is completely different to what we currently know, I suppose.
And since you cannot explain how this technique works, it is rather hard to just guess at the bandwidth requirements for it, is it not?

What I would be more inclined to see is the numbers for it in the context of other big iron.


Did you read the PDF? It lists that it uses standard FX2000 chips, and it lists the performance of one of those chips.
Multiply by 32 to get the HP equivalent of the best Onyx4 system, which also uses 32 graphics chips.
As you see, the Onyx4 gets stuck short of 80 Gpix/s while the HP is over 100 Gpix/s at that point (yes, each single 'game' chip in the HP is faster than each one that the Onyx4 uses).
And as you also see, there is no hard upper limit for the number of graphics chips in the HP system, unlike the Onyx4, which cannot handle more than 32. So the HP will scale way beyond the performance of the Onyx4, as long as you keep buying more graphics chips.
So yes, it's rather weak to just claim you didn't see the specs. They're right there on HP's site, in the PDF I linked to, for example. I think you meant you didn't WANT to see the specs.
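If you want that multiplication spelled out, here is the back-of-the-envelope version in C++ (treating the 3.2 billion texels/sec per render node figure from the sv7 spec quoted in this thread as the per-node fill rate; purely a ballpark sketch):

#include <cstdio>

// Back-of-the-envelope scaling: aggregate fill rate = per-node rate * node count.
// 3.2 Gtexels/s per render node is the sv7 spec figure quoted in this thread;
// 32 nodes matches the largest Onyx4 configuration mentioned above.
int main()
{
    const double perNodeGtexels = 3.2;    // per render node, from the sv7 PDF
    for (int nodes = 8; nodes <= 64; nodes *= 2)
        printf("%2d nodes -> ~%.1f Gtexels/s aggregate\n",
               nodes, perNodeGtexels * nodes);
    // 32 nodes gives ~102.4 Gtexels/s, roughly where the "over 100" figure
    // above comes from; 64 nodes simply doubles it, hence the point about
    // scaling past the Onyx4's 32-chip ceiling.
    return 0;
}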

Now where you have shifted the argument again is from PCs to PC parts as it appears that clustering PC x86 hardware by itself did not demonstrate what you were after.


Nice try, but that is only because you have STILL not commented on the Pixar cluster, which as you know is purely x86. That's probably why you cannot comment on it. It undermines everything you try to say.

Now here is an argument of the same type you are using: a 1980 Z80 computer used capacitors on its circuit board, and the fastest supercomputer in the world uses capacitors, so the Z80 is as fast as the fastest supercomputer in the world.


That looks like another one of your famous non-sequitur arguments, yes.

Give up, you don't know a thing about current graphics hardware and software, it seems. Things have changed in the last few years. Try to keep up with it.
Posted on 2003-12-10 06:34:06 by Bruce-li
Isn't this the most boring thread of all times?

:grin:
Posted on 2003-12-10 06:41:36 by Eternal Idol Birmingham
Isn't this the most boring thread of all times?


Yea, that's what you get with people like hutch-- who cannot hold a mature technical discussion.
They resort to ignoring facts and questions, and instead insult other people.
But for some reason, no moderator has done anything about this yet, even though it is against the forum rules.
Posted on 2003-12-10 06:43:44 by Bruce-li



Yea, that's what you get with people like hutch-- who cannot hold a mature technical discussion.
They resort to ignoring facts and questions, and instead insult other people.
But for some reason, no moderator has done anything about this yet, even though it is against the forum rules.


Maybe you are right... I've waited for hutch to answer the fodder list but he never did ...
Posted on 2003-12-10 06:46:01 by Eternal Idol Birmingham
Maybe you are right... I've waited for hutch to answer the fodder list but he never did ...


Of course not, it's painfully obvious by now that he can't, because he doesn't know enough about it. So he constantly tries to shift the focus of the discussion to other areas.
Of course I could be wrong, but in that case, I expect hutch--'s next post to contain good answers to all the questions after all.
If not, I must be right.
Posted on 2003-12-10 06:50:43 by Bruce-li

I expect hutch--'s next post to contain good answers to all the questions after all.


Yep, me too.
Posted on 2003-12-10 06:53:32 by Eternal Idol Birmingham
http://www.sgi.com/servers/altix/benchmarks.html

Get the PDF file on the link on this URL. Too many results to paste in. Sept 16 2003

http://www.sgi.com/features/2003/nov/nasa/index.html

Just weeks after attaining record levels of sustained performance and scalability on a 256-processor global shared-memory SGI Altix 3000 system, the team at NASA Ames doubled the size of its Altix system, achieving 512 processors in a single image, by far the largest supercomputer ever to run on the Linux operating system. (NASA announced its technical feat at the SC2003 supercomputing conference.) NASA's effort is part of an intra-agency collaborative research program between NASA Ames, JPL and NASA's Goddard Space Flight Center to accelerate the science return for large-scale earth modeling problems.

http://www.hp.com/workstations/risc/visualization/overview.html

sv7-1.pdf

128 MB unified frame buffer
100 million triangles/sec per render node
3.2 billion texels/sec fill rate per render node
27.2 GB/sec memory bandwidth per render node

I cannot be bothered chasing up the specs again that I posted about 10 pages back, but the technical assumption here is: look at the numbers and start counting.

1 is before 2 and 2 is before 3 etc ....

Here are the specs from your own link.

Cards

* NVIDIA Quadro 2 Pro
* NVIDIA Quadro DCC
* NVIDIA Quadro 4 550XGL
* NVIDIA Quadro 4 980XGL
* NVIDIA QuadroFX 2000
* NVIDIA QuadroFX 1000
* ATI FireGL 8800
* ATI FireGL X1 128MB
* 3Dlabs Wildcat VP870

Test platform

* Mainboard: Intel Server Board SE7505VB2
* CPUs: 2 x Intel Xeon 2.4GHz (HyperThreading on, 4 logical processors in all)
* Hard drive: Fujitsu MPG 40GB
* RAM: 512MB DDR
* Monitor: ViewSonic P 817-E

All the tests were carried out under Windows XP Professional, Service Pack 1, DirectX 9.0a, with all additions and drivers installed. Vertical sync, anti-aliasing and anisotropic filtering were forcibly disabled in the drivers.

Now the assumption is that because the NVIDIA Quadro 2 Pro performed better than the others in the list on PC hardware running Windows, you can scale them by strapping enough of them together to make giant killers out of them. While I have no doubt that they are reasonable graphics cards for the flavour of the moment on a PC, demonstrating giant killer capacity on the basis of your assumption is at best tenuous.

Now the argument still breaks down to this: the people who DO use high end hardware are not choosing the type of stuff you are claiming is the giant killer hardware. It seems our friend is back to strapping a lot of chips together at the assumption level to compete with high end hardware.

Yawn but do have PHUN. :tongue:
Posted on 2003-12-10 09:33:24 by hutch--
You must be confused. The Altix is a server, not a visualization system such as the Onyx4 or the HP sv7.
So those benchmarks you posted are irrelevant. There is no mention of graphics performance whatsoever.

Now the assumption is that because the NVIDIA Quadro 2 Pro performed better than the others in the list on PC hardware running Windows, you can scale them by strapping enough of them together to make giant killers out of them. While I have no doubt that they are reasonable graphics cards for the flavour of the moment on a PC, demonstrating giant killer capacity on the basis of your assumption is at best tenuous.


Firstly, it's about the Quadro FX2000, not the Quadro 2 Pro. Secondly, it's not an assumption. It's what HP is actually doing in their sv7.
Thirdly, you seem not to have a clue.

Now the argument still breaks down to this: the people who DO use high end hardware are not choosing the type of stuff you are claiming is the giant killer hardware. It seems our friend is back to strapping a lot of chips together at the assumption level to compete with high end hardware.


I hate to break this to you, but the SGI systems are no different from the HP sv7 or the x86 cluster that Pixar uses. They all use multiple processors, 'strapped together' as you so technically put it, to scale the performance.
The only point is this: HP uses 'regular' graphics chips, also used in PCs in their solution, while SGI uses their custom solution. And the HP is surely able to compete with the 'high end' Onyx4 hardware. It even scales way beyond this one.
Which once again illustrates your utter cluelessness.

Speaking of cluelessness:

Hutch, perhaps you could clarify a few things?

1) What's in a supercomputer? Does a place in the top-whatever list qualify? What about google and pixar?

2) What does DX/GL have to do with supercomputers?

3) What does DX/GL have to do with non-realtime image rendering? (the "heavy" style image manipulation; say pixar, industrial light and magic, etc.)

4) What ties DX to x86 and windows? (hint: nothing - it runs on Itanium2 and G5)

5) What does the computer type have to do with 3D hardware acceleration? DX/GL is really about driving 3d graphics hardware - "taiwanese terrors", if you want.

6) Which architectural benefits does GL have over DX? I'm talking the API, not the platforms that can run either.

Would be interesting if you could actually answer these questions, instead of patronizing and uttering non-info?
Posted on 2003-12-10 09:57:09 by Bruce-li
The problem with your approach is that you continue to try and define the range of the argument to get the results you want, and that's why I keep posting information that shows your assumptions are in the little league.

With the benchmarks between competing systems that I posted,

http://www.sgi.com/servers/altix/benchmarks.html

it's exactly about image manipulation that the comparisons are done. As usual you are not interested in anything that does not support your view that PC hardware running Windows with DX is a giant killer.

I previously cited the Lockheed Martin acquisition of SGI hardware to run their next F-16 flight simulator; this again breaks your theory of humble PC hardware being a giant killer. Why don't PC benchmarks of the type you posted do comparisons with big iron?

Questions ?

1) What's in a supercomputer? Does a place in the top-whatever list qualify? What about google and pixar?

Look up your own list.

2) What does DX/GL have to do with supercomputers?

Are you assuming that DX and OpenGL have identical capacity? Supercomputers DO run OpenGL; how many actually run Windows/DX? HINT: 0.00000%

3) What does DX/GL have to do with non-realtime image rendering? (the "heavy" style image manipulation; say pixar, industrial light and magic, etc.)

Why doesn't your list include SGI, HP, IBM, NEC ? Are they out of the league you are talking about ?

4) What ties DX to x86 and windows? (hint: nothing - it runs on Itanium2 and G5)

Its origin and purpose. HINT, GDI was very slow in 1995. It may be ported to Apple OS and run on Windows running an Itanium(2), but its largest usage by a long way is x86 hardware. It's foolish to assume that a minor portion of the market is proof of multi-platform high performance software.

5) What does the computer type have to do with 3D hardware acceleration? DX/GL is really about driving 3d graphics hardware - "taiwanese terrors", if you want.

SGI use OpenGL without needing taiwanese terrors.

6) Which architectural benefits does GL have over DX? I'm talking the API, not the platforms that can run either.

Architecture is based on hardware. Function calls (API for you) are almost irrelevant, and again, scalability demonstrates the architectural advantage of OpenGL as it runs on everything from PCs to very large and powerful hardware.

You are still failing at the most basic level on the assumption that if you control the range of the argument, you can pull off a con that PCs running taiwanese terrors are outperforming supercomputers. Again I will put it to you that you are out of your league.

Regards and have PHUN. :tongue:


PS: Dilute your ignorance a little with this link on the differences between DX and OpenGL.

http://www.gamedev.net/reference/articles/article1775.asp
Posted on 2003-12-10 10:35:16 by hutch--
The problem with your approach is that you continue to try and define the range of the argument to get the results you want, and that's why I keep posting information that shows your assumptions are in the little league.


Funny, I thought the whole discussion started because of some statements that you made, and never managed to argue successfully.

it's exactly about image manipulation that the comparisons are done.


Really? This is the first time I've seen things such as Linpack and SpecCPU2000 being called image manipulation.
There is nothing related to graphics whatsoever in these benchmarks.
Which makes sense, seeing as the Altix, and its competitors are SERVERS, not graphics systems.
Do you have any clue whatsoever?

this again breaks your theory of humble PC hardware being a giant killer.


Point out the exact post and line where I said anything even remotely resembling this?

1) 'My' list includes x86 systems in high places (3 of them in the top 10 alone). Good to see that you finally agree that x86 makes fine supercomputers, and there's no technical limitation that would prevent this.

2) Is that all you have to say? Supercomputers run OpenGL, not DirectX? That's not very technical, is it? It still doesn't say WHY supercomputers would use OpenGL and not DirectX, does it. Also, does every supercomputer use OpenGL? If not, why not?

3) The point is to explain how these APIs are or aren't being used by such software, your flashing of some hardware manufacturers' names is completely irrelevant.

4) I would like some more technical arguments... If we look at the origin and purpose of OpenGL, we cannot justify its current use either. It's not 1995 anymore.

5) HP effortlessly beats SGI with "taiwanese terrors". Although you can hardly call the Quadro FX2000 that, since it is not designed in Taiwan.
What's your point anyway? The question is more about how the same card with an x86 would be worse than when it is used in combination with another CPU. This is what you claimed, is it not?

6) You cannot disqualify DirectX because it is not competing in the same arena. "Car A is red, Car B is blue", remember? What you SHOULD be doing is giving some technical arguments why DirectX couldn't scale... How is it different from OpenGL in that respect?

Try again. Put some effort into it this time.

PS: Don't underestimate the public. I think the readers are already very much aware of who's trying to con who, and trying to steer the very argument he started with some unfortunate blabbering.
Posted on 2003-12-10 10:57:38 by Bruce-li
Fortunately I don't suffer from your analysis, so I don't take your opinion as a reference the way you do.


Funny, I thought the whole discussion started because of some statements that you made, and never managed to argue successfully.

No, it started with your smartarse comments about who knew what. I commented that x86 is ancient architecture and OpenGL runs on high end systems where DX does not. Like it or lump it, they are both facts. I gave up trying to explain to you what the x86 limitations were, as you don't understand the hardware well enough.

Your problem is that you carry the cross for x86, Windows and DX all at the same time, and to add to it, you are now down to the level of components to try and prop up this nonsense.


Is that all you have to say? Supercomputers run OpenGL, not DirectX? That's not very technical, is it?

Yeah, but it's true; does your opinion matter?


I would like some more technical arguments... If we look at the origin and purpose of OpenGL, we cannot justify its current use either. It's not 1995 anymore.

I would like to win the lottery but I don't hold my breath waiting. Try 1992 for OpenGL: stable, does not change every 5 minutes, and is not dedicated to games in particular; but then, most high end hardware is not used for games.


You cannot disqualify DirectX because it is not competing in the same arena. "Car A is red, Car B is blue", remember? What you SHOULD be doing is giving some technical arguments why DirectX couldn't scale... How is it different from OpenGL in that respect?

The argument you are fumbling for here is that one item does not follow from the other. What I have shown you is the difference between fact and what you want, which are not the same.

What could / should / ought / might happen is not what HAS happened. OpenGL DOES run on far more systems than DirectX, and DirectX scaling is limited to operating systems written by Microsoft. You are inflating vapourware against hard facts here: big hardware runs OpenGL, kids' games on PCs run DirectX.

You don't like the benchmarks? Why did you ask for them then? Are you suggesting that SGI don't use the Altix 3000 systems for graphics at all? Perhaps NASA and the like actually use their secretary's x86 PCs to do the final display?

I plopped this at the end of the last post. You can dilute your ignorance of the area some by reading this.

http://www.gamedev.net/reference/ar...article1775.asp

One thing that is in your favour is that the OpenGL running on Windows is written by Microsoft, who are trying to impose DirectX over everything else, so you may not see them do anything with it in a hurry.

Bigger Yawn but have PHUN. :tongue:
Posted on 2003-12-10 11:29:08 by hutch--
No, it started with your smartarse comments about who knew what. I commented that x86 is ancient architecture and OpenGL runs on high end systems where DX does not. Like it or lump it, they are both facts. I gave up trying to explain to you what the x86 limitations were, as you don't understand the hardware well enough.


You gave up? As far as I can see you never even started. You seem to confuse facts with your personal 'embellishments' on the matter. x86 is ancient, sure. OpenGL runs on high end systems, sure. DirectX does not, sure.
However, you have made claims that x86 would be unsuitable for certain purposes, just because it's ancient, then failed to quantify this assertion.
You also drew some odd conclusions about the technical limitations of DirectX from the fact that it does not run on high end systems, which you again failed to quantify.
So comments about who knew what were factual I suppose, regardless of whether you think they were 'smartarse' or not.

Yeah, but it's true; does your opinion matter?


It's not about opinions, it's about you shutting your big mouth, or producing technically sound backup for the bizarre statements you keep making.

Try 1992 for OpenGL: stable, does not change every 5 minutes, and is not dedicated to games in particular; but then, most high end hardware is not used for games.


Okay, so let me get this straight... Since OpenGL has not changed much in 11 years time, it is actually MORE suited to modern cutting edge hardware than DirectX, which has been updated every 2 years or so?
The only explanation I could find is that this means that 'cutting edge' hardware hasn't changed much in 11 years time then either. Is that what you are trying to say then? Then perhaps we finally agree.

The argument you are fumbling for here is that one item does not follow from the other. What I have shown you is the difference between fact and what you want, which are not the same.


In other words, you have no knowledge of the technology on either side, and can therefore not make an in-depth comparison, which you hide with nonsense statements as the one above.

OpenGL DOES run on far more systems than DirectX, and DirectX scaling is limited to operating systems written by Microsoft. You are inflating vapourware against hard facts here: big hardware runs OpenGL, kids' games on PCs run DirectX.


OpenGL was around before DirectX; simply being older would explain why it runs on more systems than DirectX... And then there are politics. Perhaps MS doesn't want their DirectX to run on other systems, or perhaps those other systems don't want MS' stuff on them.
In no way can I distill any technical limitations out of these facts that would explain why DirectX would be limited.
Technical limitations that you claimed were there. I want to hear them. Enlighten me.

You don't like the benchmarks? Why did you ask for them then? Are you suggesting that SGI don't use the Altix 3000 systems for graphics at all? Perhaps NASA and the like actually use their secretary's x86 PCs to do the final display?


I never asked for benchmarks, certainly not for ones that are totally unrelated to graphics, and therefore this discussion.
And yes, SGI not using Altix for graphics at all would be a safe assumption. In case you didn't study the specs yet, there is no mention of graphics hardware whatsoever. It's just a server. It probably does not even support OpenGL at all. Certainly not at the level of an Onyx4 or sv7 anyway.
There are other types of systems than visualization systems you know... And even SGI makes such systems.
Too bad that you happened to pick the wrong system in your ignorance.

Why don't you just give up? You are only making a bigger idiot out of yourself than you already have. This Altix-stuff is again pitiful. And everyone can see right through the fact that you are not arguing your original points anyway.

PS: your D3D vs OpenGL article is outdated, it doesn't discuss DX9 at all. And it doesn't go into the technical issues that you mentioned either.
Posted on 2003-12-10 11:45:38 by Bruce-li

big hardware runs OpenGL, kids' games on PCs run DirectX.


IMO making a distinction between gaming needs and industrial needs is nonsense; they both need to blast 3D worlds and objects onto framebuffers. Okay, there may be things that need to look fancy for games, but they will benefit industrial/scientific/medical applications AS WELL (otherwise it would be like saying "who needs that Phong shading when flat objects are enough" or "fancy texture support is for kiddy games").

I mean, whatever power you have, however beautiful your render will be with an SGI, if games can do the same, they will! So why say they are different? You say there are two different worlds using two different technologies, as if this was something that would never change.

Is there a fundamental difference in the nature of a mil flight sim and a game??


Perhaps NASA and the like actually use their secretary's x86 PCs to do the final display?


Hey, if this isn't some kind of strong contempt...




(btw, there actually are times when I prefer flat shading :) because it is so cute, but we are speaking of maximum-realism rendering here)
Posted on 2003-12-10 12:17:08 by HeLLoWorld
You say there are two different worlds using two different technologies, as if this was something that would never change.

Is there a fundamental difference in the nature of a mil flight sim and a game??


We've been asking that for the past 15 pages or so, as you may have noticed :)
I think it should be obvious that he doesn't know what he's talking about.
Feel free to prove us wrong, hutch--, and explain the difference in technical terms.
Posted on 2003-12-10 12:30:57 by Bruce-li
Bruce, where do you work? Seems there is a job available. :grin:
Posted on 2003-12-10 17:37:34 by bitRAKE
Trolling with a question rate faster than your frame rate does not win you an argument.

What I have defended is simple: the big end of town runs OpenGL, the small end of town plays with the rest.

I have also asserted, as everyone and their dog knows, that x86 is an old and problematic architecture. The problem here is our friend does not like or understand numbers from superior technology, so the argument reduces down to who uses what.

Clustering with either processors or video chips does not solve the problem for our friend as the big end of town have been doing it longer and keep selling it to high end customers.

What our friend has yet to learn is that his opinion does not change hard facts, and these are that the top end of town does not run Windows, Windows-sized boxes and DirectX.

They DO run Linux and OpenGL when they do image manipulation in real time, and this is a fact.

Our friend has asked for the technical differences between DirectX and OpenGL. Notwithstanding the crappy design of COM, with its multiple levels of indirection and its stability being subject to Microsoft changing it every week or so, software in the graphics arena is currently manipulating hardware through the operating systems that it runs on.

Does Linux need a COM layer when it does its own hardware access? I suggest the Linux guys would not see it this way. Perhaps our friend is yet to grasp that a software system like DirectX does not run without an operating system, and that operating system, Windows, does not run on high end hardware.
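To spell out for the kiddies what those levels of indirection look like, here is a toy sketch in C++ (the shape of a COM-style call versus a flat C-style call like OpenGL's entry points, NOT Direct3D's or any driver's actual code):

#include <cstdio>

// COM-style: every call goes object pointer -> vtable pointer -> function.
struct IRenderer
{
    virtual long DrawTriangles(int count) = 0;  // dispatched through the vtable
    virtual long Release() = 0;
};

struct Renderer : public IRenderer
{
    long DrawTriangles(int count) { printf("COM-style draw: %d tris\n", count); return 0; }
    long Release()                { delete this; return 0; }
};

// Flat C-style API, the way OpenGL entry points look: a plain function call,
// no interface object or vtable hop in the call itself.
void flatDrawTriangles(int count) { printf("flat-API draw: %d tris\n", count); }

int main()
{
    IRenderer* dev = new Renderer();  // real COM would hand this out via a factory
    dev->DrawTriangles(100);          // one extra pointer indirection per call
    dev->Release();

    flatDrawTriangles(100);           // direct call
    return 0;
}

Whether that extra pointer hop matters next to the cost of pushing the actual pixels is a separate argument; the point is that the layering exists.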

Does clustering/strapping together/paralleling chips from NVIDIA make a PC high end hardware? Seems not, as HP build middle to high end hardware that is NOT PC based with the HP sv7 our friend made reference to. There is no trickle down here; an HP sv7 is by no means a PC.

The shift our friend is trying to pull is that you can call Windows / x86 / DirectX / taiwanese terrors high end, but the obvious response is high end """ PC """ systems, not high end systems running OpenGL under Linux.

The argument about components that our friend has repeatedly tried to introduce does not work for him either; it really does not matter if there are some common components between PCs and big stuff, be it resistors, capacitors or chips used by both. The difference is SCALE, something our friend does not appear to be able to grasp.

Limitations in x86 hardware also seem past our friend's grasp, and the technical data seems to be too complicated for him, but to reduce it down to the kiddie level for our friend: Intel's response to the limitations of x86 hardware is called the "Itanium(2)".

Noting that the big end of town can afford either x86 or Itanium, their choice on a large scale to parallel Itanium processors demonstrates they agree with Intel here.

Feel free to tell Intel they are wrong and that x86 is superior technology but do not hold your breath waiting for them to agree with you.

Is our friend a troll ?

You seem to confuse facts with your personal 'embellishments' on the matter. x86 is ancient, sure. OpenGL runs on high end systems, sure. DirectX does not, sure.

With capitulation at this level, the rest is noise.

Do continue having PHUN. :tongue:
Posted on 2003-12-10 18:46:44 by hutch--