--OpenGL may be knobbled on a PC but it has something to do with the limitations of current hardware, use it with
--powerful enough hardware and the limitations disappear

Do you mean DirectX hasn't got anything to do with the limitations of current hardware, maybe?
Anyway, if you admit OGL is knobbled on the PC, then DX is a good choice, isn't it? (Don't get me wrong, I don't know much about either, except DDraw :) )


--Flight simulators from the 70s for military purposes had capacity that a modern PC would struggle to handle and it
--has a lot to do with the sheer difference in power between mainframes and PCs.

So what?
It's not because supercomputers use OGL that they perform best; they are also orders of magnitude more expensive than PCs, so I don't see what that proves. Anyway, there WILL come a day when the 70s military flight sims WILL look ridiculous by comparison with PC games (and by that day military flight sims will be even better of course - life goes on).

--no-one is going to give Microsoft a chance to do what they have done to the PC market. Too much is at stake in
--commerce, security and a number of other important issues to let them in.

... what have they done to the PC market???
Posted on 2003-11-26 06:03:35 by HeLLoWorld
(what have microsoft done to the pc market?)
------

Anyway, regarding IA-64, and apart from all the problems of calling the API etc.,
I think it's not blasphemous to ask oneself whether there is still a point in writing asm for a performance benefit.

I mean, for x86,
it's true for SIMD because compilers don't do it,
and I tend to think it's true for non-SIMD because of

-reformulating algorithms to suit the processor's pipeline stages
-(and not because of - I would rather say despite! - cycle counting and hand-tuning; compilers explore all the possibilities better than we do)
-good use of the cache

But it's not impossible that these things could be done by a program; I mean, maybe it's not yet the case, but why not?
I've heard that on some RISC architectures hand-tuned code isn't very much better than compiled code, and IA-64 is designed with compiler-generated code in mind, I've read. So the question could be: would you still program in asm if you were certain that you couldn't crunch numbers faster than your compiler by using asm?

I think I would, because even now I'm not at all sure my code is faster than a compiler's. Maybe for a 2D blur I can check (and YES! I have beaten MSVC++ on this with the normal ALU, and I was happy that day :) ), but for a software 3D engine I can't tell; I'm not going to write it all in C or Pascal again just to check :) .
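
For reference, the plain-ALU version of that kind of blur is just a nested loop over neighbouring pixels, something like this hypothetical 3x3 box blur on an 8-bit grayscale buffer (a sketch, not the actual code that beat MSVC, and it ignores the border pixels):

#include <stdint.h>
#include <stddef.h>

/* 3x3 box blur on an 8-bit grayscale image, row-major, skipping the 1-pixel border. */
void blur3x3(const uint8_t *src, uint8_t *dst, size_t w, size_t h)
{
    for (size_t y = 1; y + 1 < h; y++) {
        for (size_t x = 1; x + 1 < w; x++) {
            unsigned sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += src[(y + dy) * w + (x + dx)];
            dst[y * w + x] = (uint8_t)(sum / 9);   /* average of the 9 taps */
        }
    }
}

The asm wins tend to come from reformulating exactly this sort of loop (keeping running sums in registers, walking the rows incrementally, avoiding the divide) rather than from cycle-counting a literal translation.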

In fact it's possible that, with an extremely complex processor, handwritten asm would be slower than compiled code (maybe even on IA-64... what would you do with all those registers? Don't you think your compiler will find a better use for all of them simultaneously?).
I'm not sure, but I think I would still try asm, so...

Another point is, is it easier to formulate your algos in asm or c. for many things like control blocks etc its easier in c imo,(but some ppl think otherwise and its okay), but for some other things, when you want to do some simple operations like pixel manipulations, hey I really hated C coz I knew it was so simple in fact but I couldnt formulate it simply (but I ONLY WANT TO PUT THIS NIBBLE THERE! :) )
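
(For the record, the nibble case does come out in C, it is just all masks and shifts; a made-up little helper like this, with hypothetical names:)

#include <stdint.h>

/* Write a 4-bit value into the high or low nibble of a byte. */
uint8_t put_nibble(uint8_t byte, uint8_t nib, int high)
{
    nib &= 0x0F;                                                /* keep only 4 bits */
    if (high)
        return (uint8_t)((byte & 0x0F) | (uint8_t)(nib << 4));  /* replace high nibble */
    return (uint8_t)((byte & 0xF0) | nib);                      /* replace low nibble */
}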

And maybe another reason I love asm is that I have the feeling of fully controlling the "beast" at the lowest level, and it makes me proud to write every byte that goes on the wire... not a very good reason maybe, but a reason anyway :)

bye
Posted on 2003-11-26 06:44:15 by HeLLoWorld
Ah yes, it's nice to see that hutch has kept up both with current graphics hardware and the APIs... *cough* No matter what you say, OpenGL is a more limited API than DX. Of course this doesn't matter much for engineering apps, the output will be plenty fast, and engineers would probably rather focus on the other aspects of their program than a fast 3D preview. And on the mainframe stuff gl was designed for, speed isn't that much of an issue either (fun note: you used to send gl commands to a mainframe, then wait for it to process the commands and give an image - gl wasn't really designed to be 'interactive').

gl is an okay api to fool around with, and with the various vendor extensions you can even achieve fine results for realtime stuff like games and demos. The API, however, is limited and doesn't give you very much control. I don't see why there would be problems in porting DX to other OSes/hardware. And nobody says you have to do a full COM subsystem with marshalling and whatnot, as long as you implement what DX needs.


OpenGL may be knobbled on a PC but it has something to do with the limitations of current hardware,

It's not so much that as it being an API with less control.

Oh, and what about providing some screenshots of those 70s flight sims? I somehow doubt the image quality is far better than what a high-end "gaming" PC of today can deliver.

...

it's true for SIMD because compilers don't do it,

Actually they do, but it's still relatively easy to outperform compilers wrt. SIMD operations as the support hasn't been there very long yet.
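
For illustration, the sort of loop where hand-written SIMD still tends to beat the compiler is something like a saturated byte add; a rough sketch with SSE2 intrinsics (assuming SSE2 is available and that n is a multiple of 16):

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdint.h>
#include <stddef.h>

/* dst[i] = min(a[i] + b[i], 255) for n bytes, n assumed to be a multiple of 16. */
void add_saturate_u8(const uint8_t *a, const uint8_t *b, uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        /* PADDUSB clamps at 255 in one instruction - clumsy to express in plain C */
        _mm_storeu_si128((__m128i *)(dst + i), _mm_adds_epu8(va, vb));
    }
}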


So the question could be: would you still program in asm if you were certain that you couldn't crunch numbers faster than your compiler by using asm?

You'll always be able to at least reach the level of the compiler :) - and it's fun seeing what you can do. But it depends on the instruction set and how well the compilers optimize... one day there won't really be a point anymore. Also, compilers doing global optimizations will be better at tracking, say, register usage throughout a whole program. I'd think that this can give improvements especially on architectures that aren't as register-count handicapped as x86.
Posted on 2003-11-26 10:01:27 by f0dder
You know, I remember buying a 486DX66, and at the time it was said that processor speed was approaching its theoretical limit, so they changed the manufacturing process and bypassed the limit. I bought a P133 after that and it was blazingly fast; it was difficult to imagine an application that would need more speed. The games at the time ran well, even extremely well, on the processor and you had to think "why would you need anything faster?". Then people started looking at the games and thought "Wow, if it can do that, wouldn't it be cool to have 20 of them on the screen at the same time. And maybe instead of 640x480 put it all on a 1024x768 res."

The point is that no matter how fast you make a machine, no matter how far they push the limits of technology, some guy will always want to use more than it can offer. And for that you have assembly language. It will not die because it is not needed, it will die because the OS was designed to make it slower than scripts. When the primary language to interface with the OS is a scripting language like XUML and you must translate up from assembler to achieve anything, then assembler will die. That will be a sad day, as the true limits of the technology will never be tested and the "tricks" that make your machine outperform its specs will be lost forever.
Posted on 2003-11-26 10:35:00 by donkey
I screamed aloud to the old man
I said don't lie don't say you don't know
I say you'll pay for your mischief
In this world or the next
Oh and then he fixed me with a freezing glance
And the hell fires raged in his eyes
He said do you want to know the truth son
- I'll tell you the truth
Your soul's gonna burn in the lake of fire

Can I play with madness - the prophet stared at his crystal ball
Can I play with madness - there's no vision there at all
Can I play with madness - the prophet looked and he laughed at me
Can I play with madness - he said you're blind too blind to see


:grin:
Posted on 2003-11-26 14:00:35 by Hiroshimator
I guess what amuses me about the postings is the lack of comprehension of what was being argued and the assumptions about x86 hardware. I addressed an assertion that DX should be ported to other platforms - for those who appear to have missed it, NON-X86 HARDWARE.

Now, having seen arcade gaming hardware and software 10 years ago that would make a 10 gig PV with whatever will be the current video card wet itself, I can only write off the comments as inexperience.

Then there are the hardware differences with Silicon Graphics dedicated hardware, and again I know guys who have owned them for commercial graphics work and they can produce frame rates while doing real-time drawing of complex image data that is out of the league of PC architecture.

X86 architecture is simply too old to hold anything like the high end of graphics work, and there has been and is better hardware that is not strangled by the architecture design inherent in X86 hardware.

What I suggested is that using X86 hardware to design a multiport image manipulation system is at best naive, something like trying to win a Formula 1 race with a Model T Ford. DX runs well on the Model T Ford of computing, but to win a Formula 1 race you simply need later technology.

Regards,
Posted on 2003-11-26 18:35:37 by hutch--
Hiro, don't dare include something as holy as that in this discussion ;)
Posted on 2003-11-26 19:31:51 by f0dder
Now, having seen arcade gaming hardware and software 10 years ago that would make a 10 gig PV with whatever will be the current video card wet itself, I can only write off the comments as inexperience.


Really now?
Two questions:
1) How much does a CPU matter exactly, when you are using dedicated graphics accelerators anyway?
2) Can you give us some factual information on this? Perhaps name one of these esoteric 'mega' games... Or even better, point us to some site that actually has some screenshots and background info?
Otherwise I really don't see why I should believe all this; I'm sure I never saw any mega-machine like that 10 years ago.

Then there are the hardware differences with Silicon Graphics dedicated hardware, and again I know guys who have owned them for commercial graphics work and they can produce frame rates while doing real-time drawing of complex image data that is out of the league of PC architecture.


At my university, we used SGIs as well... But guess what? A few years ago they replaced their expensive dual-CPU SGI workstation with a simple P4 with a regular 'game' card... It was simply faster, and cheaper...
You are probably stuck in the past... Ironically, so is SGI, just check here: http://www.sgi.com/workstations/
These aren't exactly systems that can still put a P4 with a decent 3d card to shame anymore.
SGI lost the battle against mainstream PC hardware... x86 may not be a good architecture, but it packs a lot of brute force... As for 'game' cards... ATi and NVIDIA base their professional 3d cards on their 'game' Radeon and GeForce cards these days, and these are used in Macs (no slouches either, when it comes to graphics) or e.g. HP dual Itanium2 workstations as well... So the PC game gear has effectively become the pro gear now.

X86 architecture is simply too old to hold anything like the high end of graphics work, and there has been and is better hardware that is not strangled by the architecture design inherent in X86 hardware.


You probably haven't been paying attention for the last... 5 years or so? Ever heard of an NVIDIA GeForce256 card? It introduced hardware T&L (that's Transform & Lighting, know what that is?), which offloaded this heavy task from the CPU...
The result is that CPUs have very little to do with graphics anymore... They basically just have to queue up the operations for the driver, and that's it.
Besides, even though x86 is not exactly the greatest architecture around, it can still deliver a lot of performance... and they offer 64-bit addressing as well now... So how exactly would the x86 be troubled in high-end graphics work?

What I suggested is that using X86 hardware to design a multiport image manipulation system is at best naive, something like trying to win a Formula 1 race with a Model T Ford. DX runs well on the Model T Ford of computing, but to win a Formula 1 race you simply need later technology.


What makes you think that the people even considered the CPU architecture when designing DirectX?
Isn't the CPU pretty much irrelevant when designing an API?
Or are you confused, and do you mean the general PC architecture, and not the x86 CPU itself?
Well in that case you could have a point... Then again, Macs, Itanium2 workstations, and other 'professional graphics machines' also just use AGP buses and those 'pro' cards, often from ATi or NVIDIA, so it's pretty much identical to a PC, from a graphics point-of-view.

Then again, XBox is NOT like a PC... the graphics subsystem is on the mainboard and shares the main memory pool, rather than being isolated through an AGP bus, and having dedicated memory...
And guess what? MS just used DirectX on there as well... a slightly modified version of DirectX8, mostly to allow low-level access to squeeze some more out of the hardware.

And well, if you were still comparing it to OpenGL... how exactly is the architecture of OpenGL different from that of DirectX?
Sure, OpenGL is different, as f0dder already mentioned, but that is mostly superficial, is it not? Internally they are still very much identical, don't you agree?
In fact... This might be a good time to mention MacSoft... They implemented DirectX on the Mac (yes that's one of them non-x86 CPUs that you are so fond of). They did this by wrapping Apple's OpenGL system... This allows them to quickly port popular games to the Mac... And as you can see at http://www.apple.com/games/features/, they have ported many recent, popular (DirectX) PC games to Mac...

So would you like to state some facts, or are you ready to admit that you don't know what you're talking about?
Posted on 2003-11-27 02:33:54 by Bruce-li
What an interesting view. My recollection of dedicated arcade gaming hardware circa 1992 is of multiple screens on the low-end stuff for 3d effects, using Japanese-made proprietary hardware that had no information available on either the hardware or software. Slightly out of the league of the current stuff you have in mind, but then they used to cost about 10 times a current PC in those days.

To make the comparison, the worst video card I have, which is a couple of years old, manages about 190 frames a second in full screen at 1024 x 768 in 32-bit colour, nothing exciting by today's standard but plenty fast enough when wide-screen motion pictures run at about 28 frames a second.

What you are confusing is the difference between the frame rate that can be blitted at full screen and the sheer processor grunt necessary to produce the image data. The reason why Intel introduced MMX and then XMM was to try and make up the shortfall of processing power in the x86 PC architecture, yet it's old news that parallel processing helps with some forms of data processing; it comes from the mainframes of the 70s.

I doubt that anyone contests that current x86 PCs are cheaper than most other computer hardware and where it can be used it makes sense but I will drive the point that if you are after high end image manipulation, you are barking up the wrong tree with a 25 year old architecture. There has been better, there is better and there will be better hardware than the best of x86 in the area of image manipulation.

Try this one detail: high-end flight simulators manage the planet's curvature, they do not need to fog the distance through lack of processor and video grunt. Give some thought to a 64-bit Apple G5 if you want to do something smart in image work, it's at least a recent hardware design.

Regards,
Posted on 2003-11-27 06:37:19 by hutch--
What an interesting view. My recollection of dedicated arcade gaming hardware circa 1992 is of multiple screens on the low-end stuff for 3d effects, using Japanese-made proprietary hardware that had no information available on either the hardware or software. Slightly out of the league of the current stuff you have in mind, but then they used to cost about 10 times a current PC in those days.


Well, the past is always better in our memories than in reality, right? ;)
Anyway, no matter what you say, you can't convince me that there was 1+ GHz hardware back in 1992, or the 70s for that matter... This was simply impossible with the state of technology at that time (think about Moore's Law, and how far transistor count has come in 30 years, a factor of 60,000?). In the 70s, people were still counting in kHz and KB. The microprocessor as we know it was only invented in 1971, in the form of the Intel 4004, which ran at a whopping 108 kHz initially, and later 740 kHz, with 2300 transistors, and 4 bits.
No matter how many millions of these babies you use together, they will NOT beat a modern-day x86 at 2+ GHz.
The military may have been at the cutting edge, but the cutting edge was not a factor of millions faster than the rest.

On top of that, most of the basic 3d techniques that we know today had not even been invented yet in the 1970s...
For example, texture mapping was 'invented' by Catmull in 1974. That is regular texture mapping: no filtering, no mipmapping (from the early 80s, if I'm not mistaken), no nothing. All features that are standard on even the earliest 3d cards.

That is just the historical side of things that makes your stories hard to believe. The fact that you cannot produce any actual names of or references to the machines you speak so fondly of, doesn't help a lot either...
Also, bear in mind that you can actually RUN the average early 90s games on PCs, using the MAME emulator suite.

What you are confusing is the difference between the frame rate that can be blitted at full screen and the sheer processor grunt necessary to produce the image data. The reason why Intel introduced MMX and then XMM was to try and make up the shortfall of processing power in the x86 PC architecture, yet it's old news that parallel processing helps with some forms of data processing; it comes from the mainframes of the 70s.


You are obviously missing the entire point of HARDWARE acceleration in 3d graphics...
By the time MMX was introduced (1996/97), the first 3d accelerators for PCs were already emerging (Voodoo), so blitting to screen or triangle rasterising with MMX was never even necessary on PCs, not at the cutting edge anyway... Of course it was useful for people with older systems who didn't have a 3d accelerator yet, but that is not the point here.

Same goes for SSE; as I mentioned above, the GeForce256 already offloaded the T&L from the CPU, so SSE was not required for most of the calculations, and the x86 architecture has no effect on the GeForce256's ability to process data, of course.
As for the mainframes of the 70s... They often did not have any screen output at all... They processed their data and saved it on magnetic tapes or printed it on paper... I believe the Apple I (mid-70s) was one of the first machines to actually have a character display, which was later improved to allow graphics.
So their vector processors didn't help them much there... Besides, they'd still be no match for x86s of today.

I will drive the point that if you are after high end image manipulation, you are barking up the wrong tree with a 25 year old architecture. There has been better, there is better and there will be better hardware than the best of x86 in the area of image manipulation.


If I buy a high-end HP dual Itanium2 workstation, I get an ATi FireGL AGP card, the exact same one that I could fit in a PC, and it uses a GPU which is based on the Radeon series, which many PCs actually use anyway. Same features, only different drivers and higher clock speeds.
So where exactly would I be missing out, if the hardware is the same?
And again, who cares about the CPU, when you use a hardware accelerator?
Besides, the fact that x86 is 25 years old has no effect on its actual performance, because the inefficiency of the architecture is compensated for by today's technology in both clock speed and the actual implementation. x86 CPUs run at the highest clock speeds by far... Over 3 GHz... The closest non-x86 competitor is probably the PowerPC 970, which runs at about 2 GHz now. But that is not exactly a big workstation CPU, it's just the PowerPC equivalent of the x86.
Workstation CPUs, like the Itanium2, are at 1.5 GHz at most. So what x86 lacks in efficiency, it mostly makes up for with raw speed anyway.
Sure, the Itanium2 is faster, but the margin is not that spectacular really.

Try this one detail: high-end flight simulators manage the planet's curvature, they do not need to fog the distance through lack of processor and video grunt. Give some thought to a 64-bit Apple G5 if you want to do something smart in image work, it's at least a recent hardware design.


There are plenty of terrain engines with smart LOD available on the PC that can render entire planets in realtime, WITH curvature obviously, since you can watch the planet from outer space, as a sphere. Also, the fog is just an effect; what you probably mean is far-plane clipping/culling... and well, you can trust me on this one... EVERY graphics system uses far-plane clipping/culling. Especially if you render curvature anyway... you can't look infinitely far ahead, because at some distance everything drops below your viewpoint anyway. Better to eliminate it, right?
In fact, often extra occluders are used, to eliminate geometry behind other geometry as well. Good thing those graphics programmers aren't as naive as you are, and don't assume their hardware can just handle anything you throw at it, because it can't :)
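
To be concrete, the far-plane part of that is nothing exotic; in camera space it boils down to a test like this rough sketch (real engines test all six frustum planes, this only checks the far one, and the names are made up):

#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Return 1 if an object's bounding sphere lies entirely beyond the far distance. */
int cull_far(vec3 center_cam, float radius, float far_dist)
{
    float dist = sqrtf(center_cam.x * center_cam.x +
                       center_cam.y * center_cam.y +
                       center_cam.z * center_cam.z);
    return (dist - radius) > far_dist;   /* 1 = cull, 0 = keep */
}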

As for Apple G5... Guess what display cards are in there? Radeon and GeForce.
Also, the CPU tests remained inconclusive, they did not clearly beat the latest x86, in fact, they could barely keep up most of the time.
So while it has a nice recent hardware design, it's not necessarily the best choice, a decent x86 with a decent 3d card will be faster.

Oh, and I still think you need to actually post some information about these marvellous super-mainframes and arcade-machines, because I've never seen them, and I find it hard to believe you on technical grounds.
Posted on 2003-11-27 07:52:29 by Bruce-li
I wonder why the US army is 'playing' ;) with Jane's flight sims (running on x86 hardware), and why MS flight simulator is being used to teach civil pilots...
Posted on 2003-11-27 08:28:06 by f0dder

I wonder why the US army is 'playing' ;) with Jane's flight sims (running on x86 hardware), and why MS flight simulator is being used to teach civil pilots...


I wondered about this, so I emailed a buddy in the US Air Force. He says they use Silicon Graphics Onyx systems for their new flight simulators. I am not sure if those are x86-based. I know for a fact that up to the late 80s at least, the RCAF used Silicon Graphics Iris systems.
Posted on 2003-11-27 09:48:15 by donkey
Hm, seems like I misinterpreted something and jumped to conclusions, then. Afaik Jane's are developing simulation software for the army - since they also do simulation stuff for x86 (civilian/game kind of stuff), I thought the army stuff would be x86 as well - no reason for it not to be, imo.
Posted on 2003-11-27 09:52:29 by f0dder
You could still be right; the hardware for the flight sims is determined by the manufacturer, and my buddy is with the Raptor program and they are using that. The Army is very different from the Air Force in the US, so they not only have different birds but completely different (though supposedly integratable) systems.
Posted on 2003-11-27 10:44:44 by donkey
Some interesting information for hutch--:

Pixar goes x86: http://www.xbitlabs.com/news/cpu/display/news6421.html

History of computer graphics in the 70s: http://www.geocities.com/CollegePark/5323/1970.htm

I don't see a lot that agrees with the things hutch-- claimed so far... I do see many things that are contrary to his claims though...
Such as how in the 70s, computers weren't powerful enough for realistic computer graphics yet, not even offline, let alone realtime... In fact, many of the techniques were just invented in the mid-70s, surely they didn't have robust, efficient hardware implementations for them right away?
I don't see any mention of hardware acceleration at all, actually.
I do see that they mention SGI in the 80s: http://www.geocities.com/CollegePark/5323/1980.htm
They are probably the first to make dedicated graphics acceleration hardware?

These are some facts to support my story, if you have any facts to support your story (how about discussing some of your own graphics work, you seem to have worked on graphics mainframes in the old days?), feel free to share them with us.
Posted on 2003-11-27 10:53:02 by Bruce-li
http://www.google.com/custom?num=100&hl=en&lr=&ie=ISO-8859-1&newwindow=1&safe=off&cof=AWFID%3Ab437de8a59c4667f%3BL%3Ahttp%3A%2F%2Fwww.nps.navy.mil%2Fimages%2FGoogleNPS.gif%3BLH%3A106%3BLW%3A760%3BAH%3Acenter%3B&domains=nps.navy.mil&q=Silicon+Graphics+Intel&sitesearch=nps.navy.mil

...it can be clearly seen that the US Navy is using SGI workstations (non-x86) presently. OpenGL is fully implemented in hardware on these very high-end systems - they don't need to fake many of the effects used on high-end PC graphics. The Naval Postgraduate School is only a couple of miles away and their work on weather simulation is truly impressive.
Posted on 2003-11-27 12:05:43 by bitRAKE
OpenGL is fully implemented in hardware on these very high-end systems - they don't need to fake many of the effects used on high-end PC graphics.


Could you please elaborate on this? What kind of effects do you mean, and how would they be faked?
Also, how exactly is this OpenGL implementation different from the PC version (eg. what kind of functionality is not implemented in hardware on the PC versions, and how does this affect them)?
Posted on 2003-11-27 12:12:21 by Bruce-li
OpenGL has been around (fully implemented) for some time on the PC. As far back as the Permedia 2 chip from DuPont Pixel (now 3DLabs) there was a nearly full OpenGL implementation in hardware, and that was around the time of the 486.

The difference between modern gaming hardware and old SGI hardware (and modern professional hardware) is precision. Internally you'll find gaming hardware can live with lower precision, because at 100 fps in Quake 3 a vertex being placed 1 pixel to the left isn't noticed. In military and design work, you really do care about being 1 pixel to the left! There are plenty of documented cases (look at viewperf comparisons) of older cards from both ATI and nVidia rendering incorrectly on professional test apps, although this may have improved with more recent crops of cards. SGI provides a known rendering platform, whose internal precision is much higher than that of the games cards (they will use more than 32-bit floats so that the final result rounds more accurately, for example).

Also, the design point of the games cards is mainly texture performance: games consist of relatively few vertices, and most of those are obscured by nearer planes. Professionals, on the other hand, care about rendering huge wire-frame models, where nothing can really be culled geometry-wise. It is a different focus.

The military use both made-for-x86 chipsets (in their own boards, and on x86 platforms), and bigger boxes. I guess it depends on what they want to do. It's a case of big iron for big tasks.

The idea that low clock == low performance is a fallacy.
It was estimated that Colossus (the Station X machine at Bletchley Park in the UK), which cracked the Enigma code the Germans used, would outpace a Pentium doing the same task, and it was of course valve-based. Clock for clock I'd guess the Pentium would have the better clock speed. If you wanted to play Quake, the Pentium would win hands down. When it comes to cracking the Enigma code, screw the Pentium: the room-filling, valve-based, ticker-tape-fed monster the British used is the weapon of choice!

Pixar go x86... This is because they are rendering on big-ass clusters; it has nothing to do with the graphics hardware on board. Don't believe nVidia's hype about rendering frames of "Final Fantasy: The Spirits Within" in real time (a claim they made, trotting out a renderer at shows like Comdex); when it was looked at, they had cheated: selected special scenes, used cut-down physics and pixel & vertex shaders (compared with what was needed for a direct comparison), and rendered at a lower resolution.

Mirno
Posted on 2003-11-27 12:46:57 by Mirno
Internally you'll find gaming hardware can live with lower precision, because at 100 fps in Quake 3 a vertex being placed 1 pixel to the left isn't noticed. In military and design work, you really do care about being 1 pixel to the left! There are plenty of documented cases (look at viewperf comparisons) of older cards from both ATI and nVidia rendering incorrectly on professional test apps, although this may have improved with more recent crops of cards.


Even early Voodoo cards already had 3 subpixel bits... All PC accelerators today have at least 4 bits...
There is no case of "a vertex being placed 1 pixel to the left".
Sure, some early hardware was not very precise, but that was mostly in the blending modes, and lack of subpixel precision (either in software or hardware).
But that is a thing of the past, since modern 'professional' cards are based on the same GPUs as the 'game' cards. In fact, it could be that 'game' cards are actually more accurate these days, because they use floating-point pixel pipelines, and I don't know of any 'professional' hardware that does so (not even 3dlabs offers these, as far as I know. Not that it matters though, since they are AGP cards that fit in x86 PCs just fine).
From what I can see on the SGI site, 12 bits per component integer is the best they can do... That is DirectX 8 spec.
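
To put a number on what those subpixel bits mean: with 4 bits a rasteriser resolves vertex positions to 1/16 of a pixel, which is the kind of snapping this toy function illustrates (an illustrative sketch, not how any particular card implements it):

#include <math.h>

/* Snap a screen coordinate to the grid implied by a given number of subpixel bits.
   With 4 bits: snap(100.30) = 100.3125, an error far below "1 pixel to the left". */
float snap_subpixel(float coord, int subpixel_bits)
{
    float steps = (float)(1 << subpixel_bits);     /* 16 positions per pixel for 4 bits */
    return floorf(coord * steps + 0.5f) / steps;   /* round to nearest step */
}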

Also, the design point of the games cards is mainly texture performance: games consist of relatively few vertices, and most of those are obscured by nearer planes. Professionals, on the other hand, care about rendering huge wire-frame models, where nothing can really be culled geometry-wise. It is a different focus.


I beg to differ... Consider the ATi R3x0 series, which features a 4-pipeline high-performance vertex shader array.
It can handle an immense number of polygons. Besides, you've got it backwards: if you render wireframe, you CAN'T use very high-resolution meshes, because each line is 1 pixel thick... Before you know it, all pixels are covered, and it pretty much looks like a solid model anyway. An R3x0 has no problem whatsoever handling a situation where you have as many pixels as you have vertices.
And there is a shift from texturing to pixel-shading, here the R3x0 series differs from the NV3x series. R3x0 is aimed more at arithmetic than texture, NV3x is the other way around. Guess which one performs better in the latest games?

When it comes to cracking the Enigma code, screw the Pentium: the room-filling, valve-based, ticker-tape-fed monster the British used is the weapon of choice!


Exactly, dedicated hardware is always much faster than generic hardware trying to do the same task... it's like saying a real C64 runs better than a PC running a C64 emulator.
Which brings us to the next point... Dedicated hardware for graphics vs software-rendering...

Pixar go x86... This is because they are rendering on big-ass clusters; it has nothing to do with the graphics hardware on board. Don't believe nVidia's hype about rendering frames of "Final Fantasy: The Spirits Within" in real time (a claim they made, trotting out a renderer at shows like Comdex); when it was looked at, they had cheated: selected special scenes, used cut-down physics and pixel & vertex shaders (compared with what was needed for a direct comparison), and rendered at a lower resolution.


No, Pixar go x86 because the CPU power is relatively cheap compared to the alternatives, be it cluster or mainframe, contrary to what hutch-- would have us believe.
Of course they don't use the graphics hardware on board; they use a software renderer that doesn't even use the same approach to rendering as graphics hardware does... Which makes it rather logical that NVIDIA's real-time rendering is not exactly the same as the original. But you have to look through the marketing there.
NVIDIA does have a point in that their hardware can now approach professional offline-rendered results...
NVIDIA also has a point in that part of the offline-rendering process could be offloaded because the hardware can perform the same tasks and obtain the same results, in less time, and at a lower cost... After all, that is the purpose of the hardware.
Anyone who believes NVIDIA is doing exactly the same in realtime as what Pixar took months to render on a huge farm, is just plain stupid. NVIDIA's hardware is good, but not that good.
What it comes down to, is that things like raytracing are not (yet?) possible to do efficiently on dedicated graphics hardware, so software renderers will have that 'edge' on hardware in terms of realism and accuracy.
Not to worry though, work on dedicated hardware for raytracing is already being done. It's just a matter of time.
Posted on 2003-11-27 13:21:06 by Bruce-li
Hmmmm,

I cannot really help you with a reference for the arcade gaming hardware of the early 90s because, among other things, it was proprietary hardware and software that was never published and involved big bux to produce.

The basic hardware and software used to cost about 25 grand and you had to build the cabinets yourself, but on the income end they used to take about 2 grand a day, so you paid for them in under a month.

The low-end stuff used multiple monitors, usually 3 and sometimes a 4th to provide overhead viewing as well, but the more expensive arcade games used back projectors on a partly spherical screen, and the speed and field depth made current PCs look like the kids' toys they are in the image manipulation area.

I will address a number of the assumptions in your response that have wandered far from what I maintained in the first place: that x86 hardware is not a model platform for cross-platform software development.

The idea that an "API" is not or should not be platform dependent assumes cross-platform code development, which usually means what you can produce in libraries for a C compiler, yet it is hardly a secret that fast code on most platforms is written in native assembler. Some compilers are a lot better than others, but cross-platform porting is by no means a simple matter in performance terms. OpenGL is a good example where x86 hardware is not fast enough.

References to different hardware platforms tend to make the point I argued in the first place, be it dual Itaniums, G5 Macs or whatever else, and I suggest that a software system designed primarily for x86 hardware is not a model to use for other, far more sophisticated hardware designs.

The next is the association of processor grunt to image display capacity. Hardware acceleration has helped make x86 PC display faster but there is still a difference between what your fastest blit rate is to a screen and how much processor grunt you need to produce the image data.

A test of something like screen resolution x colour depth looks something like 1024 x 768 x 4 bytes multiplied by the frame rate per second. This will give you the data throughput possible with the present hardware you have available. Then you look at what it takes to produce the image data on a frame-by-frame basis.
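
Plugging in the figures quoted earlier in the thread (1024 x 768, 32-bit colour, about 190 frames a second), the arithmetic comes out roughly as in this throwaway sketch:

#include <stdio.h>

/* Back-of-envelope display throughput: resolution x bytes per pixel x frame rate. */
int main(void)
{
    double bytes_per_frame = 1024.0 * 768.0 * 4.0;     /* ~3.1 MB per frame */
    double bytes_per_sec   = bytes_per_frame * 190.0;  /* ~600 MB/s of blitting */
    printf("%.1f MB/frame, %.1f MB/s\n",
           bytes_per_frame / 1e6, bytes_per_sec / 1e6);
    return 0;
}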

Simple image data in something like a VOB file is not much more than streaming data in from a file and blitting it to the screen at a predetermined frame rate, but start working from a real-world height map, with or without texturing, process the information to make it recede in the distance, and you are starting to run out of grunt on a current 10 gig PV PC.

What you trade off is speed against detail, so you either end up with a snail's-pace display or lousy precision, and the limit is the sheer processing grunt of a current x86 processor. I still own, somewhere, a couple of DirectX games from about DX7 that I used to run on my old PIII, and they worked fine; the display seemed to be reasonably fast and close-range precision looked good, but the receding distance was progressively poorer, which was covered up by fogging that looked even worse.

For a game it was fine, but for critical high-speed simulation it was some powers off the pace in technical terms. The "gee whiz" element of what the next accelerated video card may be able to do does not solve the problem of an ancient architecture in x86 hardware, the well-known bottlenecks in performance and the fudges required to improve the performance.

x86 hardware has a couple of major advantages: no single economic interest controls it, with multiple processor manufacturers and a large range of accessory manufacturers in HDDs, video and the other accessories necessary to build a PC, plus backwards compatibility with software written before 1990. You may remember IBM flopped when they tried to regain control of the x86 PC market with Micro Channel architecture because no-one trusted them.

Price, backwards compatibility and familiarity are what keep x86 as the major player in the PC market, but that does not give you back the performance that an ancient architecture strangles.

Regards,
Posted on 2003-11-27 19:39:26 by hutch--