Bruce, some men you just can't reach.

I wonder if hutch hasn't read the previous posts, or if he just doesn't grasp the content.

I fail to see where hutch has posted any 'technical data' - unless he considers hacking at things without any quantitative backing 'technical data'.

Perhaps hutch should stick to the things he knows about: PowerBASIC, writing superfast HAND OPTIMIZED ASSEMBLY string scanning algorithms that can beat ANY HIGHLEVEL algorithms WHATSOEVER, and dabbling with philosophy and political ramblings. And preferably on his own ego-rubbing board, so we can have peaceful and technical discussions here.
:rolleyes:
Posted on 2003-12-03 09:26:28 by f0dder
Same response as before: read my previous postings for the technical data, but the argument you started is over. If you have not spoken truthfully about your own knowledge, then that's your problem, not mine. :tongue:

Same old stuff, direct(whatever) was developed to get around limitations of Microsoft OS design.

"Then why can you not name a single one of these limitations,"

HAL :tongue:

Am I taking you seriously? Arrrrgh! No. This argument is over.

Now how many x86s do you need to strap together to make a supercomputer ?

Weep as you calculate the number in the face of OpenGL running on the real high-end boxes, while direct(whatever) paddles away on Windows boxes and, if you are right, a MAC.

Lockheed Martin Selects SGI Graphics Supercomputers for two Flight Simulator Projects

http://www.sgi.com/newsroom/press_releases/2003/december/lockheed.html

I wonder why they did not construct their next flight simulator out of a million cheap x86 chips? Is it that they cannot run the SGI OpenGL on x86, or is it just the superior overall technology? :grin:

I wonder why Lockheed Martin, the JPL and others miss facts so obvious to you: that they could build cheaper, faster supercomputers in the graphics area with x86, DirectX and a cheapie graphics card. These guys must be a bunch of real dorks if what you say is true.

Muhahahaha
Posted on 2003-12-03 09:37:41 by hutch--
HAL


That's a good one... Then you still have to explain why other OSes use similar HALs (including OpenGL, yes), and how those HALs differ from the Windows ones. See? This is yet another loose statement with no technical value whatsoever.

Am I taking you seriously? Arrrrgh! No. This argument is over.


Then why are you still replying?

I wonder why they did not construct their next flight simulator out of a million cheap x86 chips? Is it that they cannot run the SGI OpenGL on x86, or is it just the superior overall technology?


That is exactly what I would like to know... You say you know the technical reasons why. Name them.

I wonder why Lockheed Martin, the JPL and others miss facts so obvious to you: that they could build cheaper, faster supercomputers in the graphics area with x86, DirectX and a cheapie graphics card. These guys must be a bunch of real dorks if what you say is true


I am just saying what Microsoft's supercomputer page says, as you have seen. And I am just pointing out what Pixar and Google are doing.
Besides, I don't think I've specifically used the combination "x86, DirectX and a cheapie graphics card".
You seem very confused...
YOU said 3 things, 3 UNRELATED things.
Namely:
1) x86 is impossible to use for supercomputers.
2) DirectX cannot be implemented on anything else, and cannot compete with OpenGL for technical reasons (which ones are those again?)
3) '70s military flight sims and '90s arcade machines were tons faster than even the best PC graphics card today, or in the future, when we have PV 10 GHz PCs.

I simply asked you to clarify, and gave you some factual info on how x86 is actually used for supercomputers, how DirectX actually is implemented on other systems, and how the '90s Japanese arcade systems that I could find information on were nothing like what you described. The entire thread is still available; perhaps you should read it again, then it may become clearer what I said and what I didn't say... And of course also what you claimed, what I found hard to believe, and where I asked you to supply some supporting facts...

Read the thread again, then answer the questions, because as you see, they relate directly to your statements and your credibility, unlike most of the rubbish you have been posting lately.

Should I take you seriously? Probably not... Every time you are asked to clarify anything you say, you sidetrack and get arrogant and insulting. Clear signs of someone who cannot back up what he is saying, and who uses dirty tactics to try and 'win' the argument.
I repeat once again, it's not about what I said, it's about what YOU said.
So far I (and apparently other readers/participants in this thread) can only conclude that you don't know what you are talking about, and are a sore loser.
Posted on 2003-12-03 09:51:49 by Bruce-li
Here are some numbers for you,

Let's see yours for Windows x86 PCs. :tongue: Feel free to mention a G5, Itanium or x86 Windows box.

Onyx4 UltimateVision Extreme

Visualization Specifications*
Fill rate (no FSAA):    up to 76.8 Gpixels/sec
Fill rate (with FSAA):  up to 51.2 Gpixels/sec
Polygons/sec:           up to 4800 Mpoly/sec
Display resolution:     up to 99.2 Mpixels
Display channels:       up to 64
Graphics memory:        up to 8 GB


Having PHUN, are we?


Then why are you still replying?

I have been writing test code all day and I am brain dead. This is a good way to put me to sleep afterwards. :alright:

Regards,
Posted on 2003-12-03 09:58:43 by hutch--
Is it just me, or has this gotten really boring?

Hutch, maybe you should answer those questions and finally end this...

Just a comment.
Posted on 2003-12-03 09:59:35 by Eternal Idol Birmingham
1) The Onyx4 is not in the same league as a G5, Itanium or x86 Windows box in any way, so comparisons are useless. It is also not relevant to your statements. And if you divide the Onyx specs by the number of visual processors it has, you end up with something remarkably similar to the ATi R3x0 specs. Besides, there is no reason why the Onyx could not run DirectX, is there? In fact... how expensive is that Onyx4 anyway, and how big an Itanium2 cluster could you build for that amount of money? (Not that you'd use an Itanium2 cluster for the same kind of job, probably - with that fillrate, the Onyx4 does seem like a good tool for the job - if the same can't be achieved cheaper.)

2) Do you really think OpenGL is used for the real heavy-duty stuff? Do you think it could render "Finding Nemo", for example? (Hint: where's OpenGL support for NURBS, procedural textures/objects, raytracing, radiosity, photon mapping, non-linear projection, proper soft shadows, correct handling of translucent objects with shadows, correct handling of reflections and refractions (OpenGL can do render-to-texture, but as we all know, this has problems with accuracy and quality, and it is not efficient for recursive reflections/refractions)...). It's also worth noting that GL, like DX, does triangle rasterization - so while the Onyx4 box has a quite nice hardware-accelerated rendering subsystem, which is fine for engineering apps, it would still be limited by main CPU muscle if it came to movie rendering - and then some other cluster might be more cost-efficient?

3) Nobody said DX is available for the most powerful rendering hardware - we claimed that there's no reason why it couldn't be. You showed some, admittedly, impressive systems - but nothing to back up your statements that DX would be unsuitable for these.

4) If we are at "Look who's using what" anyway, why was the "Rendering with Natural Light" animation by Paul Debevec (posted earlier by f0dder) rendered on an Itanium2 cluster, and not on an SGI box?

5) The right tool for the right job. Nobody said x86 was the best supercomputer - but you said one couldn't build an x86 supercomputer, which is clearly wrong. They just happen to be aimed at other tasks - scientific number crunching (sdu.dk, google.com), non-realtime rendering of video data (Pixar).
Posted on 2003-12-03 10:42:54 by Bruce-li
At the beginning I laughed my ass off.
Now...

from now on you could just reply to each other by posting things like:

"To answer lines 12 to 17 of your post number 25, I repeat the things I said in my post 22, lines 35 to 41",

or , more stylish,

you25(12-17)?haha!->me22(35-41)!
you25(22-23)?ridiculous!->me22(35-41)take that!

just beware of recursive loops :grin:
Posted on 2003-12-03 10:56:46 by HeLLoWorld
Yes, it's rather sad that we get the same old arguments thrown back at us, which don't relate to the questions we asked regarding the early statements...
It would help if hutch-- could actually answer any of these questions... Then again, I think we all know by now that he cannot, and I think we all see right through the smoke-screen of insults, patronizing, arrogance and non-info that he tries to pull up to cover this.
Posted on 2003-12-03 11:04:04 by Bruce-li
quote:
// "This Onyx4 system will be capable of powering over 120 megapixels of screen area and has a fill rate of over 40
//gigapixels per second, which is enough pixels per second to put a new image on the average screen of every
//computer in the world nearly 5000 times a day."

//Lets face it, does a kid playing a directX game on his PC need to have a pixel production rate capable of updating
//every computer screen on the planet in real time.

I wouldn't consider 5000 fpd* "realtime" :grin:

*note: fpd, abbreviation (c) 2003, HeLLoWorld: "frames per day" :grin:


(okay, okay, 40Gpix/sec is still nice though)
Posted on 2003-12-03 11:05:07 by HeLLoWorld
5000 fpd == approx. 1 frame every 17 sec (86,400 s / 5000 ≈ 17.3 s)! :grin:
Posted on 2003-12-03 11:07:59 by HeLLoWorld
I've read ONCE that DX was bought by Microsoft from a small company (an English one, I think) and was continued by Microsoft, and that it was VERY VERY bad back in those days. But whatever origin DX may have, it's not the same thing anymore; it's evolving and maturing (example: dunno about execute buffers, but I've been told they were a crappy legacy from the old days, and that doing without them is now best), like any software or product that has thousands of dollars to push it forward.

What's wrong? Even if it had been designed with 32-bit x86 in mind (I absolutely don't know, but people say no), what does that change about its state now? Version after version, interfaces evolve, and they are just interfaces, not implementations... so the implementation is another matter...

Besides, maybe DX was designed to do what Win couldn't (because Win wasn't meant to do it, not because Win was crappy imho), but it was also designed to let you do it the same way on different hardware... so saying "when you've got no Win you don't need DX to access hardware, just /* X="ly" */ access hardware DIRECT(X) (okay, it's so lame maybe nobody will understand there's a lame joke :grin: )" is not a good way of putting down Direct, I think... coz without DX you then have to support all those acceleration functions for all those cards yourself...

But that's what HALs and drivers and APIs are all about. Btw, in previous posts I wondered if one couldn't vastly simplify APIs and programming and OSes (maybe even do away with drivers or APIs) by making wider, stricter and better hardware standards... I thought that wasn't done for compatibility's sake, and protection and all... but BogdanOntanu said it wasn't done because hardware manufacturers hide their specs... that seemed strange, I had never thought about it, but maybe it's true...

Anyway, before this I had hardly ever seen it claimed that DX > GL... now, thanks to hutch, Bruce and the others, I have technical reasons why this should be...
Posted on 2003-12-03 11:34:04 by HeLLoWorld
quote:
//Also, while nobody has said 3D accelerated hardware should be used for final product rendering (see reasons
//given above), it can be used for previews, and some pretty impressive stuff. You might want to have a look at the
//following two URLs to see some examples:
//http://www.daionet.gr.jp/~masa/rthdribl/
//http://www.debevec.org/RNL/
//The second shows that a standard middle-end Radeon 9700 card can be used for a real-time (and decent frames
//per second) approximation of something that took "a while" to render on a high-performance cluster.

thnx for these links!

....wow.....this.....is....amazing.....!

my eyes hurt!
For a few months now I've come to think RTRT (realtime raytracing) is the future... but those dear old triangles are still gonna rule the world for a long time, producing such fucking beautiful graphics!
Posted on 2003-12-03 11:40:40 by HeLLoWorld
I've read ONCE that DX was bought by Microsoft from a small company (an English one, I think) and was continued by Microsoft, and that it was VERY VERY bad back in those days. But whatever origin DX may have, it's not the same thing anymore; it's evolving and maturing (example: dunno about execute buffers, but I've been told they were a crappy legacy from the old days, and that doing without them is now best), like any software or product that has thousands of dollars to push it forward.


I doubt that it was bought from another company, since DirectX required modifications to the kernel and driver model. Only MS could do that... Perhaps some other company proposed the idea, but I doubt they could actually build a working DirectX without MS.
As for execute-buffers... that's pretty much how early hardware worked... MS asked some leading game developers (including John Carmack) for advice on how to design the API... Carmack wanted a low-level API, as long as it was fast... He said something like "If I could code on the bare metal of the cards, I would, as long as it's fast" (you can find the actual quotes somewhere on the net, I'm sure).
The sad part is, when MS actually came up with such an API, Carmack turned out not to be the tough guy he pretended to be, and went for the highlevel OpenGL instead, which is the opposite of what he asked MS for.
Most other developers have used Direct3D from the beginning, and performance and quality have always been reasonable, so it wasn't all that bad really.
Execute buffers only existed in the first few versions of Direct3D anyway; I believe they were abandoned in DirectX 5, when DrawPrimitive() was introduced... That's the difference between OpenGL and Direct3D: if the hardware changes, Direct3D changes with it. OpenGL remains static, and therefore somewhat suboptimal, hence hacks (aka extensions) need to be provided.
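For the curious, here's a minimal sketch of what the DrawPrimitive() model looks like (Direct3D 9 style; the FVF, stride and triangle count are just placeholders, and device/buffer creation is omitted):

[code]
#include <d3d9.h>

// Draw one batch of triangles from a vertex buffer - no execute buffers involved.
void DrawBatch(IDirect3DDevice9* device, IDirect3DVertexBuffer9* vb,
               UINT stride, UINT triangleCount)
{
    device->SetFVF(D3DFVF_XYZ | D3DFVF_DIFFUSE);    // describe the vertex layout
    device->SetStreamSource(0, vb, 0, stride);      // bind the vertex data
    device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triangleCount);
}
[/code]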

What's wrong? Even if it had been designed with 32-bit x86 in mind (I absolutely don't know, but people say no), what does that change about its state now? Version after version, interfaces evolve, and they are just interfaces, not implementations... so the implementation is another matter...


You know that, I know that, f0dder knows that... But hutch-- doesn't know that, apparently.

But that's what HALs and drivers and APIs are all about. Btw, in previous posts I wondered if one couldn't vastly simplify APIs and programming and OSes (maybe even do away with drivers or APIs) by making wider, stricter and better hardware standards... I thought that wasn't done for compatibility's sake, and protection and all... but BogdanOntanu said it wasn't done because hardware manufacturers hide their specs... that seemed strange, I had never thought about it, but maybe it's true...


This is an interesting dilemma... If you make wider hardware standards, this basically limits the freedom of the hardware developers. They are going to make mostly the same hardware then, and it will be hard to compete on a technical level, since you can't easily add or improve features, and make them available to the OS.
Hardware manufacturers need to hide at least some of their specs anyway, because they don't want their competitors to copy what they invented (they may have implemented the same function in a cheaper, faster or more clever way).
The hardware abstraction layers should make it both easy to program different hardware, and should make it possible to use specific hardware-features... These two things sometimes collide, especially when it comes to graphics. That's one reason why Direct3D is nice... You can check if a feature is present (the device caps), and use it if it is, or use an alternative if it isn't. OpenGL instead forces the manufacturers to put software emulation in for all unsupported features.
This means that it's easier to program in a way, because you know it will work... But you don't know HOW it will work... So if you want it to work RIGHT, Direct3D is the easier (or perhaps even the only) way.
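To make that concrete, here's a minimal Direct3D 9 style sketch of a caps check (the cubemap cap is just an example bit, and error handling is kept to a minimum):

[code]
#include <d3d9.h>

// Ask the HAL device what it supports, then decide per feature.
bool SupportsCubemaps()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return false;

    D3DCAPS9 caps;
    bool ok = SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))
              && (caps.TextureCaps & D3DPTEXTURECAPS_CUBEMAP) != 0;

    d3d->Release();
    return ok;   // if false, pick a cheaper fallback instead of hoping for emulation
}
[/code]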
Posted on 2003-12-03 11:51:44 by Bruce-li
YES! I have found it again!
the second link is a page where it is said that DX originally wasn't a Microsoft product.

http://www.scorpioncity.com/djdirectxtut.html
http://www.scorpioncity.com/dj1.html

quote:
// It was originally purchased from a London company called RenderMorphics, and quietly released more or less as
// is as DirectX 2. DirectX 3 was probably the first "serious" release by Microsoft

There are also opinions there that go somewhat your way, Bruce, and somewhat not...

thnx big time for the history of execute buffers.


//This is an interesting dilemma... If you make wider hardware standards, this basically limits the freedom of
//the hardware developers. They are going to make mostly the same hardware then, and it will be hard to
//compete on a technical level, since you can't easily add or improve features, and make them available to the OS.
//Hardware manufacturers need to hide at least some of their specs anyway, because they don't want
//their competitors to copy what they invented (they may have implemented the same function in a cheaper,
//faster or more clever way).


Interesting also... I don't see why this should be different with an API. In my perfect world there would be hardware with integrated software, like a BIOS, except it would implement the API; that is, it would contain the drivers that are usually provided with devices now. But that doesn't make it up to the API level, I think; isn't a DX call just a translation to a Windows driver call? Enlighten me please. If not, what is there in between?

Excuse me, but I'm _really_ interested in knowing whether it would be a valid system to have hardware standards reaching up to the application programming level... whether it would ever happen, for commercial reasons, is another matter... but could it be efficient, while still providing compatibility at a given time, backward compatibility, and be flexible enough to allow evolution?

You said "so, no new features". but thats the same with api! just that the api gets extensions as hardw gets capabilities. why couldnt we have would have firmwares that all implement a specification, and from times to times the specification is changed and ratified ? this even allows periods of transition where hardware manufacturers publish the top level api of their new functions and say "if someone wants to specific-code apps with this then go on" . maybe it is that this embedded soft would be too big... come on , i dont think so (maybe i m wrong).

Of course it's a bit hard to deal with unimplemented features... but the device can report that it doesn't handle them, and then, only then, do we need a software emulation package...

well...


//You can check if a feature is present (the device caps), and use it if it is, or use an alternative if it isn't.

This seems a bit crappy to me, coz you've got to handle all the cases in each app you write, but I don't know the subject... yes, it sounds like more work, but more flexible...

//OpenGL instead forces the manufacturers to put software emulation in for all unsupported features.

Do you mean software emulation in their driver code, using the CPU? So it's not even OGL itself that handles it when it's not there?


bye
(Oh, and I didn't want to hurt anyone by making a poll; in fact it has been removed, which was a good thing, because I wouldn't have wanted to post a war-style poll. I was just laughing at how the arguments were never ending, and thought it would be very funny. Apologies if needed.)
Posted on 2003-12-03 14:49:20 by HeLLoWorld
I once thought about "drivers in hardware" too, but... with PCI, AGP and PCI-X busses, there's this little catch: the hardware is not limited to x86. So it would be hard to put a BIOS on the hardware to provide an API, and I don't know if the PCI/whatever standards have any way to "execute commands" on the hardware. Furthermore, it's not guaranteed that the hardware BIOS would suit the OS it's running under (running VESA under a 32-bit pmode OS requires some virtualization that's not all that funny). Imo, it's better to do as we do now, and have software drivers - it's also easier to update a software driver than to flash firmware, especially for end-users.

DirectX comes in multiple levels... there's the ring3 API part; especially the helper D3DX (or whatever it's called) code is ring3. Iirc, the model is made so there's a lot of common code in the DirectX you download from Microsoft, but no hardware-specific code as such - the video drivers have to support certain features to have DX acceleration, just like they need certain features for GDI acceleration. Iirc, there's a lot of documentation about the specifics in the NT DDK. It's been some years since I glanced at it, but it seemed like a nice way to handle things.

So, DirectX is quite a lot more than just calls to driver code.

I like the way new features are added to DirectX better than the way it happens with OpenGL... the OpenGL committee is very slow, so to use any extensions, you lock yourself into the vendor-specific extensions you choose to support. With DX, the featureset is determined by Microsoft and a bunch of hardware designers, added as a part of the DX specification, and hardware vendors then have to support this in their drivers - so to use DirectX 9 features, you 'just' need a DirectX 9 capable card - you don't have to code specific codepaths for e.g. nvidia, ATi, <whatever>. There are a couple of new cards hitting the market soon, so this is rather relevant (2 codepaths might be manageable, but four, including debugging and performance testing == annoying).

The capability querying of DX is nice. You can use it just to check whether the necessary features are available, and say "upgrade your graphics card" if they're not - this works a lot better than just checking for certain vendor extension strings, which is all OpenGL lets you do. Or, if you're making a somewhat bigger engine, you can use the caps-querying to enable "eye candy" features if they're available, but let the game run if not. Querying for available texture memory etc. lets you optimize the texture set to use, to have fewer (expensive) texture loads, etc.
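For comparison, here's roughly what the OpenGL-side check amounts to (a sketch only; it assumes a current GL context, the extension name is just an example, and note the naive substring match, which is how most code did it back then):

[code]
#include <GL/gl.h>
#include <cstring>

// Scan the space-separated extension string for a given extension name.
bool HasGlExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != 0 && std::strstr(ext, name) != 0;
}

// usage: if (HasGlExtension("GL_ARB_multitexture")) { /* use it */ }
[/code]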


Do you mean software emulation in their driver code, using the CPU? So it's not even OGL itself that handles it when it's not there?

Try using a feature in OpenGL that's not fully hardware accelerated, and you'll hit around 1fps. The problem here is that you aren't really able to tell if a certain feature is supported in GL - this isn't as bad as it used to be, but back in the days of GL miniports for not-so-powerful hardware... oh boy :)

Btw, the board supports quote and /quote, inside square brackets ([ and ]), instead of manually slash-prefixing lines - oh, and there's a private message feature on the board :)
Posted on 2003-12-03 15:16:32 by f0dder
In my perfect world there would be hardware with integrated software, like a BIOS, except it would implement the API; that is, it would contain the drivers that are usually provided with devices now.


I guess that would make the hardware much more expensive and complex... Perhaps also harder to update drivers.
I suppose you mean that a ROM on the hardware stores the driver, and the OS downloads it from the hardware and installs it?
If the hardware will actually have to run some of the API code itself, it would require a processor.
The legacy BIOS already does this... there are interrupt handlers that are installed by the different pieces of hardware by mapping their BIOS ROMs into the CPU's address space.
These interrupt handlers are the API for the 'driver' in a way.
But this is very limited, ancient, and hard-to-use technology...
If you want to bring it to the OS level, you have the problem that it becomes OS-specific... On PCs this doesn't work, because people want to choose the OS themselves (and also be able to upgrade).
On proprietary systems like Macintosh and Amiga, this is a more viable approach.

But that doesn't make it up to the API level, I think; isn't a DX call just a translation to a Windows driver call? Enlighten me please. If not, what is there in between?


Well, that depends a bit on what call it is. Some calls map directly to driver functions, other calls may have some 'neutral' code that is not handled by the driver, but by DX itself.
But yes, the view of DX being an interface for the underlying driver is correct in general.
You can think of DX as a docking-station... Any hardware-driver can 'dock' to it, and then the functions can be used through DX, which functions as a bridge between OS and driver.

Excuse me, but I'm _really_ interested in knowing whether it would be a valid system to have hardware standards reaching up to the application programming level... whether it would ever happen, for commercial reasons, is another matter... but could it be efficient, while still providing compatibility at a given time, backward compatibility, and be flexible enough to allow evolution?


Hardware usually acts at a much lower level than function calls... Memory-mapped registers, buffers, and I/O ports. There's no other way of using hardware without massively redesigning the whole system, I guess. So the idea of an API that is 'static', and a driver underneath that translates function calls to the proper toggling of bits in hardware registers, mapping hardware buffers to the CPU etc, is probably the best one. This way the programmer doesn't have to know anything about the underlying hardware. And neither does the OS, so any hardware with a working driver can be used.

PS2 might be a nice example of a different approach... It has a few separate processors, with their own dedicated memory... The main processor will just upload programs for these processors, and they will execute them without CPU/driver/OS intervention. But I don't think it makes programming any easier, just much harder...
And it only works because all PS2s have the exact same hardware.

You said "so, no new features". but thats the same with api! just that the api gets extensions as hardw gets capabilities.


Well, that depends... In the case of Direct3D 9, for example, there are some API features that are not supported by any hardware yet. Or what about GDI in the old days, when stuff like line drawing was not accelerated yet?
So if you let all programs hammer on the hardware directly, they would all implement their own software line-drawing routine... When a manufacturer then comes out with a card that draws lines in hardware, none of the programs use it... All programs have to be rewritten.
If all programs nicely use an API function, then the implementation of the line drawing can transparently be replaced by a hardware-accelerated one, and all programs will automatically use it, without a problem.
So there certainly is something to be said for abstraction of functions.
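A toy illustration of that point (plain GDI calls; whether the line ends up drawn in software or by the card is entirely the driver's business, and the coordinates are arbitrary):

[code]
#include <windows.h>

// The program only ever calls the abstract API; the driver decides how to draw.
void DrawDiagonal(HDC hdc)
{
    MoveToEx(hdc, 0, 0, NULL);   // set the current position
    LineTo(hdc, 100, 100);       // software or hardware line - the caller doesn't care
}
[/code]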

Of course it's a bit hard to deal with unimplemented features... but the device can report that it doesn't handle them, and then, only then, do we need a software emulation package...


Yes, and generally we want this handled by the OS/driver, and not by the program itself... realtime 3d is a big exception there, I'll get to that in just a moment.

This seems a bit crappy to me, coz you've got to handle all the cases in each app you write, but I don't know the subject... yes, it sounds like more work, but more flexible...


The thing is, if you are trying to write a high-performance 3d application, software emulation is the last thing you want. And in that case there's no other way than letting the programmer decide the best fallback method.
Hardware is so incredibly much faster than software in this case, that software-emulation is plain useless...
For example, if you simply select the wrong texture format in OpenGL, your application may drop from 500 to 0.5 fps. This is unacceptable. It's a much better solution to let the programmer query the device for the formats it supports, and then choose the best one for the task. Then you can guarantee that the application will always be fast. In most cases it's relatively easy to implement some kind of fallback anyway.
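As a rough sketch of what that query-and-fallback looks like in Direct3D 9 (the formats here are just an example preference chain, and the adapter format is assumed to be X8R8G8B8):

[code]
#include <d3d9.h>

// Prefer a compressed texture format if the device supports it, else fall back.
D3DFORMAT PickTextureFormat(IDirect3D9* d3d)
{
    if (SUCCEEDED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                         D3DFMT_X8R8G8B8, 0,
                                         D3DRTYPE_TEXTURE, D3DFMT_DXT1)))
        return D3DFMT_DXT1;        // compressed, cheap on bandwidth
    return D3DFMT_A8R8G8B8;        // plain 32-bit, virtually always supported
}
[/code]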

Do you mean software emulation in their driver code, using the CPU? So it's not even OGL itself that handles it when it's not there?


Well, OpenGL is rather odd... Unlike DirectX, it is not directly part of the OS. OpenGL itself is implemented by the driver (that's why it's much harder to make a proper OpenGL driver, and why performance and quality between different vendors could differ greatly... Currently it's mostly ATi and NVIDIA though, and they both seem to have their drivers under control). So yes, the software emulation is in the driver code, using the CPU.
It still has to give the 'reference' quality of hardware though, so it's not just a speed-optimized software renderer. It's a very accurate and slow renderer. Not useful at all for realtime purposes.
Posted on 2003-12-03 15:28:28 by Bruce-li

PS: you might want to ask discreet (3d studio max) and/or Alias|wavefront (maya) what they use to do their final rendering - betcha it's not going to be either DX or GL. Hell, I'll eat my hat and take pictures of it if proven wrong. (DX/GL is obviously used for accelerating the working environment, though).


maya renderers


Full support for hardware off-screen, background, batch rendering is available from both the Maya user interface and the command line.


This is programmed via a DX interface.

Mirno
Posted on 2003-12-03 16:16:03 by Mirno
Quickly generate images for pre-visualization and broadcast quality final output. This rendering option generates near software-quality images significantly faster using the power of next-generation graphics cards.


I don't think f0dder needs to eat his hat just yet... it's still 'near software-quality', so for true production quality, you still need the software. Also, if it is like Mental Ray, it can use the hardware only for part of the rendering process (first pass), and still render a part with software... but it's not entirely clear from the text how the hardware is used.
Posted on 2003-12-03 16:22:20 by Bruce-li
mmh, and

Quickly generate images for pre-visualization and broadcast quality final output. This rendering option generates near software-quality images significantly faster using the power of next-generation graphics cards.

Would be interesting to compare this hardware rendered stuff to software rendered - 3d cards *are* getting good. Sounds like the perfect thing for a fast but still decent quality preview.
Posted on 2003-12-03 16:22:35 by f0dder
I left that bit out on purpose :tongue:
I wouldn't be surprised if the cards, programmed natively, could manage even higher quality.
AFAIK all the drivers use compilers to convert from the "asm" in DX to their native format. I'm guessing they have longer instruction limits than the DX spec enforces, and possibly more optimal code paths.

DX10 / OGL2 spec'ed cards will really show some impressive effects.

P.S. GL hasn't stopped being updated; it just moves so slowly that it looks that way.

Mirno
Posted on 2003-12-03 16:34:01 by Mirno