HeLLoWorld,

My views on x86 hardware come from working with it for the last 14 years or so. It's not that I despise it, but familiarity and knowing its history remove any "gee whiz" ideas about what it is compared with other technology of the same era.

From about 1980 to the present you have had an almost perfect series of incremental releases of slightly faster processors, and while some of that is related to technical limitations, a lot of it has to do with extracting the most money out of buyers. The overclocking fad in the early Celeron days showed that the limitation is arbitrary and commercial, not a processor limitation.

The problem is that with standardisation you get not only the advantages of compatibility and predictability but the downside as well: a lack of innovation or any form of originality, and technology that subsequently goes out of date.

The PC end of video cards is well explained in the article that Rickey posted: Microsoft will hammer its own standard using its operating system as leverage, the designs will become standardised, cheap and trivial, and as technology moves on they will end up out of date in comparison to other technology.

Market leverage from Microsoft is nothing new; it has used its commercial muscle for years to kill off most competition, so the Windows market is almost totally a monopoly now. The price of this market success is crappy operating systems with massive security problems, backward compatibility problems, high hardware demands out of proportion to their size, code quality and reliability, and little in the way of performance increases for most software.

While Microsoft have had ambitions for years to fully control the IT industry, they have never succeeded in the high end. The big end of town don't like them, the linux guys don't need them and the owners of things like the sum total of internet routers are not willing to share anything with a company with so many bad habits.

Microsoft are still the DOS company, and the only area they fully control is the market whose technical assumptions still come from the OS architecture of DOS: file formats, directory and partition structures, and hardware compatibility that dates from the DOS era.

These are the things that keep x86 hardware in the stone age, and as Microsoft continue to shift away from their real market strength of keeping the old stuff going, other players are becoming more competitive as they have the expertise in non-DOS style hardware and software.

The comments on high end hardware are based on a market advantage that the big players have had since before the advent of DOS and Microsoft. They don't have a price advantage and usually don't even look for one, but they do have massive performance advantages that come from the scale of the systems they build.

Like many others, I have been around long enough to know the difference and don't confuse kids' video games with high end hardware performance.

Regards,
http://www.asmcommunity.net/board/cryptmail.php?tauntspiders=in.your.face@nomail.for.you&id=2f46ed9f24413347f14439b64bdc03fd
Posted on 2003-12-05 19:15:13 by hutch--
Hutch, you're right that standards are a double-edged sword - while intel processors are fine and all (because of their clockspeed), the architecture is indeed aging... it's clear that stuff should have been changed ages ago (I mean, come on, 80386 protected mode bears very clear signs of 286 pm legacy - even though almost nobody used 286 pm!). Fortunately there have been improvements, not only by patchworking the instruction set but also in the core - I don't think anybody is going to disagree that an architectural shift would have been preferable, though.

That's a reason I'm not too fond of AMD's way of introducing 64bit - they extend intel's patchwork... imo, it would have been nice, since we're moving to a new architecture, to finally do away with the legacy and start from a clean slate (with the previous architectures in mind, of course - the mistakes as well as the good decisions).

With the video cards, I think it's a bit different though. As far as I've been able to figure out, it's not microsoft exclusively designing DirectX; there's a bunch of video card manufacturers involved. While it's still a closed group, that should leave some room for actual improvements, rather than serving the interests of _just_ microsoft. Take a look at the DX10/DX Next article Bruce posted - it does seem like a rather radical step from DX9, perhaps a step towards the performance of SGI - especially when PCI-X hits the streets and you can plug multiple cards into a box.

Only guesses though, this is still a while in the future.

Btw, security problems: have a look at the recent linux hacks. Bug in brk/sbrk allowing read/write ring3 mapping of ring0 memory - ouch. Dunno how long this particular bug has been in the linux kernel, but brk/sbrk are fundamental unix system calls. Smells like programmers not paying enough attention to signed vs. unsigned :)
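As a toy illustration (my own sketch, nothing to do with the actual kernel code - just the kind of signed vs. unsigned slip I mean): a negative signed length sails past a naive upper-bound check, then turns into a huge count the moment something treats it as unsigned.

```c
#include <stdio.h>
#include <stddef.h>

#define MAX_LEN 256

/* Naive upper-bound check on a signed length, as a careless caller might write it. */
static int length_check_ok(int len)
{
    return len <= MAX_LEN;              /* -1 <= 256, so a negative length "passes" */
}

int main(void)
{
    int evil_len = -1;

    if (length_check_ok(evil_len))
        printf("check passed; as a size_t that is %zu bytes\n",
               (size_t)evil_len);       /* ~4 billion on 32-bit, far more on 64-bit */
    return 0;
}
```

Hand that "length" to anything expecting a size_t and the bounds check might as well not be there.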

Btw, NTFS doesn't have a lot to do with DOS, and there have been 'dynamic disks' (a new partitioning scheme) around since... humm... at least NT5. Also, NTFS is journaling, which wasn't really added to linux until ext3 and ReiserFS (I'm sure other unices have had it for at least as long as NT though - it's not exactly a new concept). Hm, I wonder which *u*x filesystems support unicode and ACLs (I know there's work to integrate ACLs in linux and even in the very conservative and stable BSDs).

I think it's going to be interesting to see what happens when the new technologies arrive. The gap between SCSI and IDE is getting smaller, with many SCSI features finally getting incorporated in IDE drives. The high-performance PCI-X bus and shift away from AGP. 64bit CPUs available at more 'human' prices. Very flexible and powerful features being added to 'mortal' graphics cards... they seem to have more features than the 'big iron' graphics systems now, even if lacking in brute fillrate (well, one high-end Radeon/GeForce card against the multiple pipes in the Onyx4).
http://www.hp.com/workstations/risc/visualization/overview.html
I wonder how that hp sv7 performs with the same amount of GPUs as the onyx4 :)
Oh, and as mirno mentioned, PCI-X could bring along some interesting stuff for x86 PCs.

It would be interesting if the flexibility of DX9 (or even better, upcoming DX10) could be combined with the fillrate of the Onyx4 - which unfortunately only supports GL1.4, with some 2.0 stuff retro-fitted as vendor extensions (I guess that's what they must have done, since the PDF mentions GL shaders - not available until GL2.0) - http://www.sgi.com/pdfs/3501.pdf
Posted on 2003-12-05 20:00:51 by f0dder
f0dder,

What will be interesting is to see which way the mop flops with the shift to 64 bit processors in the x86 arena. I don't personally mind the AMD approach, but it will depend on whether they can lead the market here, as I doubt Intel will co-operate with anything like a compatible system.

An enhanced PCI system would go a long way to solving some of the major bottlenecks in data transfer. AGP seems to have been another fudge that worked OK for a while but will run out of puff.

Just getting a larger data pathway for the main bus would open the gates for some far higher performance stuff, in things like HDD controllers as well as upping the performance of video output.

What will be interesting with the video card manufacturers is whether they opt for hardware DirectX or for a lower level approach that can be used by any other system for image manipulation. The latter would be a lot more useful but probably not within Microsoft's marketing strategy.

I guess there is room for some very big gains, as the hardware capacity is mainly dependent on quantity to get economy of scale, but this may run foul of the historical pattern of incremental improvement for marketing reasons.

There is probably little reason not to build at least a 128 bit bus, and video cards with a gig of high speed memory and a decent range of low level functionality built directly into hardware, as economy of scale will put them into a viable price range quickly enough. Unfortunately I doubt this will happen, as commercial interests need the incremental approach to control further sales.

Regards,
http://www.asmcommunity.net/board/cryptmail.php?tauntspiders=in.your.face@nomail.for.you&id=2f46ed9f24413347f14439b64bdc03fd
Posted on 2003-12-05 21:33:35 by hutch--
Dunno how bad AGP really was - games tend to try and avoid texture reloads as much as possible, and keep as much in video memory as possible. The engineering apps I've seen (and seen screenshots of, including some of the high-end SGI and HP systems) don't seem to be using textures, so I think that you have to throw *massive* amounts of geometry before the AGP bus becomes the limiting factor. At least for the games, the change from 4x-8x meant almost nothing. But yeah, AGP was somewhat of a fudge I guess... I think the really interesting thing about PCI-X, apart from being flexible and providing a lot of bandwidth, is that it should be possible to throw a lot of graphics cards in a single system, where (as far as I understand), AGP is limited to one bus per system.

Of course PCI-X will not only be good for graphics, as you mention with HDD controllers, there's a number of devices battling for bandwidth at the moment - throw in a bunch of SATA-150 striped disks and a gigabit ethernet card or two, and you're using a lot of bandwidth already :)

Btw, take a look at the link bruce posted about the upcoming DirectX - it seems very interesting. They're extending the current DX9 and GL2 shader things and making it a lot more flexible - while I'm not really sure what to think of the "virtual video memory" idea (it could be done well, or it could be a thing from hell), the idea that they want to make the shaders even more powerful and add a 'topology processor' seems nice - graphics hardware is becoming very powerful, and able to produce things (or at least very close approximations :)) in realtime that previously took a long time to render.

Oh, and the shader stuff in DX9 and GL2 isn't even widely used yet - various tech demos and game previews look promising, though. I guess the shader features should also be usable in 'professional' software, if the shading features can be used to approximate material types that usually required raytracing. If you look at http://www.daionet.gr.jp/~masa/rthdribl/, I think the shiny balls and especially the skull look like something that would previously have taken a while to render on a high-end PC - while it can now be done at ~24 fps. So it's not all just fun and games :)

The Onyx4 achieves its high fillrate with multiple "graphics pipes" - I assume a 'pipe' is the same as a 'gpu'. Perhaps some proprietary chip, I didn't see the name mentioned. HP uses NVidia Quadro FX cards for their sv7 - and again, lots of them. The "big brother" Quadro cards are closely related to the gaming-oriented normal brand of GeForce cards - at least up to GeForce2, you were able to turn a regular GF2 into a Quadro (although not as fast as the real deal), by putting a different bios on the card.

I don't think that DirectX driving the video card hardware is a bad thing as such, as microsoft doesn't form the DX specs all by itself. And some standardization is very valuable both to the end-users and the application programmers. I remember the early days, where you had the 3dfx glide api, some proprietary S3 stuff for their ViRGE cards, and OpenGL (which was only partially supported by the available consumer hardware, so programmers had to use a limited subset of functionality, the so-called miniports). Those were rather chaotic times!

OpenGL could perhaps have been the standard of choice instead of Direct3D, and it would have worked okay. The problem with this, however, is the (lack of) speed of the OpenGL committee - which forces the individual hardware vendors to come up with their own extensions to the standard, which again fragments the market - you code for a specific card, or spend more development time doing multiple codepaths. End-user is screwed.

DirectX sets a number of requirements for DX-whatever-version compatibility, and the hardware vendors then have to fulfill these requirements. For some reason, the process of establishing the feature list of a DX version seems to move faster than the OpenGL committee. The advantage of this is that programmers can (mostly) decide on a DirectX and vertex/pixel shader version to program for, and the end users then have to shop for a card that delivers the necessary performance, instead of speculating about which API their games/applications use and which cards support that API. (In a GL world, this would have been which extensions are required, and what the cards+drivers support.)
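Roughly the difference, as a toy C sketch (the helper and the hard-coded extension list are mine; the extension names are real ones from that era, and the D3D9 calls in the trailing comment come from the d3d9 headers):

```c
#include <stdio.h>
#include <string.h>

/* GL style: glGetString(GL_EXTENSIONS) hands back one big space-separated
 * string, and you sniff it for each vendor extension you care about.
 * Pure-string version here so the sketch runs without a GL context. */
static int has_extension(const char *ext_string, const char *name)
{
    return ext_string != NULL && strstr(ext_string, name) != NULL;  /* naive check */
}

int main(void)
{
    /* Pretend this came back from glGetString(GL_EXTENSIONS). */
    const char *exts = "GL_ARB_multitexture GL_NV_vertex_program GL_ATI_fragment_shader";

    if (has_extension(exts, "GL_NV_vertex_program"))
        printf("take the NVIDIA codepath\n");
    else if (has_extension(exts, "GL_ATI_fragment_shader"))
        printf("take the ATI codepath\n");
    else
        printf("take the fallback codepath\n");
    return 0;
}

/* D3D style, for contrast (sketch):
 *
 *     D3DCAPS9 caps;
 *     IDirect3DDevice9_GetDeviceCaps(device, &caps);
 *     if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0)) { ... }
 *
 * One version number to test against, instead of a pile of per-vendor
 * extension strings and codepaths. */
```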

It's interesting to see what the new GPUs are able to do, and I'm looking forward to seeing the capabilities of the current GPUs used - as of now, the shaders haven't really been used for anything fancy in games that I've come across; they're basically all boring "let's see how many textured polygons we can throw on screen", not "let's give the user some stunning lighting effects".

Heck, with the advances the GPUs have taken, you don't even need raytracing to make stuff look good anymore :)

Posted on 2003-12-05 22:50:13 by f0dder
f0dder, PCI-X is a different standard to PCI Express, although I know what you meant :grin:

AGP is a dirty great hack. At 1X, most of the cards had no physical difference to their PCI counterparts, other than clocking at 66MHz. 2X and 4X introduced some nice features (SBA type things), but were basically strobing the clocks to 2 and 4 times the speed. AGP 3.0 (4X and 8X) is nice from a technical perspective, but running at the rate 8X demands poses some electrical problems, so they have to dynamically invert the bus. It is neat from a purely technical perspective, but means nothing to the end user!
The problem with the bus is that the faster you strobe it, the longer the turn around times are. So while the peak bandwidth goes up, the real world sees less and less of it.

Hutch--
I bet every one of the architects at nVidia and ATI would love to have 128bit IO busses, but the chips they make are probably already IO pin bound (or close to it). Just adding an extra 96 pins to the package would make them quite a bit bigger. It's purely a cost thing in the consumer space.

The 1GB of RAM on the cards isn't too far off though; the Wildcat VP990 comes with 512MB and that's based on an 18 month old P10 chip. The next Wildcats should at least match that, and if the market demand is there a 1GB part could well hit the shelves.

Mirno
Posted on 2003-12-06 04:21:52 by Mirno

f0dder, PCI-X is a different standard to PCI Express, although I know what you meant

Yeah, I keep mixing the two and forgetting which is the nice and new one :)


At 1X, most of the cards had no physical difference to their PCI counterparts, other than clocking at 66MHz

Heh yeah, I remember some of the AGP cards being sold that were basically just PCI cards with a different form factor, supporting none of the AGP stuff.
Posted on 2003-12-06 08:18:58 by f0dder
Mirno,

I can sympathise with the technical problems of chip availability and cost factors, but I always keep in mind the 8088 that was never going to use a whole megabyte of RAM.

When the next standard for video ends up being set, the next generation of software and new hardware will have problems running 3D holographic representations of popular games like the Battle of Britain, flying through the Grand Canyon or Julius Caesar's conquest of Palestine.

I mean, can you imagine being able to run a full scale battle with less than 300,000 soldiers because the video card cannot keep up with it. The soldiers start to get a bit jerky after the first 250,000 or so. What I keep in mind is the virtue of playing leap frog with capacity so that it does not go out of date too fast. I have found that if I build the biggest box I can afford, I get a few more years out of it before it's too small or too slow.

Still, I doubt that the bean counters in big corporations like Microsoft will ever go for anything that is actually useful; they are there for the money, not the industry, and the old formula of incremental change has worked for them for a long time.
Posted on 2003-12-06 08:30:30 by hutch--
I mean, can you imagine being able to run a full scale battle with less than 300,000 soldiers because the video card cannot keep up with it. The soldiers start to get a bit jerky after the first 250,000 or so.


We use LOD and imposters for that, regardless of the hardware scale.
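Roughly like this, as a toy C sketch (made-up structures and thresholds, not code from any real engine): the further a soldier is from the camera, the cheaper the representation, down to a flat textured impostor in the distance.

```c
#include <stdio.h>
#include <math.h>

typedef enum { LOD_FULL_MESH, LOD_LOW_MESH, LOD_IMPOSTOR } lod_t;
typedef struct { float x, y, z; } vec3;

static float dist(vec3 a, vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrtf(dx * dx + dy * dy + dz * dz);
}

/* Pick a representation by camera distance: full mesh up close, a reduced
 * mesh in the middle distance, a textured quad (impostor) beyond that. */
static lod_t pick_lod(vec3 camera, vec3 soldier)
{
    float d = dist(camera, soldier);
    if (d < 50.0f)  return LOD_FULL_MESH;
    if (d < 200.0f) return LOD_LOW_MESH;
    return LOD_IMPOSTOR;
}

int main(void)
{
    vec3 cam = { 0.0f, 0.0f, 0.0f };
    vec3 soldier = { 0.0f, 0.0f, 300.0f };
    printf("lod = %d\n", pick_lod(cam, soldier));   /* prints 2 (LOD_IMPOSTOR) */
    return 0;
}
```

The 250,000 soldiers at the back of the battle end up as sprites, and nobody can tell from that far away.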

Still, I doubt that the bean counters in big corporations like Microsoft will ever go for anything that is actually useful; they are there for the money, not the industry, and the old formula of incremental change has worked for them for a long time.


Actually, Microsoft has been a pioneer in the art of MAKING their products useful. They look at what companies don't yet know they need, then make them need it, and the only one able to give it to them is MS.
Also, every company is there for the money. But money doesn't just roll in for being a company.
You have to offer products that other companies will want to buy or use. DirectX is an excellent example here. Virtually all PC games use DirectX now. Microsoft made them need it. It is now at the cutting edge of realtime 3d technology. It is therefore extremely useful.
DirectX bundles and focuses the power of the hardware developers.
Posted on 2003-12-06 08:47:19 by Bruce-li
Posted on 2003-12-09 09:30:59 by SpooK
that means intellect is spilling out of it, right? :P


;)
Posted on 2003-12-09 10:02:38 by Hiroshimator
Here are some professional cards benchmarked:
http://www.digit-life.com/articles2/profcards/svp71-11-2003.html

Including the Quadro FX2000, which is also used in HP's sv7 visualization system. Looks like HP made a great choice.
This is NVIDIA's playground... 'Professional' OpenGL applications, no shaders.
Look how the 'game' cards from NVIDIA and ATi trample all over the 'professional' 3dlabs stuff.
Posted on 2003-12-09 12:05:24 by Bruce-li
Comparing an 18 month old (not even the respin VP880) mid-range card against a set of brand-spanking new cards.
All this from a man who loves his evidence.

Mirno
Posted on 2003-12-09 16:07:10 by Mirno
The point was more that good 'game' stuff can easily beat mediocre 'professional' stuff.
The boundary is thin, if it exists at all.
But read into it whatever you think you should. It says more about you than about me (does this have something to do with the subpixel-stuff? :))
Posted on 2003-12-09 16:11:37 by Bruce-li
It says a lot about you really:
You deliberately quote the word professional to indicate poor quality (using the quotation marks to question the term).
This is leading, and in combination with a bit of bad evidence it really does say something about you.

As a person who is so pedantic about evidence, and so confident in your own opinion, is this all you can bring forward when presenting evidence? There are better reviews out there which will say the same thing (the Quadro 9XX card consistently outperforms all of the VP series) and present a more balanced opinion, reviewing more cards (VP880, VP990, Wildcat 7210 and 7110).

I can say that I'm not overly bothered about the sub-pixel thing because I believe the T&L (where vertices are placed in 3D space) happens way before the rasteriser in the pipeline, and so I can't see how the sub-pixel depth could affect this. If you know better I would certainly like to be enlightened.

Mirno
Posted on 2003-12-09 16:38:07 by Mirno
You deliberately quote the word professional to indicate poor quality (using the quotation marks to question the term).


In the spirit of this thread, I thought it would be obvious what I meant, since hutch-- claimed that there was a world of difference between 'game' stuff and 'professional' stuff. Apparently I was wrong.

As a person who is so pedantic about evidence, and so confident in your own opinion, is this all you can bring forward when presenting evidence? There are better reviews out there which will say the same thing (the Quadro 9XX card consistently outperforms all of the VP series) and present a more balanced opinion, reviewing more cards (VP880, VP990, Wildcat 7210 and 7110).


I just happened to run into this thing. I wasn't actually looking for it. Also, I don't care about the 3dlabs card at all. The FX2000 was what caught my attention, since it is also used by the HP.
If you want to show off the 3dlabs, feel free to post such a review, I don't care. I didn't want to post this as 'evidence' anyway, more as 'background information'. The point is more that these are all cards in a PC, powered by two x86 processors. The difference between the different cards is not that relevant. It was just a loose remark that was supposed to indicate the faded boundary between 'game' and 'professional' stuff.

I can say that I'm not overly bothered about the sub-pixel thing because I believe the T&L (where vertices are placed in 3D space) happens way before the rasteriser in the pipeline, and so I can't see how the sub-pixel depth could affect this. If you know better I would certainly like to be enlightened.


I already did, check my earlier reply to your post. If I need to clarify, please be more specific.

Now, instead of explaining the things I posted, let's explain why you post personal attacks at me that are completely uncalled for... Why do you put in so much effort to try and make me look bad?
Posted on 2003-12-09 16:46:08 by Bruce-li
Hutch referred to SGI Altix boxes as professional, not the x86 cards, so no, the distinction wasn't obvious.

I cannot show a benchmark which shows the 3DLabs cards in a better light; the comment
There are better reviews out there which will say the same thing (the Quadro 9XX card consistently outperforms all of the VP series) and present a more balanced opinion, reviewing more cards (VP880, VP990, Wildcat 7210 and 7110).

really doesn't leave much room for you to gather that I think I can.

I reread your post, and really, I can say with absolute certainty that the Wildcat team do use more mantissa bits than those in a standard 32bit float, and then down-sample to 32 bits at the end of the T&L calculations. I don't think they'd do it for the fun of it, and I stand by the fact that the placement of a vertex has nothing to do with the rasterisation. Where a line is rasterised is dependent on where the vertex is calculated to be, but not the other way around.

Does that really count as a personal attack?
It is poor evidence, and you know it. It isn't even relevant here, as the argument you had with hutch was x86 vs. "big computing", and all the cards there fall into the x86 category.

I do like the wounded animal look on you though.

Mirno
Posted on 2003-12-09 17:08:33 by Mirno
Hutch referred to SGI Altix boxes as professional, not the x86 cards, so no, the distinction wasn't obvious.


Hutch-- referred to many SGI boxes, and many other times he implied the distinction between PCs and 'big iron' and military stuff, and such.

I reread your post, and really, I can say with absolute certainty that the Wildcat team do use more mantissa bits than those in a standard 32bit float, and then down-sample to 32 bits at the end of the T&L calculations. I don't think they'd do it for the fun of it, and I stand by the fact that the placement of a vertex has nothing to do with the rasterisation. Where a line is rasterised is dependent on where the vertex is calculated to be, but not the other way around.


Clearly you haven't understood my original post then. You don't render vertices. You only rasterize lines or triangles which are spanned by vertices.
And yes, I know 3dlabs use an insane amount of precision, they also use 10 subpixel bits. Does it really matter though? It might, in pathological cases. Then again, is the accuracy that 'normal' cards use so bad that vertices are actually a pixel off? No, never. Either you meant something other than what you said, or you don't know what you meant at all and are just repeating something you read somewhere, possibly something you misunderstood.
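To put a number on what those subpixel bits mean, here's a toy sketch (mine, not any card's actual pipeline) of snapping a screen-space coordinate to a grid with N fractional bits:

```c
#include <stdio.h>
#include <math.h>

/* Snap a screen-space coordinate to a fixed-point grid with 'sub_bits'
 * fractional bits: 0 bits = whole pixels, 4 bits = 1/16th of a pixel, etc. */
static float snap(float coord, int sub_bits)
{
    float grid = (float)(1 << sub_bits);         /* grid positions per pixel  */
    return floorf(coord * grid + 0.5f) / grid;   /* round to the nearest step */
}

int main(void)
{
    float x = 123.4567f;
    printf(" 0 bits: %.4f\n", snap(x, 0));    /* 123.0000 - up to half a pixel off   */
    printf(" 4 bits: %.4f\n", snap(x, 4));    /* 123.4375 - at most 1/32 pixel off   */
    printf("10 bits: %.4f\n", snap(x, 10));   /* 123.4570 - at most 1/2048 pixel off */
    return 0;
}
```

Even a handful of subpixel bits means a snapped vertex moves by a tiny fraction of a pixel, nowhere near a whole one.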

Does that really count as a personal attack?
It is poor evidence, and you know it. It isn't even relevant here, as the argument you had with hutch was x86 vs. "big computing", and all the cards there fall into the x86 category.


Excuse me, but I have specifically referred to the HP sv7 multiple times, and if you actually bothered to follow the link, you would see that it uses an array of Quadro FX2000 chips... Yes indeed... These "x86 category" chips.
It gets worse... by using these "x86 category" chips, it actually beats the SGI Onyx4 "big iron".
And I dislike the term "evidence" when it is just "background info". I didn't present it as evidence, and I don't see why you insist on interpreting it as such. Then again, it seems to suit your hidden agenda well.

I do like the wounded animal look on you though.


If this isn't a personal attack, I don't know what is.
Anyway, since I never gave any faulty info, I don't see why you have to make it personal, just because you read something into it that I never meant. I didn't write that article, you know, and I didn't pick the cards either. And what I said was not in conflict with any facts either, as far as I know. So please, drop the 3dlabs fanboy act, and stop the "evidence" crap. This situation is in no way comparable with the one hutch-- was in.

And why do you want to start a war against me? It might be more constructive to discuss T&L and rasterization together, we might learn something.
Posted on 2003-12-09 17:22:46 by Bruce-li
Bruce, why don't you look at a software renderer to see how much precision is needed for rasterization? Certainly, if the input data is confined to a certain range then there is a smaller chance of a problem (as demonstrated in the code linked above).
Posted on 2003-12-09 18:04:14 by bitRAKE
Why are you telling me this?
Mirno is the one not understanding the issue.
I've written plenty of software renderers myself over the years; I'm intimately familiar with them.

By the way, that engine has a horrible pitch bug, it seems... I can't get anything reasonable out of it on my system; the scanlines are totally messed up.
Can't even see if it has subpixel-correction or not.
Posted on 2003-12-09 18:09:00 by Bruce-li
This is why Bruce:

And yes, I know 3dlabs use an insane amount of precision, they also use 10 subpixel bits. Does it really matter though? It might, in pathological cases.
"Insane" amount? No, they just like to insure all possible input values map to correct output. DX is designed to exploit the reduced percision of the mainstream cheaper cards, whereas OpenGL is not. This is good or bad depending on your view point.
Posted on 2003-12-09 18:26:18 by bitRAKE