they will have to adapt or die


Yes. I will continue to code in asm, if that is possible, and if the compilers' level can be reached. My apps need the greatest CPU performance one could get, so asm will probably be my stream for a long, long time. Fortunately, my area of interest and work is music composition, and absolutely every musician on Earth will say they prefer software music to hardware. So, I'm not dependent on hardware acceleration. I love x86 as I can do almost anything I want with it. I'm not sure how the transition to 64-bit will happen; I wasn't there (or coding, or at least interested) when CPUs went from 4 to 8-bit, then 8 to 16, and then 16 to 32-bit. But I see people have handled those things. Kind of reminds me of The Matrix 2: "we are still here, and we will be".
But, IMHO, not enough people will need more than 32-bit. How many apps currently need to use 64-bit integers and find MMX not enough? The FPU is good enough, too. Only servers and exceptional applications need to handle 64-bit natively.
If a company creates something, that doesn't mean everybody will use it. Or at least that we'll start using it right away.
Posted on 2003-11-28 01:55:09 by Ultrano
I cannot really help you with the reference for arcade gaming hardware in the early 90s because, among other things, it was proprietary hardware and software that was never published and involved big bux to produce.


If you cannot produce a single shred of evidence that such a machine ever existed, I see no reason to believe it did, so you may as well give it up. If this was an arcade machine, surely it should have been publicly accessible in... arcade halls? And it should have had some software on it, which should have at least had a title?
And there must have been a company behind it, with a name?
Anyway, the monitor setup you describe reminds me of an old Sega machine that ran a Formula 1 game, which you could play with multiple players at the same time. Was that Virtua Racing?
I surely hope you don't mean that one, since iirc, it was just flatshaded 3d. That would be this one:
http://www.shinforce.com/32x/reviews/VirtuaRacingDlx.htm
http://yesterdayland.elsewhere.org/popopedia/shows/arcade/ag1166.php

The idea that an "API" is not or should not be platform dependent assumes cross platform code development which usually means what you can produce in libraries for a C compiler yet it is less than a secret that fast code on most platforms is written in native assembler.


There is a difference between portable source code and APIs implemented on multiple architectures... There is no reason why you cannot use assembly in an implementation and still remain true to the API, and remain compatible with other platforms. OpenGL drivers do this, for example.

OpenGL is a good example where x86 hardware is not fast enough.


How exactly is that a good example then? Can you elaborate? This seems like a rather loose statement, not supported by any facts, yet portrayed as a truth.

A test of something like screen resolution x colour depth looks something like 1024 x 768 x 4 bytes multiplied by the frame rate per second.


This is completely irrelevant for hardware accelerators... You upload the geometry once, then you simply send some commands to draw a frame...
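For what it's worth, the quoted formula also works out to a fairly modest number on its own; here is a quick sketch of the arithmetic (the 60 fps frame rate is an assumption added for illustration, not something from the thread):

// Framebuffer traffic implied by the quoted "resolution x colour depth x frame
// rate" test. The 60 fps value is an assumption added here for illustration.
#include <cstdio>

int main() {
    const double width         = 1024;
    const double height        = 768;
    const double bytesPerPixel = 4;    // 32-bit colour
    const double fps           = 60;   // assumed frame rate

    const double bytesPerSecond = width * height * bytesPerPixel * fps;
    std::printf("%.0f MB/s\n", bytesPerSecond / (1024.0 * 1024.0));  // 180 MB/s
    return 0;
}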

but start working from a real world height map, with or without texturing, process the information to make it recede in the distance and you are starting to run out of grunt on a current PV 10 gig PC.


I think you are confusing what a workstation can do with what a cluster can do... Throw enough geometry at it and you can slow any system to a crawl, of course. However, I have seen some rather nice high-detail terrain engines running on simple PCs... How about this one: http://web.interware.hu/bandi/ranger.html

For a game it was fine but for critical high speed simulation, it was some powers off the pace in technical terms.


Again you sound confused... You compare an old DX7 game on a simple PIII with some big emulation cluster?
Both the hardware and the software are in a completely different world. Still, this says nothing about whether a cluster of PIIIs with DX7 cards would be good enough for that emulation system. It probably would be.

The "gee whiz" element of what the next accelerated video card may be able to do does not solve the problem of an ancient architecture in x86 hardware


Actually, it does in a way. For example, x86 is not very good at T&L or rasterizing, so the accelerator takes over these tasks, problem solved, no?
Posted on 2003-11-28 02:44:00 by Bruce-li



Yes. I will continue to code in asm, if that is possible, and if the compilers' level can be reached. My apps need the greatest CPU performance one could get, so asm will probably be my stream for a long, long time. Fortunately, my area of interest and work is music composition, and absolutely every musician on Earth will say they prefer software music to hardware. So, I'm not dependent on hardware acceleration. I love x86 as I can do almost anything I want with it. I'm not sure how the transition to 64-bit will happen; I wasn't there (or coding, or at least interested) when CPUs went from 4 to 8-bit, then 8 to 16, and then 16 to 32-bit. But I see people have handled those things. Kind of reminds me of The Matrix 2: "we are still here, and we will be".
But, IMHO, not enough people will need more than 32-bit. How many apps currently need to use 64-bit integers and find MMX not enough? The FPU is good enough, too. Only servers and exceptional applications need to handle 64-bit natively.
If a company creates something, that doesn't mean everybody will use it. Or at least that we'll start using it right away.


That doesn't really matter. It's like GHz: people think more is better.
Posted on 2003-11-28 02:53:12 by Hiroshimator

I still own, somewhere, a couple of DirectX games from around DX7....

Hmm. With my GeForce2MX200, when I installed DX8.1 after DX7, there was a huge improvement in graphics performance. If you haven't tried it, maybe you should.
x86 is not suitable for very, very high-performance stuff in some specific areas, but who the heck really needs that?
I also thought some changes should be made, looking at UnrealTournament2003 and its poor performance on a GF3 and a 1.2GHz CPU. Then, one day, at a gamers' club, I saw UT2003 exit with an error like this:

CDXmainApp::Recalcualted->CDXmainframe::dsdsdsdd ... ->CVector::~CVector()

This message box had so much text in it that it filled the whole screen (1280x960). This is the retail version. No more comments. I suppose they also use virtual functions, imported from loaded DLLs, and a lot of other horrible stuff :rolleyes: .
A friend of mine got GTA3. Runs it perfectly on a 333MHz CPU with a GF2MX200. The only problem that may occur is if one wants a game with millions of dynamic objects that don't get deleted when the camera gets too far from them. If the code is well written, it might be no problem. Or it may be a problem for the x86 PC. But it will also be a problem for all other computers that aren't destined to do exactly these calculations.
OK. So far, games might pose a problem... but here come the accelerators, and they help a lot.
And that's that. No more threats to the cpu. Except for badly written code.
People use computers for things that barely reach 50% cpu usage.



Btw, my dad was an airplane technician, with friends in the USA. 10 years ago he told me a little about computers, and he mentioned that the flight sims his friends used had reached a stunning 4 fps. Before that, they had been at 2 or so. The realtime ones, with enough detail in the cockpit. If I remember correctly.
Posted on 2003-11-28 03:07:27 by Ultrano
The notion that something does not exist if the observer does not experience it is a very old fallacy called solipsism. Look for the name of Bishop Berkeley to learn the error in your logic.

It's foolish to assume that I am bound to prove anything to someone who joins a forum and starts a debate of this type with their first post, complete with a few smartarse wisecracks.

For high-end hardware, a simple distinction will help alleviate your naivety here: if it's made on silicon, it's OLD technology. If it's Von Neumann logic, it's OLD technology.

Would you use an x86 PC running Windows as the guidance system for a low-altitude cruise-type missile, terrain mapping from satellite data, etc.? No, only in your wildest dreams, as it is just too slow, and this is without having to display the data.

The assumption that something as trivial as computer gaming has much to contribute to an area that has been well thought out in very high speed hardware would be hard to get off the ground with much less than fantasy.

I actually don't take on the task of doing the footwork for people who don't know where to find the data they are after, and while recommending a Google search could be useful with some things, anything that is really smart or fast is not waiting out there to be discovered on the internet, as it's usually commercially proprietary, which means SECRET, or it's military, which means it's REALLY secret.

A few hints may help you a little: look for NON-silicon-based wafers, developments in memory technology that are powers faster than the best in current PCs, etc.

Get the idea that there is a bigger world out there than the trusty old x86 architecture and you will have some idea of what current technology can do, rather than incremental tweaks to a 25-year-old architecture.

Just as an aside, back in about 1995 a dealer tried to hard sell me a Silicon Graphics box that was truly exciting at the time. He wanted about 5 grand for it but I thought, WTF am I going to do with a thing like that when I was earning a living at the time writing 16 bit Windows software. :tongue:

Regards,
Posted on 2003-11-28 05:08:42 by hutch--
"Oh, I can't really add anything of quantitive value, so let's throw in some philosophy and other completely unrelated bs" - we've seen that before, haven't we? :rolleyes:
Posted on 2003-11-28 05:28:20 by f0dder
Originally posted by Ultrano
But, IMHO, not enough people will need more than 32-bit. How many apps currently need to use 64-bit integers and find MMX not enough? The FPU is good enough, too. Only servers and exceptional applications need to handle 64-bit natively.


Random thoughts...

- Many people commented just like you did when PCs moved from the 8-bit to the 16-bit environment, and then again when moving from the 16-bit to the 32-bit environment.

- I remember the days when a VAX was considered a big server. Well, not "mainframe" big, but most PC users used 8-bit or 16-bit machines. Compared to current x86 machines, the VAX now looks like a joke. :)

- Personally, I had big gaps in asm programming when moving from Apple II to 8088 (OK, they are not the same, so I think that was natural) and when moving from 16 bit to 32 bit environment. I guess I will program in asm in 64 bit env in the end, but it will take a long time before I do it again. :(

- FPU has never been good enough for me, and it will never be. But, that is because my main interest is numerical analysis. It is blazingly fast compared to old days, but the size of my computation grows, too. It seems that I always manage to make my problem big enough to take about a week to complete the computation. ;)
Posted on 2003-11-28 05:37:31 by Starless
Bruce, there are cases in ViewPerf where the current crop of "professional" gaming cards from both ATI and nVidia do not do what they are supposed to.

The floating point pipeline you refer to is the colour component, and this is purely for blending purposes (still being fed out of a 10 bit DAC at the end of it). Do not think for one moment that ATI or nVidia will waste millions of gates, and lower their clocks to implement greater than 32 bit floating point precision. Each bit makes the floating point component much more expensive. FP operators tend to be the longest path in their relative bit of silicon anyway, and heavily pipelined. Making them even bigger costs gates and possibly silicon speed. Neither will sacrifice both cost and pure performance for that, their primary markets are too tough for that.

which features a 4-pipeline high performance vertex shader array.

To be honest, unless you've seen specs on their "high performance vertex shader array" claiming to know its internal precision is rather silly. The quote sounds like someone who's read the marketing on their website.

The placement of vertices has nothing to do with sub-pixel depth, and a whole lot more to do with the transform and lighting (geometry) end of the pipeline. A vertex is a point (not a GL point) in space; whether it is placed at coordinate x,y is dependent on whether it is transformed, and how accurate that transformation is. Just try adding very small numbers to very big numbers, then multiplying them. The smaller the value, the more likely it is to fall outside the range of your mantissa; when it does, the resultant multiply makes the small value into a much bigger one and misplaces the vertex. This can happen all the time in 3D graphics, and if you've got lower vertex precision than your competitor, you are more likely to get it wrong! It is of course swings and roundabouts: the increased precision makes your chip larger (bigger FP components, bigger data buses running around the chip etc.).
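A minimal sketch of that mantissa effect in plain C++, assuming single-precision floats; the specific numbers are chosen only to make the loss visible:

// Demonstrates the mantissa problem described above: adding a small offset to a
// large coordinate in single precision loses the low bits, and a subsequent
// multiply amplifies the misplacement.
#include <cstdio>

int main() {
    float  big   = 100000.0f;   // large world-space coordinate
    float  small = 0.001f;      // fine vertex detail
    float  scale = 50.0f;       // some transform factor

    float  f = (big + small) * scale;        // 24-bit mantissa: the 0.001 is lost
    double d = (100000.0 + 0.001) * 50.0;    // double precision keeps it

    std::printf("float : %.3f\n", f);   // 5000000.000 -- the 0.05 contribution vanished
    std::printf("double: %.3f\n", d);   // 5000000.050
    return 0;
}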

While nVidia and ATI would like to claim to have won the professional space, killing off the last real 3D professional outfit, when things really matter, and it has to be done right, people still fork out thousands of dollars for Wildcats, even when they are beaten by the Quadro, because they get precision. The Quadro is a gaming card with drivers that do things the proper way (rather than a rough approximation which is faster).

Pixel shaders do not procedurally generate textures on the fly. ATI are not moving from texturing performance to PS, the two work in tandem. If there is no texture you're performing pixel operations on flat shaded polys, and that isn't very interesting at all!
nVidia's slow DX9 pixel performance comes from their poor floating point performance. Pixel shaders have always been arithmetic; they moved from integer to floating point in the DX8 to DX9 transition (of course they also added more "instructions", but they don't need to be natively performed, we already know nVidia use a compiler to convert from DX to a proprietary internal format).

The point at which a wireframe becomes a set of pixels is by necessity much further down the pipeline than you seem to think. Geometry must be processed before it can become a pixel, and if you render wireframe then there can be no culling. "Hyper-Z" and similar are useless; every vertex must be passed on down the pipeline. No early rejection can be done: every point, every line, every triangle must be drawn. The number of vertices can vastly outnumber the mere 1.92 million pixels on screen! Even if you render at 12x AA you come far short of the number of vertices people really want to model.
Even then who says all of them are on screen? As long as they are inside the guardband they will not be clipped out, they won't be rendered of course but they will still need to be processed, in case the transformation performed brings them inside the view space.

SGI's hardware is "more dedicated" than PC hardware, arcade hardware is "more dedicated" than PC hardware. They perform only a subset of the tasks a modern desktop PC must perform, so they push the hardware in a specific direction. Look at the Playstation 2, the hardware need only be run at 100Hz (PAL 100Hz TVs), so they don't need to put super high DACs in. The resolutions they need to run are much lower, they save bandwidth, and reduce data busses accordingly. The tasks they need to perform in 2D are much more limited, the quality of their blitters can be reduced saving gates.
When you narrow the field in which you are working the hardware can be made to do it so much better than an all purpose solution. Modern PC hardware must still deal with VGA (even if it is a sort of emulation), must provide 2D functionality, have a range of resolutions and colour depths.... All that leads them to reduced performance.

Anyway, this is my final message on this particular troll-pic.

Mirno
Posted on 2003-11-28 06:17:41 by Mirno
Would you use an x86 PC running Windows as the guidance system for a low-altitude cruise-type missile, terrain mapping from satellite data, etc.? No, only in your wildest dreams, as it is just too slow, and this is without having to display the data.


Again, you bring in dedicated hardware vs general purpose hardware. That is not the point here.
I believe that the original point was that DirectX and/or x86 would not be suitable for 'professional' work... I have yet to hear some solid arguments for this case, even though there have been FACTS against it already (such as Pixar actually USING x86).
Or that OpenGL on PCs would be vastly different from OpenGL on SGI systems or whatnot? Why exactly? How exactly?

The assumption that something as trivial as computer gaming has much to contribute to an area that has been well thought out in very high speed hardware would be hard to get off the ground with much less than fantasy.


Nobody ever said that. FACTS however dictate that both ATi and NVIDIA (the two leading graphics card manufacturers at the time of writing) base their professional line of cards on the same GPU as they base their 'game' cards on.
FACTS also indicate that at my university, we had a molecular simulation system, which used a database of molecules on a server, and an SGI visualization client.
The expensive dedicated server box has already been replaced by a P4 system, because it is simply faster these days, and much smaller and cheaper than the previous 'exotic' system.
The visualization client was still in use, but that's more a case of "if it ain't broke, don't fix it", since it would have trouble keeping up with modern 'game' cards.
Evolution happens very quickly in this field, even faster than in the field of CPU technology. This is partly due to the fact that rasterizing can be highly parallelized.
You could defend the SGI-box, and say "yes, but this box is 5 years old already"... True, but the thing is, SGI did NOT grow as quickly as the 'game' card market. Their latest systems are NOT that much better than this 5 year old one.
Which means there is currently little reason to buy an expensive SGI system, if you could just use an x86 system with a more mainstream accelerator for much less.

Get the idea that there is a bigger world out there than the trusty old x86 architecture and you will have some idea of what current technology can do, rather than incremental tweaks to a 25-year-old architecture.


Erm, you don't have to convince me about non-x86 architecture. I've actually WORKED with it (universities tend to have budgets for exotic hardware for research). Thing is, I don't need to work with it anymore, PCs cope pretty well, most of the time. And building PC clusters is really cheap and effective.

anything that is really smart or fast is not waiting out there to be discovered on the internet, as it's usually commercially proprietary, which means SECRET, or it's military, which means it's REALLY secret.


Well isn't that a paradox... If I can't know about it, then how can you know about it?

The rest of your comments just try to cover up your lack of knowledge and/or data, it seems. Patronizing someone is not a good way to convince them. Facts work a lot better.
I mean, talking about non-silicon stuff is just rubbish for example... All the graphics systems are still built on silicon. While this may not be an optimal solution, it is a practical one, and we don't need to discuss how well graphics hardware could perform in theory.
Posted on 2003-11-28 06:32:39 by Bruce-li
Perhaps your grasp of English is poor, my friend,


I believe that the original point was that DirectX and/or x86 would not be suitable for 'professional' work


What I originally asserted was that x86 hardware and DirectX are not a model for multiport software and hardware design, as there are major differences in the performance of alternative hardware. Further, x86 is not at the top end of the performance scale, particularly in high-end image manipulation.

While there is little doubt that DirectX works OK in games on reasonably recent PC hardware and video, making the deduction that it is a model for far more powerful hardware is naive.

The idea of what professional work is would be the point of debate, but I doubt that gaming is the model for professional work. Many other things come to mind, stuff like CAD and similar, but as I imagine you would understand, "professional" work is usually related to money, and this means high-performance arcade games with high-cost hardware, far later than the stuff that I have seen over 10 years ago, and of course military hardware.

Now my comments on military hardware come from hearing bits and pieces over many years and while this stuff will stay secret for a long time to come, there is a form of class theory that helps to narrow the range of the tasks involved.

Effectively what you do is apply the little information you already have and model it to the task at hand. We know, for example, that global positioning satellites are well developed and there is already a distinction between domestic and military information quality. Next you apply the requirement of no radar guidance, as it makes the missile a target, so you narrow the class down a bit more.

Then you apply the satellite global mapping that has been around since Landsat, and what you end up with is a very high-precision contour map of the terrain that the missile must fly over.

Convert this into the terminology you are familiar with and you end up with something like the height maps that games use, but far larger and with a far higher level of precision. This will not tell you how to build the existing missile guidance systems, but it does restrict the range of classes to the manageable. Games may look good, but if you want a missile to actually hit the target area, you need the far higher precision available with high-end hardware, not antique domestic architecture.

This is where x86 hardware apart from many other problems of power supply and heat does not have the legs in the processing area. It does games well enough these days but its architecture is too old and suffers far too many problems to compete with high end usage.

Now, as far as the comments on wafer material go, alternative materials have been around for years and many are capable of far faster designs than silicon-based chips. Silicon is cheap and reasonably reliable these days, and this is why it's commercially viable, even though it's well over 20-year-old technology.

If what you are arguing is that you get more bang for your buck with cheap x86 hardware, I would agree with you but if your assertion about the performance of x86 hardware intimates that it is competitive with high end dedicated hardware, we will continue to differ.

Regards,
Posted on 2003-11-28 07:53:28 by hutch--
Bruce, there are cases in ViewPerf where the current crop of "professional" gaming cards from both ATI and nVidia do not do what they are supposed to.


Well, can you be more specific? Exactly WHAT do they not render correctly? I doubt it's vertices being 1 pixel off, as you implied earlier...
And it is also a commonly known fact that not all hardware renderers produce the exact same image. This is due to differences in things such as pixel-center, mipmap LOD-bias, supersampling vs multisampling, the pattern used for anisotropic sampling, etc, etc.

The floating point pipeline you refer to is the colour component, and this is purely for blending purposes (still being fed out of a 10 bit DAC at the end of it). Do not think for one moment that ATI or nVidia will waste millions of gates, and lower their clocks to implement greater than 32 bit floating point precision.


Considering the fact that the 'competition' uses only 12 bit integer components, I don't see why they should waste more gates at this time, no.
As it stands, the SGI systems simply can't DO a lot of things that the ATi cards can. How about HDR rendering/post-processing for example?

To be honest, unless you've seen specs on their "high performance vertex shader array" claiming to know its internal precision is rather silly. The quote sounds like someone who's read the marketing on their website.


DirectX has minimum specs that the hardware needs to comply to, so yes, I know the internal precision, that is, unless it exceeds the DirectX specs, but that is only an advantage then.

The placement of vertices has nothing to do with sub-pixel depth


It does actually... Or at least, the subpixel-depth of the vertices determines how accurately a line or triangle between these vertices can be rasterized. And since you don't see the vertices themselves, but only the rasterized lines or triangles... Ah well, but apparently you had an alternative story, which we will get to now...

A vertex is a point (not a GL point) in space; whether it is placed at coordinate x,y is dependent on whether it is transformed, and how accurate that transformation is. Just try adding very small numbers to very big numbers, then multiplying them. The smaller the value, the more likely it is to fall outside the range of your mantissa; when it does, the resultant multiply makes the small value into a much bigger one and misplaces the vertex. This can happen all the time in 3D graphics, and if you've got lower vertex precision than your competitor, you are more likely to get it wrong! It is of course swings and roundabouts: the increased precision makes your chip larger (bigger FP components, bigger data buses running around the chip etc.).


I don't think I can agree here.... this is a result of how the matrix itself is built. The matrix is not built by the 3D hardware, it is built by the CPU. Throwing more precision at it will not solve the problem, it will merely shift it. The proper solution is to 'stabilize' the matrix before sending it to the hardware, so that the straightforward multiply-operations are stable. 32-bit floating point should be plenty of accuracy for that, since it gives you a 24-bit mantissa, so 2^24 discrete points in space for every component. Considering that most screens are in the range of 2^10 resolution, you'd have to have VERY small factors to even get it unstable, and these small factors could never be visualized anyway. I doubt that SGI just throws some more hardware at it, to not solve the problem anyway.
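A rough sketch of that 'stabilize it on the CPU first' idea, stripped down from a full matrix to a single coordinate to keep it short; the camera-relative re-centring and the helper name are illustrative assumptions, not taken from any particular engine or from SGI:

// Sketch of pre-stabilizing a transform on the CPU: compose/offset in double
// precision, re-centre the world around the camera so coordinates stay small,
// and only then truncate to the 24-bit-mantissa floats the hardware works with.
#include <cstdio>

struct Vec3d { double x, y, z; };
struct Vec3f { float  x, y, z; };

// Illustrative helper (not from any real engine): express a world-space vertex
// relative to the camera in double precision, then downcast. Because the
// difference is small, the float's 24-bit mantissa is spent on fine detail
// instead of on a huge absolute offset.
Vec3f RelativeToCamera(const Vec3d& v, const Vec3d& cam) {
    return { static_cast<float>(v.x - cam.x),
             static_cast<float>(v.y - cam.y),
             static_cast<float>(v.z - cam.z) };
}

int main() {
    const Vec3d camera = { 1.0e7, 0.0, 0.0 };          // 10,000 km from the origin (metres)
    const Vec3d vertex = { 1.0e7 + 0.25, 0.0, 0.0 };   // 25 cm in front of the camera

    const float absolute = static_cast<float>(vertex.x);     // fine detail rounded away
    const Vec3f relative = RelativeToCamera(vertex, camera); // fine detail preserved

    std::printf("absolute x: %.2f\n", absolute);   // prints 10000000.00
    std::printf("relative x: %.2f\n", relative.x); // prints 0.25
    return 0;
}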

people still fork out thousands of dollars for Wildcats


That's not the point though. These are still AGP cards, often used in x86 PCs. I believe the original point was more that x86 could not be used professionally, even if you have the latest, most advanced graphics hardware.

Pixel shaders do not procedurally generate textures on the fly. ATI are not moving from texturing performance to PS, the two work in tandem. If there is no texture you're performing pixel operations on flat shaded polys, and that isn't very interesting at all!


Sounds like you've never actually used any of this hardware, else you'd know what I was talking about. Textures can be colourmaps, yes, and these will probably be used until the end of time. But textures can also contain other data... some of this data can indeed be generated arithmetically, on the fly. Think of a renormalization cubemap, for example (a small sketch of what such a map holds is below). You can generate high-order Perlin noise from some base textures as well, or sine-based bumpmaps or such...
So instead of using precalced textures as lookup maps for arithmetic functions, pixel shaders can now take over.
By the way, without texture, you can still do gouraud, phong, blinn, or other fancy shading methods (with per-vertex colours even, whee!). So you're not stuck to flatshaded polys at all.
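To make the 'precalced lookup vs. on-the-fly arithmetic' trade-off concrete, here is roughly what one face of a renormalization cubemap holds when built on the CPU; the face size, byte packing, and face orientation convention are assumptions made for this sketch, and a floating-point pixel shader can replace the whole lookup with a per-pixel normalize:

// Builds one face of a renormalization cubemap on the CPU: each texel stores
// the unit-length direction vector for that texel, packed into 0..255 RGB.
// A floating-point pixel shader can compute the same value arithmetically
// (normalize per pixel) instead of sampling a precalculated map.
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB8 { uint8_t r, g, b; };

// The face size and the +X face orientation used here are arbitrary choices.
std::vector<RGB8> BuildPosXFace(int size) {
    std::vector<RGB8> face(size * size);
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            // Direction through this texel of the +X face, in [-1, 1] space.
            double u = 2.0 * (x + 0.5) / size - 1.0;
            double v = 2.0 * (y + 0.5) / size - 1.0;
            double dx = 1.0, dy = -v, dz = -u;   // one of several possible conventions
            double len = std::sqrt(dx * dx + dy * dy + dz * dz);
            dx /= len; dy /= len; dz /= len;
            // Pack the unit vector into unsigned bytes, as fixed-point hardware expects.
            face[y * size + x] = {
                static_cast<uint8_t>((dx * 0.5 + 0.5) * 255.0),
                static_cast<uint8_t>((dy * 0.5 + 0.5) * 255.0),
                static_cast<uint8_t>((dz * 0.5 + 0.5) * 255.0) };
        }
    }
    return face;
}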

The point at which a wireframe becomes a set of pixels is by necessity much further down the pipeline than you seem to think. Geometry must be processed before it can become a pixel, and if you render wireframe then there can be no culling. "Hyper-Z" and similar are useless; every vertex must be passed on down the pipeline. No early rejection can be done: every point, every line, every triangle must be drawn. The number of vertices can vastly outnumber the mere 1.92 million pixels on screen! Even if you render at 12x AA you come far short of the number of vertices people really want to model.
Even then who says all of them are on screen? As long as they are inside the guardband they will not be clipped out, they won't be rendered of course but they will still need to be processed, in case the transformation performed brings them inside the view space.


I never said the vertices did not have to be passed down the pipeline. Also, there can still be backface and viewport culling on wireframes.
Pixels inside the guardband won't be processed, they will be scissored, so no time is spent on them. And if you use hardware T&L, they will never reach the guardband, since the lines can be clipped to the viewport before reaching the rasterizer.
Again, it does not sound like you really know what you are talking about.
Besides, if you take a look here: http://www.beyond3d.com/forum/viewtopic.php?t=9068&postdays=0&postorder=asc&highlight=vertex%20processing%20speed%20radeon%20fx&start=40, you can see some benchmarks that test the throughput of the vertex pipeline. With ambient light, they can get up to 84 million(!) polys per second...
That is 84*3 = 252 million vertices per second.
So I don't think you'll be getting in trouble with rendering wireframes quickly.
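For scale, the arithmetic behind those figures, with a 60 fps frame budget added here as an assumption (and ignoring vertex sharing, as the 84 x 3 estimate above does):

// Throughput arithmetic from the benchmark figures quoted above; the 60 fps
// frame budget and the "no vertex sharing" simplification are assumptions
// added here for illustration.
#include <cstdio>

int main() {
    const double trianglesPerSecond = 84e6;                       // quoted benchmark figure
    const double verticesPerSecond  = trianglesPerSecond * 3.0;   // ~252 million
    const double fps                = 60.0;                       // assumed frame budget

    std::printf("vertices per frame at %.0f fps: %.1f million\n",
                fps, verticesPerSecond / fps / 1e6);              // ~4.2 million
    return 0;
}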

The tasks they need to perform in 2D are much more limited, the quality of their blitters can be reduced saving gates.
When you narrow the field in which you are working the hardware can be made to do it so much better than an all purpose solution. Modern PC hardware must still deal with VGA (even if it is a sort of emulation), must provide 2D functionality, have a range of resolutions and colour depths.... All that leads them to reduced performance.


I don't think 'blitters' as such even exist anymore. A textured quad can do the same job, without requiring any extra silicon. All in all, in theory the performance may be reduced, but in practice, these cards manage to outdo SGI nicely in terms of speed and features.
Posted on 2003-11-28 07:56:10 by Bruce-li

"Oh, I can't really add anything of quantitive value, so let's throw in some philosophy and other completely unrelated bs" - we've seen that before, haven't we?

It's not what a man puts in his mouth that makes him unclean, but what comes out of it. :grin:

Mirno,

Thanks for the technical data, it's interesting stuff, even if posted in a troll.

Regards,
Posted on 2003-11-28 08:17:56 by hutch--
young man, there's no need to feel down.
I said, young man, pick yourself off the ground.
I said, young man, 'cause you're in a new town
there's no need to be unhappy.

young man, there's a place you can go.
I said, young man, when you're short on your dough.
you can stay there, and I'm sure you will find
many ways to have a good time.

it's fun to stay at the y-m-c-a.
it's fun to stay at the y-m-c-a.

they have everything for you men to enjoy,
you can hang out with all the boys ...

it's fun to stay at the y-m-c-a.
it's fun to stay at the y-m-c-a.

Posted on 2003-11-28 08:33:15 by Hiroshimator
What I originally asserted was that x86 hardware and DirectX are not a model for multiport software and hardware design, as there are major differences in the performance of alternative hardware. Further, x86 is not at the top end of the performance scale, particularly in high-end image manipulation.


Yes you've said that a few times now, but you've never quantified this... What are these 'major' differences then? And even though x86 may not be AT the top, it surely is reasonably near it, and will most probably suffice in most cases anyway.
In fact, we used to have special HP Apollo workstations here for Autocad and Pro-Engineer labcourses... Special labs with expensive special hardware and software.
Guess what we do now? We now use the standard Windows/linux labs with Windows-versions of the same software, which is much faster than the old lab was, and much cheaper than maintaining special labs, and upgrading special hardware. Besides, I think the HP Apollo series is long gone now.

While there is little doubt that DirectX works OK in games on reasonably recent PC hardware and video, making the deduction that it is a model for far more powerful hardware is naive.


Why is it naive exactly? You still haven't explained that... Do you consider only DirectX a bad model, or does this also go for OpenGL? Where exactly are the differences? Could you also give an example of a good model that can be compared against?

Convert this into the terminology you are familiar with and you end up with something like the height maps that games use, but far larger and with a far higher level of precision. This will not tell you how to build the existing missile guidance systems, but it does restrict the range of classes to the manageable. Games may look good, but if you want a missile to actually hit the target area, you need the far higher precision available with high-end hardware, not antique domestic architecture.


Is it not naive to just chunk a model of the world into every missile anyway? Especially since a missile will never travel anywhere near most of it? I'm sure that's not how it goes. I think it's much more likely that only a small subset of the geometry, near the missile trajectory will actually be processed, and therefore stored.
I think it would also be likely that not every strand of grass is actually modeled... A missile doesn't have to know that much about the terrain... If you'd just use a few convex hulls over objects that it should avoid, I think you'll do quite fine. No need to model every house on every street.
And that is even assuming that such missiles exist. I believe there are simpler ways to get a missile to its destination. Like by remote control, or by using laser guidance, or a camera that can pinpoint the target, and avoid collisions with non-targets.
But I'm sure this is just some more 'information' you've 'heard', which is 'secret', so you can't tell anything more about it.

This is where x86 hardware apart from many other problems of power supply and heat does not have the legs in the processing area. It does games well enough these days but its architecture is too old and suffers far too many problems to compete with high end usage.


Is that so? I assume you are once again not going to elaborate on it?
I agree that regular x86 CPUs are not suitable for embedded uses. Then again, embedded systems always have special low-power CPUs, and there exist low-power x86 CPUs as well, and these are also used in embedded systems.
As for performance, in general, x86 seems to keep up pretty well with the competition. It's weaker in the floating-point division, but with integer it's right up there.
And the floating-point part is offloaded to the graphics board anyway... Besides, x86 is much cheaper than the alternatives, so you can easily cluster them up and get the float performance that way.
Work is being done on an Opteron cluster supercomputer. We'll just have to wait and see how good x86 really is in high-performance...
The recent Virginia Tech G5 cluster may be a nice indication of what is possible with 'cheap' off-the-rack hardware. It is the world's 3rd fastest supercomputer, and it consists of 1100 stock G5 systems, a total of 2200 PowerPC 970 CPUs. Only a fraction of the cost of the other systems in its league.

If what you are arguing is that you get more bang for your buck with cheap x86 hardware, I would agree with you but if your assertion about the performance of x86 hardware intimates that it is competitive with high end dedicated hardware, we will continue to differ.


I argue 3 things that were said earlier:
1) x86 is cheap and can deliver the power required for large-scale professional graphics work, at the cutting edge, as Pixar proves.
2) There was a mention that DirectX would not have what it takes for anything other than games, and then also some ramblings about OpenGL, but nobody ever bothered to explain why, what and how, and I see no reason why not, only reasons pointing in the other direction.
3) In the 70s there were machines that completely outclass what we have on the desktop today, and in the 90s Japan had some gaming systems that do the same. But not a shred of evidence in that direction, although there was a mention of an early 90s Japanese arcade system, much like the one described, which did only rather unimpressive flatshaded 3d...

So anyway, I asked some questions, and if you people want to convince me, you should answer them. Else I don't see why I, or anyone for that matter, should believe you.

edit, forgot this one:

but I doubt that gaming is the model for professional work.


This is an interesting thought actually... So you consider a PC a gaming system then?
That's funny because originally it was meant to be for wordprocessing and other simple office jobs.
I am sure plenty of people thought that these machines would not be the model for gaming either.
But that's evolution I suppose. Apparently office machines have somehow evolved into machines powerful enough to play games on... And the next step seems to be that games are becoming so realistic that we are approaching realtime interactive virtual reality systems.
So just as people back then had to stop thinking about PCs as just office machines, and realized the true potential of the PC, I think we will have to stop thinking about games as just games, and start thinking of them as some kind of interactive virtual worlds, because that is what they are becoming.
Posted on 2003-11-28 08:43:24 by Bruce-li
Well, this thread has pretty much changed from a thread about environment to one about graphics hardware, so I'll be quick, then everyone can get back to arguing about the virtues of OpenGL on SGIs. My objection to the "original" question has nothing to do with 64 bit; I welcome that as a natural evolution in processor design. As I said, when a 486DX66 was thought to be near the limits of the technology, they changed technology. The same applies to 32 bit: people want more speed, they want their movies, their games that look like they were filmed and not generated, and they always want what they perceive to be the top in technology. My problem is with the structure of Longhorn and the proposed new OSes by Microsoft. When the fastest way to access the API is a scripting language, there is no advantage to assembly; you can no longer just call an API function, you have to generate a script that will call it for you. True, assembly today is little more than a group of instructions written to script C functions, but at least they can be called directly without translation.
Posted on 2003-11-28 09:22:46 by donkey

When the fastest way to access the API is a scripting language

Hrm, that sounds like bull to me... if it's .NET you're referring to, it's more than just a scripting language. Besides, you have the option to call native code where the performance is necessary.
Posted on 2003-11-28 09:26:22 by f0dder
It may well be bull, but not maliciously offered; I have no intention to deceive. I have read quite a bit about Longhorn, but not enough to say that I fully understand what it will be like. I had understood that the new direction of OSes by Microsoft was to have the OS run by an engine, with the OS written in a scripting language like XUML. The API would be replaced by a grouping of XUML scripts that would be run in the engine. There is no option in that type of system to call the functions directly; it's like calling the functions in VBScript directly: you can have the engine execute a script component but not execute the function directly.
Posted on 2003-11-28 09:31:48 by donkey
.NET is not a scripting language, it's a framework, and part of that framework is a JIT compiler.
Ironically, this is how graphics hardware already works... You write your shaders in a pseudo-assembly or pseudo-C language, and then it is assembled/compiled to bytecode at compile time, and at runtime this bytecode is handed to the driver, which will translate it to native code (a rough DirectX 9 sketch of this flow follows below).
For graphics hardware this was a requirement. There is no single dominating architecture, so graphics code has to be independent from the actual hardware it is running on.
With regular CPUs, we are also reaching a point where x86 has run its course... .NET is probably Microsoft's way to ensure that when the bottom drops out of the x86 empire, the bottom will not drop out of the Windows empire.
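Here is roughly what that shader flow looks like from the application side in DirectX 9. It is a sketch from memory rather than a verified listing, and the shader itself is a trivial made-up example; check the exact D3DX call signatures against the SDK.

// Sketch of the "portable bytecode, driver does the final translation" flow for
// a DirectX 9 pixel shader. Requires the DirectX 9 SDK (link against d3dx9.lib).
#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>
#include <cstring>

IDirect3DPixelShader9* CreateSolidRedShader(IDirect3DDevice9* device) {
    // Hardware-independent pseudo-assembly, assembled to bytecode up front.
    const char* src =
        "ps_2_0\n"
        "def c0, 1.0, 0.0, 0.0, 1.0\n"   // constant red
        "mov oC0, c0\n";

    ID3DXBuffer* bytecode = NULL;
    ID3DXBuffer* errors   = NULL;
    if (FAILED(D3DXAssembleShader(src, static_cast<UINT>(std::strlen(src)),
                                  NULL, NULL, 0, &bytecode, &errors))) {
        if (errors) errors->Release();
        return NULL;
    }

    // The driver takes the bytecode from here and translates it into whatever
    // the GPU natively executes -- the application never sees that step.
    IDirect3DPixelShader9* shader = NULL;
    device->CreatePixelShader(
        static_cast<const DWORD*>(bytecode->GetBufferPointer()), &shader);

    bytecode->Release();
    if (errors) errors->Release();
    return shader;
}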

While in the short run this may be a small setback, since .NET code might not always run as efficiently as native code, in the long run this should be a blessing in disguise. Firstly, the .NET compilers are still in their infancy and they shall improve over time, and it has already been proven by the HP Dynamo project that dynamic compilation can outperform static compilation, so we may see this in .NET as well...
And secondly, we will no longer be tied to x86, this will mean that the market will be open to new competitors, with more modern and more efficient designs, and .NET will allow you to take advantage of these immediately.
You will be able to choose the CPU that best suits your requirements, and you will not have to worry about the software support.

Oh, and of course the Win32 API won't just vanish into thin air when Longhorn is released. Longhorn will still have to be able to run legacy software, so the Win32 API may be with us for a few years yet.
Posted on 2003-11-28 09:36:10 by Bruce-li
Hm, can't say I've read much about Longhorn, but that sounds way out there :) - I doubt something like this will be done for some time yet - and if so, I'll probably be done with computers. The idea of having normal apps' user interfaces in some scripting language sounds nice, though... there'd be a lot less to bother about, and one could focus on the important things. As long as this was done without too much of a speed impact, of course...
Posted on 2003-11-28 09:37:16 by f0dder
Well, the way I understood it, the OS will be structured with three layers on top of the raw base services. Each of the three layers will essentially be a scripting engine: Avalon for the user interface, WinFS is the new file system, and Indigo the new communications system. When you want to build an application, you use a set of scripts and the engines execute the scripts. This meets the requirement of being cross-platform, something that MS has sought since it dropped Alpha support and chained itself to the x86 family. I think the decision of Intel to make a relatively clean break from x86 when designing the 64-bit family was a wake-up call for MS that they would need to go the route of platform independence. In the real world there is no way to do this other than a JIT compiler like Java, and that is where I understood Longhorn to be headed; .NET has already given us a sneak peek at this - the next step is a foregone conclusion. If you need a preview of what asm will look like, just add Common Controls 6 to an application: you use a script that's independent of your assembly program.
Posted on 2003-11-28 10:12:43 by donkey